MT5 parallel backtests result in abnormally high disk usage even as memory utilization remains well below 100%

I am using the latest build, 3950.
With a single agent enabled there is no problem: memory usage is not elevated and disk usage is near 0%.
With 2 agents enabled, disk utilization rises to 17% with a transfer rate of about 500 MB/sec, sustained for as long as the backtest runs.
Memory utilization is around 42%.
With 8 agents enabled, disk utilization shoots up to about 90%, with a sustained transfer rate of about 2590 MB/sec for as long as the backtest runs.
With 6 agents enabled, disk utilization is about 43% (2345 MB/sec sustained transfer rate) even though memory usage stays below 70%.

See the 2 screenshots below: 8 agents first, then 6 agents.
To repeat an important point: the sustained disk transfer rates do not diminish over time.

To those who will say that paging is to blame, my question is: why does the disk transfer rate jump from approximately 0 MB/sec to 500 MB/sec with just 2 agents?
My VM has 16 GB of dedicated RAM (meaning that RAM is not allocated dynamically from the host).
Previously I allocated 40 GB of dedicated RAM with similar results.
So, no, I am not buying the proposition that paging is to blame for what's happening. There is no evidence to support that.
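This is easy to check, by the way: Windows' built-in typeperf can log the paging counter next to the disk write rate while a backtest runs (a sketch; the counter names assume an English-language Windows):

typeperf "\Memory\Pages/sec" "\PhysicalDisk(_Total)\Disk Write Bytes/sec" -si 1 -sc 60

If Pages/sec stays near zero while the write rate sits in the hundreds of MB/sec, paging is not what's generating the traffic.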
I don't want to fry my computer's disk, so I currently have only a single agent enabled instead of 20.
Any ideas?
Is this fixed in a beta build?
Are there any environment variables that can be set before launching MT5 that will change this?
For instance, if the large spike in disk usage is due to excessive logging, then I wish I could turn that logging off.
But I don't think it's logging.
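That hypothesis can also be checked with per-process I/O counters, which show which process is actually doing the writing (a sketch; metatester64 as the agent process name is an assumption based on a default 64-bit MT5 install, and additional agent instances appear as metatester64#1, metatester64#2, and so on):

typeperf "\Process(metatester64)\IO Write Bytes/sec" -si 1 -sc 30

If the agents themselves account for the full write rate, the traffic is their working data rather than terminal logging.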


If you are using an SSD, then you may need to temporarily disable TRIM on that drive.

Due to the large files that get continuously generated and deleted, TRIM may kick in multiple times and slow down, delay, or even halt disk I/O completely for a short while.

I seldom run optimisations, but when I do, I use the following batch files to control TRIM during operations.

TRIM-Disable.cmd

REM Show the current state (0 = TRIM enabled, 1 = TRIM disabled)
fsutil behavior query DisableDeleteNotify
REM Disable TRIM; run from an elevated prompt, as fsutil behavior set needs admin rights
fsutil behavior set DisableDeleteNotify 1
REM Confirm the new state
fsutil behavior query DisableDeleteNotify
PAUSE

TRIM-Enable.cmd

REM Show the current state (0 = TRIM enabled, 1 = TRIM disabled)
fsutil behavior query DisableDeleteNotify
REM Re-enable TRIM (also needs an elevated prompt)
fsutil behavior set DisableDeleteNotify 0
REM Confirm the new state
fsutil behavior query DisableDeleteNotify
PAUSE

TRIM-Query.cmd

REM Query the current TRIM state without changing it (0 = TRIM enabled, 1 = TRIM disabled)
fsutil behavior query DisableDeleteNotify
PAUSE
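
For convenience, the disable/enable pair can also be wrapped around a whole session in one file (a sketch; TRIM-Wrap.cmd is a hypothetical name, and the PAUSE is just a stand-in for however you launch the optimisation):

TRIM-Wrap.cmd

REM Needs an elevated prompt, since fsutil behavior set requires admin rights
fsutil behavior set DisableDeleteNotify 1
ECHO TRIM disabled. Run the optimisation now, then press any key to re-enable TRIM.
PAUSE
fsutil behavior set DisableDeleteNotify 0
ECHO TRIM re-enabled.
PAUSE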

Thanks for the comment.
Can you elaborate a little more on
```
Due to the large files that get continuously generated and deleted ...
```

Specifically, why is this not an issue with 1 agent, yet becomes an issue with 2+ agents?
I am just trying to grasp the bigger picture.

snikoz-cad #: Thanks for the comment. Can you elaborate a little more on

"Due to the large files that get continuously generated and deleted ..."

Specifically, why is this not an issue with 1 agent, yet becomes an issue with 2+ agents?
I am just trying to grasp the bigger picture.

I don't know the specifics of your hardware. It's just the case on my computer, because my SSD does not support parallel TRIM operations.

Each agent creates and deletes files, large ones at that, so with many agents the TRIM queue builds up quickly, and in my case that causes the issue. With a single agent the drive can keep up with the delete notifications; with several agents deleting in parallel, it cannot.

So, try it on your end and see if it resolves it for you or not.

Fernando Carreiro #:

I don't know the specifics of your hardware. It's just the case on my computer, because my SSD does not support parallel TRIM operations.

Each agent creates and deletes files, large ones at that, so with many agents the TRIM queue builds up quickly, and in my case that causes the issue.

So, try it on your end and see if it resolves it for you or not.

Ok, point taken.

Still, I wouldn't want my SSD to endure this kind of abuse.
I am going to check whether I can run backtests on a RAM disk, because it's unlikely that the dev team will fix this issue any time soon.
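
If anyone wants to try the same, one way is to move the tester's working folder onto the RAM disk and leave an NTFS junction behind (a sketch; the R: drive letter, the <instance-id> placeholder, and the assumption that the agents keep their data under the terminal data folder's Tester subfolder all need checking against your own install):

REM Close MetaTrader 5 and all tester agents first
REM Move the existing Tester folder onto the RAM disk (mounted here as R:)
robocopy "%APPDATA%\MetaQuotes\Terminal\<instance-id>\Tester" "R:\Tester" /E /MOVE
REM Leave a junction at the old location so the terminal still finds the folder
mklink /J "%APPDATA%\MetaQuotes\Terminal\<instance-id>\Tester" "R:\Tester"

The obvious trade-off is that a RAM disk is wiped on reboot, so this only suits disposable tester data.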