metatester 5 agents manager deletes itself during a live update? - page 14

 
igogo #:

I've raised this issue before, but I can't help raising it again.

The Bases folder is almost 35 gigabytes! Inside it, the oldest XPMT5-PRD folder is dated 01/20/2022, while the most recent activity in the folder was on 10/17/2022. So periodically clearing the bases (once every few months, as I was previously advised) seems like a mediocre option?


Why is there so much data anyway? More importantly, what does it give the developers? And what does it give me, as someone renting out computing power? Saving Internet traffic doesn't interest me, but I wouldn't refuse an extra 30 gigabytes of disk space right now.

What would happen if the bases were cleared daily, for example?


Only 37.5 GB?  You're just getting started!  Mine is 406.6 GB:

[Screenshot: Huge Bases folder]

You ask good questions, but I'm afraid only the developers can fully answer them, and the solution may be for them to rethink the current implementation. I have not noticed any adverse reaction from clearing out the Bases folder, although one could reasonably be expected.

My observation is that the software accumulates this huge cache over time but fails to use or manage it effectively. It constantly redownloads countless gigabytes of files that should be identical to what was already downloaded before (e.g., tick data for past years and brokers that was downloaded and used in another job days or weeks ago), which makes testing take longer and incurs significant Internet bandwidth every month regardless of the size of the cache. The cache seems to be used mostly between agents (one agent downloads the bases and the agents handling other passes of the same job share them), but hardly at all between different testing jobs, where preexisting database files often get downloaded all over again. Sometimes it deletes random files from the cache, but it doesn't appear to have an effective cleanup or inventory strategy in place.

In fact, it can completely exhaust the free space of a drive with hundreds of gigabytes of Temp files during testing, causing all the tests to fail hours into a job, and even then it won't thin out the Bases folder to maintain free space and keep things moving along. After it grinds to a halt for lack of disk space, it still doesn't delete the Temp files or thin out the Bases folder; instead it gets stuck indefinitely in a "disabled mode" where it is connected and appears to be working but never receives any jobs. The user has to realize what happened and manually delete the Temp/Bases files to resume normal operation.
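
In case anyone wants to automate that manual cleanup rather than wait for the agents to stall, here is a minimal Python sketch. The AGENT_ROOT path and the 30-day age threshold are my assumptions, not anything MetaQuotes documents; point it at wherever your agents are actually installed, and run it only while the agents are idle, since deleting files out from under a running pass will fail that pass.

```python
"""Minimal cleanup sketch for MetaTester agent caches.

AGENT_ROOT and MAX_AGE_DAYS are assumptions; adjust them
to your own install before running.
"""
import shutil
import time
from pathlib import Path

AGENT_ROOT = Path(r"C:\MetaTester")  # hypothetical install root for the agents
MAX_AGE_DAYS = 30                    # age threshold for cache entries
cutoff = time.time() - MAX_AGE_DAYS * 86400

for folder_name in ("Bases", "Temp"):
    # Each agent keeps its own Bases/Temp folder somewhere under its directory.
    for cache_dir in AGENT_ROOT.rglob(folder_name):
        if not cache_dir.is_dir():
            continue
        for entry in cache_dir.iterdir():
            try:
                # Last-modified time is a rough "last used" proxy; the tester
                # keeps no inventory we could query instead.
                if entry.stat().st_mtime >= cutoff:
                    continue
                if entry.is_dir():
                    shutil.rmtree(entry, ignore_errors=True)
                else:
                    entry.unlink(missing_ok=True)
            except OSError:
                pass  # file likely in use by a running agent; skip it
```

Scheduled daily (e.g., via Windows Task Scheduler), something like this would at least keep the agents from entering the "disabled mode" described above, at the cost of redownloading whatever data the next job actually needs.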

 
Someone is testing a powerful strategy today) Wouldn't you agree, gentlemen?