All (not yet) about Strategy Tester, Optimization and Cloud - page 15

 
Ivan Titov #:

And one more glitch: the local network agents do not work even though they are running:

And at the moment the optimisation starts they are switched off for some reason:


It is my understanding that if you're running the "Fast genetic based algorithm" mode, it doesn't use agents under 200 PR because they would slow down the decision making about what to optimize and what to leave out for the next generation. Your i7-3770 CPU should have a PR of around 150. If you run the "Slow complete algorithm", it should use your agents the whole time, as it doesn't need one generation to finish completely before starting the next.

 
Shalem Loritsch #:

It is my understanding that if you're running the "Fast genetic based algorithm" mode, it doesn't use agents under 200 PR because they would slow down the decision making about what to optimize and what to leave out for the next generation. Your i7-3770 CPU should have a PR of around 150. If you run the "Slow complete algorithm", it should use your agents the whole time, as it doesn't need one generation to finish completely before starting the next.

All of my agents have a PR below 200, and yet they happily work on optimisation tasks in "Fast genetic based algorithm" mode. Where did you get this information?

Most likely, it is not the absolute value of PR that matters, but the relative differences between the PR values of agents on different servers. In that case optimisation really can slow down, because faster agents, having finished their pool of tasks, may sit idle waiting for the slower ones to finish theirs. But this happens only once all the tasks of one generation have already been distributed to agents. While a generation still has unallocated tasks, new tasks are handed to a faster agent as soon as it becomes free. The optimiser tries to balance the load: agents with a lower PR are given fewer tasks. But the prediction is not accurate enough for agents of different capacities to always finish their tasks at the same time, which is why it is desirable to have agents with approximately the same PR in the network (a toy sketch of this balancing is given below).

But I have never encountered behaviour where slow agents are simply not given any tasks at all.
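
To make that balancing idea concrete, here is a toy Python sketch (nothing to do with the tester's real code; all PR values, real speeds and pass counts are invented): passes are split in proportion to PR, but since PR is only a prediction of real speed, the generation still ends when the slowest agent finishes, and the others idle for the difference.

# Toy model of proportional task splitting (invented numbers, not the tester's code).
# Passes are handed out in proportion to each agent's PR, but PR is only a
# prediction of real speed, so the generation still finishes only when the
# slowest agent does.

def split_passes(total_passes, prs):
    total_pr = sum(prs)
    shares = [round(total_passes * pr / total_pr) for pr in prs]
    shares[0] += total_passes - sum(shares)   # absorb the rounding remainder
    return shares

prs          = [180, 175, 90, 85]      # nominal PR of each agent (made up)
real_speed   = [170, 180, 70, 88]      # what each agent actually delivers on this EA
shares       = split_passes(512, prs)  # one generation of 512 passes (made up)
finish_times = [n / s for n, s in zip(shares, real_speed)]   # arbitrary time units

makespan = max(finish_times)
for pr, n, t in zip(prs, shares, finish_times):
    print(f"PR {pr:3d}: {n:3d} passes, finishes at {t:.2f}, idles for {makespan - t:.2f}")
# The generation ends at max(finish_times); agents whose real speed beats the
# prediction sit idle for the difference until the slowest agent catches up.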

 
Yuriy Bykov #:
But I have never encountered behaviour where slow agents are simply not given any tasks at all.

Out of the 7 network agents, 4 are agents on the same computer where the test is running. But for some reason all 7 of them are switched off.

Regarding the distribution of tasks among agents: if during one generation some agents finish faster, don't they take the remaining runs from the weaker agents?

And honestly I didn't think that optimising subsequent generations would be limited to one run per local agent. As a result, the criterion didn't improve after generation 0:

2024.07.29 18:24:18.595 Tester Best result 579.0199086609301 produced at generation 0. Next generation 1
2024.07.29 18:27:31.281 Tester Best result 579.0199086609301 produced at generation 0. Next generation 2
2024.07.29 18:29:12.607 Tester Best result 579.0199086609301 produced at generation 0. Next generation 3
2024.07.29 18:32:20.178 Tester Best result 579.0199086609301 produced at generation 0. Next generation 4
2024.07.29 18:34:32.076 Tester Best result 579.0199086609301 produced at generation 0. Next generation 5
2024.07.29 18:35:48.571 Tester Best result 579.0199086609301 produced at generation 0. Next generation 6
2024.07.29 18:36:49.944 Tester Best result 579.0199086609301 produced at generation 0. Next generation 7
2024.07.29 18:37:58.872 Tester Best result 579.0199086609301 produced at generation 0. Next generation 8
2024.07.29 18:39:02.691 Tester Best result 579.0199086609301 produced at generation 0. Next generation 9
2024.07.29 18:39:02.696 Tester Best result 579.0199086609301 produced at generation 0. Next generation 10
2024.07.29 18:39:02.706 Tester Best result 579.0199086609301 produced at generation 0. Next generation 11
2024.07.29 18:40:11.862 Tester Best result 579.0199086609301 produced at generation 0. Next generation 12
2024.07.29 18:40:11.867 Tester Best result 579.0199086609301 produced at generation 0. Next generation 13
2024.07.29 18:40:11.878 Tester Best result 579.0199086609301 produced at generation 0. Next generation 14
2024.07.29 18:40:11.878 Tester Best result 579.0199086609301 produced at generation 0. Next generation 15
2024.07.29 18:40:11.894 Tester Best result 579.0199086609301 produced at generation 0. Next generation 16


 
Ivan Titov #:

Out of the 7 network agents, 4 are agents on the same computer where the test is running. But for some reason all 7 of them are switched off.

It would be best to remove those four agents on the local computer from the list of remote agents on the local network. I didn't expect anyone to set things up that way. It cannot improve optimisation speed, but it can have side effects. Adding them does not give the local computer any extra processor power: even if you manage to launch 12 MetaTester processes, they will still share the resources of four physical cores (eight logical cores). There will be no gain in the execution time of a fixed number of tasks: either 4 processes run simultaneously at speed X, or 8 processes at X/2, or 12 processes at X/3.
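
A quick back-of-the-envelope check of that arithmetic (a throwaway Python snippet; X is just a unit of speed, the numbers are purely illustrative):

# If N MetaTester processes share the same 4 physical cores, each one runs at
# roughly X * 4 / N, so total throughput stays constant no matter how many
# processes you start.

X = 1.0                      # speed of one process when 4 processes share 4 cores
for n in (4, 8, 12):
    per_process = X * 4 / n
    print(f"{n:2d} processes x {per_process:.2f} = total throughput {n * per_process:.2f}")
# 4 x 1.00 = 4.00, 8 x 0.50 = 4.00, 12 x 0.33 = 4.00 -> no gain from the extra agents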

Regarding the distribution of tasks among agents: if during one generation some agents finish faster, don't they take the remaining runs from the weaker agents?

No, once tasks have been allocated to a particular agent, they will not be reallocated to agents that have become free during genetic optimisation. I can't say for sure (I hardly use it in practice), but you can stop the optimisation and start it again at any moment, which leads to a new redistribution of tasks. With genetic optimisation there is no point in doing this, as the information about past generations would be lost.

And honestly I didn't think that optimising subsequent generations would be limited to one run per local agent. As a result, the criterion didn't improve after generation 0.

It does happen that several generations fail to beat the record criterion value obtained in an earlier generation. There is also degeneration of generations, when the number of distinct individuals in new generations drops sharply until only one individual is left, which simply reproduces itself; that is the end of the optimisation. But the reasons for such behaviour lie not in the optimiser, but in the Expert Advisor and its optimised parameters.
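
For illustration only, here is a toy genetic loop in Python (not the tester's actual algorithm; the criterion and the deliberately tiny parameter space are invented) that shows what such degeneration looks like:

# A toy genetic loop: with a tiny discrete parameter space, truncation
# selection and no mutation, the number of distinct individuals shrinks
# until the population is (usually) a single repeated clone.
import random

random.seed(1)

def criterion(ind):
    a, b = ind                      # two optimised integer parameters
    return -(a - 3) ** 2 - (b - 7) ** 2

# only 5 x 5 = 25 possible parameter combinations -- deliberately degenerate
population = [(random.randint(1, 5), random.randint(5, 9)) for _ in range(32)]

for gen in range(20):
    distinct = len(set(population))
    best = max(map(criterion, population))
    print(f"generation {gen}: {distinct} distinct individuals, best criterion {best}")
    if distinct == 1:
        print("degenerated to one individual that only reproduces itself")
        break
    population.sort(key=criterion, reverse=True)
    parents = population[:16]                      # keep the better half
    population = [(random.choice(parents)[0],      # child takes a from one parent
                   random.choice(parents)[1])      # and b from another
                  for _ in range(32)]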

 
Yuriy Bykov #:
I didn't expect anyone to set things up that way.

It's the recommendation of the developers themselves.

 

Thanks, somehow I missed that. Although it explicitly says "If the processor cores still have some power reserve...", so the gain can come only from that remaining reserve, which may be quite small. But the optimiser has difficulties with distributing tasks evenly, as the picture in your first post illustrates: part of the time some agents sit idle waiting for the others. You can only find out which setup is faster by experimenting and measuring the time. I have found that optimisation generally completes faster if I remove the slowest agents, which make up about a fifth of the total number of available agents.

 
Frankly speaking, I don't understand what prevents the remaining passes from being automatically reallocated when an agent becomes free, even with genetic optimisation (the agent's speed could even be taken into account). Could the developers of the tester explain this?
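
For what it's worth, the kind of reallocation meant here is plain greedy list scheduling; a toy Python sketch of it (invented PR values and pass counts, nothing to do with the tester's internals) follows.

# Toy sketch of "whichever agent frees up first takes the next remaining pass".
import heapq

def makespan(prs, total_passes, pass_cost_at_pr100=1.0):
    heap = [(0.0, i) for i in range(len(prs))]   # (time the agent becomes free, agent index)
    heapq.heapify(heap)
    for _ in range(total_passes):
        t, i = heapq.heappop(heap)               # the agent that is free earliest...
        t += pass_cost_at_pr100 * 100.0 / prs[i] # ...runs one more pass at its own speed
        heapq.heappush(heap, (t, i))
    return max(t for t, _ in heap)               # done when the last busy agent finishes

fast_only = [190, 185, 180, 175, 170, 165, 160, 155, 150, 145, 140, 135]
mixed     = fast_only + [40, 35, 30]             # plus a slow fifth of the farm
print("12 fast agents:          ", round(makespan(fast_only, 2000), 1))
print("12 fast + 3 slow agents: ", round(makespan(mixed, 2000), 1))
# With this scheme a fast agent never waits while passes remain: as soon as it
# frees up, it simply takes the next one, so adding slow agents mostly just adds
# capacity (at worst the very last pass lands on a slow agent).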
Yuriy Bykov #:
The reasons for such behaviour lie not in the optimiser, but in the Expert Advisor and its optimised parameters.

It is unlikely. I ran it several times on different EAs, and every time the result stopped improving after generation 0, while the number of passes per generation was no more than the number of local agents (it should be orders of magnitude larger). Although the first time I somehow managed to run a normal genetic optimisation with a good, growing result over several generations. And after that, nothing...