All (not yet) about Strategy Tester, Optimization and Cloud - page 16

 
Ivan Titov #:

So genetic optimisation can't be trusted. I ran it once and found a more or less decent result. Then I ran it several more times with other parameter ranges, which necessarily included the result of the first optimisation.

How many possible parameter combinations (shown in the last line of the "Parameters" tab) are there before you run the optimisation, from which the genetic algorithm will select a small subset for the first generation?

Have you checked whether changing each parameter affects the result of a single pass? If some parameters either do not affect the result, or affect it only very weakly, or affect it only in a small region of the specified range of values, or affect it only at certain combinations of the other parameters' values, this may lead to degeneration of the population (= getting stuck in a local extremum).

Another question: why does the agents manager let me add only exactly half as many network agents as there are local agents? There is a computer on my network that runs only for testing - nobody works on it. It turns out that half of its power sits idle.

The answer is simple: your processor has 4 physical cores and 8 logical cores. Roughly speaking, a pair of logical cores shares one physical core. Local agents are created according to the number of logical cores, while network agents are created by default according to the number of physical cores. Network agents will therefore run about twice as fast as local agents (provided, of course, that local and network agents are not running at the same time). In the end, it all comes down to the resources of the available processor: in both cases it is used almost to full capacity.

A couple of years ago I was choosing how to organise testing on several computers and which of them to make the main one. It turned out that using eight local agents had no advantage over using the same computer as a provider of four network agents.
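(For illustration, a minimal MQL5 script sketch: TERMINAL_CPU_CORES reports the core count the terminal sees - typically the logical cores - and the halving to get the default network-agent count is an assumption based on the 2-threads-per-core layout described above.)

// Sketch: compare the visible core count with the default network-agent count.
void OnStart()
  {
   int cores = (int)TerminalInfoInteger(TERMINAL_CPU_CORES); // e.g. 8 logical
   Print("CPU cores visible to the terminal: ", cores);
   Print("Default network agents (assumed = physical cores): ", cores / 2);
  }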

If you want to use network agents only for yourself and not sell their capacity through the MQL5 Cloud Network, then, if I remember correctly, you can add as many network agent services as you want: 8, 12, 16.... You can sell network agents' capacity only up to the number of physical cores of the processor.

 
Yuriy Bykov #:

How many possible parameter combinations (shown in the last line of the "Parameters" tab) are there before you run the optimisation, from which the genetic algorithm will select a small subset for the first generation?

Have you checked whether changing each parameter affects the result of a single pass? If some parameters either do not affect the result, or affect it only very weakly, or affect it only in a small region of the specified range of values, or affect it only at certain combinations of the other parameters' values, this may lead to degeneration of the population (= getting stuck in a local extremum).

You cannot equate population degeneration with getting stuck in a local extremum. Population degeneration means the absence of new solutions - the whole population becomes clogged with solutions whose FF (fitness function) values are nearly identical. Genetics does not suffer from this: mutation periodically creates new solutions that shake up the population.
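(As an illustration of that role of mutation, a minimal sketch of a GA-style mutation operator - hypothetical names, not the tester's actual implementation: with a small probability each gene is re-drawn from its range, injecting fresh solutions into a converged population.)

// Sketch of a mutation operator (illustrative, not the tester's code).
void Mutate(double &genes[], const double &lo[], const double &hi[],
            const double rate = 0.05)
  {
   for(int i = 0; i < ArraySize(genes); i++)
      if(MathRand() / 32767.0 < rate)  // mutate roughly 5% of genes
         genes[i] = lo[i] + (hi[i] - lo[i]) * MathRand() / 32767.0;
  }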

Another thing is that there may indeed be a problem with getting stuck in a local extremum. What can help here is artificial smoothing of the FF by various techniques, redesigning the FF formula altogether, or using self-written optimisation algorithms - MQL5 allows all of this.
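(For instance, redesigning the FF in MQL5 is usually done through the standard OnTester() handler, whose return value serves as the "Custom max" optimisation criterion. The concrete weighting below is an arbitrary assumption - a sketch, not a recommendation.)

// Sketch of a custom optimisation criterion via the standard OnTester() hook.
double OnTester()
  {
   double profit = TesterStatistics(STAT_PROFIT);           // net profit
   double ddPct  = TesterStatistics(STAT_EQUITYDD_PERCENT); // max equity DD, %
   double trades = TesterStatistics(STAT_TRADES);           // number of trades
   if(trades < 30.0)
      return 0.0;                  // discard statistically weak passes
   return profit / (1.0 + ddPct);  // reward profit, penalise drawdown
  }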

 
A cure (not a panacea) for getting stuck may be to reduce the parameter step - it lowers the discreteness of the problem at the cost of enlarging the search space (so not a free cure).
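(To see the cost: the combination count shown in the last line of the "Parameters" tab is the product of (stop - start)/step + 1 over all optimised parameters, so halving every step multiplies it by 2^n. A sketch with made-up ranges:)

// Sketch: how the full-search pass count is built from made-up ranges.
void OnStart()
  {
   double p[3][3] = {{10, 100, 5}, {0.1, 1.0, 0.1}, {1, 20, 1}}; // {start, stop, step}
   long total = 1;
   for(int i = 0; i < 3; i++)
      total *= (long)MathRound((p[i][1] - p[i][0]) / p[i][2]) + 1;
   Print("Full-search passes: ", total); // 19 * 10 * 20 = 3800
   // Halving all three steps multiplies the count by about 2^3 = 8.
  }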
 
I realise that there are such problems. But I never got an explanation of why there are many passes in the zero generation (about 300 in my case), while in subsequent generations there are consistently no more than the number of available agents (7 in my case). Wouldn't it be more correct to cross the best of the 0th generation with another 300, and so on?
 
Yuriy Bykov #:

How many possible parameter combinations (shown in the last line of the "Parameters" tab) are there before you run the optimisation, from which the genetic algorithm will select a small subset for the first generation?

 

Hundreds of thousands.

 
Ivan Titov #:

Hundreds of thousands.


Then it's unclear why this is happening: there are plenty of parameter combinations to hand out to the agents.
 
Ivan Titov #:

Hundreds of thousands.

Try reducing the parameter ranges or increasing the parameter steps so that the number of combinations becomes smaller (~1000), and run a full search. In this case, will all passes give different results?
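(One way to check this programmatically - a sketch that assumes the EA publishes its criterion value with FrameAdd() in OnTester(), so the frame-collecting instance can count distinct results across all passes:)

// Sketch: count distinct pass results after a full search (requires FrameAdd()
// in OnTester() and an OnTesterInit() handler in the same EA).
void OnTesterDeinit()
  {
   ulong pass; string name; long id; double value; double data[];
   double results[];
   int n = 0;
   FrameFirst();                            // rewind the frame pointer
   while(FrameNext(pass, name, id, value, data))
     {
      ArrayResize(results, n + 1);
      results[n++] = value;
     }
   ArraySort(results);
   int distinct = (n > 0) ? 1 : 0;
   for(int i = 1; i < n; i++)
      if(results[i] != results[i - 1])
         distinct++;
   PrintFormat("%d passes, %d distinct results", n, distinct);
  }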

And by the way, what optimisation criterion are you using - a custom one or one of the standard ones?
 
Yuriy Bykov #:
what optimisation criterion do you use?

Custom.

 
Yuriy Bykov #:
Try reducing the parameter ranges or increasing the parameter steps

I tried increasing the ranges and the step sizes. I think this will solve the problem of getting stuck: then I will additionally run each found local extremum with smaller ranges and smaller steps (and probably several times, to defeat degeneration).
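(A hypothetical rule of thumb for that refinement stage - the bracketing factor and the 10x step reduction are arbitrary assumptions:)

// Sketch: derive a narrow range and a finer step around a coarse optimum.
void RefineRange(const double best, const double coarseStep,
                 double &start, double &stop, double &step)
  {
   start = best - 2.0 * coarseStep; // bracket the coarse optimum
   stop  = best + 2.0 * coarseStep;
   step  = coarseStep / 10.0;       // 10x finer grid: 41 values per parameter
  }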