Discussing the article: "Role of random number generator quality in the efficiency of optimization algorithms" - page 12

 
Andrey Dik #:
Is there really no element in the scheme that is responsible for, or at least affects, robustness? What would that element be?

There can be many ways of generalisation, and it is desirable to try them all in order to identify the most effective one. From this point of view, the flowchart would be extended differently depending on the approach.

For example, there may be a "Postprocessing" block (for clustering) after the "Result" block (which should be renamed "Results", because clustering needs all results, not just one).

Or some "Controller" between the "Fitness function" and "Optimisation Algorithm" blocks, which can perform all sorts of functions - in particular, add random noise.
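A minimal sketch of such a "Controller", assuming a hypothetical wrapper around the fitness function (FF(), NoiseLevel and ControllerFF() are illustrative names, not from the article):

double NoiseLevel = 0.01;                       // hypothetical noise amplitude

// Stand-in fitness function so the sketch is self-contained;
// in the real scheme this is the "Fitness function" block.
double FF(const double &params[])
  {
   return(-params[0] * params[0]);              // toy FF with a peak at 0
  }

// The "Controller": distorts every estimate the AO sees
// with uniform noise in [-NoiseLevel, +NoiseLevel].
double ControllerFF(const double &params[])
  {
   double noise = NoiseLevel * (2.0 * MathRand() / 32767.0 - 1.0);
   return(FF(params) + noise);
  }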

Also, the diagram clearly lacks "Input Data". That block could then be split into IS/OOS (in-sample/out-of-sample) parts, with the other blocks applied to them in parallel, checking as we go.

Finally, there is the well-known Walk-Forward Optimisation approach (though it is specific to time series, not optimisation in general). For it, the current scheme is only one stage of optimisation, and several such stages should in fact be prepared by some external "Manager" block - for example, several 12-month optimisations with a one-month shift. Then a new dimension for research opens up: figuratively speaking, we watch how the FF "breathes" over time and learn to predict its next shape (how? - a new series of articles would be needed here). Or, conversely, we see that it changes so unpredictably that the TS (trading system) and the financial instrument are clearly not suited to each other... or we need to reduce the forward step to one week to make it work. This is not only a question of stability, but also of the duration of that stability.
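A minimal sketch of such a "Manager", under the stated assumptions (RunOptimisation() is a hypothetical hook for one stage of the current scheme; a month is approximated as 30 days):

// Stand-in for one full pass of the current scheme
// (optimisation on [isStart, isEnd), forward check on [isEnd, oosEnd)).
void RunOptimisation(datetime isStart, datetime isEnd, datetime oosEnd)
  {
   // ... one stage of the flowchart runs here ...
  }

// The "Manager": 12-month in-sample windows shifted by one month,
// each followed by a 1-month forward segment.
void WalkForwardManager(const datetime first, const datetime last)
  {
   const int MONTH = 30 * 24 * 60 * 60;          // a rough month in seconds
   for(datetime start = first; start + 13 * MONTH <= last; start += MONTH)
      RunOptimisation(start, start + 12 * MONTH, start + 13 * MONTH);
  }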

 
Stanislav Korotky #:

There can be many ways of generalisation, and it is desirable to try them all in order to identify the most effective one. From this point of view, the flowchart would be extended differently depending on the approach.

For example, there may be a "Postprocessing" block (for clustering) after the "Result" block (which should be renamed "Results", because clustering needs all results, not just one).

Or some "Controller" between the "Fitness function" and "Optimisation Algorithm" blocks, which can perform all sorts of functions - in particular, add random noise.

Also, the diagram clearly lacks "Input Data". That block could then be split into IS/OOS (in-sample/out-of-sample) parts, with the other blocks applied to them in parallel, checking as we go.

Finally, there is the well-known Walk-Forward Optimisation approach (though it is specific to time series, not optimisation in general). For it, the current scheme is only one stage of optimisation, and several such stages should in fact be prepared by some external "Manager" block - for example, several 12-month optimisations with a one-month shift. Then a new dimension for research opens up: figuratively speaking, we watch how the FF "breathes" over time and learn to predict its next shape (how? - a new series of articles would be needed here). Or, conversely, we see that it changes so unpredictably that the TS and the financial instrument are clearly not suited to each other... or we need to reduce the forward step to one week to make it work. This is not only a question of stability, but also of the duration of that stability.

What you have named has a right to exist, and in many cases should exist, but it amounts to "symptomatic treatment": it does not address the reasons for the robustness or non-robustness of the results obtained, and it is an external measure (like making a diagnosis from periodic measurements of the patient's temperature - neither bad nor good in itself, it just may not give an objective anamnesis).

In fact, everything that really affects the robustness of the results is already in this scheme.

 
Andrey Dik #:

What you have named has a right to exist, and in many cases should exist, but it amounts to "symptomatic treatment": it does not address the reasons for the robustness or non-robustness of the results obtained, and it is an external measure (like making a diagnosis from periodic measurements of the patient's temperature - neither bad nor good in itself, it just may not give an objective anamnesis).

In fact, everything that really affects the robustness of the results is already in this scheme.

Then we await explanations and demonstrations in tests.

 
Stanislav Korotky #:

Then we await explanations and demonstrations in tests.

OK. Additionally, I would like to hear the vision of Saber and Andrei, as participants in the robustness discussion.

If I were to publish a working method for obtaining robust results from systems on non-stationary processes, I would immediately receive a Nobel Prize and a monument carved from a solid piece of malachite - very attractive, but hardly feasible. But at the very least, it is already useful to understand what can influence the robustness of results and what cannot.

 
Tough.) Our motto is invincible.
 
Stanislav Korotky #:
The question is what to do next with this collection of parameter sets at the "hill tops". Earlier we had one global maximum as the solution of the optimisation algorithm; let's say now we have 50 of them. But they do not come close to solving the stability problem.

Let's say the TS can catch a relatively stable pattern at certain settings. At the same time, the global maximum of OnTester does not fall within these settings, and we do not know which FF to choose to hit the desired bull's-eye.


If some pattern in prices can be reproduced by the TS, then the sought parameter set will correspond to some local peak of the FF. Some high peaks will correspond to non-systematic white swans that occurred in the sample. Because of these, lower but potentially stable input sets are missed by classical AOs (optimisation algorithms).


A simple statement that is easily tested in practice. Take almost any TS with a large number of non-overlapping positions - for example, 10 such positions per day. Find the global maximum InputsMax1 for the whole of 2023 and InputsMax2 for the summer of 2023. Obviously, when optimising on the summer of 2023, no AO will come anywhere near InputsMax1. But you will find that among the local peaks for summer 2023 there is one that is very close to InputsMax1.


I return to the question. The 50 peaks found should be run on OOS. If a conditional InputsMax1 is found among them, we dig further; if not, we throw it away (change the symbol).
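A minimal sketch of that OOS pass, with hypothetical names (Vertex, EvaluateOOS() and Tolerance are illustrative, not from the post):

#define VERTEX_COUNT 50

struct Vertex
  {
   double inputs[30];                           // parameter set at a local peak
   double isScore;                              // its in-sample fitness
  };

double Tolerance = 0.5;                         // keep peaks holding >= 50% of IS score on OOS

// Stand-in forward test: runs one parameter set on the OOS segment.
double EvaluateOOS(const Vertex &v)
  {
   return(0.0);
  }

// Re-run all found peaks on OOS; return how many survive.
// Zero survivors => throw the set away (change the symbol).
int FilterVertices(const Vertex &v[])
  {
   int kept = 0;
   for(int i = 0; i < VERTEX_COUNT; i++)
      if(EvaluateOOS(v[i]) >= Tolerance * v[i].isScore)
         kept++;
   return(kept);
  }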

 
This is the first proof that nobody needs the global maximum, nor super-accurate methods of finding it. For now this is check; checkmate will come later, although a stalemate situation has already appeared in the ML topic.

Although there the opponent confuses the king with the queen.
 

There is certainly nothing in the scheme that affects robustness.

There is only what affects the fit. The better the scheme, the better the fit.

If we are talking about the FF, it has no effect on robustness.

 
I don't understand why you talk about hills and peaks as something static. The market is not static!
After all, it is quite obvious that if you obtain a new FF surface every day - for example, when "optimising" two parameters - and then glue the resulting frames together, you will get something like this:



So what if you even caught the right hill? It is a hill of the past, not of the future. What is the point?
So I agree with Dmitrievsky: fitting to history is still fitting, even if you call it optimisation by the Macaque, Crane or Octopus method.

 

Here is an illustrative example.
We take a strategy of ten harmonics, which must be summed to obtain an extrapolation line used to decide on opening trades.
Each harmonic has three parameters: amplitude, frequency and phase shift. In total, 10*3 = 30 required parameters.
Of course, you could calculate them with the Fast Fourier Transform in a few milliseconds, but we are not looking for easy ways: we will pick the best combination of these 30 parameters using brute-force optimisation, a genetic algorithm and Dik's articles.
Hopefully, on a good supercomputer, after a genetic search we will obtain the right combination out of 30*10000 = 300000 combinations of all 30 parameters in a couple of days, and then re-optimise this strategy every weekend.
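Before the chart, a minimal sketch of the model being optimised, assuming hypothetical arrays Amp[], Freq[] and Phase[] that hold the 30 parameters:

// Value of the extrapolation line at bar t: the sum of ten harmonics,
// each defined by amplitude, frequency and phase shift (30 parameters).
double HarmonicSum(const double &Amp[], const double &Freq[],
                   const double &Phase[], const double t)
  {
   double sum = 0.0;
   for(int i = 0; i < 10; i++)
      sum += Amp[i] * MathSin(2.0 * M_PI * Freq[i] * t + Phase[i]);
   return(sum);
  }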
And here is what we will get on the weekly chart:



As you can see, the red extrapolation line will not help us much in trading, despite the hundreds of kilowatts of energy burned during optimisation :))) Because the market is always changing.

The moral of this fable: do not search for parameters, but calculate them inside the TS in the process of trading, so as not to burn energy needlessly. To be VERY brief:

There is always a "fast Fourier transform" for any parameter.
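A minimal sketch of that idea for the harmonic model above - a plain DFT that reads off each harmonic's parameters directly, with no optimisation pass (Harmonic() is an illustrative helper, not the code in the attached 2Fourier.mq5):

// Read off amplitude, frequency and phase of the k-th harmonic
// from the last N closes with a plain DFT - no optimisation pass.
void Harmonic(const double &close[], const int N, const int k,
              double &amp, double &freq, double &phase)
  {
   double re = 0.0, im = 0.0;
   for(int t = 0; t < N; t++)
     {
      double a = 2.0 * M_PI * k * t / N;
      re += close[t] * MathCos(a);
      im -= close[t] * MathSin(a);
     }
   amp   = 2.0 * MathSqrt(re * re + im * im) / N;
   freq  = (double)k / N;                       // cycles per bar
   phase = MathArctan2(im, re);                 // phase shift
  }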



Files:
2Fourier.mq5  16 kb