Discussing the article: "Role of random number generator quality in the efficiency of optimization algorithms" - page 15

 

Now that the dog is wagging its tail (empiricism driven by optimisation) and not the other way around, we can consider any optimisation algorithm for a conditionally stationary process.

In this case we can use the terminology of finding global and local minima.

But it does not apply to optimising unknowns and fitting to abstract minima or maxima.

But even in this case, an optimisation algorithm (AO) tends towards overfitting (curve-fitting), so validation techniques from learning theory are used to assess the robustness of the chosen parameters.
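A minimal sketch of the validation idea mentioned above: optimise parameters only on an in-sample segment, then check whether they hold up out-of-sample. The objective, the data, and the robustness threshold here are all hypothetical, purely for illustration.

```python
import random

def evaluate(params, data):
    # Hypothetical fitness: mean squared deviation of the data from params["m"]
    return sum((x - params["m"]) ** 2 for x in data) / len(data)

# Synthetic "history", split into an optimisation and a validation segment
rng = random.Random(1)
history = [rng.gauss(0.0, 1.0) for _ in range(1000)]
train, valid = history[:700], history[700:]

# Optimise on the training segment only
candidates = [{"m": m / 10.0} for m in range(-30, 31)]
best = min(candidates, key=lambda p: evaluate(p, train))

# Accept the parameters only if they perform comparably out-of-sample
in_sample = evaluate(best, train)
out_of_sample = evaluate(best, valid)
robust = out_of_sample < 1.5 * in_sample  # hypothetical robustness criterion
```

The 1.5 factor is an arbitrary placeholder; any real criterion would come from the validation methodology being used (walk-forward, cross-validation, etc.).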

 
Yuriy Bykov #:
Unfortunately, it became even less clear what it was about.
fxsaber #:

Obtuse language + forum format = misunderstanding with high probability.

Those who wish to participate in a constructive discussion of the problem of searching for robust solutions can write to me in private messages. We will organise a private chat with participants by invitation.

And participation in conversations that do not imply constructive dialogue is not on my current task list.

 
If all this were written down somewhere, we wouldn't have to stuff people's heads with maxima, plateaus and other nonsense that has no meaning outside the context of the stationarity of the process.
 
Even when the conditions are met, brute-force Monte Carlo works about as well as the whole zoo of algorithms. That is, just pick random parameter values n times and validate.
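The brute-force Monte Carlo baseline described above can be sketched as a plain random search: sample parameter vectors uniformly from their bounds n times and keep the best one. The objective function and bounds below are hypothetical stand-ins, not anything from the article.

```python
import random

def random_search(objective, bounds, n_trials, seed=0):
    """Brute-force Monte Carlo search: sample parameters uniformly
    at random n_trials times and keep the best candidate found."""
    rng = random.Random(seed)
    best_params, best_value = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in bounds.items()}
        value = objective(params)
        if value < best_value:
            best_params, best_value = params, value
    return best_params, best_value

# Hypothetical objective for illustration: a simple bowl with minimum at (1, -2)
obj = lambda p: (p["x"] - 1.0) ** 2 + (p["y"] + 2.0) ** 2
best, val = random_search(obj, {"x": (-5, 5), "y": (-5, 5)}, n_trials=10000)
```

In the context of the post, `objective` would be the trading-strategy fitness on the optimisation period, and the candidates it returns would still have to pass the validation step before being trusted.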