Discussing the article: "Population optimization algorithms: Simulated Annealing (SA) algorithm. Part I"

 

Check out the new article: Population optimization algorithms: Simulated Annealing (SA) algorithm. Part I.

The Simulated Annealing algorithm is a metaheuristic inspired by the metal annealing process. In the article, we will conduct a thorough analysis of the algorithm and debunk a number of common beliefs and myths surrounding this widely known optimization method. The second part of the article will consider the custom Simulated Isotropic Annealing (SIA) algorithm.

The Simulated Annealing algorithm was developed by Scott Kirkpatrick, Charles Daniel Gelatt and Mario Vecchi in 1983. While studying the properties of liquids and solids at high temperatures, they found that a metal passes into a liquid state in which its particles are distributed randomly, and that the state of minimum energy is reached only if the initial temperature is sufficiently high and the cooling is sufficiently slow. If this condition is not met, the material ends up in a metastable state with non-minimum energy - this is called quenching (hardening), i.e. sharp cooling of the material. In this case, the atomic structure has no symmetry (an anisotropic state, with uneven properties of the material inside the crystal lattice).

During slow annealing, the material also turns into a solid state, but with its atoms ordered and symmetric, so it was proposed to use this process as the basis for an optimization algorithm capable of finding the global optimum in complex problems. The algorithm has also been proposed as a method for solving combinatorial optimization problems.

Thus, the main idea of the algorithm is based on a mathematical analogue of the metal annealing process. During the annealing process, in order to evenly distribute its internal energy, the metal is heated to a high temperature and then slowly cooled, allowing the metal molecules to move and order into more stable states, while internal stresses in the metal are relieved and intercrystalline defects are removed. The term "annealing" is also associated with thermodynamic free energy, which is an attribute of the material and depends on its state.

The simulated annealing optimization algorithm uses a similar process. It applies operations analogous to heating and cooling the material. The algorithm starts from an initial solution, which can be random or obtained from previous iterations. It then applies operations that change the state of the solution - random or controlled - to obtain a new state, even if it is worse than the current one. The probability of accepting a worse solution is governed by a "cooling" function, which lowers that probability over time, allowing the algorithm to temporarily "jump out" of local optima and look for better solutions elsewhere in the search space.
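To make the description above more concrete, here is a minimal sketch of the acceptance and cooling logic in MQL5. It is not code from the article: the toy objective, the neighbour step and the cooling rate are assumptions chosen only for illustration.

   //--- toy objective to be minimized (placeholder, assumed for illustration)
   double Objective(const double x) { return (x - 3.0) * (x - 3.0); }

   void SimulatedAnnealingSketch()
     {
      double x     = -10.0 + 20.0 * MathRand() / 32767.0; // random initial solution in [-10, 10]
      double fx    = Objective(x);
      double T     = 100.0;                               // initial "temperature" (assumed)
      double alpha = 0.99;                                // cooling rate (assumed)

      while(T > 0.001)
        {
         // neighbour state: random perturbation scaled by the current temperature
         double xNew  = x + (2.0 * MathRand() / 32767.0 - 1.0) * T * 0.1;
         double fNew  = Objective(xNew);
         double delta = fNew - fx;

         // always accept an improvement; accept a worse state with probability exp(-delta/T)
         if(delta < 0.0 || MathRand() / 32767.0 < MathExp(-delta / T))
           {
            x  = xNew;
            fx = fNew;
           }
         T *= alpha;                                      // cool down
        }
      PrintFormat("best found: x=%.4f  f(x)=%.6f", x, fx);
     }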

Author: Andrey Dik

 
A good reference book on optimisation algorithms, thanks!
 
fxsaber #:
A good reference book on optimisation algorithms, thank you!

Thank you.

 
Bro phenomenal content, I love how you express the algorithm in such a compact manner that's easy to read at the same time.

Quick question I want to ask you related to the test objective function. How can we create an objective function that will return the historical profit or loss of our expert advisor under its current settings, that way we optimise the expert parameters for profit. I hope I expressed the question clearly. 
 
Gamuchirai Zororo Ndawana #:
Bro phenomenal content, I love how you express the algorithm in such a compact manner that's easy to read at the same time.

Quick question I want to ask you related to the test objective function. How can we create an objective function that will return the historical profit or loss of our expert advisor under its current settings, that way we optimise the expert parameters for profit. I hope I expressed the question clearly. 

If you don't mind digging into the somewhat cryptic source code from fxsaber, take a look at this implementation published in fxsaber's blog (may require a language translation).

Optimization - самостоятельная оптимизация торгового советника ("Optimization - optimizing a trading EA on your own")
  • 2024.03.26
  • www.mql5.com
After the appearance of my own tick tester, the logical continuation was to apply it to a number of optimization algorithms. In other words, to learn how to optimize trading Expert Advisors on my own - without...
 
Gamuchirai Zororo Ndawana #:
Phenomenal content, I love how you lay out the algorithm in such a compact manner that is easy to read at the same time.

I would like to ask you a question related to test objective function. How can we create an objective function that will return the historical profit or loss of our EA at its current settings, so we optimise the EA parameters for profit. I hope I have expressed the question clearly.

Thank you for your kind words, I'm glad you like the article. I hope @Stanislav Korotky's comment was helpful.

TesterStatistics() may be useful for compiling custom fitness functions for use in OnTester().
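For example, a minimal sketch of a custom criterion assembled from tester statistics (the particular weighting of profit against drawdown and the minimum-trades threshold are arbitrary assumptions, not recommendations):

   //--- custom optimization criterion returned to the strategy tester after each pass
   double OnTester()
     {
      double profit = TesterStatistics(STAT_PROFIT);     // net profit of the pass
      double dd     = TesterStatistics(STAT_EQUITY_DD);  // maximal equity drawdown, money
      double trades = TesterStatistics(STAT_TRADES);     // number of trades

      if(trades < 10)            // too few trades - treat the pass as worthless (assumed threshold)
         return(0.0);

      return(profit - 0.5 * dd); // example trade-off between profit and drawdown
     }

With such an OnTester() handler in the EA, selecting "Custom max" as the optimization criterion in the strategy tester makes the optimizer maximize the returned value.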

 

is there an example of how to implement these algorithms in an EA?

Thank you

 
SergioTForex #:

is there an example of how to implement these algorithms in an EA?

Thanks

https://www.mql5.com/ru/articles/14183
Использование алгоритмов оптимизации для настройки параметров советника "на лету" ("Using optimization algorithms to configure EA parameters on the fly")
  • www.mql5.com
The article discusses the practical aspects of using optimization algorithms to find the best EA parameters "on the fly", as well as the virtualization of trading operations and of the EA's logic. The article can be used as a kind of guide for integrating optimization algorithms into an Expert Advisor.
 

As it is correctly pointed out, the main advantage of annealing is simplicity of implementation. Therefore, the population modification of this algorithm is just begging for parallelisation.

I remembered the author's statement about the ease of writing parallel algorithms in MQL5, but I haven't seen it confirmed in his articles yet. Correct me if I am wrong.

P.S. I mean normal parallelisation like the one implemented in the standard optimiser, not tricks with launching several program instances. And, of course, implementation by means of MQL5 without using external dlls.

 
Aleksey Nikolayev #:
And, of course, implementation by means of MQL5 without using external dlls.

If you need parallelisation at the code level, OpenCL is widely used - look at Gizlyk's articles on neural networks and his textbook.
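As a rough illustration of what code-level parallelisation looks like in practice (this is not code from those articles; the kernel, the toy objective and the population size are assumptions), the fitness of a whole batch of candidate solutions can be evaluated on an OpenCL device from a plain MQL5 script:

   void OnStart()
     {
      //--- toy kernel: one work item evaluates one candidate (float, to avoid fp64 requirements)
      string cl_src =
         "__kernel void fitness(__global const float *x, __global float *f) " +
         "{                                                                  " +
         "   int i = get_global_id(0);                                       " +
         "   float v = x[i];                                                 " +
         "   f[i] = -(v - 3.0f) * (v - 3.0f);                                " +
         "}";

      int ctx = CLContextCreate(CL_USE_ANY);                // any available OpenCL device
      int prg = CLProgramCreate(ctx, cl_src);
      int krn = CLKernelCreate(prg, "fitness");

      const int N = 1024;                                   // population size (assumed)
      float x[], f[];
      ArrayResize(x, N);
      ArrayResize(f, N);
      for(int i = 0; i < N; i++)
         x[i] = (float)(10.0 * i / N);                      // candidate solutions

      int buf_x = CLBufferCreate(ctx, N * sizeof(float), CL_MEM_READ_ONLY);
      int buf_f = CLBufferCreate(ctx, N * sizeof(float), CL_MEM_WRITE_ONLY);
      CLBufferWrite(buf_x, x);
      CLSetKernelArgMem(krn, 0, buf_x);
      CLSetKernelArgMem(krn, 1, buf_f);

      uint offs[1] = {0};
      uint work[1];
      work[0] = (uint)N;
      CLExecute(krn, 1, offs, work);                        // all candidates evaluated in parallel
      CLBufferRead(buf_f, f);                               // fitness values back on the host

      //--- here the optimization algorithm would pick the best candidates and form the next population

      CLBufferFree(buf_x); CLBufferFree(buf_f);
      CLKernelFree(krn); CLProgramFree(prg); CLContextFree(ctx);
     }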

If you need optimisation and parallelisation at the level of the whole program, as in the standard optimiser, you can look at the example of Booster, which parallelises EA instances across agents (pure MQL5, without external dlls).

Creating threads for separate functions, as in C#, and other forms of code parallelisation are not supported in MQL5.

Нейросети в алготрейдинге — практическое пособие по использованию машинного обучения в алгоритмическом трейдинге ("Neural networks in algo trading - a practical guide to using machine learning in algorithmic trading")
  • www.mql5.com
In the era of digital technologies and artificial intelligence, algorithmic trading is transforming the financial markets, offering new strategies for...
 
Andrey Dik #:
Creating threads for separate functions, as in C#, and other forms of code parallelisation are not supported in MQL5.
Exactly.
Andrey Dik #:
application of OpenCL

Extremely inconvenient technology, both for coding and for subsequent use in practice. This is confirmed, for example, by the fact that the standard optimiser does not use it.

Andrey Dik #:
look at the example of Booster

This approach can hardly be applied when you need to perform multiple optimisations (an indefinite number of times, possibly with a different set of parameters each time). For example, this may be the case for ensembles of ML models.