Discussing the article: "Population optimization algorithms: Changing shape, shifting probability distributions and testing on Smart Cephalopod (SC)" - page 2

 
fxsaber #:
…uniform distribution.

Thanks.

In the articles I try, where possible, to convey the basic meaning of each strategy (those not designed for general-purpose problems I have to rework seriously, for example, strategies originally designed for the travelling salesman problem).

The combination of strategy and distribution is very important: some strategies work better with certain distributions, and other strategies with others.

 
fxsaber #:
…uniform.


Thanks to this series of articles it became clear that results can vary greatly depending not only on the search strategy, but also on its input parameter values, and on the choice of distributions as well.

How to find the optimal one for your task is not entirely clear, because you end up needing to optimise the optimiser itself.

For the tests I deliberately chose three test functions with completely different properties, to cover as wide a range of hypothetical tasks as possible. I chose the algorithm settings carefully to get the best possible aggregate result. It is, of course, possible to choose settings that give better results on one test function, but then the results on the other functions fall, and the total score falls with them. That is why you can safely use the default settings I have set for the algorithms; they are the best for each one.

By the way, in the beginning I did exactly that: I optimised the algorithm settings with another algorithm; later, with experience, I learned to select the best parameters quickly by hand. The table is essentially a fresh squeeze, all the juice pressed out of the algorithms. But, of course, some pulp always remains that can be squeezed further if you wish.

 
fxsaber #:
Please show me how to measure the quality of MT5 GA on test functions.

The standard GA is incredibly cool, but it has drawbacks: the length of the chromosome is limited, hence the limits on the step and on the number of parameters (the two trade off against each other: with a fixed chromosome length, increasing the number of parameters makes the step coarser, and a finer step leaves room for fewer parameters).
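The trade-off above can be illustrated with a small back-of-the-envelope calculation. This is only a sketch of the general principle of a fixed bit budget; the concrete bit counts and the even split across parameters are my own assumptions, not MT5's actual internals:

```python
# Hypothetical illustration of a fixed chromosome-length budget:
# with total_bits shared evenly across num_params parameters, each
# parameter gets fewer bits as the parameter count grows, so the
# smallest representable step over its range grows coarser.
def min_step(total_bits: int, num_params: int, param_range: float) -> float:
    """Smallest step representable when total_bits are split evenly
    across num_params parameters, each spanning param_range."""
    bits_per_param = total_bits // num_params
    return param_range / (2 ** bits_per_param - 1)

# Same bit budget, more parameters -> coarser step:
print(min_step(64, 2, 1.0))   # 2 params, 32 bits each: extremely fine step
print(min_step(64, 16, 1.0))  # 16 params, only 4 bits each: step = 1/15
```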

That is why it is difficult to compare against the standard GA; it does its job perfectly. And if you need something more exotic, there is a series of articles on the topic. )))

One thing does not interfere with another, because in both cases our favourite MQL5 is used.

 
Amendments concerning the definition of momentum have been made to the article.
 

In real problems there is uncertainty and randomness, and this is exactly where probability distributions come into play. They make it possible to account for randomness and uncertainty in optimisation problems.

Probability distributions are also actively used in evolutionary and population algorithms. In these algorithms, the random generation of new states in the search space is modelled using appropriate probability distributions. This allows the parameter space to be explored and optimal solutions to be found, taking into account randomness and diversity in the population.

More sophisticated optimisation methods use probability distributions to model uncertainty and approximate complex functions. They can efficiently explore the parameter space and find optimal solutions given the randomness and noise in the data.
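As a minimal sketch of the candidate-generation step described above (this is not the article's code; the function names and the choice of a normal distribution are my own assumptions):

```python
import random

def new_candidate_uniform(lo: float, hi: float) -> float:
    # Uniform exploration: every point in [lo, hi] is equally likely.
    return random.uniform(lo, hi)

def new_candidate_normal(best: float, lo: float, hi: float,
                         sigma: float) -> float:
    # Biased exploration: points near the best coordinate found so far
    # are sampled more often; out-of-range draws are rejected and
    # redrawn so the result always stays inside [lo, hi].
    while True:
        x = random.gauss(best, sigma)
        if lo <= x <= hi:
            return x
```

With a small `sigma`, the normal-based generator concentrates attempts around the current best point, while the uniform one keeps spreading them over the whole range.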

I was trying to understand how you arrived at the idea of replacing the uniform probability with other probabilities by adding a bias.


Do I understand correctly that in some complex optimisation method you encountered the use of non-uniform probability and then decided to generalise and investigate?

How did you get to the bias?


I realise that it didn't happen by accident, and I feel a lot of things intuitively. It's just that my level of understanding is, to put it mildly, far from that. Right now it looks like some kind of magic. I realise that with my current ideas I would not have reached such a variant even by accident.

 
fxsaber #:
…uniform probability with other probabilities by adding a bias.

Am I right to understand that in some complex optimisation method you encountered the use of non-uniform probability and then decided to generalise and investigate?

How did you get to the bias?

I realise that it didn't happen by accident, and I feel a lot of things intuitively. It's just that my level of understanding is, to put it mildly, far from that. Right now it looks like some kind of magic. I realise that with my current ideas I would not have reached such a variant even by accident.

The idea of using distributions other than the uniform one came in 2011-2012, when it seemed logical to investigate the neighbourhood of known coordinates more carefully and to pay less attention to distant unknown regions.

Later I learnt that some other algorithms also use non-uniform distributions, but mostly the normal distribution is used.

I also encountered edge effects: an artificial accumulation of new values at the boundaries of the acceptable range. This is an unnecessary waste of precious attempts, and therefore of time and resources. After some time I realised that these artefacts arise precisely because the necessary shift of the distribution was not taken into account. I cannot speak for every algorithm in existence, but I have not come across such an approach anywhere before, at least where shifting the distribution within specified boundaries is concerned.
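The boundary artefact is easy to reproduce. The sketch below does not reproduce the author's distribution-shift technique; it only demonstrates the problem (naive clipping piles samples up on the boundary) and one simple alternative (rejection resampling), which are my own illustrative choices:

```python
import random

def clipped(center, sigma, lo, hi):
    # Naive approach: clip out-of-range draws to the boundary,
    # which concentrates probability mass exactly on lo or hi.
    return min(max(random.gauss(center, sigma), lo), hi)

def resampled(center, sigma, lo, hi):
    # Alternative: reject out-of-range draws and sample again,
    # so no artificial mass accumulates at the boundaries.
    while True:
        x = random.gauss(center, sigma)
        if lo <= x <= hi:
            return x

random.seed(0)
n = 10_000
# Centre the distribution near the lower boundary of [0, 1] and count
# how many samples land exactly on that boundary.
at_edge_clip = sum(clipped(0.05, 0.2, 0.0, 1.0) == 0.0 for _ in range(n))
at_edge_res = sum(resampled(0.05, 0.2, 0.0, 1.0) == 0.0 for _ in range(n))
print(at_edge_clip, at_edge_res)  # clipping piles up thousands of samples at 0.0
```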

If we talk about purposefully changing probabilities without shifting the distribution, the simplest example is roulette selection in genetic algorithms, where an individual is chosen for crossover at random, but with probability proportional to its fitness.
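The roulette-wheel (fitness-proportionate) selection mentioned above is a standard GA technique; a minimal sketch:

```python
import random

def roulette_select(population, fitnesses):
    # Spin the wheel: pick a point on [0, total fitness] and walk the
    # cumulative fitness until that point is reached, so each
    # individual's chance is proportional to its fitness.
    total = sum(fitnesses)
    pick = random.uniform(0.0, total)
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if pick <= cumulative:
            return individual
    return population[-1]  # guard against floating-point rounding

random.seed(42)
pop = ["a", "b", "c"]
fit = [1.0, 2.0, 7.0]
picks = [roulette_select(pop, fit) for _ in range(10_000)]
print(picks.count("c") / len(picks))  # roughly 0.7, matching c's fitness share
```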

In general, the conscious application of distribution bias opens new horizons in machine learning and other fields, to say nothing of optimisation. Distributions can be moulded from several distributions in any way and in any combination, and this is a genuinely powerful tool in its own right, apart from the search strategies themselves. That is why I thought it worthwhile to cover this topic separately.
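One hypothetical way to "mould" a distribution from several others, as a sketch of the combining idea (the component distributions and weights here are my own assumptions, not a recipe from the articles):

```python
import random

def mixture_sample(best, lo, hi, sigma=0.05, p_local=0.8):
    # With probability p_local, sample from a normal distribution
    # centred on the best-known coordinate (exploitation); otherwise
    # sample uniformly over the whole range (exploration).
    if random.random() < p_local:
        while True:  # resample into range to avoid boundary pile-up
            x = random.gauss(best, sigma)
            if lo <= x <= hi:
                return x
    return random.uniform(lo, hi)
```

Adjusting `p_local` and `sigma` reshapes the resulting mixture, trading local refinement against global search.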

Perhaps my articles do not follow a strict scientific narrative and are far from mathematical rigour, but I try to favour practical aspects over theoretical ones.


PS. To me, too, many things in optimisation that operate with random variables look like magic. It still seems incredible that anything can be found using random methods. I suspect this is an area of knowledge that will yet prove itself in AI research, since the thought processes of intelligent beings are, strangely enough, carried out by random processes.

 
Andrey Dik #:

I try to favour practical aspects over theoretical ones.

Thank you for the detailed answer. In my own endeavours it is similar: I too see the point of favouring the practical more.

That is why I am waiting for a wrapper to be able to apply these algorithms.