Machine learning in trading: theory, models, practice and algo-trading - page 2828

 
Andrey Dik #:
Yeah, I'll have a look at Adam sometime at my leisure, run some tests.
The articles are top-notch, I'm just not qualified enough to object to anything :)
 
Maxim Dmitrievsky #:
The articles are top-notch, I'm just not qualified enough to object to anything :)

thanks))))

then I see a need to also include the algorithms traditionally used with neural networks in the review.

 
Andrey Dik #:

In practice, this means that the network will be undertrained.

Well, that's a bit of a stretch.

there are different types of AO: local optimisation and global optimisation...

local is gradient methods, the same Adam, etc... global is genetics, etc...

networks are trained with local AO because it's fast, "there are a lot of weights".

and training with a global AO is simply not efficient...


And the main thing: if you train a normal neural net, which has on the order of a billion weights, with a global AO, firstly you will have to wait a long time, and secondly you still cannot guarantee in any way that you have found the global minimum....
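
For reference, a minimal sketch of the Adam update being referred to (assuming NumPy and a toy quadratic loss; the hyperparameters are just the usual defaults). One step costs O(n) in the number of weights and needs only the gradient at the current point, which is why this kind of local method scales to huge networks:

```python
# Minimal sketch of an Adam update (NumPy only, toy quadratic loss;
# the hyperparameter values are the common textbook defaults).
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad            # running mean of gradients
    v = b2 * v + (1 - b2) * grad ** 2       # running mean of squared gradients
    m_hat = m / (1 - b1 ** t)               # bias correction
    v_hat = v / (1 - b2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# toy problem: minimise ||w||^2, whose gradient is 2w
w = np.random.randn(1000)
m, v = np.zeros_like(w), np.zeros_like(w)
for t in range(1, 5001):
    w, m, v = adam_step(w, 2.0 * w, m, v, t)
print(float(np.sum(w ** 2)))                # close to 0 after enough steps
```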

So all this talk is pure profanation, a SUPER naive belief that the people who created deep learning don't know about global optimisation algorithms and their properties; it's so obvious that it's not even funny....


Learn to distinguish global optimisation algorithms from local optimisation algorithms; then there is also discrete optimisation, continuous optimisation, multi-criteria optimisation, etc....

And each of them has its own tasks; piling everything into one heap and testing it is profanation.

 
mytarmailS #:

Well, that's a bit of a stretch.

there are different types of AO: local optimisation and global optimisation...

local is gradient methods, the same Adam, etc... global is genetics, etc...

networks are trained with local AO because it's fast, "there are a lot of weights".

and training with a global AO is simply not efficient...


And the main thing: if you train a normal neural net, which has on the order of a billion weights, with a global AO, firstly you will have to wait a long time, and secondly you still cannot guarantee in any way that you have found the global minimum....

So all this talk is pure profanation, a SUPER naive belief that the people who created deep learning don't know about global optimisation algorithms and their properties; it's so obvious that it's not even funny....

That's horrible.

There is no division of algorithms into "local" and "global". If an algorithm gets stuck in one of the local extrema, that is a flaw, not a feature.

There are highly specialised comparisons of the AOs traditionally used for neural networks; you can search for them. Algorithms are usually applied to specific tasks, but all algorithms without exception can be compared in terms of convergence quality.

 
Andrey Dik #:

thanks))))

then I see a need to also include the algorithms traditionally used with neural networks in the review.

I read somewhere that if the error does not change much for several cycles, i.e. it is hovering around an extremum, then to check whether that extremum is local, a strong jump in the parameters is made to kick the search out of it. If it was a local one, the search will not return to it on subsequent jumps; if it was the global one, it will return. This can be repeated several times. In general, the space needs to be explored more widely.
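
A rough sketch of that heuristic in Python (the `local_opt` function, the kick size and the thresholds are all illustrative placeholders, not from any particular library):

```python
# Sketch of the "kick and see if you come back" check described above.
# local_opt is any local optimiser (gradient descent, Adam, ...); the
# kick size and number of kicks are illustrative values.
import numpy as np

def kick_and_check(w0, local_opt, loss, n_kicks=5, kick_scale=5.0, seed=0):
    rng = np.random.default_rng(seed)
    w_best = local_opt(w0)                      # settle into some extremum first
    returns = 0
    for _ in range(n_kicks):
        w_kicked = w_best + kick_scale * rng.standard_normal(w_best.shape)
        w_new = local_opt(w_kicked)             # re-optimise from the perturbed point
        if loss(w_new) < loss(w_best) - 1e-9:
            w_best = w_new                      # escaped to a deeper basin: the old one was only local
            returns = 0
        else:
            returns += 1                        # came back to (or could not beat) the same extremum
    # if every kick came back, the extremum is *probably* global (within this budget)
    return w_best, returns == n_kicks

# illustrative usage:
# w_final, probably_global = kick_and_check(w0, my_local_opt, my_loss)
```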
 
Andrey Dik #:

That's horrible.

There is no division of algorithms into "local" and "global". If an algorithm gets stuck in one of the local extrema, that is a flaw, not a feature.

Gradient descent algorithms are used everywhere in general, not just for neural networks, and they have been around for ages. Google it instead of asking childish questions, and learn how gradient descent overcomes different kinds of local-extremum traps. People have been working on exactly that for years.
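
As a toy illustration of one such mechanism (momentum; restarts, learning-rate schedules and mini-batch noise are others), here is a hedged sketch on a made-up 1-D function: plain gradient descent started from x = 2 parks in the shallow basin, while the same run with heavy momentum rolls through the bump into the deeper one. All constants are illustrative.

```python
# Toy 1-D function: a shallow local minimum near x ~ 1.13 and a deeper
# one near x ~ -1.30, separated by a small bump near x ~ 0.17.
def f(x):    return x**4 - 3*x**2 + x
def grad(x): return 4*x**3 - 6*x + 1

def descend(x, lr=0.01, mu=0.0, steps=3000):
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)   # mu = 0 gives plain gradient descent
        x = x + v
    return x

print(descend(2.0, mu=0.0))   # ~ 1.13  (stuck in the shallow local minimum)
print(descend(2.0, mu=0.9))   # ~ -1.30 (momentum carries it into the deeper basin)
```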

 
elibrarius #:
I read somewhere that if the error does not change much for several cycles, i.e. it is hovering around an extremum, then to check whether that extremum is local, a strong jump in the parameters is made to kick the search out of it. If it was a local one, the search will not return to it on subsequent jumps; if it was the global one, it will return. This can be repeated several times. In general, the space needs to be explored more widely.
Yeah, that's right. That's one of the ways of trying not to get stuck. By the way, I was looking at Lévy flights the other day; they are from the same topic.
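
A minimal sketch of Lévy-flight step lengths using Mantegna's standard approximation (the parameter values are illustrative): most steps are small, but the heavy tail occasionally produces a very large jump, which is what helps an optimiser hop out of a basin.

```python
# Lévy-flight step lengths via Mantegna's approximation (beta ~ 1.5 is a
# typical choice; all values here are illustrative).
import numpy as np
from math import gamma, sin, pi

def levy_steps(n, beta=1.5, seed=42):
    rng = np.random.default_rng(seed)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, n)
    v = rng.normal(0.0, 1.0, n)
    return u / np.abs(v) ** (1 / beta)          # heavy-tailed step lengths

s = levy_steps(100_000)
print(np.median(np.abs(s)), np.abs(s).max())    # typical step is small, the largest is huge
```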
 
It would be interesting to see a comparison of, for example, Adam vs genetics or an ant colony. A trader may face a dilemma: what to use, the MT5 optimiser or an NS. Even after choosing one, he will want to take the most efficient algorithm.
 
Andrey Dik #:

That's horrible.

There is no division of algorithms into "local" and "global". If an algorithm gets stuck in one of the local extrema, that is a flaw, not a feature.

There are highly specialised comparisons of the AOs traditionally used for neural networks; you can search for them. Algorithms are usually applied to specific tasks, but all algorithms without exception can be compared in terms of convergence quality.

Well, that all gets a solid 5 )))))))))

Did you go to the same university as Maximka?

 
Certainly not at a vocational school (PTU), and not at one with liberal values.