Machine learning in trading: theory, models, practice and algo-trading - page 2828
Yeah, I'll have a look at Adam sometime at my leisure and run some tests.
The articles are top-notch; I'm just not qualified to object to anything :)
thanks))))
Then I see the need to include the algorithms traditionally used with neural networks in the review too.
In practice, this means that the network will be undertrained.
Well, that's a bit beside the point.
There are different kinds of optimisation algorithms (AO): local optimisation and global optimisation.
Local means gradient methods, the same Adam, etc.; global means genetic algorithms, etc.
Networks are trained with local AO because it's fast ("there are a lot of weights"),
and training them with a global AO is simply not efficient.
And the main thing: if you train a normal neural network, which has around a billion weights, with a global AO, then firstly you will wait a very long time, and secondly you still cannot guarantee that you have found the global minimum.
So all this talk is pure profanation, a SUPER naive belief that the people who created deep learning don't know about global optimisation algorithms and their properties; it's so obvious that it's not even funny.
Learn to distinguish global optimisation algorithms from local ones; then there is also discrete optimisation, continuous optimisation, multi-criteria optimisation, etc.
Each of them has its own tasks; piling everything into one heap and testing it is profanation.
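For reference, since Adam keeps coming up here: a minimal sketch of the Adam update rule, in which the toy loss, the learning rate, and the step count are illustrative choices, not a recommendation:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, with bias correction for the early steps."""
    m = beta1 * m + (1 - beta1) * grad       # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2  # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)             # bias-corrected estimates
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimise f(x) = (x - 3)^2, whose gradient is 2*(x - 3).
theta, m, v = np.array([0.0]), np.zeros(1), np.zeros(1)
for t in range(1, 2001):
    grad = 2 * (theta - 3.0)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
print(theta)  # converges to ~3.0, the (here unique) minimum
```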
That's horrible.
There is no division of algorithms into "local" and "global". If an algorithm gets stuck in one of the local extrema, that is a flaw, not a feature.
There are highly specialised comparisons of the AOs traditionally used for neural networks; you can search for them. Algorithms are usually applied to specific tasks, but all algorithms without exception can be compared in terms of convergence quality.
Gradient descent algorithms are used; that is a general technique, not specific to neural networks, and it is as old as the hills. Google it instead of asking childish questions, and read up on how gradient descent overcomes the various kinds of local-extremum traps. People have been working on exactly that for years.
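For what it's worth, one well-known mechanism behind "overcoming traps" is momentum: the accumulated velocity can carry the iterate through a shallow local minimum that plain gradient descent settles into. A minimal sketch; the toy function and all constants are illustrative assumptions:

```python
def f(x):  return x**4 / 4 - x**2 / 2 + 0.1 * x   # two minima: local ~0.95, global ~-1.05
def df(x): return x**3 - x + 0.1                  # its gradient

def descend(x, lr=0.05, mu=0.0, steps=500):
    """Gradient descent; mu=0 is vanilla GD, mu>0 adds heavy-ball momentum."""
    v = 0.0
    for _ in range(steps):
        v = mu * v - lr * df(x)   # accumulated velocity carries the iterate
        x = x + v                 # through the shallow basin when mu > 0
    return x

x0 = 2.0
print(descend(x0, mu=0.0))   # ~0.95  -> stuck in the shallow local minimum
print(descend(x0, mu=0.9))   # ~-1.05 -> momentum rolls over the barrier
```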
I read once that if the error barely changes for a few cycles, i.e. the search is circling around an extremum, then to check whether that extremum is local, a strong jump is made in the parameters so as to leap out of it. If the extremum is local, the search will not return to it on subsequent jumps; if it is global, it will. You can repeat this several times. In general, you need to explore the space more widely.
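A rough sketch of that jump-and-re-descend heuristic, reusing the toy two-minimum function from above; the jump scale and number of restarts are illustrative assumptions, not a standard recipe:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):  return x**4 / 4 - x**2 / 2 + 0.1 * x   # two minima: local ~0.95, global ~-1.05
def df(x): return x**3 - x + 0.1

def settle(x, lr=0.05, steps=300):
    """Run plain gradient descent until it parks in the nearest minimum."""
    for _ in range(steps):
        x -= lr * df(x)
    return x

# The heuristic: once the error plateaus, kick the parameters hard and
# re-descend. If the search keeps returning to the same point, treat it
# as likely global; if it lands somewhere deeper, keep the deeper basin.
x = settle(2.0)                      # first plateau (the local minimum here)
for _ in range(5):                   # a few strong random jumps out of the basin
    candidate = settle(x + rng.normal(scale=2.0))
    if f(candidate) < f(x):          # jumped into a deeper basin: keep it
        x = candidate
print(x)  # ~-1.05, the deeper of the two minima of this toy function
```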
Well, top marks all round )))))))))
Did you go to the same university as Maximka?