Machine learning in trading: theory, models, practice and algo-trading - page 595
When you enter a building called "Statistics," it says "Garbage in, garbage out" above the entrance.
Any indicator is a function of price. A neural network has nonlinearity and, if it is deep enough, it can derive any indicator formula by itself. If the network cannot learn from price data, it is not about the inputs, but about the fact that the desired output simply cannot be obtained from price in principle.
Both you and SanSanych are right.
On the one hand, the NN will automatically build the necessary indicators and their combinations. On the other hand, if the data is not clean and there is too much noise in it, no NN will be able to learn anything. So it is about the inputs too.
How important is it to shuffle the sample when training a NN? What is the mathematical justification for this?
Is shuffling relevant for all ML models or only for some specific ones?
You need to shuffle so that the learning algorithm does not follow the same path on every cycle. Otherwise it may fall into a local extremum and never get out.
I.e., do you need to shuffle and train a few times and see how the results correlate?
You need to shuffle after every few epochs of training. Unfortunately, many learning algorithms do not allow pausing (see Python: some packages/modules) and start from scratch every time.
Shuffling also combines well with annealing. But, again, it is difficult to automate: you always need to watch the intermediate results and then plan your next steps.
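To make the "shuffle every few epochs, combined with annealing" idea concrete, here is a minimal sketch of a manual SGD loop. The linear model, the toy data, and the specific values for `shuffle_every` and the decay factor are illustrative assumptions, not anything from the thread:

```python
import numpy as np

# Minimal sketch: a hand-written SGD loop that reshuffles the sample
# every few epochs and "anneals" by decaying the learning rate.
# The linear-regression model and data below are placeholders.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=200)

w = np.zeros(3)
lr = 0.1                       # initial step size
shuffle_every = 5              # reshuffle after every few epochs
order = np.arange(len(X))

for epoch in range(50):
    if epoch % shuffle_every == 0:
        rng.shuffle(order)     # break the fixed pass order through the data
    for i in order:
        grad = (X[i] @ w - y[i]) * X[i]   # gradient of squared error for one row
        w -= lr * grad
    lr *= 0.95                 # simple annealing: shrink the step each epoch
```

Frameworks that expose per-epoch callbacks let you do the same without a manual loop; the point is only that the pass order changes periodically while the step size cools down.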
Wow... so that's it... i.e. shuffling just once before training makes no sense.
And you've got it now) The rattle is CatBoost.
---------
If you ever feel like fishing for a boson...
https://www.kaggle.com/c/higgs-boson
In Darch, shuffling before each epoch is the default. I tried turning it off; it didn't learn anything at all.
So I wondered: if everything is shuffled, how can I make the fresh data have a stronger effect on learning?
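One common answer to this question is per-sample weights that decay with the age of the observation: shuffling changes the order rows are visited, but not how strongly each row pulls on the parameters. The half-life value and the toy linear model below are illustrative assumptions:

```python
import numpy as np

# Sketch: emphasize "fresh" rows with recency weights, independent of shuffling.
# Row n-1 is treated as the newest observation (age 0); older rows get
# exponentially smaller weight. Half-life of 50 rows is an arbitrary choice.

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 2))
y = X @ np.array([1.0, -1.0]) + rng.normal(scale=0.1, size=n)

age = np.arange(n)[::-1]             # age in rows: newest row has age 0
half_life = 50.0
weight = 0.5 ** (age / half_life)    # newest rows ~1.0, oldest rows ~0.0

w = np.zeros(2)
lr = 0.05
order = np.arange(n)
for epoch in range(30):
    rng.shuffle(order)               # shuffling does not cancel the weighting
    for i in order:
        grad = weight[i] * (X[i] @ w - y[i]) * X[i]
        w -= lr * grad
```

Many libraries accept such weights directly, e.g. the `sample_weight` argument of scikit-learn estimators' `fit`, or the `weight` parameter of CatBoost's `Pool`, so the same recency vector can be passed there instead of writing a manual loop.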