Machine learning in trading: theory, models, practice and algo-trading - page 1139
Yes, an error should certainly be reported. Do you mind if I use your report as an example?
The fitting is there all over the place, no matter how you look at it; the question is how to reduce it to an acceptable level.
Of course you can use it, there's nothing of value in this report!
As for the fitting, that is exactly what I am wondering what to do about. On the one hand, using a limited number of combinations obtained from history in the form of leaves should reduce the effect of fitting; on the other hand, I see that 60% of the leaves that work on the training sample stop working on the test sample. The question is to understand how the history has changed, what exactly happened so that what was working stopped working.
A large number of predictors is like a large number of stars in the sky: because there are so many of them, you can make up a bunch of different constellations that cannot be seen from the planets of other solar systems. Here the number of combinations exceeds the number of market entries, and that is what produces the fitting.
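As a rough illustration of this kind of check, here is a minimal sketch (the synthetic data, the thresholds and the idea of a leaf as a boolean entry rule are my assumptions, not the poster's actual setup): for each leaf, compare its win rate on the training sample with its win rate on the test sample and count how many of the leaves that looked good in training degrade out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_leaves = 1000, 500, 50

# signals[i, j] == True means leaf j fires (suggests an entry) on bar i
train_signals = rng.random((n_train, n_leaves)) < 0.1
test_signals = rng.random((n_test, n_leaves)) < 0.1
train_outcome = rng.random(n_train) < 0.5      # True = that entry would have been profitable
test_outcome = rng.random(n_test) < 0.5

def win_rate(signals, outcome):
    """Fraction of profitable entries per leaf; NaN if a leaf never fires."""
    fires = signals.sum(axis=0)
    wins = (signals & outcome[:, None]).sum(axis=0)
    return np.where(fires > 0, wins / np.maximum(fires, 1), np.nan)

wr_train = win_rate(train_signals, train_outcome)
wr_test = win_rate(test_signals, test_outcome)

worked_on_train = wr_train > 0.55              # arbitrary "working" threshold
stopped = worked_on_train & (wr_test <= 0.55)
print(f"{stopped.sum()} of {worked_on_train.sum()} working leaves degrade on the test sample")
```

On purely random data like this, most leaves that looked good in training will degrade by construction, so the raw count alone does not distinguish noise from real structure.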
Fitting on the training sample always was and always will be, whatever anyone says; it just has to be done well, and then there will be more working leaves.
Figuratively, trading on the training sample can be compared to the run-up on a springboard: the model accelerates there and then flies through the OOS on inertia.
If the equity curve on that sample turns out to be crooked and bumpy, don't count on a good jump.)
IMHO, any induction (generalization) is a fit. ML is just statistics out of control theory, engineering statistics that does not disdain heuristics and crutches; and statistics is averages of one kind or another (expectations): averages, averages of deviations from averages, and so on. Well, we all remember the parable of Russell's chicken: the owner feeds the chicken a couple of hundred times (a statistically significant number), but only on purpose, in order to slaughter it once. Perhaps the Puppeteer does the same.
A question for reflection:
There are many strategies fitted to history, and there are many strategies that give good results on new data. How do these two sets of strategies differ from each other when you run them on history?
They don't. Both classes of strategies are fitted to history.
Perhaps the goals and means of such fitting differ, but we, not being the authors, cannot find that out.)
))))))))))))))))))))))))))))))))))))))).............
That's the right question.
Provided we teach the model one and the same thing on both the training and the control samples, the error curve will look something like this.
Overtraining is where the divergence begins (in the picture, from roughly the 19th point on the n axis). Hence the answer to your question: a model overtrained on history will look MUCH better on history than one that is good on the control. Ideally, the equity graphs on history and on the control (OOS) should be indistinguishable. So the local gurus' talk about a "smooth springboard" is complete nonsense; it is exactly the other way around, you should not be able to see where the history ends and the check begins.
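To make the divergence point concrete, here is a minimal sketch (the synthetic data and the boosting model are my choice, nothing from this thread): the training error keeps falling with every iteration, while the control error bottoms out and then rises; overtraining is everything past that bottom.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 10))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=1.0, size=600)   # weak signal plus noise

X_train, y_train = X[:400], y[:400]      # "history" / training sample
X_val, y_val = X[400:], y[400:]          # "control" / OOS sample

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.1)
model.fit(X_train, y_train)

# error at every boosting iteration n, on training and on control
train_err = [mean_squared_error(y_train, p) for p in model.staged_predict(X_train)]
val_err = [mean_squared_error(y_val, p) for p in model.staged_predict(X_val)]

best_n = int(np.argmin(val_err)) + 1     # iteration where the control error bottoms out
print(f"control error is lowest at n = {best_n}; beyond that the two curves diverge")
```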
Gentlemen of the ML crowd,
who can share a link on how to build a neural network that learns to approximate the graph of a function?
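Not a link, but a minimal sketch of the usual approach (my example, assuming "the graph of a function" means approximating y = f(x) from sampled points): a small feed-forward network fitted to noisy samples of sin(x).

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=(2000, 1))
y = np.sin(x).ravel() + 0.05 * rng.normal(size=2000)   # noisy sin(x) as the target function

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=2000, random_state=1)
net.fit(x, y)

# compare the true function and the network's approximation on a small grid
x_grid = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.c_[x_grid, np.sin(x_grid), net.predict(x_grid).reshape(-1, 1)])
```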
Balancing two samples, and that's it. It won't save you from failure on a third one; nevertheless, the approach is correct.
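A minimal sketch of that point (data and model are my assumptions): tune on two samples, then check once on a third sample that was never used for any choice; the third-sample score is the one that can still disappoint.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(2)
X = rng.normal(size=(3000, 8))
y = (X[:, 0] + 0.3 * rng.normal(size=3000) > 0).astype(int)   # weak signal plus noise

X_tr, y_tr = X[:1000], y[:1000]            # sample 1: training
X_va, y_va = X[1000:2000], y[1000:2000]    # sample 2: used for model selection
X_te, y_te = X[2000:], y[2000:]            # sample 3: touched only once, at the end

best_depth, best_acc = None, -1.0
for depth in range(1, 12):
    clf = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_tr, y_tr)
    acc = accuracy_score(y_va, clf.predict(X_va))
    if acc > best_acc:
        best_depth, best_acc = depth, acc

final = DecisionTreeClassifier(max_depth=best_depth, random_state=0).fit(X_tr, y_tr)
print("selection-sample accuracy:", round(best_acc, 3),
      "third-sample accuracy:", round(accuracy_score(y_te, final.predict(X_te)), 3))
```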