Machine learning in trading: theory, models, practice and algo-trading - page 3309
Andrew is confusing NN training with optimising its parameters, I guess.
Both are a kind of optimisation, which is a bit disorienting, like a kitten that has had too much food poured out for it: there is optimisation everywhere and it is not clear what to eat.
So the statement "you don't need to look for a maximum, you need to look for a stable plateau" is fundamentally mistaken, speaking of the mistaken use of an evaluation.
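Purely as an illustration (a minimal sketch, not anyone's actual method; the fitness surface and window size are made up): "looking for a plateau rather than a maximum" can be read as rating each parameter by the average fitness of its neighbourhood, so a broad stable region wins over an isolated spike.

```python
import numpy as np

# Hypothetical fitness over a 1-D parameter grid: a narrow spike near 0.2
# and a broad plateau near 0.6.
params = np.linspace(0.0, 1.0, 101)
fitness = 1.2 * np.exp(-((params - 0.2) / 0.01) ** 2) \
        + 1.0 * np.exp(-((params - 0.6) / 0.15) ** 2)

# The raw maximum picks the spike.
raw_best = params[np.argmax(fitness)]

# The "plateau" choice rates each point by its neighbourhood average,
# so the isolated spike loses to the broad, stable region.
window = 11
smoothed = np.convolve(fitness, np.ones(window) / window, mode="same")
plateau_best = params[np.argmax(smoothed)]

print(f"raw maximum at      {raw_best:.2f}")      # ~0.20, the spike
print(f"plateau centre near {plateau_best:.2f}")  # ~0.60, the stable region
```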
Contrary to your assertion, you have shown that the plateau was found precisely by applying an evaluation.
Where can I apply this in practice?
We are discussing overfitting, which usually comes from optimisation. In the tester everything is clear.
In ML, overfitting is revealed by running the model on different files; the variation in the model's performance across them is the overfitting itself, no special criteria are needed (a rough sketch of this check follows below). There is also a package that detects overfitting.
Come down from the sky, pardon me, from your extremes, to the ground, where things are different.
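A rough sketch of that check (file names, target column and model are placeholders, not anyone's actual setup): fit once, score the same model on several independent files, and look at the spread of the metric rather than at any single number.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Placeholder files: one training file and several held-out files.
train_file = "train.csv"
test_files = ["oos_2021.csv", "oos_2022.csv", "oos_2023.csv"]

train = pd.read_csv(train_file)
X_train, y_train = train.drop(columns="target"), train["target"]

model = RandomForestClassifier(n_estimators=200, random_state=1)
model.fit(X_train, y_train)

# Run the same fitted model on each file and collect the metric.
scores = []
for path in test_files:
    df = pd.read_csv(path)
    X, y = df.drop(columns="target"), df["target"]
    scores.append(accuracy_score(y, model.predict(X)))

# A big drop from the training score and a wide spread between files
# is the symptom of overfitting described above.
print("train accuracy   :", round(accuracy_score(y_train, model.predict(X_train)), 3))
print("per-file accuracy:", [round(s, 3) for s in scores])
print("spread (max-min) :", round(max(scores) - min(scores), 3))
```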
After all, the very phrase "we came up with" also implies some kind of thought process (iterations).
How does the final model know whether the iterations were done by a brain or by a computer, and is there any difference between the two?
The question arose after reading Prado's article.
P-hacking is about stretching the data to fit what you want. You take any fitness function (FF) and feed in as much data as it takes to maximise it. If it maximises poorly, you add more data or choose a more precise optimisation algorithm. In other words, any FF can be maximised this way; this is the most common case in trading-system optimisation. And here more data means more overtraining, no way around it. Global minima and maxima tell you nothing at all. The logical compromise is to maximise the FF while minimising the number of features, as I wrote above. The lesser evil, so to speak. The bias-variance tradeoff, in scientific terms.
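One way to read "maximise the FF while minimising the number of features" is to subtract a complexity penalty from the fitness, so that adding inputs only pays off if they genuinely improve the cross-validated score. A minimal sketch under that assumption (the penalty weight, the model and the greedy search are placeholders, not a prescription):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def penalized_fitness(X, y, feature_idx, penalty=0.01):
    """Cross-validated score minus a fixed cost per feature used."""
    if not feature_idx:
        return -np.inf
    model = LogisticRegression(max_iter=1000)
    score = cross_val_score(model, X[:, feature_idx], y, cv=5).mean()
    return score - penalty * len(feature_idx)

def select_features(X, y, penalty=0.01):
    """Greedy forward selection: add the feature that most improves the
    penalized fitness; stop when no addition helps any more."""
    selected, best = [], -np.inf
    improved = True
    while improved:
        improved = False
        for j in range(X.shape[1]):
            if j in selected:
                continue
            f = penalized_fitness(X, y, selected + [j], penalty)
            if f > best:
                best, best_j, improved = f, j, True
        if improved:
            selected.append(best_j)
    return selected, best

# Toy usage on synthetic data (purely illustrative): only the first two
# columns carry signal; the rest is noise that the penalty should keep out.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=300) > 0).astype(int)
print(select_features(X, y))
```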
A barrel of honey with a spoonful of tar, so the honey might as well be thrown away. As Stirlitz said, it is the last phrase that is remembered.
The optimisation process is a search for unknown parameters.
I have started rejecting this kind of thing: this is fully OOS (2023), and in the second half the character of the curve changes.