Machine learning in trading: theory, models, practice and algo-trading - page 360
Interesting!!! But the problem is a little different. Suppose your TS (trading system) has dropped by 20%. The question is: will it climb out of the drawdown and go on to profit, or will it keep draining? How do you determine that the TS needs to be re-optimized?
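For concreteness, "dropped by 20%" can be read as the equity curve's maximum drawdown reaching 20% from its running peak. A minimal sketch with made-up numbers (the helper name and the equity values are invented for illustration):

```python
# Hypothetical helper: largest peak-to-trough decline of an equity
# curve, as a fraction of the peak. All numbers below are invented.
def max_drawdown(equity):
    peak = equity[0]
    worst = 0.0
    for value in equity:
        peak = max(peak, value)                    # track the running high
        worst = max(worst, (peak - value) / peak)  # decline from that high
    return worst

curve = [100, 110, 105, 88, 95, 120]   # toy equity values
print(max_drawdown(curve))             # 0.2 -> the 20% threshold is hit at 88, after the 110 peak
```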
The TS should NOT be retrained (overfitted) - that's the whole point of creating a TS. Everything else is a numbers game.
Retrained or not, sooner or later it will start to drain anyway. I think this was Mihail Marchukajtes' question - how do you know when?
You do not understand the word "retrained."
First you have to make sure the TS is not retrained - to prove this fact. And then this proof must be repeated. If you cannot prove that it is not retrained, you cannot use it.
I suppose I understand).
I think that is a somewhat simplified definition, so it is still not only possible, but perhaps even necessary, to use it. It all depends on the specifics.
We are using crude models, and that can also be interpreted as overfitting.
If a newly trained model in the tester does not show a 20% drawdown for this period, while the old model on the real account did, then definitely retrain: the model has lost its relevance and needs to pick up the new patterns. Why not retrain the model after each new trade, feeding it the updated history of deals as input?
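The "retrain after every new deal" idea might look like this in outline. The data, the window size, and the trivial drift "model" are all invented for illustration, not the poster's actual setup:

```python
# Toy sketch: after each closed trade, extend the history and refit.
# The "model" is just a rolling-mean drift estimate over recent returns.
history = [0.1, -0.2, 0.3, 0.1, -0.1]      # past trade returns (made up)
WINDOW = 4                                  # how much recent history to refit on

def fit_model(returns):
    """Toy training step: estimate the recent drift."""
    recent = returns[-WINDOW:]
    return sum(recent) / len(recent)

signals = []
for new_return in [0.2, -0.3, 0.4]:        # each newly closed trade
    history.append(new_return)             # hand the model the updated deal history
    drift = fit_model(history)             # retrain on it
    signals.append(1 if drift > 0 else -1) # next-trade direction from the fresh model
print(signals)                             # -> [1, -1, 1]
```

The same loop shape applies with a real model: the only requirement is that refitting is cheap enough to run once per trade.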
In the quoted definition, overfitting is an overly fine accounting of features - and for you coarsening is overfitting?!
You know better. This is not the first time.
Learning, retraining, and overtraining (overfitting) are fundamentally different things.
All this training on each new bar has been chewed over endlessly on this forum and within TA in general.
In the fight against overtraining (overfitting) I know two techniques:
1. Clear the set of predictors of those unrelated to the target variable - i.e. clean the input set of noise. This question was considered in detail over the first 100 posts of this thread.
2. With the predictor set cleared of noise, fit the model on the training sample and then check it on the test and validation samples, which are random samples from the same file. The error on all three sets should be approximately the same.
3. Then take a file that is separate from the previous one and run the model on it. The error, again, should be about the same as the previous ones.
4. If these checks are done regularly, then the question "is a 20% drawdown a signal for re-optimization?" does not arise at all: the first three steps yield the drawdown as a model parameter, and going beyond it means the model is not working and everything should be started over.
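The steps above can be sketched end-to-end. Everything here is illustrative - synthetic data where one predictor carries signal and one is pure noise, a deliberately trivial sign-based "model", and a made-up correlation threshold - not the poster's actual pipeline:

```python
import random
random.seed(0)

def make_dataset(n):
    """Synthetic rows (x1, x2, y): x1 carries signal, x2 is pure noise."""
    rows = []
    for _ in range(n):
        x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
        y = 1 if x1 + random.gauss(0, 0.3) > 0 else -1
        rows.append((x1, x2, y))
    return rows

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    vx = sum((a - mx) ** 2 for a in xs) ** 0.5
    vy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (vx * vy)

data = make_dataset(600)
ys = [r[2] for r in data]

# Step 1: drop predictors uncorrelated with the target (noise removal).
keep = [i for i in (0, 1) if abs(corr([r[i] for r in data], ys)) > 0.2]
feat = keep[0]   # with this synthetic data only x1 should survive

# Step 2: random train / test / validation split from the same file.
random.shuffle(data)
train, test, valid = data[:200], data[200:400], data[400:]

def fit(rows, feat):
    """Toy 'model': learn whether to follow or fade the sign of the predictor."""
    agree = sum(1 for r in rows if (1 if r[feat] > 0 else -1) == r[2])
    return 1 if agree >= len(rows) / 2 else -1

def error(rows, feat, sign):
    wrong = sum(1 for r in rows if sign * (1 if r[feat] > 0 else -1) != r[2])
    return wrong / len(rows)

sign = fit(train, feat)
errs = [error(s, feat, sign) for s in (train, test, valid)]

# Step 3: a genuinely separate "file" (fresh data from the same process).
err_fresh = error(make_dataset(300), feat, sign)

# Step 4: all four errors should sit in roughly the same band; a live error
# (or drawdown) far outside that band means the model has stopped working.
print(errs, err_fresh)
```

The point of the sketch is the shape of the procedure, not the model: noise predictors go first, then the error must agree across train, test, validation, and an out-of-sample file before the model is trusted.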
It's not the first time for me either. But why only coarsening? Another example is right there in the definition: an overly complex model finds something that doesn't exist - apparent regularities.
You have a very simplistic or one-sided understanding of overfitting, imho.