Machine learning in trading: theory, models, practice and algo-trading - page 2594
Divine criticism again.
It makes no difference either way.
Say something substantive instead of this rubbish of "it will work, because it won't work".
I see, you have no imagination at all. The OP won't cover the whole space of model variants; you'll have to choose from whatever the optimizer settled on as its own best variant. Go work at a factory, in short. You take on things without even a rough understanding of what you're dealing with. And in the case of boosting it is impossible to build the OP at all, because the number of parameters grows at every iteration.
Yeah...
There are more interesting questions about using ML in trading. For example, an algorithm for deciding which interval of history to train on. Perhaps it can be set by meta-parameters that are themselves optimized by cross-validation. I'll have to read Prado. :)
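To make the meta-parameter idea concrete, here is a minimal Python sketch (the classifier choice, window lengths, and function name are my own illustration, not anything from Prado or this thread): the training-window length is treated as a meta-parameter and scored by walk-forward validation.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

def walk_forward_score(X, y, window, test_size=50, step=50):
    """Mean out-of-sample accuracy when always training on the last `window` bars."""
    scores = []
    for end in range(window, len(X) - test_size, step):
        model = GradientBoostingClassifier()
        model.fit(X[end - window:end], y[end - window:end])
        pred = model.predict(X[end:end + test_size])
        scores.append(accuracy_score(y[end:end + test_size], pred))
    return float(np.mean(scores))

# X, y: time-ordered feature matrix and labels (assumed to exist).
# The window length itself is the meta-parameter being validated:
# best_window = max([500, 1000, 2000, 5000],
#                   key=lambda w: walk_forward_score(X, y, w))
```

Each candidate window is evaluated only on data that comes after it in time, so the selection itself stays out-of-sample.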
I wanted to write that the more data the better, but then I remembered one of my small experiments (it was done without sufficient representativeness, so the result may well be random, but still). Namely: there are two markets; by my subjective assessment the first is more efficient, the second less so. The model trained on the more efficient market gave worse OOS results on that market than the model trained on the less efficient market gave on its own section.
Often models stop working at some point, no matter how large the training sample is. I trained on samples of different lengths, and all of them stop working at a certain point in past history. That shows some pattern disappeared or changed there.
Then it turns out that you need to train on the shortest possible section, so that after a pattern changes, the model starts working on the new pattern faster.
For example, if you train on 12 months, then 6 months after the pattern change the window will contain the new and old patterns 50/50, and only after about a year will training and trading be entirely on the new pattern. That is, for almost a whole year the model traded on an outdated pattern and was most likely losing.
If you train on 1 month, the model will learn to work correctly again within a month.
It would be good to train on 1 week... but I don't have enough data for that.
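A toy calculation of the same arithmetic (a sketch of the reasoning above, not any particular trading setup): the share of post-change data in a rolling training window grows linearly, so a 12-month window stays partly on the old pattern for a full year, while a 1-month window recovers in a month.

```python
# Share of post-change data in a rolling training window of `window_months`
# after `months_since_change` months; grows linearly, then saturates at 1.
def new_pattern_share(months_since_change, window_months):
    return min(months_since_change / window_months, 1.0)

for window in (12, 1):
    lag = next(m for m in range(1, 25)
               if new_pattern_share(m, window) >= 1.0)
    print(f"window of {window} month(s): fully on the new pattern after {lag} month(s)")
# window of 12 month(s): fully on the new pattern after 12 month(s)
# window of 1 month(s): fully on the new pattern after 1 month(s)
```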
About the noise, yes. I haven't thought about it in terms of taking sections of history with and without noise, though. And by the way, how do you determine that before training the model? Iteratively? Train on the whole area, see where performance was best, keep only those areas, and then train only on them? Here comes a second question, which, pending experimental confirmation, might be called philosophical: is it better for the model to see different areas right away, including noisy ones, and train on the noisier data on average, or to learn from cleaner data but never see the noisy data at all?
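One possible form of that iterative scheme, as a hedged sketch (the classifier, the `drop_frac` threshold, and the use of out-of-bag probabilities are all my own assumptions, not something proposed in the thread): train on everything, flag the samples the model predicts worst as "noisy", and retrain on the cleaner subset.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_on_cleaner_subset(X, y, drop_frac=0.2):
    """Drop the `drop_frac` worst-predicted samples, then retrain.
    Assumes y holds integer class labels 0..K-1."""
    base = RandomForestClassifier(n_estimators=200, oob_score=True).fit(X, y)
    # Out-of-bag probability of the true class; low values flag "noisy" samples.
    # (NaN can appear for samples never left out-of-bag; treat those as noisy.)
    proba = np.nan_to_num(base.oob_decision_function_[np.arange(len(y)), y])
    keep = proba >= np.quantile(proba, drop_frac)
    clean_model = RandomForestClassifier(n_estimators=200).fit(X[keep], y[keep])
    return clean_model, keep
```

Using out-of-bag rather than in-sample predictions matters here: a forest scored on its own training data would rate nearly every sample as clean.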
What's wrong with giant sample sizes, apart from the increased computation time?