Machine learning in trading: theory, models, practice and algo-trading - page 351
Actually, this is not right, imho.
As the system becomes more complex, profitability and stability should increase at the same time; that is, as the system becomes more complex, its consumer qualities should improve.
Absolutely NOT true.
Information criteria, such as Akaike's (AIC), aim to minimize model complexity. Coarsening the model is a very effective tool against the main evil of trading: overtraining (overfitting).
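To make the idea concrete, here is a minimal sketch of how an information criterion penalizes complexity; the synthetic data, the polynomial-degree setup, and the Gaussian form of AIC are my own illustrative assumptions, not something from this thread:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = np.linspace(-1.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 0.3, n)       # the true relation is linear plus noise

def aic(degree):
    """Gaussian-error AIC up to an additive constant: n*ln(RSS/n) + 2*k."""
    coeffs = np.polyfit(x, y, degree)
    rss = float(np.sum((np.polyval(coeffs, x) - y) ** 2))
    k = degree + 1                          # number of fitted parameters
    return n * np.log(rss / n) + 2 * k

# Higher-degree polynomials always fit the sample at least as well (smaller RSS),
# but the 2*k penalty pushes the criterion toward the simpler model.
scores = {d: aic(d) for d in range(1, 10)}
best_degree = min(scores, key=scores.get)
```

The raw fit error can only go down as degree grows; it is the penalty term that makes the criterion favor the coarser model.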
Maxim Dmitrievsky:
The hell with it.
Just take the simplest one, a random forest. Training usually yields classes as the result, but in fact the algorithm outputs a class probability from which the class is derived; with two classes the probability is usually split in half.
What if we split it differently: 0 - 0.1 is one class and 0.9 - 1.0 is the other, and the gap between 0.1 and 0.9 is "out of the market"?
This is what I saw in the article.
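A quick sketch of that thresholding scheme; the 0.1/0.9 cut-offs come from the post above, but the synthetic dataset and all parameter choices are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled market dataset (with some label noise)
X, y = make_classification(n_samples=2000, n_features=10, flip_y=0.1,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(X_tr, y_tr)
proba = forest.predict_proba(X_te)[:, 1]        # probability of class 1

# Act only on confident calls; the 0.1 - 0.9 band is "out of the market"
in_market = (proba <= 0.1) | (proba >= 0.9)
signal = (proba[in_market] >= 0.9).astype(int)  # 1 = one class, 0 = the other

acc_all = forest.score(X_te, y_te)                      # accuracy on everything
acc_confident = float((signal == y_te[in_market]).mean())  # accuracy when trading
```

The point of the scheme is that `acc_confident` is computed only over the examples the forest is sure about, at the cost of sitting out the rest of the time.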
Absolutely NOT true.
Information criteria, such as Akaike's (AIC), aim to minimize model complexity. Coarsening the model is a very effective tool against the main evil of trading: overtraining (overfitting).
I don't know why it's wrong). From the second sentence it follows that we are talking about the same thing.
Where is the text in my post that is untrue? Further on, you lay out that very position yourself.
The general rule is: first get an excellent system in terms of profitability, then make it worse in terms of profitability, hoping to gain something much more important: stability in the future.
Well, as stability grows, so does profitability, at least through a reduction in the number of losing trades; profitable trades are affected to a lesser extent.
If that is not the case, then something is wrong with the informativeness of the predictors. In any case, the profit/loss ratio should only grow as complexity increases.
You know better, although the whole world is of the exact opposite opinion.
Faa writes the right idea, but he states it incorrectly.
You have a series and a set of predictors. You divide the series into parts: the training sample and the forward one (the simplest case).
You build, for example, 20 models.
The point is that a model is selected from the list not by being the best on the training sample, nor by being the best on the forward one. The model chosen is the one that gives approximately the same quality scores on both the training and the forward samples.
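The selection rule described above can be sketched like this; the candidate grid, dataset, and scoring are illustrative assumptions, only the "pick the model whose train and forward scores agree" idea comes from the post:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a series with predictors, split in order:
# the first part is the training sample, the tail is the forward part.
X, y = make_classification(n_samples=1500, n_features=10, flip_y=0.2,
                           random_state=1)
X_train, y_train = X[:1000], y[:1000]
X_fwd, y_fwd = X[1000:], y[1000:]

# 20 candidate models of varying complexity (an illustrative grid)
candidates = [RandomForestClassifier(n_estimators=50, max_depth=d,
                                     random_state=s)
              for d in (2, 4, 8, 16) for s in range(5)]

def quality_gap(model):
    """Absolute disagreement between training and forward accuracy."""
    model.fit(X_train, y_train)
    return abs(model.score(X_train, y_train) - model.score(X_fwd, y_fwd))

# Select not the best-on-train and not the best-on-forward model,
# but the one whose two quality scores agree most closely.
chosen = min(candidates, key=quality_gap)
```

A model that scores brilliantly on the training sample but falls apart on the forward part gets a large gap and is rejected, even though it would win either single-criterion contest.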
This is not in question. I meant only the real functioning or testing of the system.
Actually, this is not right, imho.
As the system becomes more complex, profitability and stability should increase at the same time; that is, as the system becomes more complex, its consumer qualities should improve.
Let us consider an example of developing a system manually:
1. We take a bare trading idea and create a simple TS, optimizing for profit (losses can be ignored at this stage).
2. We introduce restrictions that minimize the number of losing trades. Of course, some accidentally profitable trades will disappear and some profitable trades will earn less, but drawdowns will also shrink, and as a result the total of profit and loss will increase.
3. Further complication leads only to an increase in profit, at least through a reduction in the number of losing trades.
If the profit-and-loss total does not increase as a result of a complication, then something is wrong: for example, we have introduced ineffective conditions.
How so? You are building a classification model. The larger the sample, the stronger the generalization: the model becomes more stable overall but less accurate in particular cases, and accordingly less profitable.
If you train it on a small sample, it may be very accurate on that short stretch but unstable over a long one.
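A rough illustration of that trade-off; the dataset, sample sizes, and noise level are arbitrary assumptions chosen only to show the small-sample versus large-sample contrast:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Noisy synthetic classification task; the tail acts as a common "long" stretch
X, y = make_classification(n_samples=5000, n_features=10, flip_y=0.2,
                           random_state=0)
X_long, y_long = X[4000:], y[4000:]

results = {}
for n in (50, 4000):                       # short vs large training sample
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X[:n], y[:n])
    results[n] = {"train": model.score(X[:n], y[:n]),
                  "long": model.score(X_long, y_long)}
```

The model fit on 50 examples looks excellent on its own short sample but degrades on the long held-out stretch, while the one fit on 4000 examples holds up much better there.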