Machine learning in trading: theory, models, practice and algo-trading - page 805
Oh, the disinformer has downloaded a new book. We take out our notebooks and add the mentioned literature to the blacklist.
I have a slightly different association
Lately we have become too trusting of information on the Internet, and there is more and more of it...
Isn't it time to use your head?
For fans of cross-validation, test samples, OOS, and the like, I will never tire of repeating myself:
SanSanych and Vladimir Perervenko in particular
Out-of-sample tests
This is the most popular and also abused validation method. Briefly, out-of-sample tests require setting aside a portion of the data to be used in testing the strategy after it is developed, in order to obtain an unbiased estimate of future performance. However, out-of-sample tests:
- reduce the power of tests due to a smaller sample
- give biased results if the strategy is developed via multiple comparisons
In other words, out-of-sample tests are useful only in the case of unique hypotheses. Using out-of-sample tests for strategies developed via data mining shows a lack of understanding of the process. In this case the test can be used to reject strategies, but not to accept any. In this sense the test is still useful, but trading strategy developers know that good out-of-sample performance for strategies developed via multiple comparisons is in most cases a random result.
A few methods have been proposed for correcting out-of-sample significance for the presence of multiple-comparisons bias, but in almost all real cases the result is a non-significant strategy. However, as we show in Ref. 1 with two examples corresponding to two major market regimes, strategies that remain highly significant even after corrections for bias can also fail due to changing markets. Therefore, out-of-sample tests are unbiased estimates of future performance only if future returns are distributed in the same way as past returns. In other words, non-stationarity may invalidate any results of out-of-sample testing.
Conclusion: Out-of-sample tests apply only to unique hypotheses and assume stationarity. Under these conditions they are useful, but if the conditions are not met, they can be quite misleading.
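Since the multiple-comparisons argument is the crux of the quote, here is a minimal simulation of it (a sketch of my own, not code from the article; every parameter value is an arbitrary assumption). It data-mines 1000 strategies with no edge at all and shows that the one with the best in-sample Sharpe tells you nothing out-of-sample, that roughly 5% of the candidates still "pass" a naive OOS test by pure chance, and that a Bonferroni-type correction for the number of trials rejects practically all of them.
```python
# A minimal sketch of the multiple-comparisons problem described above: my own
# illustration, not code from the quoted article; every parameter value is an
# arbitrary assumption. 1000 "strategies" with no edge at all are data-mined,
# and we look at what a naive out-of-sample (OOS) test says about them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_strategies = 1000            # candidates produced by "data mining"
n_is, n_oos = 750, 250         # in-sample / out-of-sample daily returns
ret_is = rng.normal(0.0, 0.01, size=(n_strategies, n_is))    # pure noise
ret_oos = rng.normal(0.0, 0.01, size=(n_strategies, n_oos))  # pure noise

def ann_sharpe(r):
    """Annualized Sharpe ratio of daily returns along the last axis."""
    return r.mean(axis=-1) / r.std(axis=-1, ddof=1) * np.sqrt(252)

# 1) The best in-sample candidate looks impressive only because it was selected
#    as the maximum of 1000 noisy estimates; its OOS Sharpe is just a fresh
#    noise draw, ~0 in expectation.
best = np.argmax(ann_sharpe(ret_is))
print("best in-sample Sharpe:", round(float(ann_sharpe(ret_is)[best]), 2))
print("its OOS Sharpe       :", round(float(ann_sharpe(ret_oos)[best]), 2))

# 2) A naive one-sided OOS t-test at the 5% level: expect roughly 5% of the
#    edge-less strategies (about 50 of them) to "pass" by chance alone.
t, p = stats.ttest_1samp(ret_oos, 0.0, axis=1, alternative="greater")
print("passing naive OOS test (p < 0.05):", int((p < 0.05).sum()))

# 3) A Bonferroni-style correction for the 1000 trials actually performed
#    rejects essentially all of them, which is the article's point.
print("passing after Bonferroni (p < 0.05/1000):",
      int((p < 0.05 / n_strategies).sum()))
```
That is what "the test can be used to reject strategies but not to accept any" looks like in practice.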
OOS can be used only for rejecting hypotheses, or only for problems that are known to be stationary.
But not for searching for strategies, selecting features, or evaluating system stability.
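The stationarity caveat can be illustrated the same way. The sketch below is again my own, with arbitrary drift and volatility assumptions: a toy trend-following rule has a real edge while the market trends, the out-of-sample test on the tail of the trending period duly confirms it, and the rule still earns nothing once the regime changes. Averaging over many Monte Carlo runs keeps the printed numbers stable.
```python
# A toy Monte Carlo sketch of the stationarity caveat: my own illustration, with
# arbitrary drift/volatility numbers. A simple trend-following rule genuinely
# works while the market drifts upward, passes a classic out-of-sample test on
# the tail of that trending period, and then earns nothing after the regime
# changes to a driftless market.
import numpy as np

rng = np.random.default_rng(7)

def strategy_pnl(returns, lookback=50):
    """Hold long today if the rolling mean return up to yesterday is positive."""
    kernel = np.ones(lookback) / lookback
    signal = np.convolve(returns, kernel, mode="full")[:len(returns)]
    position = (np.roll(signal, 1) > 0).astype(float)
    position[0] = 0.0                      # no signal on the first day
    return position * returns

def ann_sharpe(r):
    return r.mean() / r.std(ddof=1) * np.sqrt(252)

n_trials = 500
sharpes = []
for _ in range(n_trials):
    trend = rng.normal(0.001, 0.01, size=1500)   # trending regime (positive drift)
    flat = rng.normal(0.000, 0.01, size=500)     # new regime: no drift
    sharpes.append([ann_sharpe(strategy_pnl(trend[:1000])),   # development sample
                    ann_sharpe(strategy_pnl(trend[1000:])),   # out-of-sample test
                    ann_sharpe(strategy_pnl(flat))])          # after the regime change
is_s, oos_s, live_s = np.mean(sharpes, axis=0)

print(f"average in-sample Sharpe        : {is_s:.2f}")   # clearly positive
print(f"average out-of-sample Sharpe    : {oos_s:.2f}")  # OOS "confirms" the edge
print(f"average Sharpe after the change : {live_s:.2f}") # ~0: the OOS pass said nothing
```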
Well said Victor Benedictovich! The purpose of CB is to make money out of it...
Let's assume that this is true and we have lost our main method of evaluation.
Is any alternative suggested? Or is this a claim that everything is useless and we can forget about ML?
Give me a link to the original, please.
https://towardsdatascience.com/validation-methods-for-trading-strategy-development-1efea8284b02
I read a shitload of stuff, but I only throw in what I like or agree with, or have written about it myself before.
I wrote about it myself and quoted an excerpt from the article to confirm my point,
in case you doubt my cognitive abilities :D
You constantly insert quotes and pictures from somewhere without even checking the validity of the source or the applicability of the method to the problem at hand; you try to simply guess instead of answering thoughtfully. And you often guess wrong, which misinforms other readers of the forum. All of this is easily googled and checked on the first page of the search results, yet you don't even bother to check. It's understandable that it's all for the sake of the rating under your avatar and padding your post count, but do check the information anyway.
lol, nice one about the rating)) I don't... I don't care
stop smoking