Machine learning in trading: theory, models, practice and algo-trading - page 3345
Thank you, a high-quality and interesting article with an extensive bibliography.
It seems they do not consider the kind of uncertainty that actually interests us: the probabilistic dependence of the output on the features. They study two other types, uncertainty coming from inaccuracies in the features and uncertainty in the model parameters. The names are beautiful: aleatoric and epistemic uncertainty). By analogy, we could call our variant "target uncertainty").
Imho, in our case "measurement errors" of the features are absent in principle, and the uncertainty of the model parameters is poorly separable from our "target uncertainty".
It seemed to me that the sum of these uncertainties should give the target uncertainty. But I haven't really looked into it.
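To make the aleatoric/epistemic split above concrete, here is a minimal sketch of the standard ensemble-based decomposition (an illustration, not anything from the article): total predictive entropy splits into the average member entropy (aleatoric part) plus the disagreement between members (epistemic part), so the two really do sum to the total.

```python
import numpy as np

def decompose_uncertainty(member_probs):
    """member_probs: array (n_members, n_samples, n_classes) of class probabilities."""
    eps = 1e-12
    mean_p = member_probs.mean(axis=0)                       # ensemble-averaged prediction
    total = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)    # entropy of the mean
    # average entropy of individual members ~ aleatoric (data) uncertainty
    aleatoric = -(member_probs * np.log(member_probs + eps)).sum(axis=-1).mean(axis=0)
    epistemic = total - aleatoric                            # disagreement between members
    return total, aleatoric, epistemic

# members that agree -> epistemic ~ 0; members that disagree -> epistemic > 0
agree = np.tile([[0.7, 0.3]], (5, 1, 1))                     # 5 members, 1 sample, 2 classes
disagree = np.array([[[0.9, 0.1]], [[0.1, 0.9]], [[0.9, 0.1]],
                     [[0.1, 0.9]], [[0.5, 0.5]]])
_, _, ep_agree = decompose_uncertainty(agree)
_, _, ep_disagree = decompose_uncertainty(disagree)
```

The point of the toy example: when all ensemble members give the same 0.7/0.3 answer, the epistemic term vanishes and all uncertainty is aleatoric; when they contradict each other, the epistemic term captures exactly that disagreement.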
The approach is about the same as in kozula via meta-learners, but here they also propose a way to disassemble one model and use it as an ensemble of truncated classifiers, instead of an ensemble of several separate classifiers, for speed.
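The "one model as an ensemble of truncated classifiers" idea can be sketched with a boosted model, where the prediction after the first k trees is exactly a truncated version of the full model. This is only an illustration of the trick using scikit-learn's staged predictions, not the method from the post:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, random_state=0)
model = GradientBoostingClassifier(n_estimators=100, random_state=0).fit(X, y)

# staged_predict_proba yields the prediction of the model truncated to the
# first k trees, for k = 1..n_estimators: an "ensemble" from a single fit.
checkpoints = [25, 50, 75, 100]
stage_probs = [p for k, p in enumerate(model.staged_predict_proba(X), start=1)
               if k in checkpoints]
member_probs = np.stack(stage_probs)               # (n_truncations, n_samples, 2)
disagreement = member_probs[:, :, 1].std(axis=0)   # per-sample spread across truncations
```

The attraction is speed: the truncated "members" come for free from one training run, whereas a real ensemble would need several separate fits.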
I don't understand where the R-squared estimate comes from.
I was previously under the impression that this estimate is applicable in regression only if all regression coefficients are significant; otherwise R-squared does not exist...
It's just something the tester shows for quick comparison of different balance curves.
It is not used anywhere else.
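A plausible reading of such a "balance R-squared" (an assumption about what the tester computes, not a documented fact) is the coefficient of determination of the balance curve against a straight-line fit, so a curve that grows steadily scores near 1 and a choppy one scores lower:

```python
import numpy as np

def balance_r2(balance):
    """R-squared of the balance curve against its least-squares straight line."""
    t = np.arange(len(balance))
    slope, intercept = np.polyfit(t, balance, 1)   # linear trend of the curve
    fitted = slope * t + intercept
    ss_res = np.sum((balance - fitted) ** 2)       # deviation from the trend line
    ss_tot = np.sum((balance - balance.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

smooth = np.linspace(100, 200, 50)                 # perfectly linear equity growth
choppy = 100 + np.cumsum(np.random.default_rng(0).normal(0, 5, 50))
```

Under this reading the number says nothing about coefficient significance; it is just a smoothness score for ranking curves side by side.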
They all work 50/50.
It just seems like--
If you code a pattern in a script and look at the forward statistics, the up/down distribution, both by the number of candles and by the number of points, tends to 50/50.
This applies to candlestick patterns (the ratios of H, L and C to each other); I did not count the timeless ones, because there are too few of them for a statistic of at least 1000 pattern occurrences.
So even if in 2022 a pattern was followed by an up candle 55% of the time, and the average up candle was 5-10% larger than the average Sell move, in 2023 the payoff will still be 50/50, without any favours.
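The check described above can be sketched roughly like this. The pattern rule and names here are illustrative assumptions (not the poster's actual script): mark the bars where a pattern completes and tally the direction of the next candle.

```python
import numpy as np

def forward_stats(close, pattern_mask):
    """Share of up-closes and mean move (in points) on the candle after each pattern."""
    idx = np.flatnonzero(pattern_mask[:-1])   # drop the last bar: no forward candle
    fwd = close[idx + 1] - close[idx]         # next-candle move after each occurrence
    ups = np.count_nonzero(fwd > 0)
    return ups / len(idx), fwd.mean()

rng = np.random.default_rng(1)
close = 100 + np.cumsum(rng.normal(0, 1, 5000))        # random-walk price series
high = close + rng.uniform(0, 1, 5000)
low = close - rng.uniform(0, 1, 5000)
# toy "pattern": close sits in the upper quarter of the candle's H-L range
mask = (close - low) / (high - low) > 0.75
up_share, mean_move = forward_stats(close, mask)
```

On a random walk `up_share` hovers around 0.5, which is exactly the 50/50 picture described: an in-sample 55% edge in 2022 is just sampling noise unless the pattern carries real information.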
And if you add an adequate Stop Loss and Take Profit, will it also be 50/50?