Machine learning in trading: theory, models, practice and algo-trading - page 487
In theory a random forest should have almost no error, because when it is built all the variables are used in the decision trees, and there is no memory restriction like the number of neurons in a neural network. The only thing you can do there is apply separate operations to "blur" the result, such as depth limits, tree pruning or bagging. I don't know whether the MQ port of alglib has pruning, but it does have bagging.
If you make this variable (the sampling ratio r) smaller than 1, the error should increase.
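A minimal sketch of that effect, assuming scikit-learn rather than the MQ/alglib port: here max_samples plays a role analogous to the sampling ratio r discussed above, and bootstrap=False stands in for r = 1 where every tree sees the whole training set.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# Toy data just to demonstrate the effect of the sampling ratio
X, y = make_classification(n_samples=2000, n_features=10, flip_y=0.1, random_state=0)

# "r = 1": bootstrap off, every tree is grown on the whole training set
rf_full = RandomForestClassifier(n_estimators=100, bootstrap=False,
                                 random_state=0).fit(X, y)

# "r = 0.66": each tree is grown on roughly 66% of the samples
rf_066 = RandomForestClassifier(n_estimators=100, bootstrap=True, max_samples=0.66,
                                random_state=0).fit(X, y)

print("training error, r = 1   :", 1 - accuracy_score(y, rf_full.predict(X)))
print("training error, r = 0.66:", 1 - accuracy_score(y, rf_066.predict(X)))
```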
I did, but the error still came out average, as I described above... now it's normal.
By the way, even decreasing r by 0.1 increases the error a lot. Above is r = 0.9, below 0.8.
At r = 0.66 (as in the classical version of RF)
the results show it already solves the multiplication table badly.
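To make the multiplication-table example concrete, here is a small sketch assuming scikit-learn (not the MQ/alglib forest): the forest is trained on the full 1..9 table with a sampling ratio of about 0.66 and then asked about points between the table cells.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Training set: the full 1..9 multiplication table, two features (a, b), target a*b
a, b = np.meshgrid(np.arange(1, 10), np.arange(1, 10))
X_train = np.column_stack([a.ravel(), b.ravel()])
y_train = (a * b).ravel()

# r ~ 0.66 via max_samples: each tree sees only part of the 81 rows
rf = RandomForestRegressor(n_estimators=200, max_samples=0.66,
                           random_state=0).fit(X_train, y_train)

# Points between the table cells show how crudely the forest interpolates
X_test = np.array([[2.5, 4.0], [6.5, 7.5], [8.5, 8.5]])
print(rf.predict(X_test))   # compare with the true products 10.0, 48.75, 72.25
```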
For the error to be as small as @Maxim Dmitrievsky's, you would have to make one wrong deal per 5,000,000,000,000,000,000 deals, and that is impossible on any instrument.
Sincerely.
What do trades have to do with it? I'm telling you that every decision tree practically memorizes all the patterns, so on the training set there may be no error at all with 100% sampling, i.e. r = 1.
Yes, it's overfitting, but that's how the algorithm works, which is why all sorts of tricks are used in random forests.
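A quick sketch of that memorization, assuming scikit-learn: a single tree with no depth limit and no pruning reaches near-perfect accuracy on the training set while doing noticeably worse on held-out data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, flip_y=0.1, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=1)

# No depth limit and no pruning: the tree splits until every training point is "remembered"
tree = DecisionTreeClassifier(max_depth=None, random_state=1).fit(X_tr, y_tr)

print("training accuracy:", tree.score(X_tr, y_tr))   # ~1.0, the patterns are memorized
print("test accuracy    :", tree.score(X_te, y_te))   # noticeably lower: overfitting
```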
To evaluate the model you need to look at the out-of-bag error, but then r = 0.66 is the maximum you can set, yes.
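A minimal sketch of the out-of-bag idea, again assuming scikit-learn: OOB estimation only works when each tree leaves part of the sample out, which is exactly why the sampling ratio has to stay below 1.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, flip_y=0.1, random_state=2)

# OOB estimation needs each tree to leave part of the sample out (sampling ratio < 1)
rf = RandomForestClassifier(n_estimators=300, bootstrap=True, max_samples=0.66,
                            oob_score=True, random_state=2).fit(X, y)

print("in-sample accuracy:", rf.score(X, y))   # optimistic, almost memorized
print("OOB accuracy      :", rf.oob_score_)    # more honest estimate of generalization
```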
I guess it needs tuning, but bagging by itself is not a very strong technology for prediction - IMHO
Well, so far it's fine :) Later, if I hook up a normal library with deep learning, I'll take a look
but the speed!
I have not gone into how the forest works, but from your words I understand that each tree memorizes a pattern which may never repeat afterwards. In that case (since there is no repetition) we cannot say what the probability is that it works out in the plus, yet it is taken as 1 by axiom instead of being taken as 0.5, since it is unknown. From this it follows that the forest is almost never wrong (going by your words).
Respectfully.
When I increased the signal threshold, the NN compensated for this by increasing the number of inputs required; as a consequence the error decreased, but the entry options also became fewer.
Sincerely.
Well, then it comes down to the question of the right features and targets, although it would seem nothing could be simpler than the multiplication table, and yet the error is not small.