Machine learning in trading: theory, models, practice and algo-trading - page 1710
Max! Remind me again what these models are called...
1) Model 1 is trained.
2) Model 2 is trained on the predictions that model 1 makes on the test data, and so on...
Stacking?
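The two-level scheme described above can be sketched as follows. This is a minimal illustration with sklearn, not code from the thread; the model choices are arbitrary, and out-of-fold predictions are used so the level-2 model never trains on probabilities produced for rows the level-1 model saw in training (the leakage concern raised below).

```python
# Sketch of stacking; model choices are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=891, random_state=0)

# Level 1: out-of-fold probabilities, so each row's prediction comes
# from a fold in which that row was held out.
model1 = DecisionTreeClassifier(max_depth=3, random_state=0)
oof = cross_val_predict(model1, X, y, cv=5, method="predict_proba")[:, 1]

# Level 2: trained on model 1's held-out predictions.
model2 = LogisticRegression()
model2.fit(oof.reshape(-1, 1), y)
print(model2.score(oof.reshape(-1, 1), y))
```

If model 2 were instead fed in-sample predictions of model 1, the probabilities would be overconfident and the stacked result would look better than it really is.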
Yes, strange results. Don't they take the probabilities from a test sample that was involved in the training? It looks like there is an error somewhere.
And how many rows (target values) are there in total in the sample?
There are 891 rows in total in the dataset.
I think one of the formulas is used there - RMS, RMSE, cls or something else. The main thing is that the result converges at 0%, 50% and 100%, and is curved in between. Splitting by class is usually done at 50%, and at that point it coincides with the usual probability. So I decided to leave the question unresolved.
There is no test sample.
Yeah, you have to dig into the code to understand the depth of the idea. But it is interesting how they assign weights to leaves, taking the already existing ones into account.
Can I ask you a question?
Why CatBoost? What does it have that its analogues do not?
I am interested in it for several reasons:
1) Support - a lot of information and feedback from the developers.
2) Fast training - I want to use all processor cores.
3) Flexible settings for model building and overfitting control - although there is still a lot of room for improvement.
4) The possibility to use the binary symmetric models in MQL5 after training, though that part is not my development.
Thanks
Maybe someone will be interested:
There is a new book on time-series forecasting in R, including examples of bitcoin prediction.
https://ranalytics.github.io/tsa-with-r/
Yeah, you have to dig into the code to understand the depth of the idea. But it is interesting how they assign weights to leaves, taking the already existing ones into account.
It seems that the weights are determined as usual - by probability.
But the split is apparently not just the best one, but the one that improves the overall result. This is just a guess, though. It is practically impossible to look through the code, since the listing runs for kilometers - it is not 4000 lines like alglib.
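On the leaf-weight question: in gradient-boosting libraries generally (this is the standard Newton-step formula, not a reading of CatBoost's actual source), a leaf's weight for logloss is computed from the gradients and hessians of the samples falling into it, given the predictions already accumulated by earlier trees - which is exactly "taking the already existing ones into account":

```python
# Sketch of the standard Newton-step leaf weight for logloss.
# This is the generic boosting formula, assumed (not verified) to
# resemble what CatBoost does internally.
import numpy as np

def leaf_weight(y_true, prev_raw, l2=1.0):
    """Newton step -sum(g)/(sum(h)+l2) for samples falling in one leaf."""
    p = 1.0 / (1.0 + np.exp(-prev_raw))   # probability from trees built so far
    g = p - y_true                        # gradient of logloss
    h = p * (1.0 - p)                     # hessian of logloss
    return -g.sum() / (h.sum() + l2)

# For the first tree (prev_raw = 0, p = 0.5), a leaf holding
# 3 positives out of 4 samples gets a positive weight.
y = np.array([1, 1, 1, 0])
w = leaf_weight(y, np.zeros(4))
print(w)  # 0.5
```

For the first tree the weight is a monotone function of the class fraction in the leaf, which is why the result looks "determined by probability"; for later trees the accumulated raw score shifts p and the weight shrinks.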
Why CatBoost? What does it have that its analogues don't?
I was just asking. I see how you are struggling with these trees from CatBoost - there are problems with exporting them, crutches...
I have gotten into the subject of "rule induction" and I see that R has many packages for generating rules or rule ensembles...
1) rules are easy to export - each one is a single line
2) rules are easy for a human to read
3) there are many types of rule generation, from trivial to genetic
4) the prediction quality is on a par with everything else
So I think maybe you should not bother with this CatBoost and pick up something more pleasant, or something like that.
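For a flavor of the "easy to export, easy to read" point: here is one way to get human-readable if/else rules out of a tree in Python, using sklearn's `export_text`. The R rule-induction packages mentioned above work differently; this is just a minimal stand-in, and the short feature names are made up.

```python
# Sketch: exporting human-readable rules from a single tree
# (sklearn stand-in for the R rule-induction packages discussed above).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Hypothetical short names for the four iris features.
rules = export_text(tree, feature_names=["sl", "sw", "pl", "pw"])
print(rules)  # each root-to-leaf path prints as a readable rule
```

Each printed path is a conjunction of threshold conditions ending in a class label, which is the "one rule per line" readability the post is describing.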
Max! Remind me again what these models are called...
1) Model 1 is trained.
2) Model 2 is trained on the predictions that model 1 makes on the test data, and so on...
Stacking?
Meta-labeling (de Prado).