Machine learning in trading: theory, models, practice and algo-trading - page 1711
I was just asking; I see how you're struggling with those trees from CatBoost, some problems getting the output, workarounds...
I've been digging into the topic of "rule induction" and I see that R has many packages for generating rules or rule ensembles...
1) rules are easy to output, one line (see the sketch below)
2) rules are easy for a human to read
3) there are many kinds of rule generation, from trivial to genetic
4) prediction quality is on par with everything else
So I'm thinking maybe you shouldn't struggle with this CatBoost and pick up something more pleasant, or something like that.
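The "easy to output, easy to read" point is easy to see with a toy example. The R packages the poster has in mind aren't named, so the sketch below uses Python/scikit-learn purely as an illustration, with a synthetic dataset and made-up feature names; it only shows that a fitted tree can be dumped as human-readable if/else rules with one call.

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and hypothetical feature names, for illustration only.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["rsi", "macd", "volume", "spread"]

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# One call prints the whole tree as nested if/else rules, one condition per line.
print(export_text(tree, feature_names=feature_names))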
A tree is a perfectly good rule generator in itself, and easy to read.
A forest/boosting is an ensemble of trees = rules.
4) Are you sure? Have these packages taken part in any ML competitions? Did they beat the boosted models? Can you give a link to the competition results?
Give an example of one of the winning packages to look at, preferably in Russian.
You don't understand what I mean, or I don't understand you; it seemed to me that you had a problem with interpreting CatBoost and getting output from it. If everything is fine, then everything is fine)
4) I did write "on par")... I compared it to RF on the same data, the difference is 1-3% for the worse.
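The kind of same-data comparison described here can be sketched roughly as follows. Since the R rule packages aren't named, a single decision tree stands in for the rule model and scikit-learn's random forest for RF; the dataset is synthetic, and the 1-3% figure above is the poster's observation, not something this sketch asserts.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# One synthetic dataset, both models evaluated with the same cross-validation.
X, y = make_classification(n_samples=2000, n_features=10, random_state=1)

models = {
    "single tree (rule stand-in)": DecisionTreeClassifier(max_depth=5, random_state=1),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=1),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:28s} mean CV accuracy: {acc:.3f}")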
RF lags far behind boosting in both accuracy and speed, especially if it is the RF from the Alglib library.
Not understanding the exact algorithm for calculating the leaf value doesn't really get in the way of the work. The main thing is that at a typical 50% split the classical probability and the CatBoost probability coincide.
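One way to read the remark about the 50% split: for a binary CatBoost classifier the reported probability is a sigmoid of the raw leaf-sum score, so a raw score of 0 corresponds exactly to 0.5, and the "classical" and CatBoost probabilities meet there. A minimal sketch, assuming the catboost Python package and a synthetic dataset:

import numpy as np
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

model = CatBoostClassifier(iterations=100, depth=4, verbose=0, random_seed=0)
model.fit(X, y)

# Raw leaf-sum score and the reported probability of class 1 (binary Logloss case).
raw = model.predict(X, prediction_type="RawFormulaVal")
proba = model.predict_proba(X)[:, 1]

# The reported probability is sigmoid(raw); sigmoid(0) == 0.5.
sigmoid = 1.0 / (1.0 + np.exp(-raw))
print("max |predict_proba - sigmoid(raw)| =", np.max(np.abs(proba - sigmoid)))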
I see.
"Far behind in accuracy" means 2-4%.
I use the rules to understand the process, not to predict with an ensemble...
understanding the process == a good feature.
One rule built on a good feature will crush any boosting with 1000 trees trained on garbage.
Even a single tree will find a good feature.
read carefully.
And it's just as easy to understand the process from a tree.
A single tree is easy to read and understand.
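A small sketch of what "reading a single tree" can look like in practice: trace the path one sample takes through a shallow tree and print every split it passes. Same caveat as the earlier sketch: scikit-learn, synthetic data and made-up feature names are used only as an illustration.

from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
names = ["rsi", "macd", "volume", "spread"]  # hypothetical predictors
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

sample = X[:1]
path = clf.decision_path(sample).indices   # node ids visited by this sample
leaf = clf.apply(sample)[0]

for node in path:
    if node == leaf:
        print(f"leaf {node}: class distribution {clf.tree_.value[node][0]}")
    else:
        feat = clf.tree_.feature[node]
        thr = clf.tree_.threshold[node]
        side = "<=" if sample[0, feat] <= thr else ">"
        print(f"node {node}: {names[feat]} = {sample[0, feat]:.3f} {side} {thr:.3f}")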
Yeah, but who says otherwise?
No one)
I understand the tree; I don't understand the algorithm for calculating the probability, but that doesn't stop me from working with the tree)
Well, that's fine then; I just thought you weren't comfortable working with CatBoost.