Machine learning in trading: theory, models, practice and algo-trading - page 1181
Google doesn't work either? :)
https://chrome.google.com/webstore/detail/google-translate/aapbdbdomjkkjkaonfhkkikfgjllcleb?hl=ru
The translator works. Either translate the whole page, or copy-paste into the translator.
But for a single word or a paragraph, nothing happens.
There are too many settings there - you'd need more than a few drinks to figure them all out... :) Maybe the problem is that the sample is small: tree-based methods are mostly designed for large datasets, so something needs tweaking.
Of course it can surely be tweaked - my guess is that the fraction of the sample fed to each tree is reduced by default - but "two times two" is a telling indicator...)
Translate one word at a time, through the Google Translate plugin for Chrome. Without English there's no other way. Even if you only read every 1-2 words, the general meaning will be clear. I use it myself when I forget words: just click on the word. You can also highlight phrases/sentences.
Of course it is pointless to translate all the text at once - that way you never remember the words and never understand the meaning of the text.
Thanks, I should try translating with your method; maybe it will be more productive than making up my own hypotheses, but languages are a weak spot for me...
I don't understand why you would need to manually edit the splits and leaves of decision trees. Yes, I have all branches automatically converted to logical operators, but frankly I don't remember ever correcting them myself.
Because what's the point of using leaves with less than 50-60% prediction probability? That's random - it's better for the model not to react to the situation at all than to react on a guess.
And is it even worth digging into the CatBoost code - how can you be sure of it?
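The "skip low-confidence leaves" idea above can be sketched as a simple abstention rule on top of any classifier's predicted probabilities. A minimal NumPy sketch, with hypothetical probability values standing in for a real model's `predict_proba()` output:

```python
import numpy as np

# Hypothetical class probabilities, as if from some classifier's
# predict_proba(): each row is [P(sell), P(buy)] for one market situation.
proba = np.array([
    [0.48, 0.52],   # near-random: the model is basically guessing
    [0.25, 0.75],   # confident "buy"
    [0.62, 0.38],   # moderately confident "sell"
    [0.45, 0.55],   # near-random again
])

THRESHOLD = 0.60  # act only when the winning class is at least 60% probable

confidence = proba.max(axis=1)
signal = proba.argmax(axis=1)                           # 0 = sell, 1 = buy
signal = np.where(confidence >= THRESHOLD, signal, -1)  # -1 = stay out

print(signal.tolist())  # rows below the threshold become -1 (no trade)
```

With the values above, only the second and third situations produce a trade; the near-random rows are dropped rather than traded on a guess.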
For example, I posted above a Python test of my neural network trained on the multiplication-by-two table, and now I've used it to test trees and forests (DecisionTree, RandomForest, CatBoost).
Here's the result - you can clearly see it's not in CatBoost's favor: as if two times two were zero point five... :)
True, if you take thousands of trees, the results improve. I'm not sure trees are better than neural networks, but trees require fewer resources to build. For example, right now I have about 400 predictors, and a network with 400 input neurons and (however many layers) would take too long to train.
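The multiplication-table test described above can be reproduced with scikit-learn stand-ins (DecisionTree, RandomForest, and GradientBoosting in place of CatBoost, which needs a separate install). This is a sketch of the experiment, not the poster's original script:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

# Train on the multiplication-by-two table: x -> 2*x
X = np.arange(1, 11).reshape(-1, 1).astype(float)
y = 2.0 * X.ravel()

models = {
    "DecisionTree": DecisionTreeRegressor(random_state=0),
    "RandomForest": RandomForestRegressor(n_estimators=100, random_state=0),
    "GradientBoosting": GradientBoostingRegressor(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X, y)
    pred = model.predict([[2.0]])[0]
    print(f"{name}: 2 * 2 = {pred:.2f}")

# Note: trees only interpolate between training points - asking for
# x = 100 would still return at most max(y) = 20, unlike a fitted line.
```

A single deep tree memorizes the table exactly; the ensembles land close to 4 once they have enough trees, which matches the observation that more iterations help.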
I can send you my sample - maybe use it to see which method is better?
The settings, yes, make sense, and I'm digging into them right now trying to figure them out.
True, with thousands of trees the results improve. I tweaked it a little and added gradient boosting - it works best out of the box.
The rest is rather murky, of course...
https://www.mql5.com/ru/forum/86386/page1180#comment_9543249
In that post CatBoost runs with iterations=100 trees, not 10, and GBM is a beauty :)
Sure, dig in and choose as carefully as you can while it's still at the early stage.
Besides the two-times-two problem, try to disable CatBoost's intrusive habit of creating its temporary directories at every startup - in a sandboxed environment it crashes because of this.
And in general, with glitches like these it looks rather unprofessional, so if you can't beat them then, personally in my opinion, the cheapest option is to drop this product right away :)
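The temporary-directory behaviour mentioned above can be controlled through CatBoost's training parameters. To my knowledge, `allow_writing_files` and `train_dir` are the relevant options (verify against the docs of your installed CatBoost version); the sketch below shows only the parameter dict, so that `catboost` itself isn't required to run it:

```python
# Parameters you would pass to CatBoostRegressor/CatBoostClassifier to stop
# it writing per-run files (the catboost_info/ directory, learn_error.tsv
# and similar logs) on every startup.
params = {
    "iterations": 100,             # 100 trees, as in the linked post
    "allow_writing_files": False,  # don't create training-artifact files
}

# With catboost installed, this would be used as:
#   from catboost import CatBoostRegressor
#   model = CatBoostRegressor(**params)

print(params)
```

Alternatively, `train_dir` can point the artifacts at a writable location instead of suppressing them entirely.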