Machine learning in trading: theory, models, practice and algo-trading - page 3337
Testing speed of the model exported to native code (CatBoost)
versus exported to ONNX.
The internals of the two versions of the bot are almost identical, and the results are the same.
The price to pay for universality.
It is a pity that CatBoost has significant limitations on model conversion.
I started digging deeper into object importance; there is a whole article suggested on it. I'll see what it has to offer.
I'm glad to see that you're still interested. Do write about your progress in researching the usefulness of this approach.
I think I'll try to recreate the leaf estimation, taking the stepwise error correction into account, by re-partitioning after each leaf (tree).
Still, it doesn't seem to work the same way for classification... I don't understand the formulas very well.
As I understand it, on the first iteration an approximation of the logloss function is built over the target labels; the trees are meant to approach this function, and the delta between the ideal function and the one obtained with the trees is written into the leaf after multiplication by the learning rate.
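As a rough illustration of the step described above: in standard gradient boosting with logloss, a leaf value is typically a Newton step, the sum of pseudo-residuals (label minus predicted probability) divided by the sum of hessians, then scaled by the learning rate. This is a minimal sketch of that textbook formula, not CatBoost's actual internal code; `leaf_value` and the toy numbers are my own illustration.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def leaf_value(labels, raw_scores):
    """Newton-step leaf value for logloss:
    sum of pseudo-residuals (y - p) over sum of hessians p * (1 - p)."""
    probs = [sigmoid(f) for f in raw_scores]
    grad = sum(y - p for y, p in zip(labels, probs))
    hess = sum(p * (1.0 - p) for p in probs)
    return grad / hess

# Toy leaf: three samples, current raw score 0.0 (so p = 0.5) for each.
labels = [1, 1, 0]
scores = [0.0, 0.0, 0.0]
learning_rate = 0.1

# The value actually written into the leaf is scaled by the learning rate.
step = learning_rate * leaf_value(labels, scores)
```

With these toy numbers the raw leaf value is (0.5 + 0.5 - 0.5) / (3 * 0.25) = 2/3, and the stored step is 0.1 * 2/3. The learning rate is exactly the "coefficient" that shrinks each correction so that errors are corrected gradually over many trees.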
It's just that, if you take the approach literally when marking an error, should an error in two different classes be marked with the same label, say "1"?
Or how?
I've been in this thread for a long time. There are other ways/packages. Somehow I had missed this feature; maybe they added it recently.
You can watch a video on this topic
The values in the leaves are summed to form the Y coordinate of the function.
To me, that is the answer, or prediction, of a leaf. I thought you wanted to correct it by some coefficient.
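For what it's worth, the "summing to form the Y coordinate" can be sketched like this. The per-tree leaf outputs here are made-up numbers, and a zero base score is assumed; the point is only that the ensemble's raw output is the sum of the leaf values the sample falls into, which a sigmoid then turns into a class probability:

```python
import math

# Hypothetical per-tree leaf outputs for one sample (one value per tree).
leaf_outputs = [0.4, -0.1, 0.25]

# The "Y coordinate" of the model's function is the sum of leaf values.
raw = sum(leaf_outputs)

# For binary classification, the raw score is squashed into a probability.
prob = 1.0 / (1.0 + math.exp(-raw))
```

So for regression the summed raw score is already the prediction, while for classification the same sum goes through the sigmoid, which is why the leaf values themselves look nothing like class labels.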
The training example from the article covers only regression. I can't say for classification.
Classification is sort of described here. CatBoost has a slightly different formula, though; maybe that is the cost of its mathematical transformations...
And the link to the video is from the same place, I think.
Interestingly, if you have been doing ML for more or less a long time, you come to similar conclusions; it is a natural evolution of the approach. This is how I arrived at causal inference, statistical learning and reliable AI. If you google those terms, you can find useful material.
Yes, it's a normal process: a shared information field. History knows of discoveries made a couple of years apart, with the works describing them published late, after checks, review and, in general, translation into understandable language.