Machine learning in trading: theory, models, practice and algo-trading - page 3337

 
Maxim Dmitrievsky #:

Speed of testing the model exported to naive code (CatBoost)

And exported to ONNX

The internals of the two versions of the bot are almost identical, and the results are the same.

The price to pay for versatility.

It is a pity that CatBoost has significant limitations on model conversion.

 
Aleksey Vyazmikin #:

The price to pay for versatility.

It is a pity that CatBoost has significant limitations on model conversion.

Started digging more into object importance; there's a whole article on it. I'll see what it has to offer.
 
Maxim Dmitrievsky #:
Started digging more into object importance; there's a whole article on it. I'll see what it has to offer.

Glad I managed to get you interested after all. Do write about your progress in researching the usefulness of this approach.

 
Forester #:

I think I'll try to recreate the leaf estimation, taking into account the stepwise error correction, by repartitioning after each leaf (tree).

But it still doesn't seem to work the same way for classification... I don't understand the formulas very well.

As I understand it, on the first iteration an approximation of the logloss function is built on the target labels; the trees should approach it, and the delta between the ideal function and the one obtained by the trees, multiplied by the learning rate, is written into the leaf.

It's just that, taking the approach literally when marking an error, should an error in two different classes be marked with the same value, say "1"?

Or what?
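The stepwise scheme described above can be sketched in plain Python. This is a minimal illustration of textbook logloss gradient boosting with Newton-step leaf values, not CatBoost's exact algorithm; the toy data, split point and learning rate are made up for the example:

```python
import math

# Toy binary-classification data: one feature, labels 0/1.
X = [0.0, 0.1, 0.2, 0.8, 0.9, 1.0]
y = [0, 0, 0, 1, 1, 1]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Initial approximation: constant log-odds of the positive class.
p0 = sum(y) / len(y)
F = [math.log(p0 / (1 - p0))] * len(X)

lr = 0.1  # learning rate

def boosting_step(X, y, F, lr, split=0.5):
    # Pseudo-residuals: negative gradient of logloss w.r.t. F,
    # i.e. the delta between the target label and the current prediction.
    r = [yi - sigmoid(fi) for yi, fi in zip(y, F)]
    # A depth-1 "tree": one split, two leaves.
    # Newton-step leaf value: sum(residuals) / sum(p * (1 - p)).
    leaves = {}
    for side in (False, True):
        idx = [i for i, xi in enumerate(X) if (xi >= split) == side]
        num = sum(r[i] for i in idx)
        den = sum(sigmoid(F[i]) * (1 - sigmoid(F[i])) for i in idx)
        leaves[side] = num / den
    # The leaf value, scaled by the learning rate, is added to F.
    return [fi + lr * leaves[xi >= split] for fi, xi in zip(F, X)]

for _ in range(50):
    F = boosting_step(X, y, F, lr)

probs = [sigmoid(fi) for fi in F]
print(probs)  # first three near 0, last three near 1
```

Each iteration repartitions the residuals of the current ensemble, which is the "repartitioning after each tree" idea above in its simplest form.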

 
Aleksey Vyazmikin #:

Glad I managed to get you interested after all. Do write about your progress in researching the usefulness of this approach.

I've been into this topic for a long time. There are other methods/packages. This feature was somehow missed; maybe it was added recently.
 
Maxim Dmitrievsky #:
I've been into this topic for a long time. There are other methods/packages. This feature was somehow missed; maybe it was added recently.

You can watch a video on this topic


 
Aleksey Vyazmikin #:

The values in the leaves are summed to form the Y coordinate of the function.

To me, that is the leaf's answer, or prediction. I thought you wanted to correct it by some coefficient.

Aleksey Vyazmikin #:
It's just that, taking the approach literally when marking an error, should an error in two different classes be marked with the same value, say "1"?

Or what?

The training example in the article covers only regression. I can't say about classification.
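For reference, in the textbook gradient-boosting scheme (Friedman's; CatBoost's own formulas may well differ) the leaf values for the two cases are:

```latex
% Squared error (regression): the residual is the raw delta,
% and the leaf stores its mean over the leaf's samples I.
r_i = y_i - F(x_i), \qquad
\gamma_{\text{leaf}} = \frac{1}{|I|} \sum_{i \in I} r_i

% Logloss (binary classification): the residual is label minus
% predicted probability, and the leaf takes a Newton step.
p_i = \sigma(F(x_i)), \qquad r_i = y_i - p_i, \qquad
\gamma_{\text{leaf}} = \frac{\sum_{i \in I} r_i}{\sum_{i \in I} p_i (1 - p_i)}
```

In both cases the leaf value is added to F after multiplication by the learning rate, which matches the "delta times learning rate" reading earlier in the thread.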

 
Aleksey Vyazmikin #:

You can watch a video on this topic

It is interesting that if you have been doing ML yourself for a while, you come to similar conclusions. A natural evolution of the approach. That's how I came to causal inference, statistical learning and reliable AI. If you google those terms, you can find useful things.
 
Forester #:

The training example in the article covers only regression. I can't say about classification.

Classification seems to be covered here. CatBoost's formula is slightly different, though; maybe that is just the cost of the mathematical transformations...

And a link to a video from the same source, I think.


 
Maxim Dmitrievsky #:
It is interesting that if you have been doing ML yourself for a while, you come to similar conclusions. A natural evolution of the approach. That's how I came to causal inference, statistical learning and reliable AI. If you google those terms, you can find useful things.

Yes, it's a normal process: a shared information field. History knows of discoveries made a couple of years apart, while the papers describing them are published late, after verification, review and, generally, translation into understandable language.
