Discussing the article: "Cross-validation and basics of causal inference in CatBoost models, export to ONNX format" - page 2

 
Forester #:
You can't reverse the order that way; it has to be done the other way. And why reverse the order at all?

I can't remember why I did it this way... a problem with the series ordering of the source array, I guess. No matter how I flipped it with AsSeries true/false, I got the same model signals.

I don't know much about the "peculiarities" of MQL.

The model output settings are also not entirely clear; I tuned them by trial and error. I described it in the article.
 
Maxim Dmitrievsky #:

I can't remember why I did it this way... a problem with the series ordering of the source array, I guess. No matter how I flipped it with AsSeries true/false, I got the same model signals.

I don't know much about the "peculiarities" of MQL.

The model output settings are also not entirely clear; I tuned them by trial and error. I described it in the article.
I think the inversion is not needed (right now you are feeding a non-inverted array), because in the tester you get the same chart as in Python. Otherwise the splits would not work on their features and there would be randomness in the prediction.


The series flag won't help if you filled the array by indices after creating it; it counts in the order you filled it. Probably... I don't work with series arrays myself.
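
For illustration, a minimal MQL5 sketch of this point (the helper name and the way the features are built are assumptions, not the article's code): the buffer keeps the values in whatever order you wrote them, so toggling the series flag afterwards only changes how indices are read, while an explicit ArrayReverse (or filling in the required order) is what actually changes the data the model receives.

```mql5
// Hypothetical helper: fill the feature buffer in an explicit, known order.
void PrepareFeatures(float &f[], const int n)
  {
   double close[];
   ArraySetAsSeries(close, false);               // index 0 = oldest of the copied bars
   CopyClose(_Symbol, PERIOD_CURRENT, 0, n, close);

   ArrayResize(f, n);
   for(int i = 0; i < n; i++)
      f[i] = (float)close[i];                    // this stored order is what OnnxRun will see

   // ArraySetAsSeries(f, true);                 // changes only how indices are interpreted
   // ArrayReverse(f);                           // this is what physically flips the data, if needed
  }
```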

 
Forester #:
I think the inversion is not needed (right now you are feeding a non-inverted array), because in the tester you get the same chart as in Python. Otherwise the splits would not work on their features and there would be randomness in the prediction.


The series flag won't help if you filled the array by indices after creating it; it counts in the order you filled it.

And if you substitute features instead of f, the prediction will be different.

 
Maxim Dmitrievsky #:

And if you substitute features instead of f, the prediction will be different.

That's strange. It seems to be copied 1 to 1. features is a dynamic array and f is a static one, but that is hardly the reason for the difference.

UPD: in the examples from the OnnxRun help the features are passed as a matrix, while yours are passed as an array; maybe that is the reason? It's strange that the help doesn't say how it should be done.
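
For reference, a hedged sketch of the two call styles being compared here: a plain float array versus a 1xN matrixf as in the OnnxRun help examples. The handle, the feature count and the single-vector output are assumptions; a real CatBoost model may expose several outputs.

```mql5
#define N_FEATURES 20                            // assumed feature count

bool RunModel(const long handle, float &f[], vectorf &out)
  {
   // explicit input shape: one sample with N_FEATURES columns
   const long in_shape[] = {1, N_FEATURES};
   if(!OnnxSetInputShape(handle, 0, in_shape))
      return(false);

   // variant 1: plain float array, as in the article
   // return OnnxRun(handle, ONNX_DEFAULT, f, out);

   // variant 2: 1xN matrix, as in the OnnxRun help examples
   matrixf x(1, N_FEATURES);
   for(int i = 0; i < N_FEATURES; i++)
      x[0][i] = f[i];
   return OnnxRun(handle, ONNX_DEFAULT, x, out);
  }
```

According to the auto-conversion page quoted further down, arrays, vectors and matrices are all accepted as inputs; what matters is that the effective shape matches what the model expects.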

 
Forester #:
That's strange. It seems to be copied 1 to 1.

Exactly, but the model response is different

k is an artefact, yes, it can be removed.

 
Maxim Dmitrievsky #:

Exactly, but the model response is different

k is an artefact, yes, it can be removed.

I saw that the series flag is set for features. That's probably why the result is different.

 
Forester #:

That's strange. It seems to be copied 1 to 1. features is a dynamic array and f is a static one, but that is hardly the reason for the difference.

UPD: in the examples from the OnnxRun help the features are passed as a matrix, while yours are passed as an array; maybe that is the reason? It's strange that the help doesn't say how it should be done.

Only arrays, vectors or matrices (hereinafter referred to as Data) can be passed as input/output values in an ONNX model.

I think I got a wrong response with a vector too. I have to double-check, but it works for now.

https://www.mql5.com/ru/docs/onnx/onnx_types_autoconversion
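
A quick way to double-check this (a sketch; the handle, the already-set input shape and the single-value output are assumptions): feed the same feature values as an array, a vectorf and a 1xN matrixf and compare the responses.

```mql5
// Assumes the input shape has already been set for this handle.
bool CompareInputForms(const long handle, float &f[])
  {
   const int n = ArraySize(f);

   vectorf v(n);
   matrixf m(1, n);
   for(int i = 0; i < n; i++)
     {
      v[i]    = f[i];
      m[0][i] = f[i];
     }

   vectorf out_a(1), out_v(1), out_m(1);
   if(!OnnxRun(handle, ONNX_DEFAULT, f, out_a) ||
      !OnnxRun(handle, ONNX_DEFAULT, v, out_v) ||
      !OnnxRun(handle, ONNX_DEFAULT, m, out_m))
      return(false);

   PrintFormat("array=%g  vector=%g  matrix=%g", out_a[0], out_v[0], out_m[0]);
   // identical inputs should produce identical responses
   return(out_a[0] == out_v[0] && out_v[0] == out_m[0]);
  }
```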

 
Great article. I have heard about the idea of using two neural networks: one to predict the direction, the other to predict the probability that the first prediction is correct. So the question is: did you choose gradient boosting because it is better than neural networks in this area?
 
Ramirzaev #:
So the question is: did you choose gradient boosting because it is better than neural networks in this area?

Thanks. I compared the results of a simple MLP, an RNN and an LSTM with boosting on my datasets. I didn't see much difference; sometimes boosting was even better. Boosting also trains much faster, and you don't have to worry as much about the architecture. I can't say it is unambiguously better, because a neural network is a loose notion: you can build so many different variants of it. I probably chose boosting for its simplicity; in that respect it is better.