Machine learning in trading: theory, models, practice and algo-trading - page 3360

 
Aleksey Vyazmikin #:

Well, I personally did not associate the model's response with the probability of the class occurring; I take it as the model's confidence in detecting the class. That confidence is built from the leaves, and the leaves are built from the training sample. An individual leaf does show the probability of the class occurring, but since each leaf does not cover every point in the sample, the summation of those probabilities ends up distorted in the model's final response. Perhaps there is a way to correct this at that level - that is what interests me, and that is the direction I was trying to steer the discussion in.

In my opinion, the solution is to group leaves by similar response points and then transform the averaged summary results of those groups...

Sorry, but without links to libraries, notebooks, or articles, I still take it roughly like this


 
Maxim Dmitrievsky #:

Sorry, but without links to libraries, notebooks, or articles, I still take it roughly like this


Eh, all you care about is packages...

 
Aleksey Vyazmikin #:

Eh, all you care about is packages...

After calibrating any classifier via CV, you can immediately see the model's potential. If it is not capable of anything, the probabilities cluster around 0.5 after this procedure, even though before that it was overconfident. Further fiddling with such a model is of no interest at all - it cannot be improved; it cannot even be calibrated properly, there is nothing to catch there. Quite convenient.

There is not a single "quantum cut", in your words - not a single range or bin where it would yield a probable profit.
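The CV-calibration check described above can be sketched with scikit-learn's `CalibratedClassifierCV`. This is a minimal illustration on synthetic, signal-free data - the data, model, and settings are my assumptions, not the poster's actual setup:

```python
# Sketch: probing a classifier's "potential" via CV calibration.
# On pure noise, the calibrated probabilities collapse toward the
# base rate (~0.5), even though the raw model is overconfident.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.calibration import CalibratedClassifierCV

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.integers(0, 2, size=1000)      # random labels: nothing to learn

base = GradientBoostingClassifier(random_state=0)
calibrated = CalibratedClassifierCV(base, method="sigmoid", cv=5)
calibrated.fit(X, y)

proba = calibrated.predict_proba(X)[:, 1]
print(proba.min(), proba.mean(), proba.max())
```

If the spread of `proba` stays in a narrow band around 0.5, the model has nothing to offer and can be discarded without further tuning - which is exactly the screening effect described above.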

 

Ok, one last thing to close the topic. I managed to export the sigmoid calibration to MetaTrader.

Given: an overfitted gradient boosting model, then calibrated to this state:


At a threshold of 0.5 everything is obvious; you can see where the OOS starts:

I run an optimisation of the threshold and the stops:

I get all sorts of variants; the best are at thresholds of 0.75-0.85. Some even hold up a little on new data, whereas at a threshold of 0.5 there are no decent variants at all.

Quite a fun toy.
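Exporting a sigmoid calibration is practical because Platt scaling reduces to two constants. A minimal sketch of fitting those constants by hand (the dataset, split sizes, and model are assumptions for illustration, not the poster's actual pipeline):

```python
# Sketch: Platt (sigmoid) calibration fitted manually, so the result is
# just two constants (A, B) that are trivial to port to MQL5 code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, random_state=0)
X_tr, X_cal, y_tr, y_cal = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

# Fit the logistic map p = 1 / (1 + exp(-(A*s + B))) on held-out scores
s_cal = model.decision_function(X_cal).reshape(-1, 1)
platt = LogisticRegression().fit(s_cal, y_cal)
A, B = platt.coef_[0][0], platt.intercept_[0]

def calibrate(score):
    # This one-liner is the only thing that needs to go into the terminal
    return 1.0 / (1.0 + np.exp(-(A * score + B)))

print(A, B)
```

Since the exported artifact is just `A` and `B`, the same formula can be reimplemented in MQL5 in one line, with no Python dependency on the trading side.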

 
Maxim Dmitrievsky #:

After calibrating any classifier via CV, you can immediately see the model's potential. If it is not capable of anything, the probabilities cluster around 0.5 after this procedure, even though before that it was overconfident. Further fiddling with such a model is of no interest at all - it cannot be improved; it cannot even be calibrated properly, there is nothing to catch there. Quite convenient.

There is not a single "quantum cut", in your words - not a single range or bin where it would yield a probable profit.

If it allows you to automate model screening, that's already a good thing.

I have a visualisation of the model by its probability-confidence index with a step of 0.05, and there I can see everything at once. The key thing is how the result transforms between the training sample and the other samples - there the probabilities drift, which is why I keep talking about non-representativeness. That is why I consider calibration an ineffective measure in our case. If your models show no strong sample-to-sample shift, that is rather surprising.

And I will note that a simply undertrained model will also produce probabilities in a narrow range.

A normally trained model will often be wrong precisely in the zones of strong confidence - that is why it makes sense to set not a classification threshold but a window: for example, treat the returned class as a one only from 0.55 to 0.65, and ignore the rest. At the extremes the model is confident, but there are often very few observations there, so the statistical significance is low.
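The window rule above can be sketched in a few lines. The bounds 0.55 and 0.65 are the ones named in the text; the function name and the "no trade" encoding of -1 are my own illustrative choices:

```python
# Sketch: accept class 1 only when the probability falls inside a
# confidence window [lo, hi); everything outside it is ignored (-1).
import numpy as np

def window_signal(proba, lo=0.55, hi=0.65):
    """Return 1 inside the window, -1 (no trade) outside it."""
    proba = np.asarray(proba)
    return np.where((proba >= lo) & (proba < hi), 1, -1)

print(window_signal([0.50, 0.58, 0.64, 0.80]))   # → [-1  1  1 -1]
```

Unlike a plain threshold, this filters out the extreme-confidence zones where, as noted above, there are too few observations for the estimate to be statistically significant.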

 
While it's still in pristine condition, make the most of it. Nobody understands what you're saying.
The hardcore ML crowd will swoop in and won't leave one letter standing on another.
 

Gentlemen of the ML crowd!

Is it worth it?

Taking up my algorithm - General Discussion - MQL5

Market Mechanics and Machine Learning (AI). - Take up my algorithm.
  • 2023.12.26
  • www.mql5.com
Probably a C compiler with optimisation is faster than a program in assembler. I am not at all sure this is an optimal way to earn. It is an optimal method of decoding exchange data that allows the mechanics of price change to be described clearly.
 
Maxim Dmitrievsky #:
While it's still in pristine condition, make the most of it. Nobody understands what you're saying.
The hardcore ML crowd will swoop in and won't leave one letter standing on another.

Come on)))))) Happy New Year))))

The truth is unchanged))))

 
Valeriy Yastremskiy #:

Come on)))))) Happy New Year))))

The truth is unchanged)))))

It has been the kind of year with little to be happy about in the world around us, and yet someone finds a reason to be happy. A toast: may the monad flip over in the new year. And to the people who will make that possible.
 
Renat Akhtyamov #:

Gentlemen of the ML crowd!

Is it worth it?

Taking up my algorithm - General Discussion - MQL5

About compiler optimisation errors? There are compilers with automatic proofs of correctness; I won't advise you anything - I have questions about those compilers myself. But you can try.