Machine learning in trading: theory, models, practice and algo-trading - page 2581

 
Maxim Dmitrievsky #:
Python is convenient now. I wrote my own tester, but it's also possible to transfer models or trade through the API. If ONNX support is added, it will be a real powerhouse.

I got involved with the Mac M1, and now I've been waiting half a year for CatBoost to support it; they promise a release in two weeks. For now I run it through a virtual machine on Windows.
There is a Python package for backtesting, why don't you use it?

Or did you write a tester with optimization?
 
mytarmailS #:
There is a Python package for backtesting, why don't you use it?

Or did you write a tester with optimization?
I don't like ready-made ones; they're not flexible enough. I wrote mine specifically for my tasks, with my own metrics. Roughly speaking, the inputs are quotes and model outputs. Also, I now have two "competing" models that are retrained several times, iteratively improving each other. The data for the improvements is also taken from my tester.
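A minimal vectorized tester along these lines might look as follows. This is only an illustrative sketch (the fee model and all names are my assumptions, not the author's actual code): it takes quotes and model signals and returns an equity curve.

```python
import numpy as np

def backtest(close, signals, fee=0.0):
    """Minimal vectorized long/flat tester: signal 1 = long, 0 = flat.
    Returns the equity curve as a cumulative-return series."""
    close = np.asarray(close, dtype=float)
    signals = np.asarray(signals, dtype=float)
    rets = np.diff(close) / close[:-1]                 # bar-to-bar returns
    pos = signals[:-1]                                 # position held over each bar
    turn = np.abs(np.diff(signals, prepend=0.0))[:-1]  # position changes
    strat = pos * rets - fee * turn                    # strategy returns net of fees
    return np.cumprod(1.0 + strat)

close = [100.0, 101.0, 100.0, 102.0, 103.0]
signals = [1, 1, 0, 1, 0]
equity = backtest(close, signals)
print(equity)
```

Custom metrics (R2 of the curve, losing-trade counts, etc.) can then be computed directly on the returned equity array, which is the flexibility being argued for here.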
 
Maxim Dmitrievsky #:
One model learns to trade, the other learns to filter signals.
I understand that generative adversarial algorithms are fashionable now, but what is the actual advantage of two relatively simple algorithms that compete with and improve each other over one complex algorithm that does the same thing internally? Roughly speaking, it just builds more complex decision-making rules within itself than your two do...
I honestly don't see the advantage; it looks like fashion to me.
 
mytarmailS #:
I understand that generative adversarial algorithms are fashionable now, but what is the actual advantage of two relatively simple algorithms that compete with and improve each other over one complex algorithm that does the same thing internally? Roughly speaking, it just builds more complex decision-making rules within itself than your two do...
I honestly don't see the advantage; it looks like fashion to me.
Well, I built something like this and saw that it works ) The core problem is model errors and finding genuinely stable patterns, automatically. That has been the basic idea from the start; the approaches can vary from there. One model cannot correct itself; two can.

Let's say you train a model and it's bad. What do you do? Sort through things by hand? No way; people aren't built for that kind of grunt work, so you replace the human with a second model.
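The two-model scheme described here resembles what the literature calls meta-labeling: a second model learns where the first one is wrong and filters its signals. A hedged sketch on purely synthetic data (scikit-learn, and none of this is the author's actual pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
# Noisy direction labels driven by the first feature (synthetic example).
y = (X[:, 0] + 0.5 * rng.normal(size=500) > 0).astype(int)

# Model 1 learns to trade (predicts direction).
primary = LogisticRegression().fit(X, y)
pred = primary.predict(X)

# Model 2 learns to filter: its target is "was model 1 correct here?".
meta_y = (pred == y).astype(int)
meta = GradientBoostingClassifier(n_estimators=50, random_state=0).fit(X, meta_y)

# Trade only where the meta-model trusts the primary signal.
take = meta.predict_proba(X)[:, 1] > 0.5
base_acc = (pred == y).mean()
filtered_acc = (pred[take] == y[take]).mean()
print(base_acc, filtered_acc)
```

In-sample, the filtered accuracy rises because the meta-model drops the low-confidence region where the primary model is close to a coin flip; out-of-sample validation would of course be needed before trusting this.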
 
Maxim Dmitrievsky #:
Let's say you train a model and it's bad. What do you do? Sort through things by hand? No way; people aren't built for that kind of grunt work, so you replace the human with a second model.
Look, finally get acquainted with optimization algorithms and fitness functions, and stop reinventing the bicycle with square wheels.
 
mytarmailS #:
Look, finally get acquainted with optimization algorithms and fitness functions, and stop reinventing the bicycle with square wheels.
That's different. Through optimization you get curve-fitting. Through analysis and correction of model errors there is also fitting, but you find stable patterns by discarding what's unnecessary. At the very least you find some plateau where there is stability. Through a simple genetic search that's harder; it's more manual fiddling.
 
Maxim Dmitrievsky #:
That's different. Through optimization you get curve-fitting. Through analysis and correction of model errors there is also fitting, but you find stable patterns by discarding what's unnecessary. At the very least you find some plateau where there is stability. Through a simple genetic search that's harder; it's more manual fiddling.

An elementary example.

You need to train an ML algorithm (AMO) to maximize profit. What will you do?


1) you make a target

2) you fit the models to a standard metric, e.g. RMSE (which metric is largely irrelevant)

3) you create a group of the best models

4) you choose from the group the model with the largest profit


And now the question: why do you think your group is the absolute top of models in any global sense? You have run the models through two subjective filters:

(1) your target and (2) the RMSE error measure


Isn't it better to change the weights (if it's a neural net) or create the rules (if it's a tree) directly toward the goal of maximum profit? The question is rhetorical... of course it's better, and faster.

The point is that you're missing other groups of models that do earn, and those groups make millions.

 
mytarmailS #:

An elementary example.

You need to train an ML algorithm (AMO) to maximize profit. What will you do?


1) you make a target

2) you fit the models to a standard metric, e.g. RMSE (which metric is largely irrelevant)

3) you create a group of the best models

4) you choose from the group the model with the largest profit


And now the question: why do you think your group is the absolute top of models in any global sense? You have run the models through two subjective filters:

(1) your target and (2) the RMSE error measure


Isn't it better to change the weights (if it's a neural net) or create the rules (if it's a tree) directly toward the goal of maximum profit? The question is rhetorical... of course it's better, and faster.

The point is that you're missing other groups of models that do earn, and those groups make millions.

I select by R2 of the balance curve, plus the minimum number of losing trades, together with the lowest entropy (logloss) and maximum accuracy. So the models are the most profitable by default. It turns out to be a combined criterion. It would also be nice to add cross-validation results to the evaluation; I haven't gotten around to that yet.
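The combined criterion described (R2 of the balance curve, few losing trades, low logloss, high accuracy) could be sketched like this. The equal weighting of the terms is an arbitrary choice for illustration, not the author's actual formula:

```python
import numpy as np

def r2_linear(equity):
    """R^2 of the equity curve against its best straight line --
    rewards a steadily rising balance."""
    t = np.arange(len(equity))
    slope, intercept = np.polyfit(t, equity, 1)
    fit = slope * t + intercept
    ss_res = np.sum((equity - fit) ** 2)
    ss_tot = np.sum((equity - np.mean(equity)) ** 2)
    return 1.0 - ss_res / ss_tot

def combined_score(equity, trade_pnl, logloss, accuracy):
    """Hypothetical combined criterion: maximize R2 and accuracy,
    penalize losing trades and logloss (weights are illustrative)."""
    losing_frac = np.mean(np.asarray(trade_pnl) < 0)
    return r2_linear(equity) - losing_frac - logloss + accuracy

equity = np.linspace(1.0, 1.5, 50) + 0.01 * np.sin(np.arange(50))
score = combined_score(equity, trade_pnl=[0.02, -0.01, 0.03],
                       logloss=0.4, accuracy=0.6)
print(round(score, 3))
```

Models would then be ranked by this single scalar, which is what makes the most profitable ones come out on top "by default".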
 
Maxim Dmitrievsky #:
I select by R2 of the balance curve, plus the minimum number of losing trades, together with the lowest entropy (logloss) and maximum accuracy. So the models are the most profitable by default.

You select from ready-made models, whereas one could instead create the model. That's the difference.

 
mytarmailS #:

You select from ready-made models, whereas one could instead create the model. That's the difference.

That works when you know what to create and why. They're not ready-made: the trades are sampled randomly, as in the articles. There are no a priori assumptions or heuristics at any stage of data preparation, only some ranges of values such as the maximum and minimum deal holding times.

Essentially, the whole mechanism works to find an unknown: something is presumably there, but we don't know what.