Machine learning in trading: theory, models, practice and algo-trading - page 3412

 
СанСаныч Фоменко #:

None.

Confirmed by his personal practice.

Even his left-hand OOS, however contorted, is no guarantee against look-ahead.

Everything Maxim writes is self-promotion, backed neither by real trading nor even by trivial testing in the MT5 tester.

Anti-advertising, then.
 
mytarmailS #:
We can't know anything for sure, true, but we can estimate the probability. I wonder what probability it actually works with.


I also have a method that adds confidence that the TS will work: looking at the confidence intervals of ARIMA forecasts. There is also Prado's PBO (probability of backtest overfitting), which also improves the odds.

What I'm getting at is this: we can build a classifier that says whether a TS will work on new data or not, using a bunch of our different methods as features; together they are stronger than any one of them alone.


We can add a lot of features; if the probability is above 60-70%, that's already a grail, provided we can generate a lot of TSs (>= 50).
And what if it's a mish-mash of different models?
There is still room to move; I'm looking towards scalpers. And I still want to finish the LLM training.
 
Maxim Dmitrievsky #:
And what if it's a mish-mash of different models?

No, it won't be a mish-mash, more like a beautiful soup.

 
mytarmailS #:

No, it won't be a mish-mash, more like a beautiful soup.

I've figured out how to put clustering in there, but I need to experiment. I want to select signals with 90% winning outcomes; otherwise I get a lot of parasitic trades. Or a hard martingale, so that sometimes I blow up and sometimes make 1000% :)
 
Maxim Dmitrievsky #:
I've figured out how to put clustering in there, but I need to experiment. I want to select signals with 90% winning outcomes; otherwise I get a lot of parasitic trades. Or a hard martingale, so that sometimes I blow up and sometimes make 1000% :)

Put a threshold on the model's class confidence and it will keep only the signals it's more confident about.
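That thresholding idea in a quick sketch (toy data and sklearn are my assumptions, not the poster's stack): act only on predictions whose class probability clears a cutoff, skipping the rest:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
p = clf.predict_proba(Xte)[:, 1]   # probability of class 1

thr = 0.7                          # act only when the model is confident
mask = (p >= thr) | (p <= 1 - thr) # confident in either class
acc_all = (clf.predict(Xte) == yte).mean()
acc_conf = ((p[mask] >= 0.5).astype(int) == yte[mask]).mean()
# typically acc_conf beats acc_all, at the cost of skipping many trades
```

The trade-off the thread mentions next is exactly the `mask` shrinking: raise `thr` and fewer and fewer samples survive.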

 
mytarmailS #:

Put a threshold on the model's class confidence and it will keep only the signals it's more confident about.

They're not always that well calibrated; I'd say never )
 
Maxim Dmitrievsky #:
They're not always that well calibrated; I'd say never )

I don't know... if you filter the data like that, the recognition quality on what remains always increases. I've done it many times and never got worse.

The only problem is that with a high threshold there's almost no data left ))))

 
mytarmailS #:

I don't know... if you filter the data like that, the recognition quality on what remains always increases. I've done it many times and never got worse.

The only problem is that with a high threshold there's almost no data left ))))

Well, that's why calibration came up: depending on the model, the predicted probabilities are wrong. Even if you calibrate them afterwards, they still aren't perfect. Still, calibration is always useful when thresholds are used.
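For reference, a hedged sketch of what post-hoc calibration looks like with sklearn's `CalibratedClassifierCV` (toy data; Gaussian naive Bayes chosen only because its raw probabilities are notoriously overconfident):

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.metrics import brier_score_loss
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

raw = GaussianNB().fit(Xtr, ytr)  # often badly calibrated out of the box
cal = CalibratedClassifierCV(GaussianNB(), method="isotonic", cv=3).fit(Xtr, ytr)

# Brier score: mean squared error of the predicted probability (lower = better)
b_raw = brier_score_loss(yte, raw.predict_proba(Xte)[:, 1])
b_cal = brier_score_loss(yte, cal.predict_proba(Xte)[:, 1])
# b_cal is usually (not always) lower, matching the point above: calibration
# helps but does not make the probabilities perfect
```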

 
Rorschach #:

Question for connoisseurs...

Even if it's due to volatility, equity curves don't solve it.

 
Maxim Dmitrievsky #:
I've figured out how to put clustering in there, but I need to experiment.

It turned out simple and tasteful. The same two models are trained, without kozul (causal inference). First, clustering is done on a set of low-dimensional features (for example, volatility) to identify different market regimes. Then the meta-model is trained to predict the cluster (its training error is usually minimal), and the second model is trained to trade only on samples within that cluster. The same is done for every cluster; then you can pick the best.
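A rough sketch of that pipeline as I read it (all names and the synthetic data are my assumptions, not Maxim's code): cluster low-dimensional regime features, train a meta-model to predict the cluster, then train one trading model per cluster:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 3000
# toy regime feature: volatility alternating between calm and turbulent days
vol = (np.where(rng.random(n) < 0.5, 0.3, 1.5) * np.abs(rng.normal(size=n)))
vol = vol.reshape(-1, 1)

X = rng.normal(size=(n, 5))                                # stand-in trading features
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)   # stand-in buy/sell labels

# 1) clustering on the low-dimensional features -> market regimes
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(vol)
regime = km.labels_

# 2) meta-model learns to predict the regime (this error is usually tiny,
#    since the clusters are by construction separable in these features)
meta = RandomForestClassifier(random_state=0).fit(vol, regime)

# 3) one base model per regime, trained only on that cluster's samples
base = {k: RandomForestClassifier(random_state=0).fit(X[regime == k], y[regime == k])
        for k in range(3)}

# inference: route each new sample to its regime's model
k_new = meta.predict(vol[:1])[0]
pred = base[k_new].predict(X[:1])
```

With this routing in place, "then you can choose" amounts to backtesting each regime's model separately and keeping the clusters where it trades well.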