Machine learning in trading: theory, models, practice and algo-trading - page 1201

 
Aleksey Nikolayev:

Everything is spoiled by non-stationarity, which can be both sharp and creeping.

It could be handled by selecting optimal observation weights... for example, by varying the posterior weighting... from uniform to exponential
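For illustration only, a rough MQL5 sketch of that kind of weighting (the function name and the single lambda parameter are made up): lambda = 0 gives uniform weights, while a larger lambda shifts the weight towards recent observations.

// Hypothetical sketch: observation weights that vary from uniform (lambda = 0)
// to exponential decay in favour of recent data (lambda > 0).
void FillObservationWeights(double &weights[], const int n, const double lambda)
  {
   ArrayResize(weights, n);
   double sum = 0.0;
   for(int i = 0; i < n; i++)                        // i = 0 is the oldest observation
     {
      weights[i] = MathExp(-lambda * (n - 1 - i));   // newest observation always gets weight 1
      sum += weights[i];
     }
   for(int i = 0; i < n; i++)
      weights[i] *= n / sum;                         // normalise so the mean weight is 1
  }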

 
Aleksey Vyazmikin:

If you're talking about me, I showed the curves on the test sample and the exam sample - I don't even look at the training sample...

You're still looking at them anyway... I mean there is what you can only look at and drool over, and what you can actually put into trading...

we already have a prior and a posterior )) we just need to update them with weights... everything ingenious is simple

as Alexander would say ... get your bags ready

 
Maxim Dmitrievsky:

You're still looking at them anyway... I mean there is what you can only look at and drool over, and what you can actually put into trading...

I'm not so much drooling as studying - I've surrounded it with metrics and I'm feeling it out. I've already put together about 200 predictors (many of them are spread across 10 columns) that characterize the model. )

By evening the models will be ready, and I will try to learn to predict which models don't lose :)

 
By the way, I also want a predictor that gives the drawdown of the prediction balance (add +1 for each correct prediction and -1 for each wrong one). Does anyone have a function for this in MQL (similar to the drawdown of a normal balance)?
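For reference, a minimal MQL5 sketch of such a function (not a standard-library call, just one possible implementation; the array convention is assumed):

// Hypothetical sketch: maximum drawdown of the +1/-1 "prediction balance".
// results[i] is assumed to be 1 for a correct prediction and 0 for a wrong one.
double PredictionBalanceDrawdown(const int &results[])
  {
   double balance = 0.0, peak = 0.0, max_dd = 0.0;
   for(int i = 0; i < ArraySize(results); i++)
     {
      balance += (results[i] > 0 ? 1.0 : -1.0);   // +1 if predicted correctly, -1 if wrong
      if(balance > peak)
         peak = balance;                          // new high of the prediction balance
      if(peak - balance > max_dd)
         max_dd = peak - balance;                 // deepest fall from the last high
     }
   return max_dd;
  }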
 

I'm looking at the graph of profit versus the number of trees in the model (512 models)

and it looks like models with more than 60 trees are less likely to lose, or else the sample is just small...

 
Aleksey Vyazmikin:

I'm looking at the graph of profit versus the number of trees in the model (512 models)

and it looks like models with more than 60 trees are less likely to lose, or else the sample is just small...

How do you manage to build that many models manually... like in that TS league of yours...

Ideally they should be searched with a GA or full brute force. I've written a new article on how to do it, which hasn't been published yet. All done with MQL.
 
Maxim Dmitrievsky:

How do you manage to build that many models manually... like in that TS league of yours...

Ideally they should be searched with a GA or full brute force. I've written a new article on how to do it, which hasn't been published yet. All done with MQL.

Why manually? For CatBoost I made a batch file with a loop that generates the models according to the parameters; the setup file with the model parameters is generated by a script in MT5. Another script in MT5 processes the results, and the output is a summary file with the characteristics of the models.

It would also be good to automate drawing the charts and saving them.
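For illustration, a rough sketch of the kind of MT5 script that could write such a setup file (the file name and parameter grid here are made up; --iterations and --depth are standard CatBoost command-line options):

// Hypothetical sketch of an MT5 script that writes one line of CatBoost
// command-line parameters per model, for a batch file to loop over.
void OnStart()
  {
   int handle = FileOpen("catboost_params.txt", FILE_WRITE | FILE_TXT | FILE_ANSI);
   if(handle == INVALID_HANDLE)
      return;
   for(int trees = 10; trees <= 200; trees += 10)            // made-up parameter grid
      for(int depth = 4; depth <= 8; depth += 2)
         FileWrite(handle, StringFormat("--iterations %d --depth %d", trees, depth));
   FileClose(handle);
  }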
 
Aleksey Vyazmikin:

Why manually? For CatBoost I made a batch file with a loop that generates the models according to the parameters; the setup file with the model parameters is generated by a script in MT5. Another script in MT5 processes the results, and the output is a summary file with the characteristics of the models.

It would also be good to automate drawing the charts and saving them.

Ah, cool, cool, that's the level )

 
Maxim Dmitrievsky:

Ah, cool, cool, that's the level )

Thanks.

Here I decided to think about automating the selection of the probability threshold that splits the classification into 0 and 1. I calculated the balance in 0.1 increments and was horrified by the result on the test sample.

the same models on the test sample

It turns out that my test sample is very favorable to the strategy even without any additional ML conditions, which apparently gets in the way of learning (training is done on the training sample, and model selection is done on the test sample). What do you think?
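For illustration, a minimal sketch of the threshold scan described above (hypothetical MQL5; prob[] and profit[] stand in for the model outputs and the per-trade results):

// Hypothetical sketch: step the classification threshold in 0.1 increments and
// print the balance obtained when only signals above the threshold are traded.
void ScanThresholds(const double &prob[], const double &profit[])
  {
   for(double th = 0.0; th <= 0.91; th += 0.1)    // 0.91 guards against rounding of the 0.1 steps
     {
      double balance = 0.0;
      for(int i = 0; i < ArraySize(prob); i++)
         if(prob[i] >= th)
            balance += profit[i];                 // trade only when the model is confident enough
      PrintFormat("threshold=%.1f  balance=%.2f", th, balance);
     }
  }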

 
Aleksey Vyazmikin:

Thanks.

Here I decided to think about automating the selection of the probability threshold that splits the classification into 0 and 1. I calculated the balance in 0.1 increments and was horrified by the result on the test sample.

the same models on the test sample

It turns out that my test sample is very favorable to the strategy even without any additional ML conditions, which apparently gets in the way of learning (training is done on the training sample, and model selection is done on the test sample). What do you think?

I don't really understand what's in the pictures or what the essence of the problem is.

I've made a lot of model variations myself; now I'm trying to figure out which one to pick for monitoring :D or whether to improve it further.

In short... I think these approaches don't feed the trade exits in correctly, whether it's zigzags or some other nonsense.

Because for each sliding-window size there should be its own distribution from which the trades are drawn; then the model fits better, including on the test sample (whereas zigzag and similar exits are very deterministic in themselves, so there are few degrees of freedom left for fitting). The last thing left is to do exactly that, i.e. a more thorough enumeration of the exits, and then there really will be nothing else left to do there.
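Read that way, a toy sketch of the idea could look like this (hypothetical MQL5, my own naming): instead of a deterministic zigzag exit, the holding time of each sampled trade is drawn at random, with a scale tied to the window size.

// Hypothetical toy sketch: a random holding time (in bars) whose average
// grows with the sliding-window size, instead of a deterministic exit rule.
int RandomHoldingBars(const int window_size)
  {
   double u = (MathRand() + 1.0) / 32768.0;            // MathRand() returns 0..32767, so u is in (0, 1]
   int bars = (int)MathCeil(u * 2.0 * window_size);    // roughly uniform on [1, 2*window_size]
   return MathMax(1, bars);
  }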

As inputs: price increments with different lags, the old way, with feature self-selection through importances and maybe through PCA to get rid of correlation; I've made such bot variants too. But in general, using PCA is a flawed idea (although, again, the Internet says the opposite): not only do the samples have to be centered, but on new data these components slowly turn into garbage.

All this gives something like the following, with pretty much no fuss, just a 10-minute wait:

The possibility of further improvement actually seems dubious when the model is already reproducing more than 100% of the training result.

Maybe more can be squeezed out of it on a good stretch of the chart or with better tools.