Machine learning in trading: theory, models, practice and algo-trading - page 3141

 
Maxim Dmitrievsky #:

how much longer do you have left? )

Can you take enough features related to the time series, plus any labels that show a profit in the tester, and build a robust model out of them?

After all, every derivative of the time series is related to it :)


The task is harder in other domains, where it is not clear where a feature comes from or why it is needed. Big data is full of such rubbish, which is very hard to filter out, and full of spurious correlations as a consequence.

Our task looks almost primitive by comparison, if we take the time series and its derivatives, because all the features are related to it.

But we still have to work on the algorithm and logic for matching labels with features. There can be many such logics. So you do yours, and we will do ours.

I have already written why I like causal inference ("kozul"): I arrived at it myself by reasoning, and it fitted organically into my idea.

I'm not interested in "being related".

I'm interested in a predictor's ability to predict classes. For example, a moving average ("mashka") certainly has a relation to the quote; you can see it with the naked eye. But the ability of a moving average (like any other smoothing algorithm) to predict classes is almost zero.

Last winter I found out that my teacher-predictor pairs with a classification error of 10% to 20% show much larger classification errors in the EA, large enough to eat up all the profit from error-free classification.

So a few months ago I changed the teacher, and now I am trying to select predictors that are able to predict classes, and this ability should not change over time.

 
СанСаныч Фоменко #:

I'm not interested in "being related".

I'm interested in a predictor's ability to predict classes. For example, a moving average ("mashka") certainly has a relation to the quote; you can see it with the naked eye. But the ability of a moving average (like any other smoothing algorithm) to predict classes is almost zero.

Last winter I found out that my teacher-predictor pairs with a classification error of 10% to 20% show much larger classification errors in the EA, large enough to eat up all the profit from error-free classification.

So a few months ago I changed the teacher, and now I am trying to select predictors that are able to predict classes, and this ability should not change over time.

Let's put it in terms everyone can understand.

Everyone is interested in the ability of predictors to predict classes.

Now let's see what you do: you take 2 random series (a feature and a target) and check the predictive ability (whether it is done in a sliding window or not does not matter right now).

So you do the usual greedy search over everything and anything. There is probably some way to enumerate all possible combinations, and it would take not 10 years but plus infinity.

But you might get lucky and be satisfied with an intermediate result.

Are there any other great unrevealed mysteries of this approach? Why is it so touted?
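The combinatorial point above is easy to make concrete. A rough count of the search space, using purely illustrative numbers (none of these counts come from the thread), shows why exhaustive enumeration of feature-and-labeling combinations is out of reach:

```python
from math import comb

# Hypothetical search space: these counts are illustrative, not from the thread.
n_features = 100   # candidate features derived from the time series
subset_size = 8    # features fed to the model, as mentioned later in the thread
n_labelings = 50   # candidate ways to label profitable trades

# Number of (feature subset, labeling) pairs an exhaustive search would evaluate.
n_combos = comb(n_features, subset_size) * n_labelings

print(f"{n_combos:.3e} model fits")  # prints 9.304e+12 model fits
```

Even at a thousand model fits per second, that is centuries of compute, which is why greedy or heuristic selection is used instead of a full enumeration.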

 
Maxim Dmitrievsky #:

Let's put it in terms everyone can understand.

Everyone is interested in the ability of predictors to predict classes.

Now let's see what you do: you take 2 random series (a feature and a target) and check the predictive ability (whether it is done in a sliding window or not does not matter right now).

So you do the usual greedy search over everything and anything. There is probably some way to enumerate all possible combinations, and it would take not 10 years but plus infinity.

But you might get lucky and be satisfied with an intermediate result.

Are there any other great unrevealed mysteries of this approach? Why is it so touted?

There is no exhaustive search.

Three lines of R code calculate a predictor's ability to predict a particular class of the teacher as a numeric value. It differs from predictor to predictor; by my algorithms (I have several, and for 100 predictors they run in less than a second) the larger the value, the better. Moreover, for some predictors this class-prediction ability changes little as the window moves, staying within 10% of sd, while for others it varies by more than 100% of sd. I select 5-8 predictors, which I feed to the model.
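The actual three-line R metric is not shown in the thread, so the following is only a stand-in sketch of the idea: score each predictor's relation to the class labels (here, mutual information after quantile binning), recompute the score in sliding windows, and use the relative sd of the scores as the stability criterion. All function names and parameter values below are mine, not the author's:

```python
import numpy as np

rng = np.random.default_rng(0)

def class_ability(x, y, bins=10):
    """Mutual information (in nats) between a quantile-binned predictor and
    class labels. A stand-in score; the author's 3-line R metric is unknown."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1])
    xb = np.digitize(x, edges)
    score = 0.0
    for b in np.unique(xb):
        for c in np.unique(y):
            p_xy = np.mean((xb == b) & (y == c))
            p_x, p_y = np.mean(xb == b), np.mean(y == c)
            if p_xy > 0:
                score += p_xy * np.log(p_xy / (p_x * p_y))
    return score

def stability(x, y, window=1500, step=500):
    """Score the predictor in sliding windows of `window` observations;
    return mean score and relative sd (sd / mean) across windows."""
    scores = np.array([class_ability(x[i:i + window], y[i:i + window])
                       for i in range(0, len(x) - window + 1, step)])
    return scores.mean(), scores.std() / scores.mean()

# Synthetic data: one informative predictor, one pure-noise predictor.
y = rng.integers(0, 2, 6000)
informative = y + rng.normal(0, 1.0, 6000)
noise = rng.normal(0, 1.0, 6000)

m1, rel_sd1 = stability(informative, y)
m2, rel_sd2 = stability(noise, y)
print(f"informative: score={m1:.3f}, relative sd={rel_sd1:.2f}")
print(f"noise:       score={m2:.3f}, relative sd={rel_sd2:.2f}")
```

Under this reading, a predictor worth keeping has both a high mean score and a small relative sd across windows; a predictor whose score swings by more than its own sd is discarded.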

 
СанСаныч Фоменко #:

There is no exhaustive search.

Three lines of R code calculate a predictor's ability to predict a particular class of the teacher as a numeric value. It differs from predictor to predictor; by my algorithms (I have several, and for 100 predictors they run in less than a second) the larger the value, the better. Moreover, for some predictors this class-prediction ability changes little as the window moves, staying within 10% of sd, while for others it varies by more than 100% of sd. I select 5-8 predictors, which I feed to the model.

Is the window the time frame of the quote history?
 
СанСаныч Фоменко #:

Moreover, for some predictors the ability to predict an individual class does not change much as the window moves, staying within 10% of sd

Name at least one :) So it is clear where to look for such predictors.

 
Renat Akhtyamov #:
Is the window the time frame of the quote history?

The window is the number of predictor values fed to the model's input. For me it is 1500 bars on H1.

 
Evgeni Gavrilovi #:

Name at least one :) So it is clear where to look for such predictors.

You want too much.

 
СанСаныч Фоменко #:

So I switched the teacher a few months ago, and now I am trying to select predictors for it that are able to predict classes, and this ability should not change over time.

That is quite a long selection, especially when the selection itself runs in seconds:

SanSanych Fomenko #: for 100 predictors it runs in less than a second)
 
СанСаныч Фоменко #:

There is no exhaustive search.

Three lines of R code calculate a predictor's ability to predict a particular class of the teacher as a numeric value. It differs from predictor to predictor; by my algorithms (I have several, and for 100 predictors they run in less than a second) the larger the value, the better. Moreover, for some predictors this class-prediction ability changes little as the window moves, staying within 10% of sd, while for others it varies by more than 100% of sd. I select 5-8 predictors, which I feed to the model.

Well, can I see a couple of graphs with OOS?

 
Maxim Dmitrievsky #:

Well, can I see a couple of graphs with OOS?

Not yet for the new teacher.

I am trying to solve the problem of coarsening predictor values. It seems to me that a classification error can occur when a predictor's value differs slightly from the values the model was trained on. I once tried converting all the predictors into nominal (categorical) form with the same teacher, but it gave no result. However, each nominal variable had only a handful of values. Maybe several hundred are needed? I am working on it, but many other interesting questions keep getting in the way.
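The coarsening idea above can be sketched as quantile binning: map each real-valued predictor to a small integer level, so that slightly shifted values land in the same bin the model saw during training. The bin count is exactly the knob being questioned (a handful vs. several hundred levels). This is an illustrative sketch, not the author's code, and the function name is hypothetical:

```python
import numpy as np

def to_nominal(x, n_bins):
    """Convert a real-valued predictor into a nominal one with n_bins levels
    via quantile binning; nearby values collapse to the same integer level."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.digitize(x, edges)  # integer levels 0 .. n_bins-1

rng = np.random.default_rng(1)
x = rng.normal(size=10_000)  # synthetic predictor values

coarse = to_nominal(x, 5)    # a handful of levels: aggressive coarsening
fine = to_nominal(x, 300)    # several hundred levels: mild coarsening

print(len(np.unique(coarse)), len(np.unique(fine)))  # prints: 5 300
```

The trade-off this exposes: too few bins destroy the information the model needs, while too many bins reproduce the original sensitivity to small value shifts, which may be why the handful-of-levels experiment gave no result.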