Machine learning in trading: theory, models, practice and algo-trading - page 2763

 

that's counterfactual reasoning - as if even the features of a deal, and of what to sell, could be generalised... and some people find time to discuss others (or only that) - to spread gossip in the thread... what the goals are - that's the bottom line

 
Maxim Dmitrievsky #:
Object features: tail, whiskers, ears. Object: cat. Generalise to all objects of the same type. Features of something to sell: ... ... ... . Generalise. If they don't generalise, then either the features are wrong or the object under study is.

It's easier with a cat; here we only have price increments and time)))) To generalise means they are roughly similar. That is, either the increment from one value to another, or the price from one level to another, or a similar up-down pattern, or the time, say from five to six.))))

 
JeeyCi #:

that's counterfactual reasoning - as if even the features of a deal, and of what to sell, could be generalised... and some people find time to discuss others (or only that) - to spread gossip in the thread... what the goals are - that's the bottom line

A normal discussion is one where there is a misunderstanding on some issue. And countering an interlocutor on the facts is a common thing among individual sportsmen)))).

 
Valeriy Yastremskiy #:

It's easier with a cat; here we only have price increments and time)))) To generalise means they are roughly similar. That is, either the increment from one value to another, or the price from one level to another, or a similar up-down pattern, or the time, say from five to six.))))

There are two entities in ML: the target (the teacher) and the features (the predictors)....

There is a very large variety of both.

Especially diverse are the features, whose number is limited only by imagination. Experience lets you reject whole classes of features at once: for example, you cannot use any variation of a moving average (mashka) as a feature.

I have come up with about 170 predictor features whose predictive power has a standard deviation of less than 50%. When training the model, in several stages I select from 5 to 10 of them and make predictions on those. I redo this selection every hour (the model runs on H1), and the number and names of the selected features are almost always different.
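The hourly re-selection described above can be sketched in Python. Everything here is an illustrative assumption (synthetic data, a plain correlation score, the names `select_features` and `k=10`), not the author's actual pipeline:

```python
# Sketch: from a large pool of candidate features, pick a small subset at
# each step by a simple univariate score, then predict on that subset only.
# Synthetic data and the correlation-based score are assumptions for
# illustration, not the method described in the post.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features = 500, 170          # e.g. a pool of 170 candidates
X = rng.normal(size=(n_samples, n_features))
# Synthetic binary target that truly depends on a few of the features
y = (X[:, 0] + 0.5 * X[:, 3] - X[:, 7]
     + rng.normal(scale=2.0, size=n_samples) > 0).astype(int)

def select_features(X, y, k=10):
    """Rank features by |correlation with the target| and keep the top k."""
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(scores)[::-1][:k]

# On real H1 data this selection would be re-run on a sliding window at
# every new bar, which is why the chosen subset keeps changing.
selected = select_features(X, y, k=10)
print(sorted(selected.tolist()))
```

Because the score is recomputed on each new window, noise features drift in and out of the top-k, matching the observation that the number and names of selected features differ almost every hour.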

 
Valeriy Yastremskiy #:

It's easier with a cat; here we only have price increments and time)))) To generalise means they are roughly similar. That is, either the increment from one value to another, or the price from one level to another, or a similar up-down pattern, or the time, say from five to six.))))

Well, just as you can assemble a dog out of belyashi, you can assemble a general pattern out of pieces of a time series, with some error
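A toy sketch of "assembling a general pattern out of pieces, with some error", under assumptions (a synthetic series, fixed-length aligned windows, a median distance cutoff; all names here are illustrative):

```python
# Sketch: collect fixed-length windows of a series, keep those that are
# "roughly similar" to a reference window, and average them into a
# template; the pointwise spread of the kept windows is the "error".
import numpy as np

rng = np.random.default_rng(1)

# Synthetic series: a repeating bump plus noise (illustrative assumption).
bump = np.sin(np.linspace(0, np.pi, 20))
series = np.concatenate([bump + rng.normal(scale=0.3, size=20)
                         for _ in range(30)])

W = 20
windows = np.array([series[i:i + W]
                    for i in range(0, len(series) - W + 1, W)])

ref = windows[0]
dist = np.linalg.norm(windows - ref, axis=1)
similar = windows[dist < np.median(dist)]   # the "roughly similar" pieces

template = similar.mean(axis=0)             # the assembled general pattern
error = similar.std(axis=0).mean()          # average pointwise spread
print(f"windows used: {len(similar)}, mean pointwise error: {error:.2f}")
```

The averaged `template` is the "assembled" pattern; `error` quantifies the "some error" with which the pieces agree.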
 
СанСаныч Фоменко #:

There are two entities in ML: the target (the teacher) and the features (the predictors)....

There is a very large variety of both.

Especially diverse are the features, whose number is limited only by imagination. Experience lets you reject whole classes of features at once: for example, you cannot use any variation of a moving average (mashka) as a feature.

I have come up with about 170 predictor features whose predictive power has a standard deviation of less than 50%. When training the model, in several stages I select from 5 to 10 of them and make predictions on those. I redo this selection every hour (the model runs on H1), and the number and names of the selected features are almost always different.

Did you invent them, or find them in the results of training / optimisation? And if invented, do the features have any relationship with time, or is there a time feature?

 
Maxim Dmitrievsky #:
Well, just as you can assemble a dog out of belyashi, you can assemble a general pattern out of pieces of a time series, with some error

By the way, there was no time in the pattern pieces in the code; time was handled manually (I sent you the code once, but I couldn't get it to work back then))). Is time in there now, or is it the same?

 
Valeriy Yastremskiy #:

Did you invent them, or find them in the results of training / optimisation? And if invented, do the features have any relationship with time, or is there a time feature?

Time-based features are not used. They have low predictive power and very high variation in that predictive power. Though they are still better than the moving-average variants.

 
Valeriy Yastremskiy #:

By the way, there was no time in the pattern pieces in the code; time was handled manually (I sent you the code once, but I couldn't get it to work back then))). Is time in there now, or is it the same?

I don't understand what you mean

I don't use time; it's already in the increments.
 
After oversampling, will the predictors' new predictive power be of any value? Has anyone thought about this?