Machine learning in trading: theory, models, practice and algo-trading - page 2554

 
Vladimir Baskakov #:
Here you are solving the problem in such a way as not to solve it

I mean the same thing: they set a problem and then heroically try not to solve it). As I understand it, the more complicated the problem statement, the more excuses there are for a negative result, and the more exclamations, sympathy and exhortations toward even more complicated projects))).

 
Farkhat Guzairov #:

I mean the same thing: they set problems and then heroically try not to solve them )))). As I understand it, the more complicated the problem, the more excuses there are for a negative result, and the more exclamations, sympathy and exhortations toward even more complicated projects))).

I can only imagine how they torment women).
 
mytarmailS #:

I remember...

I have a slightly different idea...

If you can predict the distribution of future quotes well, say 50 candles ahead, then from that distribution you can Monte Carlo a few thousand rows and train the model on them; that way the model should, in theory, work adequately on the new 50 candles...

But if the class is predicted incorrectly, then Monte Carlo will not help

You can play with the window size and look at the quality of generalization for different sizes. There is a chance of catching some cycles
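The Monte Carlo idea above can be sketched minimally. Everything below is a hypothetical illustration: it assumes the predicted 50-candle distribution is a simple normal over returns, with made-up parameters; the thread does not say what the real predicted distribution looks like.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical output of the "predict the distribution" step:
# a normal distribution over per-candle returns.
predicted_mu, predicted_sigma = 0.0002, 0.003
horizon = 50      # candles ahead
n_paths = 2000    # "a few thousand rows" to Monte Carlo

# Draw synthetic 50-candle return paths from the predicted distribution
synthetic_returns = rng.normal(predicted_mu, predicted_sigma,
                               size=(n_paths, horizon))

# Compound the returns into synthetic price paths from the last known price
last_price = 1.1000
synthetic_prices = last_price * np.cumprod(1.0 + synthetic_returns, axis=1)

# These rows could then serve as extra training data; if the predicted
# distribution is right, the model should hold up on the real next 50 candles.
print(synthetic_prices.shape)
```

If the distribution itself is predicted incorrectly, the synthetic rows inherit that error, which is exactly the caveat raised in the post.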

 
Maxim Dmitrievsky #:

But if the class is predicted incorrectly, then Monte Carlo will not help

You can play with the window size and look at the quality of generalization for different sizes. There is a chance of catching some cycles

Why is the class predicted incorrectly? Because the quotes are not what the model expects, i.e. not from the expected distribution. If I could generate quotes from the right distribution, then everything would be fine...
 
Maxim Dmitrievsky #:

What do you mean, "sometimes"?

Either there is some Pipeline that has proven itself, or this is just idle speculation.

Making noise a separate class does not, in theory, improve the model (the noise stays inside the model and does not go anywhere)

About the drift - it's the basics, the bias-variance tradeoff

"Sometimes" means that it depends on the model, the predictors used, and the transformations. And there is a Pipeline that has proven itself.

Theoretically it may not improve the model, but in practice it improves the result. "The noise stays inside the model and doesn't disappear": what does that mean?

As for "the drift - it's the basics, the bias-variance tradeoff": it's not about that at all. If you don't understand it, don't write. Read and study.

Be modest, be modest...
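Since the bias-variance tradeoff keeps coming up, here is a toy sketch of what it means, on a hypothetical polynomial-fit example that has nothing to do with anyone's trading setup: a too-simple model underfits (high bias), a too-complex one chases the noise (high variance), and test error is typically lowest somewhere in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of a smooth target function
x = np.linspace(0, 1, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=x.shape)

# Noise-free test grid to measure generalization
x_test = np.linspace(0, 1, 200)
y_test_true = np.sin(2 * np.pi * x_test)

errors = {}
for degree in (1, 4, 15):
    coeffs = np.polyfit(x, y, degree)        # fit polynomial of given degree
    pred = np.polyval(coeffs, x_test)
    errors[degree] = np.mean((pred - y_test_true) ** 2)

# Degree 1 underfits (high bias); degree 15 typically overfits the
# training noise (high variance); degree 4 sits in between.
print(errors)
```

Drift (the distribution changing over time) is a separate issue from this tradeoff, which is presumably the point of the objection above.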


 
Vladimir Perervenko #:

"Sometimes" means that it depends on the model, the predictors used, and the transformations. And there is a Pipeline that has proven itself.

Theoretically it may not improve the model, but in practice it improves the result. "The noise stays inside the model and does not disappear": what does that mean?

As for "the drift - it's the basics, the bias-variance tradeoff": it's not about that at all. If you don't understand it, don't write. Read and study.

Be modest, be modest...


You put the noise into a 3rd class so as not to trade on it? It's easier to predict the occurrence of noise than the buy or sell class label.

That's exactly what I mean.
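The "third class for noise" idea can be sketched as a simple labeling rule. This is purely hypothetical: the thread never specifies how noise is actually defined, so the threshold rule below is an assumption for illustration.

```python
import numpy as np

def label_three_class(prices, horizon=5, threshold=0.001):
    """Label each bar: 1 = buy, -1 = sell, 0 = noise / no-trade.

    Bars whose forward return over `horizon` bars is smaller in
    magnitude than `threshold` are treated as noise and kept out of
    trading, instead of being forced into the buy or sell class.
    """
    prices = np.asarray(prices, dtype=float)
    fwd_return = prices[horizon:] / prices[:-horizon] - 1.0
    labels = np.zeros(fwd_return.shape, dtype=int)
    labels[fwd_return > threshold] = 1
    labels[fwd_return < -threshold] = -1
    return labels

# Toy example: rising, then flat, then falling prices
prices = [100, 100.5, 101, 101.5, 102, 102.5, 102.5, 102.5, 102.5,
          102.5, 102.5, 102, 101.5, 101, 100.5, 100]
print(label_three_class(prices))
```

The flat stretch in the middle gets label 0, so a model trained on these labels learns "stay out" as its own class rather than a forced buy/sell guess.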

 

Vladimir seems to be trying to fight non-stationarity by throwing out examples that (presumably) belong to an irrelevant distribution.

The compromise between bias and variance is sought under the assumption of a constant distribution (the joint distribution of predictors and outputs).

 
Aleksey Nikolayev #:

Vladimir seems to be trying to fight non-stationarity by throwing out examples that (presumably) belong to an irrelevant distribution.

The compromise between bias and variance is sought by assuming a constant distribution (joint distribution of predictors and output)

Removing outliers is not a fight against non-stationarity...

 
Dmytryi Nazarchuk #:

Removing outliers is not a fight against non-stationarity...

Depends on the nature of their origin.
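"Depends on the nature of their origin" can be made concrete with a small sketch. The rolling z-score rule and all its thresholds below are hypothetical, not something specified in the thread: an isolated flagged bar is plausibly a one-off outlier (data error, news spike) that is safe to drop, while a long run of flagged bars looks more like a regime change, where dropping points only hides the non-stationarity.

```python
import numpy as np

def flag_outliers(returns, window=20, z_cut=4.0):
    """Flag returns that are extreme relative to a trailing window."""
    returns = np.asarray(returns, dtype=float)
    flags = np.zeros(returns.shape, dtype=bool)
    for i in range(window, len(returns)):
        past = returns[i - window:i]          # trailing window, excludes bar i
        sigma = past.std()
        if sigma > 0 and abs(returns[i] - past.mean()) > z_cut * sigma:
            flags[i] = True
    return flags

# Synthetic quiet returns with one isolated spike
rng = np.random.default_rng(1)
r = rng.normal(0, 0.001, 300)
r[150] = 0.02                                 # one-off 20-sigma spike
flags = flag_outliers(r)
print(flags.sum(), flags[150])
```

The single spike is flagged; a genuine shift in the distribution would instead flag many consecutive bars, which is the case where deletion stops being an answer.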

 
Aleksey Nikolayev #:

Vladimir seems to be trying to fight non-stationarity by throwing out examples that (presumably) belong to an irrelevant distribution.

The compromise between bias and variance is sought by assuming a constant distribution (joint distribution of predictors and outputs)

Assuming that the model should also work in the future ). Errors of all kinds (including noise) will always be there; the problem is to find a balance. So in fact we are talking about the same thing.

Actually, I was solving this problem in a different way, which is why I'm asking leading questions