Machine learning in trading: theory, models, practice and algo-trading - page 3321

 

That's still a long way off.

I'll try to put together a song.

It's hard to say, I've already forgotten it, and it's long; the music, if we do learn it, is a long melody:

 
Andrey Dik #:

Show me the graph.

Please.

Show on this graph the criteria where you need to stop the training.

at the minimum of the criterion on the whole sample, what's next?

 
when you fall in love with a new girl, you start listening to female music.
 

you know what I mean?

A real AI specialist needs red cigarettes and green cologne.

 
Maxim Dmitrievsky #:

at the minimum of the criterion on the whole sample, what's next?

Bingo!

Now you have finally realised that any learning is nothing but optimisation with a search for a global extremum. Or maybe you haven't realised it yet, but you will.

It cannot be otherwise: you always need an unambiguous criterion to stop learning, and this criterion is always designed so that it is a global extremum. Usually an integral criterion is designed (though not always). You named integral criteria.
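The stopping rule described here, halting training once the monitored criterion stops improving, can be sketched as a simple early-stopping loop. This is a minimal illustration, not anyone's actual trading code; the loss values and the `patience` threshold are hypothetical:

```python
# Early stopping: halt when the monitored criterion stops improving.
# The loss sequence below is synthetic; in practice it would come from
# evaluating the model on a validation sample after each epoch.

def early_stop_epoch(losses, patience=3):
    """Return the epoch to keep: the last one that improved the criterion,
    stopping the scan once `patience` epochs pass without improvement."""
    best = float("inf")
    best_epoch = 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # criterion stalled: stop searching
    return best_epoch

val_losses = [0.52, 0.44, 0.40, 0.38, 0.39, 0.41, 0.42, 0.43]
print(early_stop_epoch(val_losses))  # prints 3: the epoch with loss 0.38
```

Without some such criterion there is indeed no principled moment to stop, which is the point being made above.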

 
Andrey Dik #:

Bingo!

Now you have finally realised that any learning is nothing but optimisation with a search for a global extremum. Or maybe you haven't realised it yet, but you will.

It cannot be otherwise: you always need an unambiguous criterion to stop learning, and this criterion is always designed so that it is a global extremum. Usually an integral criterion is designed (though not always). You named integral criteria.

The original post was about model complexity, not extrema. You just keep pushing your own line and forgetting what I wrote.

That is, you are again engaging in p-hacking, i.e. stretching the data to fit your words.

 
Andrey Dik #:

Show me the graph.

Please.

Show on this graph the criteria where you need to stop the training.


Here is a typical plot of model fitting error.

It asymptotically approaches some bias value, offset from the axis.

The size of the bias is a property of the target-predictors pair. By optimising the parameters of a particular model you can gain some crumbs, but no optimisation can jump over the limit set by the "target-predictors" property.

If the bias is 45% of the error, it is impossible to get 10% lower by changing the model parameters; no optimisation will help.

And if you manage to find a "target-predictors" pair with a 20% error, it will stay around 20% whatever you do.

Moreover, if the errors on the training set and then on validation diverge by more than 5%, you need to work on the "target-predictors" pair in a meaningful way. If they cannot be brought together, the "target-predictors" pair will have to be discarded.
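The two rules of thumb in this post, a bias floor set by the target-predictors pair and a rework flag when train and validation errors diverge by more than 5%, can be sketched with hypothetical numbers (none of these figures come from a real model):

```python
# Sketch of both rules of thumb from the post; all numbers hypothetical.

def achievable_error(bias, reducible, tuning_gain):
    """Tuning shrinks only the reducible part; the pair's bias is a floor."""
    return bias + max(0.0, reducible - tuning_gain)

def pair_needs_rework(train_err, val_err, max_gap=0.05):
    """True if train/validation errors diverge by more than the gap."""
    return abs(val_err - train_err) > max_gap

# With a 45% bias, no amount of tuning gets below 0.45:
print(achievable_error(0.45, 0.10, 1.0))   # 0.45

# 20% on train vs 23% on validation: gap 3%, acceptable.
print(pair_needs_rework(0.20, 0.23))       # False
# 20% vs 31%: gap 11%, rework the target-predictors pair.
print(pair_needs_rework(0.20, 0.31))       # True
```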

 
Maxim Dmitrievsky #:

The original post was about model complexity, not extrema. You just keep pushing your own line and forgetting what I wrote.

That is, you're p-hacking again, or stretching the data to fit your words.

What do you mean, "originally"? We discussed model complexity separately. Back then we agreed that increasing model complexity is effective only up to a certain point, after which effectiveness drops, and that is true; I didn't argue with it and I confirm it. Then I merely suggested that efficiency might increase dramatically if you scale the model up very significantly, because no one here has done that before (and I can see why).

I have said for a very long time that any learning is optimisation with a search for a global extremum, but you (and some others) denied it, saying you are not an "optimiser". Now I have clearly shown you that learning can be stopped only when a global extremum is found; there is simply no other way (you don't know when to stop learning, so you need a criterion for it). That is why a stopping meta-criterion is the essence of optimisation towards the global extremum when learning.

Realising this makes it possible to look at learning from new angles.

 
There is an error in my drawing: the red val line should be above the train line.
 
No one was discussing anything with you; you jumped into a conversation about model complexity, or reacted to a post. That's why you're known as an optimiser.