Machine learning in trading: theory, models, practice and algo-trading - page 3307

 
Yes, what an era has passed) And what people they were - Shurik, Koldun, ...)
 
Aleksey Nikolayev #:
Yes, what an era has passed) And what people they were - Shurik, Koldun, ...)

Alexey Burnakov, Dr Trader...


one is worth hundreds...

 
mytarmailS matched the parameters to it in 10 iterations instead of 10,000 - can it be considered an untrained model?

After all, the very phrase "we came up with" also implies some kind of thought process (iterations).

How does the final model know whether the iterations were done by a brain or by a computer, and is there any difference between the two?

The question arose after reading Prado's article.

the point is that the optimum can be found in 1,000 iterations or in 100, if we are talking about optimising some parameters of the model; increasing the number of iterations simply increases the probability of finding it.

I said earlier that it is very important to use an estimate whose maximum corresponds to what you want to find; using an incorrect estimate amounts to reading coffee grounds.
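Dik's claim about iterations can be put in numbers: for blind random search over a finite parameter grid, extra iterations only raise the probability of hitting any given cell; they do not change which cell the estimate singles out. A minimal sketch (the function name is mine, not from the thread):

```python
# Hypothetical illustration: the chance that uniform random search hits one
# specific optimal cell out of n_cells at least once in k iterations is
# p = 1 - (1 - 1/n_cells) ** k, which grows monotonically with k.

def hit_probability(n_cells: int, k: int) -> float:
    """Probability that k uniform random draws hit one target cell at least once."""
    return 1.0 - (1.0 - 1.0 / n_cells) ** k

# 100 vs 1,000 iterations over a 1,000-cell parameter grid:
p_100 = hit_probability(1000, 100)    # ~0.095
p_1000 = hit_probability(1000, 1000)  # ~0.632
```

More iterations raise the hit probability, but the target cell itself is defined entirely by the estimate being maximised - which is the point being argued in the thread.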

 

Alexey stumbled into DQN and went no further, preferring to write stories about Valera. Dr Trader got in touch with Alexander, and the latter ruined him, since the former had no opinion of his own.

The only real character was Koldun, who clearly understood something, but he was too nervous, and when he realised he was about to be caught up with, he ran away.

Tellingly, they all wrote in R and achieved nothing, for R destroys the mind.
 
Aleksey Vyazmikin #:

Overfitting arises from memorising rare phenomena. These phenomena are picked out purely statistically, since there is no model describing cause and effect.

At the same time, a loss does not always mean that the model is overfitted.

Andrey Dik #:

the point is that the optimum can be found in 1,000 iterations or in 100, if we are talking about optimising some parameters of the model; increasing the number of iterations simply increases the probability of finding it.

I said earlier that it is very important to use an estimate whose maximum corresponds to what you want to find; using an incorrect estimate amounts to reading coffee grounds.


You don't even understand the essence of my question

 
mytarmailS #:


You don't even understand the essence of my question

I was talking about one thing, Alexei about another,

and I didn't expect my thought to be taken at face value.

Imagine that the set of parameters you need exists within the complete set of all variants. Now think: what should the estimate be so that, with any number of iterations, it is possible to find only and exactly this set? If the fit keeps growing as the number of iterations grows, the problem is not where the set is located but that an incorrect estimate is being used.

 
Andrey Dik #:

I was talking about one thing, Alexei was talking about another.

And both about the wrong thing.


Read the article, then read about the multiple-testing problem, then reread my question.
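The multiple-testing problem referred to here can be demonstrated on pure noise: the best in-sample score among N randomly tried parameter sets keeps improving as N grows, even though every set has zero true edge. A hedged toy sketch (the function and its "score" are my invention, not anything from Prado's article):

```python
import random
import statistics

def best_of_n(n_trials: int, n_obs: int = 250, seed: int = 42) -> float:
    """Best in-sample mean 'return' among n_trials random strategies on pure noise."""
    rng = random.Random(seed)
    best = float("-inf")
    for _ in range(n_trials):
        # Each "strategy" is just n_obs draws of zero-mean Gaussian noise.
        returns = [rng.gauss(0.0, 1.0) for _ in range(n_obs)]
        best = max(best, statistics.mean(returns))
    return best

# The true edge of every strategy is exactly zero, yet the selected maximum
# can never shrink as more variants are tried, and almost surely grows:
few = best_of_n(10)
many = best_of_n(10_000)
```

This is why "more iterations found a better fit" says nothing by itself: selection over many trials inflates the winning score even when there is nothing to find.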

 
Andrey Dik #:
so that with any number of iterations you can find just this set.

The same old song!

Nobody needs this set. Yet optimisation in the tester looks for exactly such a unique set, and a large number of "optimisers" are satisfied with this single set of parameters, paying no attention to the two-dimensional diagram, on which you can try to find a set of sets, i.e. find a plateau rather than a maximum.

And the optimal set, which on the two-dimensional diagram looks like a green island among pale or white squares, indicates overfitting: some particular case has been picked out that is very optimal but will never be met again.
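The plateau-versus-maximum point can be sketched in code: on a toy 2-D grid of backtest scores, the raw argmax lands on an isolated spike (the "green island"), while averaging each cell with its neighbours makes the maximum of the estimate land on the plateau instead. The grid values and function names below are invented purely for illustration:

```python
# Toy 5x5 grid of backtest scores: one isolated spike of 9 at (0, 4) and a
# broad plateau of 8s in the lower-left area. All numbers are made up.
GRID = [
    [1, 1, 2, 1, 9],   # 9 = isolated "green island"
    [1, 2, 2, 1, 1],
    [2, 8, 8, 8, 1],
    [2, 8, 8, 8, 1],
    [1, 8, 8, 8, 2],
]

def argmax_cell(grid):
    """Cell (row, col) with the highest raw score."""
    return max(
        ((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
        key=lambda rc: grid[rc[0]][rc[1]],
    )

def neighbourhood_mean(grid, r, c):
    """Mean of a cell and its in-bounds neighbours (3x3 window)."""
    cells = [
        grid[i][j]
        for i in range(max(r - 1, 0), min(r + 2, len(grid)))
        for j in range(max(c - 1, 0), min(c + 2, len(grid[0])))
    ]
    return sum(cells) / len(cells)

def argmax_smoothed(grid):
    """Cell with the highest neighbourhood mean -- favours plateaus over spikes."""
    return max(
        ((r, c) for r in range(len(grid)) for c in range(len(grid[0]))),
        key=lambda rc: neighbourhood_mean(grid, rc[0], rc[1]),
    )

spike = argmax_cell(GRID)        # (0, 4): the overfit green island
plateau = argmax_smoothed(GRID)  # (3, 2): the middle of the plateau
```

Note this also restates Dik's demand from the other side: once the estimate is the neighbourhood mean rather than the raw score, the thing you actually want to find (the plateau) becomes the thing with the maximum estimate.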

 
mytarmailS #:

And they're both about the wrong thing.

Read the article, then read about the multiple-testing problem, then reread my question.

As you wish, I'm not insisting.

 
СанСаныч Фоменко #:

The same old song!

Nobody needs this set. Yet optimisation in the tester looks for exactly such a unique set, and a large number of "optimisers" are satisfied with this single set of parameters, paying no attention to the two-dimensional diagram, on which you can try to find a set of sets, i.e. find a plateau rather than a maximum.

And the optimal set, which on the two-dimensional diagram looks like a green island among pale or white squares, indicates overfitting: some particular case has been picked out that is very optimal but will never be met again.

OK, let me ask a simple question: can you unambiguously estimate what you call a "plateau"? If you can, then describe that estimate so that what lies on the plateau has the maximum estimate value! Is it so hard to understand? Make the estimate unambiguous, so that what you need to find has the maximum possible value of that estimate.

Once again: if, when optimising, the maximum is not what you want, then you are using the wrong estimate.