Machine learning in trading: theory, models, practice and algo-trading - page 3309

 
Maxim Dmitrievsky #:

Andrey is confusing training a neural network with optimising its parameters, I guess

both are a kind of optimisation, which is disorienting, like a kitten that has had too much food poured out for it: there is optimisation everywhere and it's not clear which to eat

 
Andrey Dik #:
so the statement "you don't need to look for a maximum, you need to look for a stable plateau" is inherently erroneous, and speaks of an erroneous use of the evaluation.

Contrary to your assertion, you have just shown that a plateau was found precisely by means of an evaluation.

Where in practice can I apply this?

We are discussing overfitting, which usually stems from optimisation. In the tester everything is clear.

In ML, overfitting is revealed by running the model on different files: the variation in the model's performance across them is the overfitting itself; no extra criteria are needed (a sketch of this check follows after this post). There is also a package that detects overfitting.

Come down from the sky, pardon me, from your extremes to the ground where things are different.
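A minimal Python sketch of the "different files" check just described, assuming each file is a CSV with feature columns and a binary `target` column; the file names and the model are illustrative placeholders, not anything prescribed in the thread:

```python
# Sketch: reveal overfitting by scoring one fitted model on several
# out-of-sample files and looking at the spread of the metric.
# File names, the "target" column, and the model are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

TRAIN_FILE = "train_2021.csv"                                  # hypothetical
OOS_FILES = ["oos_2022.csv", "oos_2023.csv", "oos_2024.csv"]   # hypothetical

train = pd.read_csv(TRAIN_FILE)
X_train, y_train = train.drop(columns="target"), train["target"]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

scores = []
for path in OOS_FILES:
    oos = pd.read_csv(path)
    scores.append(accuracy_score(oos["target"],
                                 model.predict(oos.drop(columns="target"))))

print("OOS accuracies:", [round(s, 3) for s in scores])
# A wide spread across files (or a big drop from the in-sample score)
# is exactly the "variation of performance" that signals overfitting.
print("spread:", round(max(scores) - min(scores), 3))
```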

 
СанСаныч Фоменко #:

Contrary to your assertion, you have just shown that a plateau was found precisely by means of an evaluation.

Where in practice can I apply this?

We are discussing overfitting, which usually stems from optimisation. In the tester everything is clear.

In ML, overfitting is revealed by running the model on different files: the variation in the model's performance across them is the overfitting itself; no extra criteria are needed. There is also a package that detects overfitting.

Come down from your extremes to the ground where things are different.


You don't know what exactly you are looking for (you never answered the question). And if you don't know, you will never find it.
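For what it is worth, the disputed "stable plateau" criterion can be made concrete without abandoning evaluation: score each parameter point by the mean fitness over its neighbourhood minus the neighbourhood's spread, so that broad flat regions beat narrow spikes. A toy one-dimensional sketch; the fitness landscape and the neighbourhood radius are invented for illustration:

```python
# Sketch: prefer a "stable plateau" over a sharp peak by scoring each
# parameter point with the mean fitness of its neighbourhood minus the
# neighbourhood's spread. The landscape and radius are toy stand-ins.
import numpy as np

def fitness(p: np.ndarray) -> np.ndarray:
    spike = 3.0 * np.exp(-((p - 2.0) ** 2) / 0.01)   # narrow peak at p = 2
    plateau = 2.0 * np.exp(-((p - 7.0) ** 2) / 4.0)  # broad hill at p = 7
    return spike + plateau

grid = np.linspace(0.0, 10.0, 1001)
ff = fitness(grid)

RADIUS = 25  # neighbourhood half-width in grid steps (illustrative)
robust = np.array([
    ff[max(0, i - RADIUS): i + RADIUS + 1].mean()
    - ff[max(0, i - RADIUS): i + RADIUS + 1].std()
    for i in range(len(grid))
])

print("raw argmax:    p =", round(float(grid[ff.argmax()]), 2))      # the spike
print("robust argmax: p =", round(float(grid[robust.argmax()]), 2))  # the plateau
```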
 
AI CREATOR: ARTIFICIAL BRAIN, FREE WILL, SINGULARITY
  • 2023.10.15
  • www.youtube.com
 
mytarmailS matched the parameters to it in 10 iterations instead of 10,000; can it be considered an untrained model?

After all, the very phrase "we came up with" also implies some kind of thought process (iterations).


How does the final model know whether the iterations were done by a brain or by a computer, and is there a difference between the two?


The question arose after reading Prado's article.

P-hacking is about stretching the data to fit your wishes. You take any FF and feed as much data to the input as it takes to maximise it. If it maximises poorly, you add more data or pick a more accurate optimisation algorithm. That is, any FF can be maximised this way. This is the most common case in trading-system optimisation. And here, more data means more overfitting; there is no way around it. Global minima and maxima by themselves tell you nothing at all. The logical way out is to maximise the FF while minimising the number of features, as I wrote above: the lesser evil, so to speak. The bias-variance tradeoff, in scientific terms (a sketch of such a penalised objective follows after this post).

The reverse process is research, where you don't make initial assumptions or take the FF out of thin air, but examine the data for patterns.

"You made it up" has nothing to do with reality. "You drew conclusions based on research" does.

And if you are going to do research, you need at least to define the subject and the method of research, and then choose a research tool. If the subject of your research is not even the time series, but an entity known to you alone, you can determine the result of such research in advance. I realise they don't teach this at university, so here you go :)
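One way to read "maximise the FF while minimising the number of features" is as a penalised objective: a cross-validated score minus a cost per feature, a crude handle on the bias-variance tradeoff mentioned above. A sketch on synthetic data; the penalty weight, the model, and the brute-force subset search are illustrative assumptions:

```python
# Sketch: "maximise the FF while minimising the number of features" as a
# penalised objective: CV accuracy minus a cost per feature. The penalty
# weight, model, and synthetic data are illustrative assumptions.
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=8,
                           n_informative=3, random_state=0)
PENALTY = 0.02  # cost per feature; tunes the bias-variance trade-off

def penalised_ff(cols):
    acc = cross_val_score(LogisticRegression(max_iter=1000),
                          X[:, list(cols)], y, cv=5).mean()
    return acc - PENALTY * len(cols)

subsets = (cols for k in range(1, X.shape[1] + 1)
           for cols in combinations(range(X.shape[1]), k))
best = max(subsets, key=penalised_ff)
print("selected features:", best,
      "penalised score:", round(penalised_ff(best), 3))
```

Exhaustive enumeration only scales to a handful of features; with more, a greedy or randomised search would replace the `combinations` loop.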
 
Maxim Dmitrievsky #:
P-hacking is about stretching the data to fit your wishes. You take any FF and feed as much data to the input as it takes to maximise it. If it maximises poorly, you add more data or pick a more accurate optimisation algorithm. That is, any FF can be maximised this way. This is the most common case in trading-system optimisation. And here, more data means more overfitting; there is no way around it. Global minima and maxima by themselves tell you nothing at all. The logical way out is to maximise the FF while minimising the number of features, as I wrote above: the lesser evil, so to speak. The bias-variance tradeoff, in scientific terms.

The reverse process is research, where you don't make initial assumptions or take the FF out of thin air, but examine the data for patterns.

"You made it up" has nothing to do with reality. "You drew conclusions based on research" does.

And if you are going to do research, you need at least to define the subject and the method of research, and then choose a research tool. If the subject of your research is not even the time series, but an entity known to you alone, you can determine the result of such research in advance. I realise they don't teach this at university, so here you go :)

A spoonful of tar in a barrel of honey, so the honey can be thrown away. As Stirlitz said, it is the last phrase that is remembered.

 
СанСаныч Фоменко #:

A spoonful of tar in a barrel of honey, so the honey can be thrown away. As Stirlitz said, it is the last phrase that is remembered.

That's so you don't sound too clever.
 
The optimisation process is a search for unknown parameters.

Each iteration is an experiment/piece of research: a hypothesis (the parameters) is put forward, and the result of the experiment is verified (the FF).

So the process of optimisation (search) is research in its own right (a sketch follows after this post).

But an intellectual nonentity is not given to understand this, of course not; here you need to think, to engage logic...
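Read literally, the loop described above is: propose parameters (the hypothesis), evaluate the FF (the experiment), keep what survives. A minimal random-search sketch; the fitness function and parameter bounds are made up for illustration:

```python
# Sketch: optimisation as iterated hypothesis testing. Each iteration
# proposes a parameter vector (hypothesis) and evaluates the fitness
# function (experiment). The FF and bounds here are toy stand-ins.
import random

def ff(params: list[float]) -> float:
    # Toy fitness with its maximum at params == [1.0, -2.0].
    return -((params[0] - 1.0) ** 2 + (params[1] + 2.0) ** 2)

random.seed(0)
best_params, best_score = None, float("-inf")
for _ in range(1000):
    hypothesis = [random.uniform(-5, 5), random.uniform(-5, 5)]  # propose
    score = ff(hypothesis)                                       # verify
    if score > best_score:                                       # keep survivors
        best_params, best_score = hypothesis, score

print("best params:", [round(p, 2) for p in best_params],
      "best score:", round(best_score, 3))
```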

 
mytarmailS #:
The optimisation process is a search for unknown parameters.

Each iteration is an experiment/piece of research: a hypothesis (the parameters) is put forward, and the result of the experiment is verified (the FF).

So the process of optimisation (search) is research in its own right.

But an intellectual nonentity is not given to understand this, of course not; here you need to think, to engage logic...

This parody of a researcher doesn't realise that all the parameters are known before optimisation starts, and that optimisation is a search for the parameter values that maximise whatever function his inflamed mind has invented.

His optimisation turns into a long, thorny path of exploring the bottom he finds himself at.
 

Started rejecting this sort of thing: fully OOS (2023). In the second half, the character of the curve changes.

Reason: