Machine learning in trading: theory, models, practice and algo-trading - page 3293

 
СанСаныч Фоменко #:

Where's that graph from?

The basics of the basics

https://en.wikipedia.org/wiki/Bias%E2%80%93variance_tradeoff
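
For concreteness, a minimal, purely illustrative sketch of that trade-off (my own toy example, not taken from the article): as polynomial degree grows, training error keeps falling while test error eventually rises.

```python
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(0)
x_train = np.sort(rng.uniform(-1, 1, 30))
x_test = np.sort(rng.uniform(-1, 1, 30))
true_f = lambda x: np.sin(3 * x)                   # the unknown relationship
y_train = true_f(x_train) + rng.normal(0, 0.3, 30) # noisy observations
y_test = true_f(x_test) + rng.normal(0, 0.3, 30)

for degree in (1, 3, 9, 15):
    coefs = P.polyfit(x_train, y_train, degree)    # least-squares approximation
    train_mse = np.mean((P.polyval(x_train, coefs) - y_train) ** 2)
    test_mse = np.mean((P.polyval(x_test, coefs) - y_test) ** 2)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```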

 
СанСаныч Фоменко #:

Let's not forget that the very notion of a "model" is a coarsening of reality. There are no extrema here. There is a balance between coarsening and acceptable model accuracy. But the main thing is not the model's accuracy but its coarsening, its ability to generalise. And this is understandable, since the main enemy of modelling is overfitting, the twin brother of model accuracy.

You constantly confuse the concept of "extremum" with a "sharp peak" (a point at which the function has no derivative).

Even a flat surface has an extremum.

Another matter is that one always tries to choose the FF (fitness function) so that its surface is as smooth as possible and the global extremum is unique. A unique global extremum means a single, unambiguous solution to the problem.

If the global extremum of the FF is not unique, and all the more so if the FF has no derivative there, the FF (the model-evaluation criterion) was chosen incorrectly. Misunderstanding this gives rise to the term "overfitting"; misunderstanding this leads to searching for some ambiguous local extremum.
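
To illustrate the point (a crude toy sketch of my own, not Andrey's actual code): a primitive hill climber returns one unambiguous answer on a smooth FF with a unique global extremum, and scattered answers on a rugged FF.

```python
import numpy as np

smooth_ff = lambda x: -(x - 2.0) ** 2                # unique global maximum at x = 2
rugged_ff = lambda x: np.sin(5 * x) - 0.01 * x ** 2  # many near-equal local maxima

def hill_climb(ff, trials=20, steps=2000, seed=0):
    """Crude stochastic hill climbing with restarts; returns best x per trial."""
    rng = np.random.default_rng(seed)
    answers = []
    for _ in range(trials):
        x = rng.uniform(-10.0, 10.0)
        for _ in range(steps):
            cand = x + rng.normal(0.0, 0.1)
            if ff(cand) > ff(x):   # accept only uphill moves
                x = cand
        answers.append(round(x, 1))
    return sorted(set(answers))

print("smooth FF answers:", hill_climb(smooth_ff))  # cluster around x ~ 2.0
print("rugged FF answers:", hill_climb(rugged_ff))  # scattered over local peaks
```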

We can draw an analogy: a specialist, say a doctor, is trained, and qualification exams (the FF) are developed for certification. For a doctor there can be no such notion as "overtrained" or "overfitted": if a doctor does not score the maximum, he is undertrained. By your logic, a good doctor should always be an undertrained underachiever.

Once again: the problem of "overtraining" is a wrong choice of model-evaluation criteria. Supposedly such cool experts are present on this forum, yet they repeat the same mistakes over and over. Developing correct evaluation criteria is no less important than selecting predictors; otherwise it is simply impossible to evaluate a model adequately.

I anticipate a flurry of objections; that's fine, I'm used to it. If it helps someone, great; as for those it doesn't help, whatever, let them think everything is fine as it is.

 
Andrey Dik #:

You constantly confuse the concept of "extremum" with a "sharp peak" (a point at which the function has no derivative). [...]

You're conflating entities. You're trying to shoehorn optimisation into approximation, or vice versa.

Approximation and optimisation are different approaches to solving machine-learning problems.

Approximation refers to building a model that approximates the relationship between input and output data. This can be, for example, constructing a linear or non-linear function that describes the data as well as possible. Approximation does not consider the goal or problem being solved; it only seeks the model that best fits the data.

Optimisation, on the other hand, refers to finding the optimal model parameters for achieving a particular goal or solving a particular problem. In this case the model may be more complex and contain more parameters than in the case of approximation. Optimisation takes the goal or objective into account and adjusts the model parameters to achieve the best possible result.

In general, approximation and optimisation are often used together to build effective machine-learning models: first approximation builds the model, then optimisation tunes that model's parameters toward the desired goal.
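
A minimal sketch of those two stages, assuming scikit-learn is available (the data and parameter grid are invented for illustration): "approximation" fits the model to the data, then "optimisation" tunes its regularisation strength against an explicit validation objective.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + rng.normal(0, 0.5, 200)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=1)

# Approximation: fit a model that best describes the training data.
model = Ridge(alpha=1.0).fit(X_tr, y_tr)

# Optimisation: tune the parameter alpha against an explicit objective
# (validation R^2), i.e. adjust the model toward the stated goal.
best_score, best_alpha = max(
    (Ridge(alpha=a).fit(X_tr, y_tr).score(X_val, y_val), a)
    for a in (0.01, 0.1, 1.0, 10.0, 100.0)
)
print(f"best alpha by validation R^2: {best_alpha} (R^2 = {best_score:.3f})")
```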

 
A neural network doesn't care about your FFs; it does its job on the data it is given. The discussion here is about how to balance the variance and bias of such a model. Mitramiles attached different FFs to the output end of the NN and got the same fits every time.
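
A toy check of that claim (my own sketch, not Mitramiles' experiment): fitting the same linear model by gradient descent under two different losses, MSE and MAE, gives nearly identical parameters on ordinary noisy data.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 200)
y = 2.0 * x + 1.0 + rng.normal(0, 0.2, 200)   # true slope 2, intercept 1

def fit(grad_loss, lr=0.05, steps=5000):
    """Gradient descent on y ~ w*x + b; grad_loss is d(loss)/d(error)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        err = w * x + b - y
        g = grad_loss(err)                    # elementwise loss gradient
        w -= lr * np.mean(g * x)
        b -= lr * np.mean(g)
    return w, b

print("MSE fit (w, b):", fit(lambda e: 2 * e))       # gradient of squared error
print("MAE fit (w, b):", fit(lambda e: np.sign(e)))  # subgradient of absolute error
```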

You write about finding a target function, which we have already fixed by default.

You still need to understand the difference.
 
That's why it was written above about the importance of proper markup, or an oracle based on expert knowledge or algorithmic decisions. That is what you bring into the model a priori. No FFs will save you there.
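
A hypothetical sketch of such oracle markup (the function name and thresholds are invented for illustration): the labels come from an a-priori rule on future returns, i.e. expert or algorithmic knowledge injected before any model is trained.

```python
import numpy as np

def oracle_labels(close: np.ndarray, horizon: int = 5, threshold: float = 0.002):
    """1 = buy, 0 = skip, from the forward return over `horizon` bars.
    A lookahead 'oracle': the rule encodes a-priori knowledge, not model output."""
    fwd_ret = close[horizon:] / close[:-horizon] - 1.0
    return (fwd_ret > threshold).astype(int)

# Synthetic price series just to exercise the rule.
prices = 100.0 * np.cumprod(1.0 + np.random.default_rng(2).normal(0, 0.01, 500))
labels = oracle_labels(prices)
print("buy share of labels:", labels.mean())
```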

This has already been discussed several times; we keep going round in circles. Either something concrete gets discussed, or everyone pulls the cart to his own side.
 

Well, I told you.

I'd understand if Sanych started to fight back, but Max...

The FF is an evaluation; we evaluate everything. If we evaluate what we do incorrectly, then it doesn't matter what we do. Without a proper evaluation it's 50/50, and then people say "this doesn't work, that doesn't work...". I don't claim to be an expert in designing evaluations; it's a very difficult task.

"It's just the same thing being said in circles" - these are not my words, if anything)))))) It is possible to change words in places so that it will sound even worse, here the evaluation criterion is "number of words", it is not a correct evaluation, because from changing words in places the meaning can change dramatically.

 
I can't help but share some stunning news (stunning for me, at least): an even stronger algorithm than SSG has been found.
 
There is a constant substitution of concepts; it's impossible to communicate.
 
Maxim Dmitrievsky #:
There is a constant substitution of concepts; it's impossible to communicate.

I agree, nobody understands anyone else; there is no single criterion for evaluating a statement and its semantic content. Nobody knows who means what, like in that joke:

- What do you mean?!

- I mean what I mean.

That's how it is in ML.

 
Andrey Dik #:

I agree, nobody understands anyone else; there is no single criterion for evaluating a statement and its semantic content. [...]

You missed the point (it was about causal inference) and started pushing FFs again. They didn't fit in there at all :)