Machine learning in trading: theory, models, practice and algo-trading - page 3316

 
Maxim Dmitrievsky #:
Does everyone agree with Sanych's incorrect interpretation that "teacher" is a synonym for labelling?

No, they are not the same, they are not synonyms.

Although labels can act as a teacher, depending on the task, you cannot put an unambiguous equals sign between the two.

 
Andrey Dik #:

Why do you keep dodging and dodging?

The green line is the training error, the red line is validation. The mark with the red circle is the place where the validation-error curve turns from falling to rising: that is the global extremum, and that is where you have to stop training. You see how simple the answer to my question is? Any learning is, in essence, optimisation with a search for a global extremum. Every ML method reduces to exactly this: optimising some evaluation function to a global extremum (minimising a loss function or maximising an evaluation function). Yet you say you are not an optimiser. How so? Even if you don't do it intentionally, ML methods do it for you.

This is the graph of an overtrained model, in your case.
And after the breakpoint, increasing complexity leads to increasing overtraining, which is what we were talking about.
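The stopping rule being argued over above can be sketched in a few lines of Python. The error curves here are synthetic stand-ins for illustration only, not anyone's actual training run:

```python
# Sketch of the early-stopping rule under discussion: stop where the
# validation-error curve turns from falling to rising, i.e. at its
# global minimum over the recorded epochs.

def best_stopping_epoch(val_errors):
    """Return the epoch index with the lowest validation error."""
    return min(range(len(val_errors)), key=lambda i: val_errors[i])

# Training error keeps falling; validation error falls, then rises
# again as the model starts to overtrain.
train_err = [0.50, 0.40, 0.32, 0.26, 0.21, 0.17, 0.14, 0.12]
val_err   = [0.52, 0.44, 0.38, 0.35, 0.34, 0.36, 0.40, 0.45]

stop = best_stopping_epoch(val_err)
print(stop, val_err[stop])  # 4 0.34
```

The point of contention is only *where* that minimum sits: if it occurs while the error is still near chance level, the "best" epoch is meaningless.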
 
Valeriy Yastremskiy #:
  • Structure of the data
  • Examples of tasks:

    • Learning with a teacher: classification, regression, prediction, fraud detection, object detection, machine translation, etc.
    • Learning without a teacher: clustering, dimensionality reduction (PCA, t-SNE), association rules, data visualisation, and many others.
  • Model evaluation:

    • Learning with a teacher: the model is evaluated by how well it makes predictions or classifications, comparing its outputs against the known labels. Evaluation may use accuracy, the F1 score, root-mean-square error, and other metrics.
    • Learning without a teacher: evaluation is harder because there are no known labels to compare against. It can be based on visual inspection of clustering quality, comparison with other algorithms, or analysis by an expert.
  • Both types of learning have their applications in machine learning, and the choice between them depends on the specific task and the available data. Sometimes hybrid methods are also used, combining learning with and without a teacher to achieve better results.
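The supervised-evaluation metrics listed above can be sketched in plain Python. The toy predictions and labels are illustrative only:

```python
# Minimal sketch of evaluating a "learning with a teacher" model against
# known labels: accuracy for classification, RMSE for regression.
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the known labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root-mean-square error between predictions and known targets."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

# Classification: 4 of 5 toy predictions match the labels.
print(accuracy([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # 0.8

# Regression: small deviations from the toy targets.
print(round(rmse([1.0, 2.0, 3.0], [1.1, 1.9, 3.2]), 4))  # 0.1414
```

Without a teacher there is nothing to plug into such a comparison, which is exactly why unsupervised evaluation falls back on cluster-quality heuristics or expert judgement.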

    Clearly something's up.

    Back to definitions.

    P.S.

    It's not far from the end.

    Huh. Someone's had an epiphany!

     
    Andrey Dik #:
    Similar indeed, but in ML this graph shows, and means, something different.))

    I was wondering whether you were even aware of that.)

     
    Maxim Dmitrievsky #:
    This is a graph of an overtrained model, in your case.
    Why "my case"? It's the same for everyone. If you keep training past the red circle, you get an overtrained model. So you wait a few iterations until the validation error starts to grow, stop training, and choose the result at the red circle, the global extremum. Some may take the result from 2, 3, 4 or more iterations BEFORE it, but that doesn't change the essence: you still need to find that global extremum.
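The "wait a few iterations" rule described here is what ML libraries usually call early stopping with patience. A minimal sketch, with synthetic error values standing in for a real run:

```python
# Sketch of patience-based early stopping: keep training until the
# validation error has failed to improve for `patience` consecutive
# epochs, then fall back to the best (global-minimum) epoch seen.

def early_stop(val_errors, patience=3):
    best_epoch, best_err, waited = 0, float("inf"), 0
    for epoch, err in enumerate(val_errors):
        if err < best_err:
            best_epoch, best_err, waited = epoch, err, 0
        else:
            waited += 1
            if waited >= patience:
                break  # stop training; keep the best epoch so far
    return best_epoch, best_err

# Validation error falls to a minimum at epoch 4, then grows.
print(early_stop([0.52, 0.44, 0.38, 0.35, 0.34, 0.36, 0.40, 0.45]))  # (4, 0.34)
```

The patience window exists precisely to avoid stopping at a local bump before the true minimum.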
     
    Andrey Dik #:
    Why "my case"? It's the same for everyone. If you keep training past the red circle, you get an overtrained model. So you wait a few iterations until the validation error starts to grow, stop training, and choose the result at the red circle, the global extremum. Some may take the result from 2, 3, 4 or more iterations BEFORE it, but that doesn't change the essence: you still need to find that global extremum.
    You got an overtrained model even before the circle.
     
    Maxim Dmitrievsky #:
    ...
    And after the breakpoint, increasing complexity leads to increasing overtraining, which is what we were talking about.

    It's a train-and-validation graph. Complexity has nothing to do with it. The point is that whatever you do in ML, you are searching for a global extremum; you are an optimiser, no matter how much you deny it.

     
    Maxim Dmitrievsky #:
    You got an overtrained model even before the circle.
    That's enough, you've completely lost the thread. Prove the opposite, but not with one-word phrases: with drawings and explanations.
     
    Andrey Dik #:
    That's enough, you've completely lost the thread. Prove the opposite, but not with one-word phrases: with drawings and explanations.
    This is a graph of the error at each iteration on the train and validation sets. After each iteration/epoch the model gets more complex. You didn't show what the error near the circle is on the y-axis, or how many iterations/epochs are on the x-axis. If it is 0.5, then the model hasn't learned anything there, and after that it starts overtraining. That's why your graph says nothing.

    The global maximum/minimum there is zero error.
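The 0.5 figure above is the chance baseline for a balanced binary problem: a model whose error hovers there has learned nothing, because random guessing scores the same. A quick sketch with toy labels:

```python
# On a balanced binary task, a "model" that guesses at random lands
# near 0.5 error, the chance baseline mentioned above. If the
# validation curve's minimum sits at ~0.5, early stopping just picks
# the least-bad coin flip.
import random

random.seed(0)
labels  = [random.randint(0, 1) for _ in range(10_000)]  # balanced classes
guesses = [random.randint(0, 1) for _ in range(10_000)]  # model that learned nothing

error = sum(g != y for g, y in zip(guesses, labels)) / len(labels)
print(round(error, 2))  # close to 0.5
```

So the shape of the curve alone isn't enough; the error level at the minimum matters too.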
     
    Andrey Dik #:

    Stay where you are. Everyone's stupid but you, so you're the one answering the questions.

    What's that, then? That little red thing over there? Round, red, down at the bottom.


    I've never seen a graph like that before.

    You fantasist, you're making things up and forcing discussions about things that don't exist.

    It would actually be normal to produce such a graph from some real model, instead of blathering on here for years.
