Machine learning in trading: theory, models, practice and algo-trading - page 3316
Does everyone agree with Sanych's incorrect interpretation that "teacher" is a synonym for labels (markup)?
No, they are not the same thing; they are not synonyms.
Labels can act as the teacher, but that depends on the task. You cannot put an unambiguous equals sign between the two.
Why do you keep dodging the question?
The green line is the training curve, the red line is validation. The red circle marks the point where the validation-error curve turns from falling to rising: that is the global extremum, and that is where you have to stop training. See how simple the answer to my question is? All learning is, in essence, optimisation with a search for a global extremum. Every ML method reduces to exactly that: optimising some evaluation function to a global extremum (minimising a loss function or maximising an evaluation function). And yet you claim you are not an optimiser? Even if you don't do it deliberately, the ML methods do it for you.
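The stopping rule described above (stop at the global minimum of the validation-error curve) can be sketched in a few lines. This is a minimal illustration, assuming a recorded history of validation losses; the function name and the synthetic numbers are mine, not from the thread:

```python
def early_stopping_point(val_losses):
    """Return the index of the global minimum of the validation-error
    curve, i.e. the iteration where training should have been stopped."""
    best_idx = 0
    for i, loss in enumerate(val_losses):
        if loss < val_losses[best_idx]:
            best_idx = i
    return best_idx

# Synthetic validation curve: falls, bottoms out, then rises (overfitting).
val_losses = [0.90, 0.70, 0.55, 0.48, 0.52, 0.60, 0.71]
print(early_stopping_point(val_losses))  # -> 3, the "red circle"
```

Past index 3 the validation error grows while the training error would keep falling, which is exactly the overfitting regime discussed here.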
Examples of tasks:
Model evaluation:
Both types of learning, supervised and unsupervised, have their applications in machine learning, and the choice between them depends on the specific task and the available data. Sometimes hybrid methods are used as well, combining supervised and unsupervised learning to achieve better results.
Clearly something's up.
Back to definitions.
P.S.
It's not far from the end.
Huh. Someone's had an epiphany!
Similar indeed, but in ML this graph shows and means something different. ))
I was wondering if somehow you were aware of that.)
This is a graph of an overfitted model, in your case.
Why "mine"? They all look like that. If you keep training past the red circle, you get an overfitted model. So you wait until the validation error has been growing for several iterations, stop training, and take the result at the red circle, the global extremum. Some people take the result from 2, 3, 4 or more iterations BEFORE that point, but it doesn't change the essence: you still need to find this global extremum.
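The "wait several iterations after the error starts growing" idea is what ML libraries usually call early stopping with patience. A minimal sketch under that assumption (the function name, the patience value, and the loss numbers are illustrative, not from the thread):

```python
def train_with_patience(val_losses, patience=3):
    """Simulate patience-based early stopping: keep going until the
    validation error has not improved for `patience` iterations, then
    return (best iteration, iteration where training stopped).
    The best iteration is the global extremum (the "red circle")."""
    best_idx = 0
    since_improvement = 0
    for i, loss in enumerate(val_losses):
        if i == 0 or loss < val_losses[best_idx]:
            best_idx = i
            since_improvement = 0
        else:
            since_improvement += 1
            if since_improvement >= patience:
                return best_idx, i  # stop here, keep the earlier best model
    return best_idx, len(val_losses) - 1

best, stopped = train_with_patience([1.0, 0.8, 0.6, 0.62, 0.65, 0.70, 0.75])
print(best, stopped)  # -> 2 5
```

Training runs three iterations past the minimum before giving up, but the model that is kept is the one from the global extremum at iteration 2.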
...
It's a training and validation graph. Complexity has nothing to do with it. The point is that whatever you do in ML, you are searching for a global extremum: you are an optimiser, no matter how much you deny it.
You got an overfitted model before the circle.
That's enough, you've gone completely off the rails. Prove the opposite, then, but not with one-word replies: with drawings and explanations.
Stay where you are. Everyone's stupid except you, so you're the one who gets to answer the questions.
What's that, then? That red mark over there? Round, red, down at the bottom.
I've never seen a graph like that before.
Fantasist, you're making things up and forcing discussions about things that don't exist.
It would actually be normal to produce such a graph from some model, instead of blathering on here for years.