What to feed to the input of the neural network? Your ideas... - page 61

 
Forester #:
You can't. You learn what you're given to learn.
How do you know if you've learnt it or not?
 
mytarmailS #:
How do you know if you have learnt or not?
Learn it 100%, down to the last example, or just write them all into a database.

If you were given incomplete information, it is not your mistake/problem, but the teacher's fault.
At school, if you were taught the multiplication table only up to 5, you should not be asked about 6.
 
Forester #:
You can't. You learn what you're given to learn.

So, memorisation (learning) is an endless process? You are given the multiplication table to memorise, and you keep learning the same thing your whole life?

Without evaluation, it is not clear whether the memorisation is good or not so good, and it is not clear when the memorisation process can be stopped.

 
Forester #:
Learn it 100%, down to the last example, or just write them all into a database.

"100%" is an estimate.

Learning and recording are different things.

Saving/recording does not require evaluation: recording is assumed to be 100% reliable (it is a finished process; the optimisation was already done by the developers earlier and achieved 100% fidelity of record storage).

The memorisation process is an unfinished process; it requires evaluation to establish that the memorisation is of high quality.

Please confirm or refute my assertions.

 
Clutching at straws, as if the court of last resort must always be optimisation ), whether it is optimisation of hyperparameters or of model weights. It is simply important for a person that the word appears there; then he thinks he has proved something to someone :)
 
Andrey Dik #:

"100 per cent" is the estimate.

To teach and to write down are different things.

Saving/recording does not require evaluation: recording is assumed to be 100% reliable (it is a finished process; the optimisation was already done by the developers earlier and achieved 100% fidelity of record storage).

The memorisation process is an unfinished process; it requires evaluation to establish that the memorisation is of high quality.

Please confirm or refute my assertions.

You're trying to push optimisation in again )) It's an endless loop, and I am not going to get stuck in it.
Everything I wanted to say about learning != optimisation, I have already said.

 
Forester #:

You're trying to push optimisation in again )) It's an endless loop, and I am not going to get stuck in it.
Everything I wanted to say about learning != optimisation, I have already said.

Dead end, are you out of arguments?

I'm not trying to push anything, I'm just dispelling misconceptions. Don't worry, anyone can have misconceptions; here in this thread we will sort them out, and the topic starter approved these good intentions)))).

I am not talking about optimisation. I am talking about the fact that any meaningful process has an evaluation, because without evaluation it is either impossible to understand the quality of work done, or it is not clear when the process can be completed. Do you agree with this?

By sorting out some of these cornerstone issues, you can discover new horizons that were not visible before.

 
Andrey Dik #:

Dead end, are you out of arguments?

I'm not trying to push anything, I'm just dispelling misconceptions. Don't worry, anyone can have misconceptions; here in this thread we will sort them out, and the topic starter approved these good intentions))).

I am not talking about optimisation. I am talking about the fact that any meaningful process has an evaluation, because without evaluation it is either impossible to understand the quality of work done, or it is not clear when the process can be completed. Do you agree with this?

By sorting out some of these cornerstone issues, you can discover new horizons that were not visible before.

There can be many assessments, and they can be combined in very bizarre ways. For example, training is done through iterative optimisation using one estimate, but model selection is done using completely different estimates (metrics).

In the context of trading, I would also remind you of fxsaber's approach, in which a plateau is searched for instead of a peak. This is also a problem that is not clearly formalised as an optimisation problem.
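That split between the training score and the selection metric can be sketched in a few lines. This is only an illustration with assumed synthetic data and illustrative names: `np.polyfit`'s least-squares fit plays the role of the training optimisation, while a different metric (MAE) on held-out data does the model selection.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 80)
y = x**3 - x + rng.normal(0, 0.1, 80)          # assumed toy target, not real quotes
x_tr, y_tr = x[:60], y[:60]                    # training set: optimise squared error
x_val, y_val = x[60:], y[60:]                  # validation set: select with a different metric

candidates = {}
for degree in (1, 3, 9):
    coef = np.polyfit(x_tr, y_tr, degree)      # training = iterative/least-squares optimisation of MSE
    mae = np.mean(np.abs(np.polyval(coef, x_val) - y_val))  # selection score: MAE, a different metric
    candidates[degree] = mae

best_degree = min(candidates, key=candidates.get)
```

The point is that `best_degree` is chosen by a score that was never optimised during fitting; the two evaluations serve different roles.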

 
Forester #:
Well, if you are teaching the multiplication table, Ohm's law and other laws, then the more examples you give during training, the more accurate the answers will be on new data. And the model will always be undertrained, because there are infinitely many variants; you cannot feed it all of them, of course.

Radio operators can cope with white noise (or other natural noises they have learnt); in trading, though, the noise itself changes all the time. So quality assessment becomes quite complicated.

Well, that is not so. The accuracy of answers on new data (where new data means data different from the training data) depends on the properties of each particular model, not on the number of training examples.

In the training set you have a certain amount of data, and there is nowhere to get more. You abstract away from the amount of data and try to train the model to predict new data as accurately as possible. That is the whole point of training.

There are two key evaluation criteria: the variance and the bias of the model relative to the expected values. Finding a balance between the two is the main, important part of training. However, these criteria are not optimised directly; they are determined after the fact. In other words, the problem more often lies in the data than in the qualities of the model.
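The "determined after the fact" nature of bias and variance can be shown empirically: retrain the same model class on many freshly drawn datasets and measure how the predictions spread and how their average misses the target. A minimal sketch with numpy only; the sine target and all names are illustrative assumptions, not anyone's actual trading data.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # Unknown target function the model is trying to learn
    return np.sin(x)

def bias_variance(degree, n_datasets=200, n_points=30, noise=0.3):
    """Estimate squared bias and variance of polynomial fits of a given degree."""
    x_test = np.linspace(0.0, np.pi, 50)
    preds = []
    for _ in range(n_datasets):
        # Each iteration draws a fresh noisy training set
        x = rng.uniform(0.0, np.pi, n_points)
        y = true_fn(x) + rng.normal(0.0, noise, n_points)
        coef = np.polyfit(x, y, degree)
        preds.append(np.polyval(coef, x_test))
    preds = np.asarray(preds)
    bias2 = np.mean((preds.mean(axis=0) - true_fn(x_test)) ** 2)   # how far the average model misses
    variance = np.mean(preds.var(axis=0))                          # how much models disagree
    return bias2, variance

b_low, v_low = bias_variance(degree=1)    # too rigid: high bias, low variance
b_high, v_high = bias_variance(degree=9)  # too flexible: low bias, high variance
```

Neither quantity appears in the loss being minimised during fitting; both are measured afterwards, which matches the point above.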

 
Aleksey Nikolayev #:

There can be many estimates, and they can be combined in very bizarre ways. For example, training is performed by iterative optimisation using one score, but model selection is performed using completely different scores (metrics).

In the context of trading, I would also remind you of fxsaber's approach, in which a plateau is searched for instead of a peak. This is also a problem that is not clearly formalised as an optimisation problem.

True, there can be many evaluations. Usually the whole set of evaluations, whether applied sequentially or taken as separate elements of an overall (integral) evaluation, is called the metrics. In either case, the final result gets evaluated.

About plateaus. A plateau can also be described by a final score. To do this, you need to describe what a "plateau" is and then search for whatever best fits that description (the score). For example, it could be a set of nearest neighbours with a certain allowable maximum spread in height. If a person can see something with their eyes or visualise it, then it can be described, and therefore it can be evaluated with a score.
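That "nearest neighbours with a bounded spread" description can be turned into a score directly. Below is a sketch for a 1-D optimisation surface; `find_plateau` and `tol` are illustrative names, not fxsaber's actual implementation: the score of a plateau is simply the length of the longest run of neighbouring parameter values whose results stay within the allowed spread.

```python
def find_plateau(scores, tol):
    """Return (start, length) of the longest run where max - min <= tol."""
    best = (0, 1)
    for i in range(len(scores)):
        lo = hi = scores[i]
        for j in range(i, len(scores)):
            lo = min(lo, scores[j])
            hi = max(hi, scores[j])
            if hi - lo > tol:
                break                      # spread exceeded: run ends here
            if j - i + 1 > best[1]:
                best = (i, j - i + 1)      # longest qualifying run so far
    return best

# A lone spike at 10 versus a broad stable region around 9.0-9.5
surface = [1, 9, 9.5, 9.2, 9.4, 9.3, 2, 10, 1]
print(find_plateau(surface, tol=0.5))      # → (1, 5): the plateau, not the peak
```

Maximising this score prefers the wide stable region over the isolated peak, which is exactly the plateau-over-peak idea being discussed.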