What to feed to the input of the neural network? Your ideas... - page 60

 
Maxim Dmitrievsky #:
Maximising the quality of training means maximising the quality of predictions on new data. Nobody is interested in predictions on the training sample, because those are already known. That is not learning, it is approximation, and you do not call approximation learning.

For example, a two-layer MLP is a universal approximator that can approximate any arbitrary function to any accuracy. Does that mean it is trained to maximum quality? Of course not. Otherwise we would not keep inventing other neural network architectures that are better at learning, rather than fitting, for specific tasks.

Weak, even though you seem to have been in this field for a long time.
Well, if you are training on the multiplication table, Ohm's law and other such laws, then the more examples you give during training, the more accurate the answers will be on new data. And the model will always be undertrained, because there are infinitely many cases; you obviously cannot feed in all of them.

In a noisy environment radio operators can cope with white noise (or other familiar natural noises), but in trading even the noise changes all the time. So quality assessment here is quite complicated.
 
Andrey Dik #:

Okay, the word "grade" came up, excellent.

So learning needs to be assessed in some way; it does not matter how, the main thing is to improve the grade. Right?

The maximum grade comes with absolute memorisation. In trading, in a noisy environment, everyone dances with their tambourine as best they can)))) Some rely on a test set, some on cross-validation, some on walk-forward. And some go by eye))))
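For the record, walk-forward evaluation in code is roughly this kind of rolling split. A minimal Python sketch; the window sizes and the `model`/`score` placeholders are my own assumptions, not anything agreed in this thread:

```python
import numpy as np

def walk_forward_splits(n_samples, train_size, test_size, step):
    """Yield (train_idx, test_idx) index arrays that roll forward in time."""
    start = 0
    while start + train_size + test_size <= n_samples:
        train_idx = np.arange(start, start + train_size)
        test_idx = np.arange(start + train_size, start + train_size + test_size)
        yield train_idx, test_idx
        start += step

# Usage (placeholders only -- X, y, model and score are not from this thread):
# scores = []
# for tr, te in walk_forward_splits(len(X), train_size=1000, test_size=200, step=200):
#     model.fit(X[tr], y[tr])
#     scores.append(score(y[te], model.predict(X[te])))
# print(np.mean(scores))
```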
 
Aleksey Nikolayev #:
What is wrong with the usual definition of learning - assigning specific values to the model parameters?
Ivan Butko #:

It does not reflect the essence. You can assign any kind of gibberish and nonsense.

If we start from the opposite (memorisation/recall), then learning is the identification of certain patterns, thanks to which new knowledge can be created or recognised.

As an example: Chat writes poems on an arbitrary topic.

Both model learning and human learning are alike: in both cases you have to adjust the parameters of the model (the neurons, in the case of the brain).

OK. The point is that nobody needs just any kind of training, only good training. So what is the criterion for evaluating how good the training is?

 
Forester #:
The maximum grade comes with absolute memorisation. In trading, in a noisy environment, everyone dances with their tambourine as best they can)))) Some rely on a test set, some on cross-validation, some on walk-forward. And some go by eye))))
I.e., learning is a process that maximises the grade (or minimises the error), right?
 
In the discussion of intelligence we tried to describe its essence. And the proponents of the biological connection had a criterion (if I am not mistaken): learnability.

Yet rote memorisation (the walking-encyclopaedia type), as social practice shows, is on the contrary a sign of weak intellect.

And conversely, a person who does not hold all of that knowledge can arrive at it faster through experience.

So, starting from the same volume of knowledge, the second type of intellect will begin to outstrip the first in development, practice, activity, research and so on.

Therefore, I would not include the concept of "full memorisation" in the definition or description of learning.



Projecting this onto forex: to build an engine that can extract benefit from the price chart, we need an architecture that accepts more than one value as input and does not break down.

Practice shows that the more values you feed in, the worse it gets, whereas it ought to be the other way round.

But, on the other hand, there are two types of inputs in relation to the chart:

1. A sequence of temporally similar(!) data.

2. Only the most recent, but heterogeneous, data.

In the same simple MLP, the first type of data unambiguously breaks down as soon as more than one value is fed in.

But the second type sometimes(!) works better, if you manage to find suitable (complementary) input data.

For example, the price's position within a range and the position of some oscillator sometimes give a working model that repeats its success on related pairs.

With the first type of data this is impossible: with each additional input (further back in time), the result on related pairs becomes completely random.
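To make the second type concrete, here is a minimal sketch (Python/NumPy; the 50-bar range, the choice of an RSI-style oscillator and the synthetic prices are my own assumptions for illustration) of turning "price position in a range" plus an oscillator reading into two heterogeneous inputs:

```python
import numpy as np

def price_position_in_range(close, lookback=50):
    """Where the last close sits inside the high-low range of the window, 0..1."""
    window = close[-lookback:]
    lo, hi = window.min(), window.max()
    return (close[-1] - lo) / (hi - lo) if hi > lo else 0.5

def oscillator(close, period=14):
    """RSI-style oscillator scaled to 0..1 (simple averaging, one possible choice)."""
    deltas = np.diff(close[-(period + 1):])
    gains = deltas[deltas > 0].sum()
    losses = -deltas[deltas < 0].sum()
    return 0.5 if gains + losses == 0 else gains / (gains + losses)

# Two complementary "most recent only" inputs for the network:
close = np.cumsum(np.random.randn(200)) + 100   # synthetic prices, illustration only
x = np.array([price_position_in_range(close), oscillator(close)])
print(x)   # e.g. [0.41 0.57]
```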


This is where I got the idea that there is learning in the informational sense (not on examples from life, but on examples from a virtual environment). And the virtual environment is bits.

If we set aside the question of bytes and why there are so many of them (symbols), one thing remains: not only numbers but also symbols can be the subject of learning.
Hence the input data should not carry only a numerical value, because a number already has magnitude (it is, in itself, a weight); it should also have some qualitative form (A, B, C), and only then are those symbols assigned weights, expressed as numbers.
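In practice that usually means feeding the symbol as a categorical feature, so the symbol itself carries no magnitude and a weight gets attached to it only during training. A minimal sketch (Python; the three symbols are just the A/B/C example from the post, and the weights shown are hypothetical):

```python
import numpy as np

SYMBOLS = ["A", "B", "C"]

def one_hot(symbol):
    """Encode a qualitative value without giving it any numeric magnitude."""
    v = np.zeros(len(SYMBOLS))
    v[SYMBOLS.index(symbol)] = 1.0
    return v

# The network then learns a weight per symbol, instead of treating "B" as "2 of 3":
weights = np.array([0.2, -0.7, 1.3])   # hypothetical learned weights
print(one_hot("B") @ weights)          # -0.7, the weight attached to "B"
```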

Accordingly, when you compose a learning architecture it does not have to look like anything from the textbooks; it really is almost a creative process.

But in order to move beyond creativity and onto the path of justification, we need to describe at least what learning is in the informational (applied) sense.
 
Andrey Dik #:
I.e., learning is a process that maximises the grade (or minimises the error), right?

No. Learning can happen without a grade. Grading is optional.

If you memorise the entire multiplication table, then whether you are graded or not, your knowledge will not change (provided you memorised it well).
 
Andrey Dik #:
I.e., learning is a process that maximises the grade (or minimises the error), right?

When you learn, you do not run through the options:

3*3 = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11... and then compute the difference from 9 and conclude from it that the answer really is 9.
You memorise 9 straight away.
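To make the contrast concrete, a toy sketch (Python, purely illustrative): a lookup table memorises the answer in one step and needs no error signal, while an optimiser searches the options and needs the error at every step.

```python
# Memorisation: one assignment, no error signal required.
table = {(3, 3): 9}

# Error-driven search: guess, measure the error against 9, correct, repeat.
guess = 1.0
for _ in range(100):
    error = guess - 9        # the "grade" that memorisation does not need
    guess -= 0.1 * error     # gradient-style correction toward the answer
print(table[(3, 3)], round(guess, 3))   # -> 9 9.0
```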

 
Forester #:

No. Learning can happen without a grade. Grading is optional.

If you memorise the entire multiplication table, then whether you are graded or not, your knowledge will not change (provided you memorised it well).

How will you know, without a grade, whether you have learnt the multiplication table in full or only partially?
 
Forester #:

No. Learning can happen without a grade. Grading is optional.

If you memorise the entire multiplication table, then whether you are graded or not, your knowledge will not change (provided you memorised it well).
Forester #:

When you learn, you do not run through the options:

3*3 = 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11... and then compute the difference from 9 and conclude from it that the answer really is 9.
You memorise 9 straight away.


How will you know, without a grade, whether you have learnt the multiplication table in full or only partially?

 
Andrey Dik #:


How will you know, without a grade, whether you have learnt the multiplication table in full or only partially?

You don't. You learn what you are given to learn.