What to feed to the input of the neural network? Your ideas...

 
mytarmailS #:
Well, make up your mind.

Approximation is not learning, but a neural network is an approximator...

So a neural network is not trainable?


One thinks a database is a classifier, the other is confused by approximation...

What kind of experts are you? 😀
Learning is a broader concept than optimisation and approximation. Why is it so hard to grasp? Because the old farts have gathered here?
 
Maxim Dmitrievsky #:
Learning is a broader concept than optimisation and approximation. Why is it so hard to grasp?
Broad, narrow.

Training a model on a training set is nothing but curve fitting, i.e. approximation... read up on it.
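
To make that claim concrete, here is a minimal sketch (the target function, noise level and MLP settings are my own illustrative choices, not anything from this thread): "training" a small net on sampled points is literally fitting a curve through them, i.e. building an approximation.

```python
# Minimal sketch: training a small MLP on noisy samples of sin(x)
# is curve fitting; the fitted net is an approximation of the curve.
# All settings here are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=(200, 1))              # training inputs
y = np.sin(x).ravel() + rng.normal(0, 0.1, 200)    # noisy target curve

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000,
                   random_state=0)
net.fit(x, y)                                      # minimise squared error

# Compare the true curve with the fitted approximation on a grid.
grid = np.linspace(-3, 3, 7).reshape(-1, 1)
print(np.column_stack([np.sin(grid).ravel(), net.predict(grid)]))
```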
 
mytarmailS #:
Broad, narrow.

Training a model on a training set is nothing but curve fitting, i.e. approximation... read up on it.
That's your personal opinion.
 
I have a question and some bewilderment rolled into one. Do you really intend to keep discussing ML bare-arsed, i.e. without even minimal knowledge of the subject under discussion? :)
 
Maxim Dmitrievsky #:
I have a question and some bewilderment rolled into one. Do you really intend to keep discussing ML bare-arsed, i.e. without even minimal knowledge of the subject under discussion? :)
Well, if for them deep study of a topic means chatting with gpt, then what do you think?


They won't even be able to formulate a question properly due to lack of knowledge and terminology.
 
mytarmailS #:
Well, if for them deep study of a topic means chatting with gpt, then what do you think?
At least it's better trained than some here. A lot of experts worked hard on it.
 
Maxim Dmitrievsky #:
At least it's better trained than some here. A lot of experts worked hard on it.
The hallucinations haven't gone away
 
mytarmailS #:
The hallucinations haven't gone away
This is a known problem on new data. As a knowledge base it's quite tolerable :)
 
Maxim Dmitrievsky #:
This is a known problem on new data. As a knowledge base it's quite tolerable :)
I'm thinking about building a knowledge base on the market and then trading from it.

You could use, for example, Obsidian.
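
If that route is of interest, a tiny hedged sketch: an Obsidian vault is just a folder of plain .md files, so market notes can be generated programmatically. The vault name, note layout and [[links]] below are made up for illustration.

```python
# Hedged sketch: an Obsidian vault is a folder of plain .md files,
# so a market knowledge base can be written out programmatically.
# The vault name, note layout and [[links]] are illustrative only.
from pathlib import Path
from datetime import date

vault = Path("MarketVault")
vault.mkdir(exist_ok=True)

note = f"""# EURUSD notes, {date.today()}

- Regime: ranging
- Key levels: 1.0850 / 1.0920

Related notes: [[DXY]], [[ECB rate decisions]]
"""

# One file per note; Obsidian picks it up when the vault is opened.
(vault / f"EURUSD {date.today()}.md").write_text(note, encoding="utf-8")
```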
 
Forester #:
Maximum training quality is achieved with perfectly accurate memorisation, i.e. when all the data is recorded in the database in full, when a tree is trained down to the very last possible split, or when clustering with the number of clusters equal to the number of examples.

Trees that stop splitting earlier, or clustering with fewer clusters, will generalise and merge data in leaves/clusters. These are undertrained models, but in the presence of noise they may be more successful than models with exact recall.

There was an example at the beginning of the ML thread of teaching a random forest the multiplication table (sketched in code after this quote). Since it was not fed every possible example for training, the forest sometimes produces exact answers, but mostly approximate ones. Clearly it is undertrained. But it is able to generalise, finding and averaging the individual trees' answers closest to the correct one.

With learning in noise it is hard to assess quality, especially if the noise is much stronger than the patterns, as in trading.

That is why evaluation on validation and test samples, cross-validation, walk-forward and so on were invented.
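
A hedged sketch of that multiplication-table experiment as described above (the original thread's exact setup is unknown; the table size, holdout share and forest parameters here are assumptions). Held-out cells get averaged, approximate answers rather than exact recall.

```python
# Sketch of the multiplication-table example: train a random forest
# on part of the table and check that withheld answers come out
# approximate, not exact. Parameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Full 1..9 multiplication table as (a, b) -> a*b
a, b = np.meshgrid(np.arange(1, 10), np.arange(1, 10))
X = np.c_[a.ravel(), b.ravel()]
y = (a * b).ravel()

# Withhold part of the table so the forest cannot simply memorise it
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

forest = RandomForestRegressor(n_estimators=200, random_state=0)
forest.fit(X_tr, y_tr)

# Training rows: near-exact recall. Held-out rows: approximate
# answers, i.e. generalisation by averaging the trees' outputs.
print("train error:", np.abs(forest.predict(X_tr) - y_tr).mean())
print("test  error:", np.abs(forest.predict(X_te) - y_te).mean())
```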

So, the word "assessment" has come up. Great.

So, learning needs to be assessed somehow; it doesn't matter how, the important thing is to improve the assessment. Right?
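
One minimal sketch of such an assessment, in the walk-forward spirit mentioned in the quote, using scikit-learn's TimeSeriesSplit on a synthetic series. The data and model are placeholders, not anyone's actual trading setup or a recommended scheme.

```python
# Hedged sketch of "assessing learning": walk-forward evaluation
# on a synthetic random-walk series. Each fold trains on the past
# and scores on the immediately following segment.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 500))   # toy price series
X = np.c_[prices[:-1]]                      # feature: current price
y = prices[1:]                              # target: next price

scores = []
for tr_idx, te_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X[tr_idx], y[tr_idx])         # fit on the past only
    err = np.abs(model.predict(X[te_idx]) - y[te_idx]).mean()
    scores.append(err)                      # score on the "future"

print("walk-forward MAE per fold:", np.round(scores, 3))
```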