What to feed to the input of the neural network? Your ideas... - page 56

 
Ivan Butko #:

Great question

If it is explored thoroughly, in the context of a virtual environment (an information field), one can presumably move in the right direction, instead of just rebroadcasting academic knowledge and textbooks.

Every time I dug into an architecture, I asked myself: why like this? Why? Why did they decide to do it this way? But no, you are just supposed to take the architecture that some clever mathematicians wrote down and copy it over.

I even asked the chatbot why the LSTM block has the form it has. In reply, boilerplate from the ML textbooks: it is long short-term memory, blah blah blah, suited to learning on classification tasks, and so on, blah blah blah.

I ask, "but why exactly this way?", and the answer amounts to "the mathematicians decided so". No theory behind it: no information theory, no theory of information processing, no definitions of learning, no learning theory. Just bare postulates.

Only on the third attempt did the chatbot bring up vanishing and exploding gradients, which LSTM supposedly solves. Fine, but how does it solve them? With gates!
What gates!?!? Gating what?

What information? The numbers at the input? But you transform those incoming numbers into some distorted, incomprehensible gibberish; the incoming numbers, the RGB colours, become something unreadable, a black box of mush.

Fine, suppose it converts some numbers into other numbers, but where is the learning in that? Memorisation? Then it is memorisation! And how does that differ from learning?

In the end, it is unclear-what being applied to unclear-what squared: a non-stationary market.

In short, the question is a great one, and it was asked long ago. Unpacking it would be extremely interesting.

So there really is no systematisation in the science of applying ML? Just a kind of alchemy, with no guarantee of a positive result?

What is the difference between training a neural network and teaching a poodle to do tricks? Or teaching a schoolboy? Are there any differences, and if so, who has systematised and substantiated those differences, and where?
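For concreteness, the "gates" disputed above are nothing mystical: an LSTM step computes three sigmoid-valued multipliers (forget, input, output) and a candidate vector, then updates its cell state additively. Below is a minimal numpy sketch of the standard textbook formulation; it is an illustration only, not code from this thread, and the dimensions and initialisation are arbitrary.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold the stacked parameters of the
    four blocks (forget, input, candidate, output), each of size n."""
    n = h_prev.shape[0]
    z = W @ x + U @ h_prev + b       # all four pre-activations at once
    f = sigmoid(z[0*n:1*n])          # forget gate: how much old state to keep
    i = sigmoid(z[1*n:2*n])          # input gate: how much new info to write
    g = np.tanh(z[2*n:3*n])          # candidate values to write
    o = sigmoid(z[3*n:4*n])          # output gate: how much state to expose
    c = f * c_prev + i * g           # additive cell-state update
    h = o * np.tanh(c)               # new hidden state
    return h, c

# toy dimensions: input size 3, hidden size 4
rng = np.random.default_rng(0)
n_in, n_hid = 3, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):  # run five time steps
    h, c = lstm_step(x, h, c, W, U, b)
print(h)
```

The line c = f * c_prev + i * g is the usual answer to the vanishing-gradient question: when the forget gate f stays close to 1, gradients propagate through the cell state almost unchanged across time steps, instead of being repeatedly squashed as in a plain recurrent layer.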

 
Ivan Butko #:

Training means coaching some intelligent system to perform certain tasks. For example, prediction.

To learn is to acquire knowledge or experience.

A childish question that the "valued experts" cannot answer: they cannot tell learning apart from optimisation.

The whole process of building a model, including data preparation and model selection, is called learning. Adjusting the weights of the NN is called fitting, or optimisation.

There are models in which the optimisation stage is not explicitly present at all, such as random forests, rule-based models, and some types of clustering. In neural networks it is explicit.

And don't even get me started on gates :)
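That split between fitting-as-optimisation and models with no explicit optimisation stage is easy to see in code. A sketch using scikit-learn on synthetic data (my own illustration, not anyone's trading pipeline): a random forest is grown by greedy recursive splitting with no iterative weight optimisation, while a small neural network is fitted by an explicit gradient-based optimiser.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# No explicit optimisation stage: trees are built by greedy splitting;
# no loss function is iteratively minimised over a weight vector.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Explicit optimisation stage: weights are fitted by a gradient-based
# solver (here Adam) minimising log-loss over many iterations.
nn = MLPClassifier(hidden_layer_sizes=(32,), solver="adam",
                   max_iter=500, random_state=0).fit(X, y)

print("forest train accuracy:", rf.score(X, y))
print("mlp train accuracy:   ", nn.score(X, y))
```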
 
Maxim Dmitrievsky #:

Training means coaching some intelligent system to perform certain tasks. For example, prediction.

To learn is to acquire knowledge or experience.

A childish question that the "valued experts" cannot answer: they cannot tell learning apart from optimisation.

The whole process of building a model, including data preparation and model selection, is called learning. Adjusting the weights of the NN is called fitting, or optimisation.

There are models in which the optimisation stage is not explicitly present at all, such as random forests, rule-based models, and some types of clustering. In neural networks it is explicit.

And don't even get me started on gates :)

And the water starts flowing.

Just as I predicted.

 
Ivan Butko #:

And the water starts flowing.

Just as I predicted.

If it flows in through one pipe and out through the other, that's pathological.
 
Maxim Dmitrievsky #:
If it flows in through one pipe and out through the other, that's pathological.

You don't even understand the question

You're just broadcasting general knowledge

It's like an encyclopaedia.

 
Ivan Butko #:

You don't even understand the question

You're just broadcasting general knowledge

It's like an encyclopaedia.

At least general knowledge doesn't look like the kind of nonsense one forges personally through denial of anything and everything :)
 
Andrey Dik #:

So there really is no systematisation in the science of applying ML? Just a kind of alchemy, with no guarantee of a positive result?

What is the difference between training a neural network and teaching a poodle to do tricks? Or teaching a schoolboy? Are there any differences, and if so, who has systematised and substantiated those differences, and where?

If you find such works, share them.
I am curious myself.

The textbook followers all relay one and the same thing: by "training" they mean the procedure for tuning/adjusting the engine (the model).

But that does not capture the essence of learning, which in the general sense has a deeper meaning.
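The memorisation-versus-learning question raised upthread does have one standard operational handle: generalisation to held-out data. A small illustrative sketch (assuming scikit-learn; the dataset and exact scores are arbitrary): an unconstrained decision tree can memorise noisy training labels outright, while a depth-limited tree is forced to compress, and the difference shows up only on the test split.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y=0.2 injects label noise, so perfect training accuracy
# can only be achieved by memorising the noise.
X, y = make_classification(n_samples=400, n_features=20,
                           n_informative=5, flip_y=0.2, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# An unconstrained tree can memorise the training set outright...
memoriser = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
# ...a depth-limited tree has to compress, i.e. to generalise.
learner = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X_tr, y_tr)

for name, m in [("memoriser", memoriser), ("learner", learner)]:
    print(name, "train:", round(m.score(X_tr, y_tr), 2),
                "test:",  round(m.score(X_te, y_te), 2))
```

Memorisation shows up as near-perfect training accuracy with a large train-test gap; the part of the performance that survives on held-out data is what the textbooks call learning.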

 
Better not to look for it; make it up and write it yourself, maestro :)
 
Ivan Butko #:

Great question

If it is explored thoroughly, in the context of a virtual environment (an information field), one can presumably move in the right direction, instead of just rebroadcasting academic knowledge and textbooks.

Every time I dug into an architecture, I asked myself: why like this? Why? Why did they decide to do it this way? But no, you are just supposed to take the architecture that some clever mathematicians wrote down and copy it over.

I even asked the chatbot why the LSTM block has the form it has. In reply, boilerplate from the ML textbooks: it is long short-term memory, blah blah blah, suited to learning on classification tasks, and so on, blah blah blah.

I ask, "but why exactly this way?", and the answer amounts to "the mathematicians decided so". No theory behind it: no information theory, no theory of information processing, no definitions of learning, no learning theory. Just bare postulates.

Only on the third attempt did the chatbot bring up vanishing and exploding gradients, which LSTM supposedly solves. Fine, but how does it solve them? With gates!
What gates!?!? Gating what?

What information? The numbers at the input? But you transform those incoming numbers into some distorted, incomprehensible gibberish; the incoming numbers, the RGB colours, become something unreadable, a black box of mush.

Fine, suppose it converts some numbers into other numbers, but where is the learning in that? Memorisation? Then it is memorisation! And how does that differ from learning?

In the end, it is unclear-what being applied to unclear-what squared: a non-stationary market.

In short, the question is a great one, and it was asked long ago. Unpacking it would be extremely interesting.


That's crackpottery ))))))

Go and study moving averages instead.
 
Maxim Dmitrievsky #:
Better not to look for it; make it up and write it yourself, maestro :)
mytarmailS #:

That's crackpottery ))))))

Go and study moving averages instead.

You guys are so sensitive.

Well, I have a different approach, a deeper one. Yours is more superficial.

I want to get to the bottom of it, and you're satisfied with ready-made packages.

That's fine.