What to feed to the input of the neural network? Your ideas...

 
😁
 
Ivan Butko #:

Why are you being so sensitive?



I'd say predictable.
It's like pouring water on someone who's asleep and they're bound to piss themselves....
 
Good.
If learning is a process aimed at mastering some system within a given topic, then what should the result be? The result has to be evaluated somehow: how good is it?
 

Both of them, instead of reading the specialised literature on the subject, use GPT chat and consider their approach to study profound :-O

The fucking Pepsi Generation ))))

 
mytarmailS #:

Both of them, instead of reading the specialised literature on the subject, use GPT chat and consider their approach to study profound :-O

The fucking Pepsi Generation ))))

Pepsi tastes better than cola, I don't mind that.

Although when they tested me with my eyes closed, I couldn't tell one from the other.

I'm sure they gave me two colas then.
 
If we continue this fascinating conversation, those who pour water will shit themselves again, it's already a tradition 😁 After all, someone taught them this, otherwise how would they know.

It's called over-optimisation: wishful thinking. You seem to have learnt something, but then it still doesn't work :)

You get rote memorisation instead of learning, because the process of learning includes practice.
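Roughly what that looks like in code. A minimal sketch (Python, scikit-learn); the random features here are only a stand-in for noisy inputs, not anyone's actual data: the model scores almost perfectly on what it memorised and no better than a coin flip on anything new.

# Sketch: "rote learning" on noise - great in-sample, useless out of sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))          # 1000 "bars", 10 made-up features
y = rng.integers(0, 2, size=1000)        # labels are pure noise: nothing to learn

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

model = DecisionTreeClassifier()          # unrestricted depth = memorise everything
model.fit(X_tr, y_tr)

print("train accuracy:", model.score(X_tr, y_tr))   # ~1.0, looks "learnt"
print("test accuracy: ", model.score(X_te, y_te))   # ~0.5, it still doesn't work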
 
Andrey Dik #:

Fun. The point of replacing the default BB is to get the values earlier?

The purpose of replacing the standard one was not to check the quality of the neural network; the purpose was to know the beginning and end of a flat (sideways range) in advance.
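For reference, the standard Bollinger-Band-width filter only confirms a flat after the fact. The sketch below (Python, pandas; the period and threshold are arbitrary assumptions, not anyone's settings) is that lagging baseline, the thing an "in advance" signal would have to beat.

# Sketch: lagging flat detection from relative Bollinger Band width.
import pandas as pd

def bb_flat_flag(close: pd.Series, period: int = 20, width_thr: float = 0.01) -> pd.Series:
    """True where the 2-sigma bands have already squeezed, i.e. the flat has already begun."""
    ma = close.rolling(period).mean()
    sd = close.rolling(period).std()
    rel_width = (4 * sd) / ma            # (upper - lower) / middle for 2-sigma bands
    return rel_width < width_thr

# usage: flags = bb_flat_flag(df["close"]); note the signal appears only once the window is quiet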

 

About the training...

A couple of years ago I came across this expression on an ordinary (non-technical) site: databases based on neural networks.
On the whole I accepted the term for myself. I work with trees myself, and a tree-based database is just as applicable.
1 leaf in a tree = 1 row in a database.

Differences:

1 row in the database contains only 1 example from the data stored in the database.

1 leaf contains:

1) 1 example and all exactly identical examples (when the tree is split as deep as possible, down to the last difference)

or

2) 1 example and the exactly identical examples + the most similar examples, if the splitting stops earlier. This is called generalisation of examples.
Which examples count as similar is defined differently by different algorithms when selecting tree splits.

Advantages of trees over databases: generalisation and fast search for the required leaf - no need to go through a million rows, the leaf can be reached through several splits.

Clustering generalises too. K-means does it by proximity of examples to the cluster centre; other methods do it differently. You can also split with the maximum number of clusters = the number of examples, and you get an analogue of database rows/leaves without generalisation.
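A sketch of that point (Python, scikit-learn, toy random data): with as many clusters as examples, k-means simply stores every example on its own; with fewer clusters it merges similar examples around a common centre.

# Sketch: clusters == examples -> no generalisation; fewer clusters -> generalisation.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                                           # 20 toy examples, 3 features

km_db  = KMeans(n_clusters=len(X), n_init=1, random_state=0).fit(X)    # one example per cluster
km_gen = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)        # similar examples merged

print("clusters == examples, inertia:", round(km_db.inertia_, 6))      # ~0: pure memorisation
print("4 clusters, inertia:          ", round(km_gen.inertia_, 3))     # > 0: generalisation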

Neural networks are harder to understand and interpret, but in essence they are also a database, just not as obvious a one as leaves and clusters.

Bottom line: tree learning = memorising/recording examples, just like a database. If you stop the splitting/learning before the most accurate memorisation possible, you memorise with generalisation.

Andrey of course wants to bring the discussion round to the point that learning is optimisation. No: it is memorisation. But optimisation is also present: you can optimise over variants of learning depth, split methods, etc. Each optimisation step will train a different model. But learning itself is not optimisation. It is memorisation.
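A small sketch of the tree-as-database view (Python, scikit-learn, toy data; the numbers are only illustrative): a fully grown tree keeps many tiny leaves, the early-stopped tree groups similar examples into shared leaves, and apply() reaches the needed leaf through a few splits rather than a scan of all rows.

# Sketch: leaf = "row"; early stopping = generalisation; apply() = index lookup.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 5))                 # 200 toy "examples"
y = rng.integers(0, 2, size=200)              # arbitrary labels, just to grow the tree

full   = DecisionTreeClassifier(random_state=0).fit(X, y)                       # split to the last difference
pruned = DecisionTreeClassifier(min_samples_leaf=20, random_state=0).fit(X, y)  # stop earlier

print("leaves, full tree: ", full.get_n_leaves())      # many small leaves, close to one per example
print("leaves, early stop:", pruned.get_n_leaves())     # far fewer: similar examples share a leaf
print("leaf id of example 0:", pruned.apply(X[:1])[0])  # reached through a few splits, no full scan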
 
Forester #:

Andrey of course wants to bring the discussion round to the point that learning is optimisation. No: it is memorisation. But optimisation is also present: you can optimise over variants of learning depth, split methods, etc. Each optimisation step will train a different model. But learning itself is not optimisation. It is memorisation.
Overfitting is memorisation. Memorisation plus generalisation is closer to learning :)
 
Maxim Dmitrievsky #:
Overfitting is memorisation. Memorisation plus generalisation is closer to learning :)

Generalisation is more like under-learning. That is, you have memorised, but not perfectly accurately (you have dragged the neighbouring examples into it too...). Almost like a schoolboy with a C grade)))

But if we memorise something governed by a law (for example Ohm's law), there will be no overfitting; it is easier to end up with under-learning, when only a few examples are taken out of the infinite number available.

For trading, where patterns are weak and noisy, perfectly accurate memorisation together with the noise will result in a loss.
For some reason this has come to be called overfitting. Accurate memorisation is not harmful in itself, as in the case of learning a genuine pattern. But memorising noise/rubbish is of no use.
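To make that concrete, a sketch (Python, scikit-learn; Ohm's law V = I*R with made-up numbers). A 1-nearest-neighbour model is pure memorisation: it just returns the stored example. Memorising the lawful relationship still answers new questions correctly; equally accurate memorisation of the same relationship buried in noise just returns the noise.

# Sketch: memorising a law is harmless, memorising noise is not.
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(3)
R = 100.0
I_train = np.linspace(0.0, 1.0, 101).reshape(-1, 1)      # dense grid of currents

V_law   = (I_train * R).ravel()                          # Ohm's law: V = I * R
V_noisy = V_law + rng.normal(scale=20, size=101)         # the same law buried in noise

memo_law   = KNeighborsRegressor(n_neighbors=1).fit(I_train, V_law)
memo_noise = KNeighborsRegressor(n_neighbors=1).fit(I_train, V_noisy)

I_new = np.array([[0.154], [0.554], [0.954]])            # unseen currents
print("law memorised:  ", memo_law.predict(I_new))       # within one grid step (~1 V) of the truth
print("noise memorised:", memo_noise.predict(I_new))     # off by roughly the noise level
print("true values:    ", (I_new * R).ravel())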