What to feed to the input of the neural network? Your ideas... - page 57
Why are you being so sensitive?
Both of them, instead of reading the specialist literature on the subject, use the GPT chat and consider that approach to study profound :-O
The fucking Pepsi Generation ))))
Funny. Is the point of replacing the default BB to get the values earlier?
The purpose of replacing the standard one was not to check the quality of the neural network; the purpose was to know the beginning and end of a flat period in advance.
About the training...
A couple of years ago I came across this expression on a general (non-technical) site: databases based on neural networks.
In general, I adopted this term for myself. I work with trees myself, and "tree-based database" applies just as well.
1 leaf in a tree = 1 row in a database.
Differences:
Advantages of trees over databases: generalisation, and fast search for the required leaf - no need to scan a million rows, the leaf is reached in a few splits.
Clustering generalises too. K-means does it by proximity of examples to the cluster centre; other methods do it differently. You can also set the number of clusters equal to the number of examples, and you get an analogue of database rows/leaves without generalisation.
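That limiting case can be sketched in a few lines of pure Python (the example values below are hypothetical, chosen just for illustration): with as many centres as examples, nearest-centre lookup is exact recall, i.e. a database; with fewer centres, neighbouring examples share one centre.

```python
# Hypothetical sketch of the limiting case described above: with as many
# cluster centres as examples, "clustering" degenerates into a lookup table,
# i.e. a database with no generalisation.

examples = [1.0, 2.0, 5.0, 9.0]

def nearest_centre(x, centres):
    # Assign x to the closest centre (the k-means assignment step).
    return min(centres, key=lambda c: abs(x - c))

# k == number of examples: every example is its own centre -> exact recall.
centres_full = list(examples)                          # k = 4
assert nearest_centre(5.0, centres_full) == 5.0

# k < number of examples (centres taken as cluster means): neighbours share
# a centre, so recall is approximate -- memorisation with generalisation.
centres_coarse = [(1.0 + 2.0) / 2, (5.0 + 9.0) / 2]    # k = 2
assert nearest_centre(2.0, centres_coarse) == 1.5
```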
Bottom line: tree learning is memorising/recording examples, just like a database. If you stop splitting/learning before the most accurate possible memorisation, you memorise with generalisation. Neural networks are harder to understand and interpret, but in essence they are also a database, though not as obviously as leaves and clusters.
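A minimal sketch of the leaf-as-row idea in pure Python (toy data and a simplistic midpoint-split "tree" - an assumption for illustration, not anyone's actual code): grown to full depth, the tree stores one training example per leaf, like one row per record in a database; capping the depth merges neighbours into one leaf, which is the "memorising with generalisation" described above.

```python
# Toy 1-D regression tree: full depth = one example per leaf (exact recall,
# like a database lookup); limited depth = neighbours averaged in one leaf.

def build_tree(samples, depth, max_depth):
    # samples: list of (x, y) pairs sorted by x
    if len(samples) == 1 or depth == max_depth:
        # Leaf: predict the mean target of the examples that fell here.
        return sum(y for _, y in samples) / len(samples)
    mid = len(samples) // 2
    threshold = samples[mid][0]
    return (threshold,
            build_tree(samples[:mid], depth + 1, max_depth),
            build_tree(samples[mid:], depth + 1, max_depth))

def predict(node, x):
    while isinstance(node, tuple):
        threshold, left, right = node
        node = left if x < threshold else right
    return node

data = sorted([(1, 10.0), (2, 12.0), (3, 30.0), (4, 33.0)])

# Full depth: every example gets its own leaf -> exact recall, like a DB row.
full = build_tree(data, 0, max_depth=10)
assert predict(full, 2) == 12.0

# Depth 1: neighbours share a leaf -> averaged, "memorised with generalisation".
shallow = build_tree(data, 0, max_depth=1)
assert predict(shallow, 2) == 11.0   # (10 + 12) / 2
```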
Andrew, of course, wants to make the point that learning is optimisation. No - it is memorisation. But optimisation is present too: you can optimise over variants of learning depth, split methods, etc. Each optimisation step trains a different model. But learning itself is not optimisation; it is memorisation.
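That separation can be sketched like this (an entirely hypothetical toy "model" and data, just to make the two layers visible): the inner fit only memorises examples, while the outer loop optimises over a setting - each step of that loop trains a different model.

```python
# Inner layer: "training" = memorising examples (here, their slopes).
# Outer layer: optimisation over a setting (how many examples to keep).

train = [(1, 2.4), (2, 4.0), (3, 6.0)]   # (x, y) pairs, roughly y = 2x
valid = [(4, 8.0), (5, 10.0)]

def fit(k):
    # Hypothetical "training": memorise the k most recent examples and
    # predict with their mean slope. Each k yields a different model.
    pts = train[-k:]
    slope = sum(y / x for x, y in pts) / len(pts)
    return lambda x: slope * x

def valid_error(model):
    return sum(abs(model(x) - y) for x, y in valid)

# The optimisation loop: it does not learn anything itself, it only
# picks between models, each of which memorised the data in its own way.
best = min(range(1, len(train) + 1), key=lambda k: valid_error(fit(k)))
assert valid_error(fit(best)) == 0.0
```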
Overlearning is memorisation. Memorisation plus generalisation is closer to learning :)
Generalisation is more like under-learning: you have memorised, but not perfectly accurately (you have dragged the neighbours into it too...). Almost like a schoolboy with a C grade )))
But if we memorise something governed by a law (for example, Ohm's law), there will be no overlearning; it is easier to get underlearning, when the examples are few out of an infinite number of possible ones.
For trading, where patterns are weak and noisy, absolutely accurate memorisation along with the noise will result in a loss. For some reason this has been called overlearning. Accurate memorisation is not harmful in itself, as in the case of learning a law-governed pattern. But memorising noise/rubbish is not useful.
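A toy illustration of that point (the numbers are hypothetical): a model that memorises every training example exactly also memorises the noise, so it recalls the training set perfectly but loses against the underlying law on clean data.

```python
# Underlying pattern: y = 2 * x. The training labels carry noise.
train = [(1, 2.3), (2, 3.8), (3, 6.4)]   # noisy observations of y = 2x
test = [(1, 2.0), (2, 4.0), (3, 6.0)]    # the clean pattern itself

lookup = dict(train)                      # exact memorisation, noise included

def memorised(x):
    return lookup[x]

def generalised(x):
    return 2 * x                          # the law behind the data

train_err_memo = sum(abs(memorised(x) - y) for x, y in train)
test_err_memo = sum(abs(memorised(x) - y) for x, y in test)
test_err_gen = sum(abs(generalised(x) - y) for x, y in test)

assert train_err_memo == 0.0              # perfect recall of the noise
assert test_err_gen < test_err_memo       # the law beats the memorised noise
```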