Machine learning in trading: theory, models, practice and algo-trading - page 1747

 
Mihail Marchukajtes:
What the hell is going on?

What was it?) Glad they brought it back)))

 
I'm having trouble understanding the mathematical principle of a NS.

What I understand:

1. Training sample - isolated blocks of data containing modified representations of a semantic invariant.

2. NS structure - an assembly of successive layers of "neurons", where the first layer accepts the data (and has the necessary number of neurons for this), and the remaining layers are designed to generalize the data processed in the first layer and reduce it to an invariant that is handled by explicit program logic.

3. "Neuron" - a function that consistently receives a fragment of training sample data, converts this fragment into a "weight" and passes it to the next layer.

It is not clear to me how the non-obvious invariant in the data is mathematically separated from the "husk" across several filtering layers, by correcting not the data itself but its "weights".
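As a point of comparison, here is a minimal sketch of what a feed-forward NS actually computes (all names and sizes are illustrative, not from the thread): the "weights" are fixed parameters attached to each layer, and it is the data, not the weights, that flows through the layers.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def forward(x, layers):
    """Pass input x through successive (weights, bias) layers."""
    a = x
    for W, b in layers:
        a = relu(W @ a + b)   # each layer transforms the previous layer's output
    return a

rng = np.random.default_rng(0)
layers = [(rng.standard_normal((4, 3)), np.zeros(4)),   # first layer: 3 inputs -> 4 neurons
          (rng.standard_normal((2, 4)), np.zeros(2))]   # output layer: 4 -> 2
print(forward(np.array([0.5, -1.0, 2.0]), layers))

Training then adjusts the entries of each W and b so that the final output matches the target; the data fragments themselves are never edited.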
 
Reg Konow:
I'm having trouble understanding the mathematical principle of a NS...

Imagine searching for the highest hill when the peaks are hidden in clouds, so the heights cannot be seen. A coarse low-frequency pass finds where an elevation begins and surveys around it; where there are no elevations, it does not survey. You can survey the regions where elevations begin and skip the small areas. A kind of smart sampling. But in any case it is an algorithm. And in any case a full search will only rarely avoid losing to the alternatives: whatever the route (straight through, or starting from both ends), the probability of finding the target sooner is higher for a search guided by what you are looking for than for a full sequential one.
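To make the "smart sampling" idea concrete, here is a hedged sketch (the objective function and grid sizes are invented for illustration): survey a coarse grid first, then refine only around the most promising point, and compare the cost with a full sequential scan.

import numpy as np

def landscape(x):
    # invented 1-D "terrain"; the peak is what we are searching for
    return np.sin(3 * x) - (x - 2.0) ** 2 / 10

def coarse_to_fine(lo, hi, coarse=20, fine=50):
    # coarse pass: find where an "elevation" begins
    xs = np.linspace(lo, hi, coarse)
    best = xs[np.argmax(landscape(xs))]
    # fine pass: survey only near the promising point
    step = (hi - lo) / coarse
    xs = np.linspace(best - step, best + step, fine)
    return xs[np.argmax(landscape(xs))], coarse + fine  # result, evaluations used

def full_sequential(lo, hi, n=1000):
    xs = np.linspace(lo, hi, n)
    return xs[np.argmax(landscape(xs))], n

print(coarse_to_fine(0.0, 5.0))     # near-identical peak, far fewer evaluations
print(full_sequential(0.0, 5.0))

The caveat in the post still holds: if the terrain is deceptive (the true peak rises from a flat area the coarse pass skips), only the full scan is guaranteed to find it.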

 
Reg Konow:
I'm having trouble understanding the mathematical principle of a NS...

You are not trying to understand it - you are trying to make it up.

To understand the mathematical foundations of NS, you should read the Kolmogorov-Arnold-Hecht-Nielsen theory.
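For reference, the core of that theory is Kolmogorov's superposition theorem (the result Hecht-Nielsen later tied to neural networks): every continuous function of n variables on the unit cube can be built from one-variable continuous functions and addition alone,

f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right).

Read as a network, the inner functions \varphi_{q,p} play the role of a hidden layer and the outer functions \Phi_q the output layer, which is why the theorem is cited as an existence argument for two-layer networks.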

 
Aleksey Nikolayev:

You are not trying to understand it - you are trying to make it up.

To understand the mathematical foundations of NS, you should read the Kolmogorov-Arnold-Hecht-Nielsen theory.

It is rarely explained clearly. And few people can understand it from formulas)))))

 
Aleksey Nikolayev:

You're not trying to understand it - you're trying to make it up...

To some extent, this is necessary. You can only really understand something you have created yourself. I am trying to reproduce the original idea behind the NS concept.
 

Through backpropagation of the invariant-detection error and a search for a local or global extremum of the neuron function by Newton or quasi-Newton optimization methods, adjusting the gradient step sizes.

This will be clearer for Peter
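A minimal sketch of the loop Maxim describes, with one substitution: plain first-order gradient descent stands in for the Newton and quasi-Newton methods he names, and the data and step size are invented for illustration.

import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 2))             # invented inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)     # invented target "invariant"

W1, b1 = 0.5 * rng.standard_normal((8, 2)), np.zeros(8)   # hidden layer
W2, b2 = 0.5 * rng.standard_normal(8), 0.0                 # output neuron
lr = 0.1                                                   # the gradient step

for epoch in range(500):
    # forward pass
    h = np.tanh(X @ W1.T + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # predicted probability
    # backward pass: propagate the prediction error to every weight
    d_out = (p - y) / len(y)                   # log-loss gradient at the output
    gW2, gb2 = h.T @ d_out, d_out.sum()
    d_h = np.outer(d_out, W2) * (1.0 - h ** 2) # chain rule through tanh
    gW1, gb1 = d_h.T @ X, d_h.sum(axis=0)
    # step downhill toward a (possibly local) extremum of the loss
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("train accuracy:", ((p > 0.5) == y).mean())

Quasi-Newton methods (e.g. L-BFGS) replace the fixed step lr with a step informed by curvature estimates; the backpropagation part, computing the gradients, is identical.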

 
Valeriy Yastremskiy:

Imagine searching for the highest hill when the peaks are hidden in clouds, so the heights cannot be seen...

ahahahah )))

 
Valeriy Yastremskiy:

Imagine searching for the highest hill when the peaks are hidden in clouds, so the heights cannot be seen...

This explanation is more suitable for GA, in my opinion.)))
 
Maxim Dmitrievsky:

Through backpropagation of the invariant-detection error and a search for a local or global extremum of the neuron function...

So, the operation of a NS is somehow tied to optimization?
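In standard terms, yes: training a NS is an optimization problem over its weights w. Given training pairs (x_i, y_i) and a network f(x; w), learning means finding

w^{*} = \arg\min_{w} \frac{1}{N} \sum_{i=1}^{N} L\big(f(x_i; w),\, y_i\big),

where L is a loss function; backpropagation is just an efficient way to compute the gradient \nabla_w L that the optimizer (gradient descent, Newton, quasi-Newton) consumes.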