Not quite Friday, but ...
There is a neural network (NS), any NS, with an input A = {A1, A2, ..., A20}. We train the NS and get a satisfying result. How do we practically evaluate the contribution of each element of the input, A1, A2, ..., A20, to this result?
The options off the top of my head are:
1) Somehow sum up and combine all the weights through which the element passes in the network. It is not quite clear to me how to do this; I would have to dig into the network's internals and compute some coefficients, etc.
2) Try to "zero out" somehow, or e.g. reverse an element of input vector and see how it affects the final result. So far I've settled on it.
But before implementing this second option I decided to ask for advice. Maybe someone has been thinking about this topic longer than I have? Maybe someone can recommend a book or article?
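A minimal sketch of option 2, assuming the trained network is available as a plain function `predict(X)` that maps an (n_samples, n_inputs) array to an array of outputs (that interface name is my assumption, not something from the thread):

```python
import numpy as np

def input_influence(predict, X, mode="zero"):
    """Estimate each input's influence by perturbing it and measuring
    how much the network's output changes.

    predict : callable (n_samples, n_inputs) -> (n_samples,)  (assumed interface)
    X       : evaluation set, shape (n_samples, n_inputs)
    mode    : "zero" sets the column to 0, "invert" flips its sign
    """
    baseline = predict(X)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = 0.0 if mode == "zero" else -Xp[:, j]
        # mean absolute change of the output attributed to input j
        scores.append(np.mean(np.abs(predict(Xp) - baseline)))
    return np.array(scores)
```

Larger scores would suggest inputs the network leans on more heavily; zeroing and inverting can give different rankings, so it is worth trying both modes.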
Or maybe exclude this input and try training without it. If the result is almost identical, then we haven't excluded the right element :)
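A sketch of that exclude-and-retrain check, assuming hypothetical `train_model(X, y)` and `evaluate(model, X, y)` helpers (neither is defined in the thread; they stand in for whatever training and scoring routine is actually used):

```python
import numpy as np

def ablation_by_retraining(train_model, evaluate, X, y):
    """For each input, retrain the network without that column and
    compare the score to the full model.

    train_model : callable (X, y) -> trained model   (assumed helper)
    evaluate    : callable (model, X, y) -> score    (assumed helper)
    """
    full_score = evaluate(train_model(X, y), X, y)
    drops = {}
    for j in range(X.shape[1]):
        X_reduced = np.delete(X, j, axis=1)
        score = evaluate(train_model(X_reduced, y), X_reduced, y)
        # a small drop suggests the excluded input mattered little
        drops[j] = full_score - score
    return full_score, drops
```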
Here is the picture: this approach will depend strongly on the training method, or rather on the training's ability to find the absolute maximum. I have no illusions in this regard: I am sure that training 300 weights with a GA I will not find it. So it will be some local maximum, but that suits me fine. By excluding something I could get an equally good result, just a different variant of the NS. But if I could train the NS by means of ISC, i.e. find the single correct solution, then I would do exactly that.
In any case, my task is different: there is a NS, there is an input, there is a training result, and the degree of influence of each input element on the final result must be found.
Then your option 2. But probably not by zeroing or inverting the element, but by replacing it with random values (noise).
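A sketch of that noise-replacement variant, using the same assumed `predict(X)` interface as above; here the "noise" is a random permutation of the column's own values, which destroys its information while keeping its scale and distribution:

```python
import numpy as np

def noise_influence(predict, X, n_repeats=10, seed=0):
    """Replace one input column at a time with a shuffled copy of itself
    and measure the average change in the network's output."""
    rng = np.random.default_rng(seed)
    baseline = predict(X)
    scores = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            scores[j] += np.mean(np.abs(predict(Xp) - baseline))
    return scores / n_repeats
```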
Another option is to try all networks with 1 input, then 2 inputs, then 3 inputs, etc. :-)
That's about what I was thinking... I should also somehow account for the interrelation of inputs, i.e. exclude or include inputs in groups as well.
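A sketch of excluding inputs in groups, again with the assumed `predict(X)` interface, zeroing whole subsets at once; with 20 inputs the number of subsets grows quickly, so the group size is capped:

```python
import numpy as np
from itertools import combinations

def group_influence(predict, X, max_group=3):
    """Zero out every subset of inputs up to size max_group and record
    how much the output moves. A group whose effect clearly exceeds the
    sum of its members' individual effects hints at interacting inputs."""
    baseline = predict(X)
    results = {}
    for k in range(1, max_group + 1):
        for group in combinations(range(X.shape[1]), k):
            Xp = X.copy()
            Xp[:, list(group)] = 0.0
            results[group] = np.mean(np.abs(predict(Xp) - baseline))
    return results
```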
I started experimenting, excluding different combinations of 1 to 5 inputs. The results are very interesting, but I don't know how to interpret them yet :) Some results are just so unexpected... I will have to scratch my head over this for a long time.
What's the surprise?
The degree of influence of each input is practically impossible to assess. There are all sorts of mathematical formulas, and specialised software can calculate the degree of influence automatically. But all these calculations give only a nominal value, which does not really say much, because it can carry a large error.
In practice, only a trader who knows from his or her own experience which instrument has a greater influence on another (when we are talking about multicurrency inputs) can determine this.
If we are talking about selecting indicators as inputs from the same symbol we trade, then the choice among the various indicators has almost no effect on the output of the neural network: the network is highly non-linear, so it hardly cares what is fed in - Stochastic, MACD or anything else. There will certainly be some difference, but not a drastic or even notable one; in practice it does not affect the result.