Personally, I have my own networks (:
You've said A, so now say B, but you're just smiling mysteriously :)
What is the fundamental difference between your networks?
Look, to predict the time series you could use, say, the difference between the opening and closing prices! Then the corridor will be narrower and won't bounce around so much!
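A trivial sketch of that remark (the bar values below are made up, not real quotes): feeding the per-bar close-minus-open difference instead of the raw price keeps the input in a narrow corridor around zero.

```python
import numpy as np

# Hypothetical bar data; real values would come from the terminal's history.
opens  = np.array([1.1000, 1.1012, 1.1031, 1.1025, 1.1040])
closes = np.array([1.1012, 1.1031, 1.1025, 1.1040, 1.1055])

# The raw closes sit around 1.10; the close-open differences stay near zero,
# which is the narrower "corridor" the remark is talking about.
diffs = closes - opens
print(diffs)   # ≈ [ 0.0012  0.0019 -0.0006  0.0015  0.0015]
```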
It's not that simple...
Question for network experts
Suppose there is a network that ideally should output 1 to buy (or, say, >0.7 to buy) and -1 to sell; otherwise we wait. The net has a number of indicator inputs. One of them is an indicator that gives a buy signal when it crosses 0 (i.e. when its sign changes from negative to positive). That is, the buy signal from this indicator is strongest exactly at the moment of the crossing through 0 (the signal persists afterwards, but the potential profit decreases).
Now, a neural network is roughly a function of the sum of products of inputs and weights (hidden-layer neurons are computed the same way). If we take the formula f = F(w1*x1 + w2*x2 + ...), then when x1 = 0 that input is simply excluded from the final output at that moment, regardless of its weight, the other inputs and the activation function. So the strongest signal will simply be ignored.
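A minimal numeric sketch of that point (the tanh activation, weights and input values here are hypothetical, chosen only for illustration): when the zero-crossing indicator sits exactly at 0, its weight drops out of the weighted sum, so the neuron output is the same no matter how large that weight is.

```python
import numpy as np

def neuron(x, w, bias=0.0):
    """One neuron: activation of the weighted sum w1*x1 + w2*x2 + ..."""
    return np.tanh(np.dot(w, x) + bias)

# x[0] is the zero-crossing indicator, caught exactly at its strongest moment.
x = np.array([0.0, 0.2, -0.4])

w_strong = np.array([5.0, 0.3, 0.5])   # huge weight on the indicator
w_zero   = np.array([0.0, 0.3, 0.5])   # indicator effectively switched off

# Identical outputs: 0 * w1 contributes nothing, whatever w1 is.
print(neuron(x, w_strong), neuron(x, w_zero))
```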
This reminds me of a real case (from Wikipedia): a network was trained to recognize images of tanks in photos, but it later turned out that all the tanks had been photographed against the same background. As a result, the network "learned" to recognize that type of landscape instead of learning to recognize tanks.
So, the actual question: does it make sense in this case to transform the value of that input (a separate question is how) so that the buy signal from such an indicator peaks not at the zero crossing, but when, say, the indicator equals 1?
For example, we can split this indicator into two:
- The first (of the 1-x type) shows the degree of approach to zero.
- The second is binary - just the sign of this difference (+1, -1).
Does this manipulation have any fundamental importance for the network?
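A sketch of the proposed split, reading "1-x" as 1 minus the absolute value of the indicator and assuming the indicator is already normalized to [-1, 1] (both of those are my assumptions, not stated above):

```python
import numpy as np

def split_indicator(x):
    """Re-encode a zero-crossing indicator into two inputs:
    proximity to the crossing (1 = strongest signal) and the direction."""
    proximity = 1.0 - np.abs(x)   # 1 at the crossing, 0 at the extremes
    direction = np.sign(x)        # +1 above zero, -1 below, 0 exactly at it
    return proximity, direction

print(split_indicator(0.05))   # just after a bullish crossing -> (0.95, 1.0)
print(split_indicator(-0.8))   # deep below zero -> ≈ (0.2, -1.0)
```

With this encoding the moment of greatest interest maps to a large input value (proximity near 1), so it can no longer vanish from the weighted sum the way a raw value of 0 does.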
For a neural network, the first option is more informative than the second...
>> You've said A, so now say B, but you're just smiling mysteriously :)
>> What's the fundamental difference between your networks?
What's the fundamental difference... Mine is highly specialised (recognition of combinations of waves, fractals), and is therefore the simplest in execution. For example, the weights are simply picked by the tester (as in Reshetov's perceptron); at the same time my perceptron, with the same range of input parameters, is able to memorize a particular pattern, which Reshetov's perceptron cannot. To the credit of Reshetov's perceptron, his design finds a flat (ranging market) very well, which in skilful hands can, and probably does, bring profit.
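For readers unfamiliar with the "weights picked by the tester" idea mentioned above, here is a rough stand-in sketch. This is not Reshetov's actual code; the random-search "tester", the integer weight range and the toy profit measure are all my own assumptions, used only to show the principle of searching the weights by optimization instead of training.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceptron(inputs, weights):
    """Linear perceptron: the sign of the weighted sum of indicator inputs
    is read as buy (+1) / sell (-1)."""
    return np.sign(inputs @ weights)

def crude_tester_search(signals, next_returns, n_trials=10_000):
    """Stand-in for a strategy-tester optimizer: try random integer weights
    and keep the set with the best toy profit. Illustrative only."""
    best_w, best_profit = None, -np.inf
    for _ in range(n_trials):
        w = rng.integers(-100, 101, size=signals.shape[1])
        profit = np.sum(perceptron(signals, w) * next_returns)
        if profit > best_profit:
            best_w, best_profit = w, profit
    return best_w, best_profit

# Hypothetical history: 500 bars, 4 indicator inputs, next-bar returns.
signals = rng.normal(size=(500, 4))
next_returns = rng.normal(scale=0.001, size=500)
w, p = crude_tester_search(signals, next_returns)
print("weights found:", w, "toy profit:", round(p, 5))
```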
>> For a neural network, the first option is more informative than the second...
And if we compare (under the conditions above, of course) the original signal and its derivative, is the derivative the better choice?
And if the situation is extended to crossing some threshold, by specifying an offset relative to zero, should such "threshold" signals be amplified in the same way...?
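One way to extend the earlier re-encoding to a non-zero threshold (again just a sketch under my own assumptions about normalization; nothing here is prescribed in the thread) is to shift the indicator so the threshold plays the role of zero before splitting it:

```python
import numpy as np

def encode_with_threshold(x, threshold=0.0, scale=1.0):
    """Shift the indicator so the chosen threshold becomes the new 'zero',
    squash the result back into [-1, 1], then split into proximity + sign."""
    shifted = np.clip((x - threshold) / scale, -1.0, 1.0)
    return 1.0 - np.abs(shifted), np.sign(shifted)

# Now the signal is strongest where the indicator crosses 0.3, not 0:
print(encode_with_threshold(0.30, threshold=0.3))   # (1.0, 0.0)
print(encode_with_threshold(0.45, threshold=0.3))   # ≈ (0.85, 1.0)
```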
The encoding of the signal must be chosen by the trader, based on its meaning.
>> I agree. The idea of the trading system should come first; the neural network is only a tool. So it is paramount to choose the entry signals and to be clear about what we want to get at the output.