I'll take some time off :-)
We should go to the library...
I'm not quite sure what you mean by "retraining at each step"?
I make a forecast with the NN only one step ahead. Then, to keep the forecast accurate, I retrain the network on the new input data, and so on. In this situation you don't have to retrain the net "from scratch": you can keep the old weight values as the starting point for the next step.
This is exactly what I had in mind.
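The warm-start retraining scheme described above can be sketched as follows. This is a minimal illustration under assumed details that are not in the thread: a single tanh neuron on d lagged values, a synthetic sine series instead of quotes, and hypothetical names like `predict` and `train_step`.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(w, x):
    """One-step-ahead forecast from d lagged values (single tanh neuron)."""
    return np.tanh(x @ w)

def train_step(w, x, y, lr=0.1, epochs=50):
    """Gradient descent that starts from the CURRENT weights (warm start),
    not from a fresh random initialisation."""
    for _ in range(epochs):
        out = predict(w, x)
        w = w + lr * (y - out) * (1.0 - out**2) * x  # delta rule for one tanh neuron
    return w

d = 5                                         # number of lagged inputs
series = np.sin(np.linspace(0.0, 10.0, 200))  # stand-in for the price series
w = rng.normal(scale=0.1, size=d)

errors = []
for t in range(d, len(series)):
    x, y = series[t - d:t], series[t]
    errors.append(abs(y - predict(w, x)))     # forecast error before retraining
    w = train_step(w, x, y)                   # retrain on the new bar, warm-started
```

The key point is the last line: each `train_step` begins its gradient descent from the current `w`, so every retraining starts where the previous one left off instead of from scratch.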
I wonder: in a flat market, does the network change its outlook on the forecast?
Well, of course it does! It is adaptive by nature.
And besides, a flat is really the same thing as a trend, only on a smaller scale... So your question comes down to the NN adapting to a new/changed trading horizon, and that is exactly its job. The fact that I keep the "old" weight values when retraining in an already "new" market doesn't spoil the process; on the contrary. The point is that the process of the trend changing (precisely the changing) is quasi-stationary, which is why the chosen tactic pays off.
Neutron, when I switch from amplitude prediction to sign prediction, it turns out that the error at the net's output is a sign error, i.e. the error takes the value +1 or -1.
Do I understand this correctly? If not, how is it?
No, you do not.
The process of training the network is no different from the classical case. The difference is that you feed a binary signal to the input of the hidden layer of neurons, while the output is a real value defined on the interval [-1,1] (if the output neuron's activation is tanh()) and proportional to the probability of the event, i.e. the network's confidence in the sign of the expected increment. If you are not interested in the probability but only in the sign of the expected price movement, then interpret only the sign of the prediction, but still train the network on real numbers (I mean that the error in backpropagation must be a real number).

The fact that learning speeds up with this method compared to the general case is not a paradox: by feeding the input a binary signal, we sharply reduce the dimensionality of the input feature space in which the NN has to be trained. Compare: either +/-1, or everything from -1 to 1 in steps of 0.001, where each value must be placed on a hypersurface of dimension d (the number of inputs) that the NN itself constructs during training.
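A minimal sketch of that scheme, under illustrative assumptions (a single tanh neuron instead of a full hidden layer, and a synthetic linear rule generating the target signs): the inputs are binary +/-1, the backpropagation error stays real-valued, and only the sign of the output is interpreted.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 4
w_true = np.array([0.8, -0.5, 0.3, 0.7])    # hidden "true" rule (illustrative)
X = rng.choice([-1.0, 1.0], size=(500, d))  # binary +/-1 inputs
y = np.sign(X @ w_true)                     # target sign, encoded as +/-1

w = rng.normal(scale=0.1, size=d)
lr = 0.05
for _ in range(200):
    out = np.tanh(X @ w)                    # real output on [-1, 1]
    err = y - out                           # REAL-valued error, as in the classical case
    w += lr * X.T @ ((1.0 - out**2) * err) / len(X)

pred_sign = np.sign(np.tanh(X @ w))         # only the sign is interpreted
accuracy = float(np.mean(pred_sign == y))
```

Note that the training loop never touches signs: `err` is the ordinary real-valued difference used by backpropagation; the sign is taken only when the prediction is used.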
...you feed a binary signal to the hidden layer of neurons and the output is a real value defined on the interval [-1,1]
That's it! That would never have occurred to me! I'll try it right now. :)
...Compare: either +/-1, or from -1 to 1 in steps of 0.001, and each value must be placed on a hypersurface of dimension d (the number of inputs), having previously constructed it with the same NN (it does this during its training).
And if the input is a binary signal, isn't it better to make it 0/1?
No, of course not!
The centre of gravity of such an "input" is shifted by 0.5 (its expected value), whereas at initialisation the input is assumed to have a mean of 0. So you would have to spend part of your resources pointlessly pulling up (adjusting) the weight of that input just to compensate for something obvious. In general, everything that can be done without the AI's participation should be done; it saves the NN a lot of training time. That is exactly why inputs are normalised, centred and whitened: so as not to distract the AI with trivialities, and to let it concentrate on the most important and difficult part, the nonlinear multivariate correlations and autocorrelations.
Yeah, I got that.
I'm currently tweaking my two-layer self-learning perceptron. Hopefully it will be up and running today.
Don't get your hopes up :-)
In my experience, you'll be ready to say the same thing 20-25 more times before it actually works.