Better safe than sorry! The tightening procedure is uncomplicated, and the NN's training does not suffer from it - it is an effective method.
As for not reaching the optimum values - for our time series that is pure bluff. I would understand if you were predicting a sine wave: then yes, there are optimal values. But what are they in market chop? Right now the optimum is here, and at the next step (the one you are predicting) it is over there... and yet you were searching for it "here" with all your might. In short, the problem of exact localisation does not really arise, and it is solved satisfactorily by retraining at every step.
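To make the "retrain at every step" idea concrete, here is a minimal walk-forward sketch in Python: a generic MLP regressor refitted on a rolling window before every one-step prediction. The window length, lag count and model are my own placeholders, not anything prescribed in this thread.

import numpy as np
from sklearn.neural_network import MLPRegressor

def walk_forward(series, window=200, lags=5):
    """Refit a small net on the last `window` patterns at every step
    and predict only the very next value of `series` (a 1-D numpy array)."""
    preds = []
    for t in range(window + lags, len(series)):
        hist = series[t - window - lags:t]
        X = np.array([hist[i:i + lags] for i in range(window)])  # lagged inputs
        y = hist[lags:lags + window]                             # next-step targets
        net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000)
        net.fit(X, y)                                            # retrain from scratch
        preds.append(net.predict(series[t - lags:t].reshape(1, -1))[0])
    return np.array(preds)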
The converse statement is also true: if the global minimum, or at least its vicinity, is not found, then training at each step may not be satisfactory. I studied this problem a bit and personally ran into something like self-deception: the errors on the two samples seem to separate asymptotically, yet a network of the same configuration, trained on different time intervals, gave completely opposite buy/sell signals. Even though the mathematical expectation of the win was positive, in the end I concluded that I was still playing the casino. And all of that, I believe, comes down to the initial weight coefficients - that is the conclusion I came to. Just my thoughts :)
By the way, from my observations, the best weight randomisation - the one at which the grid learns fastest - is in the interval [-0.07; +0.07]. I have no idea why that is :)
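If it helps anyone, that kind of initialisation is simply a uniform draw over the stated interval; a short sketch (the layer sizes are arbitrary, just for illustration):

import numpy as np

n_inputs, n_hidden = 10, 8                                 # arbitrary layer sizes
rng = np.random.default_rng()
W = rng.uniform(-0.07, 0.07, size=(n_inputs, n_hidden))    # initial weights in [-0.07; +0.07]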
That means you don't have enough training epochs. In the limit, no matter where you start from (even +/-10), the grid should roll down to the optimum, which for centered input data lies in the vicinity of small values. You are artificially shifting it there. That is not always a good thing.
This is a symptom of a poorly trained grid. Are you sure the training vector wasn't shorter than the optimal P=w^2/d?
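Assuming w here is the total number of weights in the net and d is the dimension of the input vector (my reading of the formula - it isn't spelled out above), the check is a one-liner:

def optimal_training_length(n_weights, input_dim):
    """Rule of thumb quoted above: P = w^2 / d training patterns."""
    return n_weights ** 2 / input_dim

print(optimal_training_length(50, 5))   # e.g. a net with 50 weights and 5 inputs -> 500 patterns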
Yes, I try to keep it to a minimum. I don't want to wait for it to eventually give me something after a few hundred thousand epochs. Usually a few thousand, or tens of thousands, is enough.
Surprised!
I got a few hundred.
Honestly, I haven't looked at formulas like that in a long time; I do everything by experiment, starting with a small number of neurons and continuing until the errors asymptotically separate on the two samples. Having found the optimal number of weights in the layer, I retrain the network several times and get different results on the same sample, because the initial weights are different for each grid. Try retraining your net from scratch and see whether you get the same trades on history. Tell me later, I'm interested to know.
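The experiment being suggested boils down to training the same configuration on the same history with nothing but the random initial weights changed, and then comparing the resulting signals. A rough Python sketch with made-up data (the real inputs would of course be your own):

import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                            # stand-in inputs
y = (X[:, 0] + rng.normal(size=500) > 0).astype(int)     # stand-in buy(1)/sell(0) labels

def signals_for_seed(seed):
    """Same architecture, same data - only the initial weights differ."""
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed)
    net.fit(X, y)
    return net.predict(X)                                # the "trades on history"

runs = [signals_for_seed(s) for s in range(5)]
changed = np.mean([np.mean(runs[0] != r) for r in runs[1:]])
print("share of signals that differ between retrains:", changed)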
Well, as they say, the flight is going normally. :)
Well, of course not!
All the trades will be different, time after time, but the profit is on average the same (and very small). I'm interested in the repeatability of the averages - it saves computing resources.
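Checking that kind of repeatability only needs the summary statistics of each retrain, not the individual trades; something along these lines (all profit numbers are invented):

import numpy as np

rng = np.random.default_rng(1)
# per-trade profits from five independent retrains (invented numbers)
runs = [rng.normal(loc=0.3, scale=5.0, size=200) for _ in range(5)]

means = [r.mean() for r in runs]
print("average profit per retrain:", np.round(means, 2))
print("spread of those averages:", np.std(means))   # small spread = the averages are repeatable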
Then I think you're playing in a casino. I would advise using committees of networks - that may give a better effect. Personally, such working conditions don't satisfy me. I can't afford to retrain the network on new data: it introduces errors and is not profitable if, after retraining, the network is tested on that same history again.
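For what a "committee" could look like in code: several identically configured nets that differ only in their random initial weights, combined by majority vote. A minimal sketch (again illustrative, assuming 0/1 labels):

import numpy as np
from sklearn.neural_network import MLPClassifier

def committee_predict(X_train, y_train, X_new, n_members=7):
    """Majority vote of identically configured nets with different initial weights."""
    votes = []
    for seed in range(n_members):
        net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=3000, random_state=seed)
        net.fit(X_train, y_train)
        votes.append(net.predict(X_new))
    return (np.mean(votes, axis=0) > 0.5).astype(int)   # buy only if more than half say buy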
Yeah, I think I've got it. The results of a grid with random initial weights apparently don't need to repeat exactly. It's enough that the result is stable within some small range.
For example, this is what it looks like (two attached charts, Option 1 and Option 2 - the images are not reproduced here). Apart from the initial weight initialisation, which was carried out separately in each case, the input data is the same.
That's right, comrade!