You don't have to solve it. It is enough to find a training sample of sufficient length for the particular case, and that is very easy to do.
IMHO this is much easier to solve. I sat experimenting with various optimization windows and found this method:
Run the first optimization. Find a successful or not-so-successful forward test with a decent drawdown. Move the cursor on the chart to the bottom of that very drawdown and read the date from the tooltip. Shift the end of the optimization window to that date. Run the optimization again, look at the forward, and lo - a miracle has happened: the fitting has evened out our previous drawdown and turned it into a profitable area, and after that the forward is successful, just as it was before this trick.
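The window-shift step above can be sketched in a few lines. This is a minimal illustration, not the poster's actual tool: it assumes the forward-test result is available as a chronological list of (date, equity) pairs, and all names are mine.

```python
from datetime import date

def drawdown_bottom(equity_curve):
    """Return the date of the deepest drawdown in an equity curve.

    equity_curve: list of (date, equity) pairs in chronological order.
    The drawdown at each point is the gap below the running peak;
    the 'bottom' is the point where that gap is largest.
    """
    peak = float("-inf")
    worst_gap, worst_date = 0.0, None
    for d, eq in equity_curve:
        peak = max(peak, eq)
        gap = peak - eq
        if gap > worst_gap:
            worst_gap, worst_date = gap, d
    return worst_date

# Toy forward-test equity curve with one drawdown in the middle:
curve = [
    (date(2011, 1, 1), 100.0),
    (date(2011, 2, 1), 120.0),
    (date(2011, 3, 1),  90.0),   # bottom of the drawdown
    (date(2011, 4, 1), 110.0),
    (date(2011, 5, 1), 130.0),
]
new_window_end = drawdown_bottom(curve)
# new_window_end is 2011-03-01: shift the end of the optimization
# window to this date and re-run the optimization.
```

The rest of the trick is manual: move the optimization window's end date to `new_window_end` in the tester and optimize again.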
In theory this method is certainly better, because here our TS has learned to eliminate the drawdown, which it could not do earlier - it has no telepathic powers, so we had to tell it where its error was. In practice, though, it is still up in the air (written with a pitchfork on water, as the saying goes), i.e. we should additionally check whether the method is adequate, because the window left for forward testing has shrunk.
The material for the article is already fully assembled; all that is left is to put it in order and add some images, and it can be sent for publication.
In brief, the article deals with a neural network with a built-in expert system (what won't paper writers dream up for a fee?) and answers the following questions:
1. Why does a neural network need interpolation? Indeed, why would it suddenly need it?
2. Can a neuron that has been trained for correct approximation on stationary and consistent data interpolate? Shown using the example of logistic regression, which is itself a single neuron. Proponents of logistic regression will be left dissatisfied. The medics will also disapprove of this scribble, since it is now fashionable to compute computerized diagnoses using logistic regression.
3. How to create an expert system for correct interpolation: necessary and sufficient conditions. An expert system is essentially a neural-network layer, but not a black box, because it has a knowledge base of easily interpretable rules, like other expert systems. Anyone who has something to hide had better not read such a thing and stick to black boxes instead.
4. Is it possible to retrain a neural network with an expert system on board? Who forbids it?
5. How to train the expert system automatically on a set of training examples, so that you do not have to create and correct its knowledge base by hand? Naturally, the training examples are trading signals, i.e. we use readings of technical indicators and oscillators to teach the system to trade, not to recognize some nerdy Fisher irises. But of course it is still more convenient and reliable to scratch the knowledge base together by hand - even with crooked hands growing from the wrong place - than to trust the job to dumb algorithms.
6. How to eliminate undertraining of a neural network with an expert system? A strange question, of course, since everyone is used to fighting overfitting and over-tweaking. But the author is evidently determined to fight the wrong thing.
7. Advantages and disadvantages of common neural networks compared to a neural network with an expert system on board? The author went too far on the disadvantages side; these days one is expected to boast something like: know-how, patented device, you will hardly find an analogue, recommended by the best dog breeders and dentists, there can be no disadvantages, only advantages, order and buy right now while stocks last, etc.
The article will also come with source code for the neural network with an expert system, written in mql4 and mql5 without external libraries or DLLs, and the article itself explains the main features of the algorithms along the way. Which makes no sense at all, because every hoarder knows for sure that source code must be carefully hidden from prying eyes, witnesses removed, and all traces covered up.
So that's how things stand.
The problem with fitting is that some people analyse only individual optimisation results (runs). But you have to consider them as a whole - the overall result over the optimum zones. In that case no forward test is needed.
For example, we have a system built on a single moving average with one optimum - the MA period. We optimized it and got a bunch of parameter-value sets, sorted by PF, say. Of course, the chance that individual runs are random is high, and we need to check them, for example with a forward test. But if we consider not individual runs but the optimum zone and the result over it, then it is almost impossible to fit a positive result across the optimum zone in different parts of the series. Naturally, this depends on the width of the optimal zone and on how sensitive the run results are to a minimal change around the optimum. I.e. preservation of the optimal zone is a sign of robustness and the opposite of fitting. And a forward test is only good when used once: use it repeatedly on the same system and it simply becomes part of the training sample.
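The zone-versus-run idea can be sketched as follows. This is my own minimal illustration, not code from the thread: it assumes you already have a profit factor per parameter value for each subperiod of the series, and all names and numbers are made up.

```python
def zone_is_robust(pf_by_period, zone, threshold=1.0):
    """Check whether a parameter zone keeps a positive edge everywhere.

    pf_by_period: {period_name: {param_value: profit_factor}}
    zone: parameter values forming the candidate optimum zone.
    Instead of judging single runs, average the profit factor over the
    whole zone and require that average to clear the threshold in every
    part of the series.
    """
    zone = list(zone)
    for pf in pf_by_period.values():
        zone_avg = sum(pf[p] for p in zone) / len(zone)
        if zone_avg <= threshold:
            return False
    return True

# Toy profit factors for an MA-period scan over two subperiods:
pf = {
    "2005-2007": {10: 1.4, 12: 1.3, 14: 1.2, 20: 0.8},
    "2008-2010": {10: 1.1, 12: 1.5, 14: 1.3, 20: 0.7},
}
zone_is_robust(pf, [10, 12, 14])   # True: zone average > 1 in both parts
zone_is_robust(pf, [20])           # False: a lone run fails
```

Note that individual values inside the zone may wobble between subperiods; only the zone average has to hold up, which is exactly the point being made above.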
P.S.
The time during which a particular system worked is also a system parameter. For example, one worked from 2005 to 2011. That is the optimal range of this parameter - the system's lifetime. When optimizing on history, we are in effect looking for systems that maximize this range. But a system does not have to work forever. So when choosing a testing period, remember that picking it arbitrarily and then demanding that the system work over that entire range is just an empty gesture. If you decided to demand 10 years, well, you did). Imho, a period is long enough when it gives the desired level of confidence in the results; that depends on the number of trades and on the distribution of profitable/losing trades.
Avals:
But if you consider not individual runs but the optimum zone and the result over it, then maintaining a positive result over the optimum zone in different parts of the series is almost impossible to achieve by fitting.
Yes, imho it's the same thing, just from a different angle. Is the optimal zone some sort of smoothing filter on the results?
It's something like the average value of the target index (profit factor, for example) over a certain range of the optimized parameter.
It is important that the range of parameter values with a sufficient average target value is wide enough and holds up in all parts of the test. Individual runs may temporarily dip into the loss zone, but on average the range should remain profitable. The system is robust if this is true across the whole range.
So the idea is to estimate the robustness not of an individual run but of the parameter zone as a whole.
For example, suppose we decide that a person's IQ depends on height. We optimize on 1000 people and find the maximum average IQ at a height of 162 cm. Then we start forward testing on other people, and it turns out not so well)) But if it turns out that, consistently on each sample, people of, say, 160-170 cm have a higher average IQ, then the chance that this is coincidence is much lower than for a single value (because more people fall into each sample). And that would mean the dependence of IQ on height really is there.
New version in the attached file, this time with money management (non-aggressive percentage of deposit):