Hi all!
I would like to ask a question about the data range used to optimise EAs - that is, which history range to choose for which timeframe. For example, for H1, is it enough to optimise the Expert Advisor on one month, three months, or one year of data? I would like to see these values for different timeframes, with at least a brief justification of the choice. Thank you very much.
I feel like I wasn't talking about the same thing :(((
... It means that its optimal parameters should change slowly and evenly enough for you to earn money using parameters fitted to the most recent history, or even to stop trading in time if the market no longer fits your system. To do this, you need to know which parameters make sense to optimise and within what limits, as well as the criteria for abandoning the system (for example, if there are no optimal values inside the predetermined zones during the optimisation period).
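A minimal sketch of how such an abandonment check could look; the parameter names and zone limits below are made up for illustration, this is not anyone's actual code:

```python
# After each re-optimisation, check whether the optimum of every parameter
# still falls inside a pre-defined "sensible" zone; if not, suspend the
# system. Parameter names and zone limits are illustrative assumptions.

ALLOWED_ZONES = {
    "ma_period":   (10, 60),
    "stop_loss":   (20, 120),
    "take_profit": (30, 200),
}

def system_still_valid(optimal_params):
    """optimal_params: dict of parameter name -> value found by the optimiser."""
    return all(lo <= optimal_params[name] <= hi
               for name, (lo, hi) in ALLOWED_ZONES.items())

# e.g. system_still_valid({"ma_period": 75, "stop_loss": 50, "take_profit": 90})
# -> False, because ma_period fell outside its zone: time to stop trading it.
```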
That's the whole point - it will NOT work: the optimal parameters are supposed to change slowly, but the market is NOT stationary, and at any moment these parameters and their limits can change drastically. :)
that's the point :)))
That's the tricky part - it will NOT work: the optimal parameters are supposed to change slowly, but the market is NOT stationary, and at any moment these parameters and their limits may change drastically. :-o)
For this purpose there are system abandonment criteria, and in most cases the decision can be made before the change shows up in equity. Also, no one forbids trading only longs if shorts do not work, and vice versa ;) All of this can be done in time if one does not make decisions solely by watching the equity of the traded parameters.
Lately I have been trying to use a kind of stability coefficient.
For example: optimise over a year, then for each month calculate the growth coefficient (the increase of the deposit over that month). Take the maximum and minimum coefficients; their ratio is the stability coefficient. If it tends to one, that is the ideal variant. The minimum coefficient should also be greater than one. All parameters are saved to a file. I don't have time to put all this into decent shape; I want to post it on my forum.
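A minimal sketch of how such a stability coefficient might be computed; the min/max reading of "their ratio", the function name and the sample equity numbers are assumptions for illustration:

```python
# Stability coefficient from monthly growth factors over a one-year window.

def stability_coefficient(month_end_equity):
    """month_end_equity: deposit value at the end of each month,
    with the starting deposit as the first element."""
    growth = [month_end_equity[i + 1] / month_end_equity[i]
              for i in range(len(month_end_equity) - 1)]
    k_max, k_min = max(growth), min(growth)
    stability = k_min / k_max      # tends to 1 for the "ideal" variant
    acceptable = k_min > 1.0       # the minimal coefficient must exceed one
    return stability, k_min, k_max, acceptable

# starting deposit plus twelve month-end balances (illustrative numbers)
equity = [10000, 10400, 10900, 11200, 11800, 12100, 12700,
          13100, 13600, 14200, 14700, 15300, 15900]
print(stability_coefficient(equity))
```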
IMHO the drawback is in the fixed time ranges: month, year. In this respect I agree with Neutron - compare parameter sets over a fixed number of trades; then you can calculate not only the increase of the deposit (profit) but also profit/risk, by comparing, for example, the profit factor.
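A minimal sketch of the fixed-number-of-trades comparison; the window size and function names are illustrative assumptions:

```python
# Split the trade history into windows of N trades and compute the profit
# factor of each window, instead of fixing calendar months.

def profit_factor(trades):
    gross_profit = sum(t for t in trades if t > 0)
    gross_loss = -sum(t for t in trades if t < 0)
    return float("inf") if gross_loss == 0 else gross_profit / gross_loss

def windowed_profit_factors(trades, window=50):
    """trades: per-trade results in the deposit currency."""
    return [profit_factor(trades[i:i + window])
            for i in range(0, len(trades) - window + 1, window)]
```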
The system can always be improved. If only there were criteria.
........Whenever there are criteria.
That's the whole point :) - everyone adjusts the criteria for themselves, even after reading a "great book on optimisation"......... THERE ARE NO ANSWERS TO ALL QUESTIONS..... something works somewhere for someone, and elsewhere it doesn't..... etc. etc.....
..................
Unfortunately I don't have the statistical and mathematical apparatus that would allow me to calculate all this, but I don't think it would help either - there are too many options.....
In general, if you take a bird's-eye view of the Strategy Tester optimiser, it is clear that it is no different from a neural network. Indeed, we have a certain number of adjustable parameters, a certain number of indicators used, and one output which signals us to open a position Long or Short. As a rule, the number of adjustable parameters equals the number of indicators (inputs); this is a variant of the classic single-layer perceptron. We may not realise it, yet we actively use it in trading. It would therefore be useful to know the apparatus used when working with NS, which would help avoid standard errors and suboptimal behaviour in parameter optimisation. For example, it immediately follows that the strategy tester is limited: a single-layer perceptron is not an optimal approximator, so in this formulation it is in principle impossible to obtain the best result for an MTS in terms of the profitability of the TS.
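To make the analogy concrete, here is a minimal sketch of a TS entry signal written as a single-layer perceptron; the indicator values, weights and the threshold-at-zero convention are assumptions for illustration:

```python
# A weighted sum of indicator outputs passed through a threshold is,
# formally, a single-layer perceptron.

def trade_signal(indicator_values, weights, bias=0.0):
    """Return +1 (open Long), -1 (open Short) or 0 (stay out)."""
    s = bias + sum(w * x for w, x in zip(weights, indicator_values))
    if s > 0:
        return 1
    if s < 0:
        return -1
    return 0

# Optimising "weights" (and the indicator settings behind indicator_values)
# in the tester is exactly fitting this one-layer perceptron by brute force.
```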
For an NS we can derive the optimal number of fitting parameters for a predetermined history length; not taking this into account leads to the effect of parameter over-optimisation (I already mentioned it above). This is where all the problems with the tester memorising the history and losing deposits in forward tests stem from. Moreover, if we take into account that a two-layer perceptron is a universal approximator, then any TS with any cunning links between the indicators used (with multiplication, division, etc.) can be reduced, without losing power, to a weighted sum of the same indicators - and this is the classical NS architecture, so we can use the most effective parameter optimisation method there is: backward error propagation (backpropagation). It is obviously orders of magnitude faster than simple brute force and even than the genetic algorithm used in the tester. Moreover, there is nothing difficult in such a transfer to the new architecture: you just need to take the sum of the indicator signals and find the optimal weights.
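A toy sketch of what such a transfer might look like: a small two-layer perceptron over "indicator" inputs, with weights fitted by gradient descent (backpropagation) instead of brute force. The synthetic data, network size and learning rate are all assumptions for illustration, not a working MTS:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy training set: 500 bars, 4 indicator readings per bar,
# target +1 / -1 labels ("should have gone Long / Short")
X = rng.normal(size=(500, 4))
y = np.sign(X @ np.array([0.5, -1.0, 0.3, 0.8]) + 0.1 * rng.normal(size=500))

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(4, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=n_hidden)
b2 = 0.0
lr = 0.05

for epoch in range(200):
    # forward pass
    h = np.tanh(X @ W1 + b1)          # hidden layer
    out = np.tanh(h @ W2 + b2)        # trading signal in (-1, 1)
    err = out - y
    # backward pass (chain rule, i.e. backpropagation)
    d_out = err * (1 - out ** 2)
    grad_W2 = h.T @ d_out / len(X)
    grad_b2 = d_out.mean()
    d_h = np.outer(d_out, W2) * (1 - h ** 2)
    grad_W1 = X.T @ d_h / len(X)
    grad_b1 = d_h.mean(axis=0)
    # gradient step
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print("final mean squared error:", float((err ** 2).mean()))
```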
What I want to say is the following: we are all very sceptical of Artificial Intelligence and everything related to it, especially NS. But we do not notice that we implicitly exploit this field at every step - optimisation in the strategy tester! And we exploit it in the most suboptimal way: by groping. Hence the frequent desire to discard "bad" passes in a series of tests, and so on. In fact, the world is simpler, and there is nothing to do but know the area of applicability of the method and its limitations.