Don't forget: it is simply that the memory of the segment on which the training took place turns into a memory of the training results from that segment.
What if this trial-and-error ("poke") method is automated?! There are interesting thoughts in this direction. Indicators change, and so do their interactions; the interactions are represented as separate functions, so the number of parameters can vary, and only the simplest optimization over them takes place. That is why I am interested in the question asked in this thread: a more universal approach, one that would not depend on strategy parameters. The thread you are proposing has completely different objectives. But if you can show that those methods apply to the task at hand, please do.
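To make the idea concrete, here is a minimal sketch of such an automated "poke" search, where the number of parameters varies with the indicators drawn. Every name here (INDICATOR_POOL, make_candidate, backtest) is invented for the sketch, not from any library:

```python
import random

# Hypothetical pool of indicator building blocks; each entry declares how
# many tunable parameters it needs. All names are invented for this sketch.
INDICATOR_POOL = {
    "sma_cross": 2,   # fast period, slow period
    "rsi_filter": 2,  # period, threshold
    "atr_stop": 1,    # stop-distance multiplier
}

def make_candidate(rng):
    """Randomly assemble a strategy: a subset of indicators plus parameters.
    The parameter count varies with the indicators drawn."""
    chosen = rng.sample(sorted(INDICATOR_POOL), rng.randint(1, len(INDICATOR_POOL)))
    return {name: [rng.uniform(1.0, 100.0) for _ in range(INDICATOR_POOL[name])]
            for name in chosen}

def backtest(candidate, quotes):
    """Placeholder fitness; a real tester would simulate trades here."""
    return -sum(abs(v) for vals in candidate.values() for v in vals)

rng = random.Random(42)
quotes = []  # historical quotes would go here
best = max((make_candidate(rng) for _ in range(1000)),
           key=lambda c: backtest(c, quotes))
print(best)
```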
The tester has two interrelated but qualitatively different functions:
1. Verifying (debugging) the idea of the TS itself: we check whether the entries and exits match our notion of it.
2. Selecting the TS parameters in which our notion of the TS is fixed.
After that nothing can change: if these stages succeed, we put the TS on a real account and start trading.
And here the main question is: will real trading perform the same as it did in training?
This very question is asked here and in the parallel thread.
If we obtain different results in real trading, this is called overtraining, i.e. while creating the TS we caught some specifics of the training quote history that do not appear in real trading. In the parallel thread I argue that the problem of overtraining is solved solely by the correct selection of the list of input data, which in TA means a set of indicators. I also argue that we can determine, before creating the TS itself, whether the selected set of indicators will be useful for it. And only once we have solved this problem do we move on to points 1 and 2.
If, ideally, the indicators are balanced and control each other and different aspects of the market over different historical horizons, then overtraining does not threaten them.
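To make that claim concrete, here is a minimal sketch of how one might screen a set of indicators for usefulness before building the TS. Plain correlation with future returns stands in for a proper relevance measure, and all data and names below are invented for illustration:

```python
import math

def correlation(xs, ys):
    """Plain Pearson correlation; enough for a first screening pass."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def screen_indicators(indicator_series, future_returns):
    """Rank indicators by |correlation| with future returns,
    computed on the training quote history only."""
    return sorted(((name, abs(correlation(vals, future_returns)))
                   for name, vals in indicator_series.items()),
                  key=lambda t: t[1], reverse=True)

# Toy usage: two hypothetical indicators against next-bar returns.
series = {"rsi14": [30, 40, 55, 60, 45], "sma_slope": [0.1, 0.2, -0.1, 0.3, 0.0]}
returns = [0.01, 0.00, -0.02, 0.01, 0.00]
print(screen_indicators(series, returns))
```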
If we get different results in real trading, this is called overtraining, i.e. while creating the TS we caught some specifics of the training quote history that do not appear in real trading.
This is just more like undertraining. -)
In my opinion, one must distinguish between the system's very ability to pick up a pattern and the OPTIMIZATION traps into which this ability can fall. Optimization traps are a broader notion that includes overoptimization and underoptimization, as well as the inertia of the code-writing process itself, the human factor and many other things. But I'm afraid the author was simply referring to the fact that EAs lose money, nothing more...
In other words, his question should be "How to avoid optimization pitfalls".
Two people here are advising him: check it on a forward, i.e. on a non-optimized segment, and life will get better.
And if you can decide BEFORE writing the code whether an indicator will be useful or not, that's great! In that case you have no need for testing at all. -)
This is an illusion.
Well then, all living organisms are an illusion. That is exactly how they are built: they have genetic, long-term and working memory, and they learn all the time. But we do not say "poor mankind - it has overtrained itself all the way to Forex". -)
The term "retraining" itself is silly, designed to justify the inoperability of the EA itself and is completely meaningless with a volking forward. Whether a variable is overtrained or undertrained is not actually obvious from the degradation. It can only be seen when comparing the forward results under different optimization and test conditions. Both history depth and forward step are selected in each case personally and we can already see what is over and under trained.
Yeah. When one begins to understand this whole business, one starts to question the adequacy of the term "overtraining" and to see its inadequacy. This was discussed on the fourth forum, back when there was still someone to discuss it with. The conclusion was that the term "rote learning" or "rote memorization" is more appropriate: the Expert Advisor is like a diligent student who has learned the lesson by rote but understands nothing and cannot apply his knowledge in other conditions.
Here even the term itself is misunderstood by some people. It turns out that someone reads it as "re-learning" - funny.
And the fact that the term is established in some part of the scientific world means nothing; there are plenty of murky but established terms. Science is made up entirely of terms that do not reflect reality and whose true meaning is understood only by a narrow circle of the "initiated".
In the parallel thread I argue that the problem of overtraining is solved solely by the correct selection of the list of input data, which in TA means a set of indicators. I also argue that we can determine, before creating the TS itself, whether the selected set of indicators will be useful for it. And only once we have solved this problem do we move on to points 1 and 2.
You must have missed the point of the problem. It is an automatic strategy search. The final strategy may use only a couple of indicators selected from a certain set, not necessarily all the features available. Eventually we obtain a structure represented by a directed graph in which the conditions for entering and exiting the market, as well as the Take Profit and Stop Loss levels, are calculated. The graph elements are functions, indicators and constants (parameters). Each element that can enter the graph has several groups of rules for interacting with other graph elements, needed to maintain some "meaningfulness" of the calculations in the graph.
Any ideas about finding strategies?
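To make the description above concrete, here is a toy sketch of such a graph: nodes are constants, indicator lookups and functions, and the root evaluates to an entry decision. The node types and names are my assumptions, far simpler than what the poster describes:

```python
from dataclasses import dataclass
from typing import Callable, List, Union

# Nodes of a tiny expression graph: constants, indicator lookups, and
# functions combining other nodes. A real system would add interaction
# rules restricting which nodes may connect to which.
@dataclass
class Const:
    value: float
    def eval(self, bar): return self.value

@dataclass
class Indicator:
    name: str
    def eval(self, bar): return bar[self.name]  # bar: dict of indicator values

@dataclass
class Func:
    fn: Callable[..., float]
    args: List["Node"]
    def eval(self, bar): return self.fn(*(a.eval(bar) for a in self.args))

Node = Union[Const, Indicator, Func]

# Entry condition as a graph: enter long when rsi < 30 and slope > 0.
entry = Func(lambda a, b: float(a and b), [
    Func(lambda r, t: r < t, [Indicator("rsi"), Const(30.0)]),
    Func(lambda s, z: s > z, [Indicator("slope"), Const(0.0)]),
])
print(entry.eval({"rsi": 25.0, "slope": 0.4}))  # 1.0 -> enter
```

A search procedure would then mutate or recombine such graphs and score each one in the tester, subject to the "meaningfulness" rules.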
This term is not silly; it is well established and "approved by the best dog breeders" of the entire scientific world, including market algorithmists. Your ideas about selecting the depth, the step and other "side" parameters bring us back to the original problem of the quality of that selection (and probable overtraining). So in any case we cannot do without analysing forwards. And that it is necessary to analyse different segments is clear from the very beginning.
So tell me, how do you decide whether a forward result reflects overtraining or undertraining? Does overtraining degrade in some way differently from undertraining?
The only way to assess the quality of the training conditions is to look at the corresponding quality of the out-of-sample test. And only by comparing the results can you tell whether the optimization is over- or under-optimized. Yet I don't see a single thread anywhere dealing with under-optimization; for some reason everyone sees the root of the problem in mythical over-optimization instead of in code quality.
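As a crude illustration of that comparison (the thresholds below are arbitrary and purely illustrative, not a rule):

```python
def diagnose(in_sample_score, out_of_sample_score,
             good=0.0, collapse_ratio=0.5):
    """Crude classification of an optimization run from two scores.
    'good' and 'collapse_ratio' are arbitrary illustrative thresholds."""
    if in_sample_score <= good:
        return "under-optimized: the TS fails even on its training segment"
    if out_of_sample_score < collapse_ratio * in_sample_score:
        return "over-optimized: results collapse out of sample"
    return "consistent: training and forward results roughly agree"

print(diagnose(120.0, 15.0))   # over-optimized
print(diagnose(-5.0, -8.0))    # under-optimized
print(diagnose(100.0, 80.0))   # consistent
```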