A little surprised :) Thought I'd share and ask a NOT rhetorical question.
The optimiser is not exactly a 'linearly scaled tester'; it has its own optimisation methods that work effectively on large-scale repeated calculations.
We are busy right now speeding up mass calculations. Here is a link to the earlier results, and a new version with faster calculations is ready.
I agree, it is not exactly a "linearly scaled tester". You do perform explicit optimisations, which is a very good thing. However, I can't imagine how you would optimise a very common situation down to the univariate case:
The optimisation runs over two parameters: one (a range of 100 values) does not affect the indicator calls, the second (a range of 5 values) does.
While searching the 500 variants you will therefore calculate the indicator values 500 times, i.e. perform a huge number of redundant recalculations, because the range of the second variable is only 5, not 500.
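A minimal C++ sketch of that arithmetic (ComputeIndicator and RunBacktest are hypothetical stand-ins, not tester API): hoisting the indicator computation into the loop over the second parameter cuts the 500 indicator calls down to 5 while still visiting all 500 variants.

#include <cstdio>
#include <vector>

static int g_indicatorCalls = 0;

// Hypothetical indicator: depends only on the second parameter (range of 5 values).
std::vector<double> ComputeIndicator(int b) {
    ++g_indicatorCalls;
    return std::vector<double>(1000, b * 0.1); // dummy buffer
}

// Hypothetical single pass: uses the first parameter and the indicator buffer.
double RunBacktest(int a, const std::vector<double>& ind) {
    return a * ind[0]; // dummy result
}

int main() {
    // Naive grid search: the indicator is recomputed for every one of the 500 variants.
    for (int a = 0; a < 100; ++a)
        for (int b = 0; b < 5; ++b)
            RunBacktest(a, ComputeIndicator(b));
    std::printf("naive: %d indicator calls\n", g_indicatorCalls); // prints 500

    // Reordered search: the indicator is computed once per distinct value of b.
    g_indicatorCalls = 0;
    for (int b = 0; b < 5; ++b) {
        const std::vector<double> ind = ComputeIndicator(b);
        for (int a = 0; a < 100; ++a)
            RunBacktest(a, ind); // buffer reused across all 100 values of a
    }
    std::printf("reordered: %d indicator calls\n", g_indicatorCalls); // prints 5
}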
This is just the simplest example. Perhaps you have already come up with ideas on how to get around this linear scalability of the tester in the optimizer.
P.S. It's examples like these that give custom calculators their speed advantage of orders of magnitude, not mere percentages. But such calculators are not universal, so the comparison is unfair from the very beginning.
Ok - let's say there is an optimizer without cloud computing, but multi-threaded, which supports C++ and MT4 (with its whole subsystem), is 100 times faster than it, and 6 times faster on pure MQL5 code, yes... and which "solves" not only by brute force and GA but by about 50 other methods as well. How much would you buy it for? Would you pay $1000 for it? Why so expensive? Because you and ten other people would be its only buyers. :)
However, I can't imagine how you would optimise a very common situation down to the univariate case:
I can already imagine something (though not completely). Before running the optimiser, you should perform a dependency analysis on the input parameters to be optimised (in the example above, the two variables are completely independent). Then the optimisation is run over the independent variables first, from the smallest range to the largest (not always the right order, since it also depends on how resource-intensive the indicators are: sometimes it is better to compute a light indicator 100 times than a heavy one 5 times), caching the results.
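As a rough illustration of the "light indicator 100 times vs. heavy indicator 5 times" caveat, here is a hedged C++ sketch of the cost comparison such a dependency analysis might make; the ranges and per-call costs are invented numbers, and IndicatorWork is a hypothetical helper:

#include <cstdio>

struct Param {
    int    range;         // number of values to scan
    double indicatorCost; // relative cost of one computation of its indicator
};

// Total indicator work if 'outer' drives the outer loop: its indicator runs once
// per outer value, the inner one runs for every (outer, inner) combination.
double IndicatorWork(const Param& outer, const Param& inner) {
    return outer.range * outer.indicatorCost
         + static_cast<double>(outer.range) * inner.range * inner.indicatorCost;
}

int main() {
    Param wide   = {100, 1.0}; // 100 values, light indicator
    Param narrow = {  5, 0.5}; // 5 values, even cheaper indicator

    std::printf("narrow outer: %.1f, wide outer: %.1f\n",
                IndicatorWork(narrow, wide),  // 5*0.5 + 500*1.0 = 502.5
                IndicatorWork(wide, narrow)); // 100*1.0 + 500*0.5 = 350.0
    // Smallest-range-first loses here: cost per call matters as much as range size.
}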
It is clear that implementing such an optimisation is very complex (especially for the cloud case). But if it were implemented, then at the very least all Expert Advisors created in the MQL5 Wizard would be optimised orders of magnitude faster, because the MQL5 Wizard combines a large number of indicators with one another (i.e. there are a huge number of mutually independent input parameters). Whether such an exercise makes sense for profitable trading is another question...
Caching followed by sampling of results on huge samples (millions and tens of millions) is more expensive than direct calculation.
I'm sure it's almost unrealistic to implement a perfectly universal optimizer that is as "smart" as I described above. Of course there is room for improvement, but it cannot be perfect in any case.
As for huge samples (tens of millions), you exaggerate considerably, of course. There is no need to cache anything like that at all.
I think you understand perfectly well what I mean, and many others do too. No one will even criticise you for this; such criticism would only show the critic's ignorance of programming, because anyone competent is well aware of how hard such things are to implement.
I will explain the meaning of caching using the same example:
If the indicator does not repaint, then by the end of a single tester pass you have a complete buffer of all the indicator's values. It already exists. And if the next pass uses the same indicator values (the second variable has not changed), there is no need to recalculate them: the values can be taken from the already computed buffer, which you already have (no separate cache is needed; the memory from the previous pass has not yet been freed).
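A minimal sketch of that reuse, assuming a non-repainting indicator whose buffer depends on a single parameter (all names hypothetical):

#include <vector>

// Hypothetical non-repainting indicator whose output depends only on 'period'.
std::vector<double> CalculateIndicator(int period) {
    return std::vector<double>(1000, period * 1.0); // dummy full buffer
}

// Keeps the previous pass's buffer and recomputes only when the parameter changed.
class IndicatorReuse {
    int                 lastPeriod_ = -1; // no buffer yet
    std::vector<double> buffer_;          // survives between passes
public:
    const std::vector<double>& Get(int period) {
        if (period != lastPeriod_) {      // second variable changed: recompute
            buffer_     = CalculateIndicator(period);
            lastPeriod_ = period;
        }
        return buffer_;                   // unchanged: reuse the previous buffer
    }
};

int main() {
    IndicatorReuse cache;
    for (int a = 0; a < 100; ++a)
        cache.Get(14); // same period on every pass: computed once, reused 99 times
}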
For example, the EURUSD test for the last 11 years gives more than 50 million ticks.
That means a simple one-buffer indicator like an MA would have to store 50 million states (50 million * 8 bytes (double) = 400 MB per buffer), which is too much. Use anything more complex, or several indicators, and the cache simply will not fit into memory, to say nothing of multi-core agents.
We worked on the idea of indicator caches, and it turned out that it is much faster and less resource-consuming to calculate the next indicator value (and with an economical method at that) than to build caches.
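A sketch of what such an "economical" next-value calculation can look like for a simple moving average; illustrative code only, not the actual tester internals. The point is that the running computation needs O(period) memory instead of a 400 MB history cache:

#include <cstddef>
#include <cstdio>
#include <vector>

// O(1) per tick, O(period) memory: a ring buffer plus a running sum.
class IncrementalSMA {
    std::vector<double> window_; // only the last N prices, not 50M states
    std::size_t         next_ = 0, count_ = 0;
    double              sum_ = 0.0;
public:
    explicit IncrementalSMA(std::size_t period) : window_(period, 0.0) {}

    double Update(double price) {
        if (count_ < window_.size()) ++count_;               // still filling the window
        else                         sum_ -= window_[next_]; // drop the oldest price
        sum_ += price;
        window_[next_] = price;
        next_ = (next_ + 1) % window_.size();
        return sum_ / count_;                                // current average
    }
};

int main() {
    IncrementalSMA sma(3);
    for (double p : {1.0, 2.0, 3.0, 4.0})
        std::printf("%.3f ", sma.Update(p)); // 1.000 1.500 2.000 3.000
    std::printf("\n");
}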
Yeah, and precisely because it is impossible to write such a fast universal optimiser, non-universal number crunchers will always win on speed. There is nothing good or bad about that.
They don't win anything.
They have no market environment, no infrastructure, no indicators, no analytics. And that is more important than a one-off loop (which, moreover, has not even been shown).
We are talking about an optimiser, not about many individual tester runs. The concept of an optimiser is quite different: significant speed gains are achieved at the cost of small errors in the results. The optimizer does not need tick-based models at all; at most, models based on opening prices. An optimizer is not a tester, it is a different thing altogether. Your approach is different, and it is quite logical too.
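Back-of-envelope arithmetic for that trade-off: the 50-million-tick figure comes from the post above, while the M1 bar count below is my own rough estimate assuming a 24x5 forex week:

#include <cstdio>

int main() {
    const long long ticks  = 50000000LL;              // 11 years of EURUSD ticks (from the post)
    const long long m1Bars = 11LL * 52 * 5 * 24 * 60; // ~4.1 million M1 opening prices
    std::printf("ticks: %lld, M1 opens: %lld, ratio: %.1fx\n",
                ticks, m1Bars, (double)ticks / (double)m1Bars); // roughly 12x fewer points
}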
They do win on speed, because nothing can be faster than a bare for loop. Sometimes speed is exactly what is needed, and a calculator will beat any universal tester on speed (though not on other parameters), and not only the one from MetaQuotes.
I cannot prove my assertion for the following reason:
My calculator is simply a C++ implementation of my EA in which all operations are deliberately integer (prices are integers) and unnecessary passes etc. are cut to zero. There is no interface there, nothing; the only output is a file with the optimization results. That is, I can write an EA with algorithmic optimization in C++, and my tester performs no trading checks (for example, whether there is enough margin, etc.), does not emulate history and does not compute indicators. There is nothing universal in it in the sense of the MT5 tester's versatility. The calculator's only task is to compute as fast as possible. And it computes a hundred times faster than the MT4 tester with an error of <1%. I don't understand what there is to demonstrate here.
It is obvious that a for loop without checks, operating only on integers, will always count faster than a universal tester.
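For illustration only, a minimal sketch of such a bare integer calculator; this is not the author's actual code, just the shape of the idea: integer prices in points, one pass, no margin checks, no history emulation:

#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // Prices in points, e.g. 1.23456 -> 123456 (5-digit quotes).
    std::vector<int32_t> price = {123456, 123460, 123450, 123470, 123465};

    int64_t profitPoints = 0;
    bool    inPosition   = false;
    int32_t entry        = 0;

    // One pass, integer arithmetic only: buy on an up-tick, sell on a down-tick.
    for (std::size_t i = 1; i < price.size(); ++i) {
        if (!inPosition && price[i] > price[i - 1]) {
            inPosition = true;  entry = price[i];
        } else if (inPosition && price[i] < price[i - 1]) {
            inPosition = false; profitPoints += price[i] - entry;
        }
    }
    std::printf("net result: %lld points\n", (long long)profitPoints);
}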