Have you guaranteed that the logic of your code is the same in both cases?
Not in the sense of "I didn't change the code", but in the sense of guaranteeing that the technical and logical behaviour of the robot is unchanged.
A claim like this requires providing as much technical detail as possible, including the logic involved and the conditions needed to reproduce it, plus your own attempts to understand the problem and gather more information. The program is yours.
Shouting "I wrote something here, I haven't looked into it, I haven't described the details, I just want to express my indignation because I believe that my program, which I don't know how it works, was somehow mishandled by someone else" is futile.
I used to run the optimizer when I was young and green, but I haven't run it for several years now: it is the tester you really need to understand, while the optimizer mostly feeds us illusions.
The optimizer has a clear task: to walk through the field of input parameters and hand back parameter sets for the criterion function defined by the trader.
That is a purely technical task, and it has always done it well and still does.
The quality and robustness of the robots themselves has to be monitored separately.
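To illustrate what that "criterion function defined by the trader" looks like in practice: in MQL5 it is the OnTester() handler, and when "Custom max" is selected as the optimization criterion, the optimizer ranks the passes by whatever value that handler returns. A minimal sketch (the input names and the ranking rule are made up for illustration, not taken from anyone's EA in this thread):

// Inputs the optimizer will iterate over (illustrative names only)
input int    InpFastPeriod = 3;
input int    InpSlowPeriod = 20;
input double InpLots       = 0.1;

// Called once at the end of every pass; with "Custom max" selected,
// the optimizer ranks the passes by the value returned from here.
double OnTester()
  {
   double profit = TesterStatistics(STAT_PROFIT);
   double trades = TesterStatistics(STAT_TRADES);

   // The trader decides what "good" means, e.g. discard passes with too few trades
   if(trades < 10.0)
      return(0.0);
   return(profit);
  }

The optimizer simply ranks whatever number each pass produces; whether that number reflects a robust robot is a separate question.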
I have deleted all the caches and repeated the optimization and the test. The result is the same: clear discrepancies.
Of course, I'm no expert, but I just don't understand why the optimizer shows a result of 100 with parameters 3, 5, 20, while a single test with the same parameters shows -20....
The EA logic in the optimizer and in the tester is the same. The hardware is the same. What would I have to do to get such a difference in the results?
Any advice on how to track down this "variability" between the optimization and the test?
I will post the logs tomorrow. I will clean everything up and run the optimization and the test again. I remember your lesson )))) I will be very careful.
In the meantime, happy Great Victory Day to all!!!
I'm off to celebrate ))))
There's a small thread in the English section where a user is asking the same question.
His optimization and backtest match on all of his EAs (everything is OK), but one EA behaves just like yours.
He posted the backtest result - you can take a look (at a glance I gave him an idea of why there is a difference).
He didn't post the code, but it is clear that the reason is in the logic (in the code) of that EA, because he said he has no such problems with any of his other EAs - see his latest post in the thread: https://www.mql5.com/en/forum/338047
You may simply have forgotten to initialize something in your code, and that is enough to produce a difference. Check the code.
A long time ago I ran some test variants and there really was rubbish in the variables - did the developer have to fight that at the start on his own? After all, if there can be rubbish somewhere in the middle of the code, shouldn't it always be the same rubbish?
I'm not a professional programmer, so it's a genuine question - I don't know the principles of low-level programming.
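A minimal sketch of the kind of "rubbish in variables" being described (hypothetical code, not anyone's EA from this thread): in MQL5, local variables are not zeroed automatically, so a forgotten initialization produces results that depend on whatever happened to be in memory:

double AverageClose()
  {
   double sum;                               // BUG: never set to 0.0
   for(int i = 1; i <= 3; i++)
      sum += iClose(_Symbol, _Period, i);    // accumulates on top of leftover memory contents
   return(sum / 3.0);                        // result depends on what 'sum' started with
  }

The fix is simply double sum = 0.0; (the compiler usually warns about such cases, but the warning is easy to overlook). Because the leftover contents are not guaranteed to be the same from run to run, this kind of bug does not have to show up identically everywhere.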
The user in that English thread has everything just the same as mine ))))
How can that be? It's the same code here and there - how can there be a difference? Especially since practically all the data is initialized in a loop, and if something in there fails to initialize, the EA exits with an error.
That's exactly my point. It has to be the same in both cases. Even if I have a bug somewhere, it has to show up everywhere. Yet as it turns out, this flaw of mine somehow gets magically corrected, and it's completely unclear where - in the optimizer or in the tester....
Away from the festive table for a moment )))))))))))
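For what it's worth, "initialized in a loop, with an exit on error if something is missing" usually looks something like the sketch below (generic MQL5, not the actual code from this thread); the interesting part is not the loop itself but what the rest of the tick handler does on the ticks where the early exit fires:

double maBuf[];                       // working data refreshed on every tick
int    maHandle = INVALID_HANDLE;     // indicator handle, assumed to be created in OnInit()

bool RefreshData()
  {
   ArraySetAsSeries(maBuf, true);
   // Fill the working data; bail out if the copy did not succeed in full.
   if(CopyBuffer(maHandle, 0, 0, 3, maBuf) < 3)
     {
      Print("CopyBuffer failed, error ", GetLastError());
      return(false);                  // the caller must skip this tick
     }
   return(true);
  }

If the EA carries on anyway after such a failure, it works with whatever was left in maBuf before, which is exactly the kind of thing worth ruling out when two runs disagree.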
By "rubbish" I meant the variables: when you run a script, instead of 0 they hold data from the previous run, or something like that - I don't remember exactly, it was about five years ago.
And in the tester you most likely have some nuance of the same kind somewhere, which, by the way, says that MQ still needs to improve it. Personally I'm not happy with everything in the tester either - I don't even like the interface for launching it.
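If the suspicion is the one described above - values left over from a previous run or re-initialization - a defensive habit is to reset every piece of mutable state explicitly in OnInit() instead of trusting whatever the variables already hold. A tiny sketch (the variable names are made up):

int      g_signalCount = 0;     // mutable EA state (illustrative names)
double   g_lastSignal  = 0.0;
datetime g_lastBarTime = 0;

int OnInit()
  {
   // Reset everything explicitly so nothing can leak in from a previous
   // initialization of the same program instance.
   g_signalCount = 0;
   g_lastSignal  = 0.0;
   g_lastBarTime = 0;
   return(INIT_SUCCEEDED);
  }

It costs nothing and removes one whole class of "why is this run different" questions.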
...In that thread I gave the comrade an idea of why he gets a difference with this particular EA.
The point is that when a backtest ends (at the very moment it ends), all open positions are forcibly closed - and only in the tester.
If he were simply trading, they would not be closed at that moment.
So his options are:
- either trust the optimization (which in his case is preferable),
- or disregard the forced closing of open positions that happens only right before the backtest ends, but then he won't get matching figures,
- or pick a time interval for the backtest, with the same optimization parameters, in which the EA closes all positions itself before the backtest ends (by picking ...), and then I think the results will coincide.
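As a sketch of that third option - making the EA go flat on its own before the tester's forced liquidation can kick in - something along these lines could be added (InpFlattenAfter is a made-up input that has to lie before the end date of the test; the normal trading logic is omitted):

#include <Trade\Trade.mqh>

input datetime InpFlattenAfter = D'2020.05.01';   // hypothetical cutoff before the test end date

CTrade trade;

void OnTick()
  {
   // ... the EA's normal trading logic goes here ...

   // Past the cutoff: close everything ourselves, so the optimizer pass and
   // the single test both end in the same flat state.
   if(TimeCurrent() >= InpFlattenAfter)
     {
      for(int i = PositionsTotal() - 1; i >= 0; i--)
        {
         ulong ticket = PositionGetTicket(i);   // also selects the position
         if(ticket > 0)
            trade.PositionClose(ticket);
        }
     }
  }

Whether closing everything early is acceptable obviously depends on the strategy; the point is only to take the forced end-of-test liquidation out of the comparison.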
This is once again about the tester/optimizer...
I noticed discrepancies between the results of an optimization and a single test. I rebooted the terminal and changed the settings (just to be sure). Started the optimization.
Then ran a single test...
How can that be? Where does this nonsense come from?