I would like to help you forward test this, but for some reason I can't attach it to a chart. I can't figure out why not.
When you copy the .mqh file, don't paste the
"************************************"
lines at the top and bottom of the file... I don't think the compiler would like those.
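For anyone copying it by hand, a minimal sketch of what the cleaned-up top of the file might look like (the file name and helper are hypothetical, not from the original post); rows of asterisks only compile when they sit inside a comment:

//************************************  <- harmless: inside a comment
// MySignals.mqh -- hypothetical file name
double RiskPerTrade()                   // example helper, assumed content
  {
   return(0.02);                        // e.g. risk 2% per trade
  }
//************************************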
There must be a reason the platform processes the same data differently from one run to the next.
I can see that the second time it modeled more bars (ticks), but I can account for that: the second time I changed the ending date to the current day instead of one week ago. However, that doesn't explain the dramatic changes in the results, because the big divergences occur long before the most recent week of data is reached in the timeline.
I don't know how to get answers to these questions...
I was given this reply to my email:
Hello Aaragorn,
Please try to refer to our articles at https://www.mql5.com/en/articles/mt4/tester/
Best regards, Tatyana Vorontsova
MetaQuotes Software Corp.
www.metaquotes.net
Having now read all of those articles, I still see nothing that tells me, or even suggests, why the platform processes the data differently from one run to the next with the same data files and the same EA settings.
Actually, the .mqh file needs to go in the "include" folder, which is a subfolder of the "experts" folder. At least that's how it works for me. I don't have trouble attaching it; I just have trouble getting the strategy tester to be consistent with it.
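For anyone following along, a minimal sketch of the layout (both file names are hypothetical): the include file goes in experts\include, and the EA in the experts folder pulls it in with an #include directive, where the angle brackets tell the compiler to look in the include folder:

// experts\include\MySignals.mqh  (hypothetical)
double RiskPerTrade() { return(0.02); }

// experts\MyExpert.mq4  (hypothetical)
#include <MySignals.mqh>   // angle brackets -> the experts\include folder

int start()
  {
   Print("risk per trade: ", RiskPerTrade());
   return(0);
  }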
Oh, OK. I have just been loading the other ones I've been testing into the experts folder and away they go. I didn't know that yours needed to be handled differently. I will make the adjustments and see what happens. Thanks for your quick response.
There must be a reason the platform processes the same data differently from one run to the next. I can see that the second time it modeled more bars (ticks) ...
That might be the reason why there is a difference. During the first run, there were chunks of data that were missing. However, when you updated the databank and then ran the test, many of those chunks were filled and the stops that were not triggered due to missing data are now being triggered. Did you try widening your stops and running the tests?
Good luck.
You mean widening the stop loss? Or the trailing stop loss?
Or do you mean...
widening the date range of the test?
I don't think the increased number of bars is at the beginning; that data is not changing. It's only the most recent week that changed, which I believe accounts for the extra data.
I have been looking at this test for several weeks now. The only testing progress I can report is that after opening a new demo account (I think the old one expired), the EA is now forward testing in my demo just fine. It's a very busy, aggressively engaged program at this point. It has lost $450 since I started it last night, but it's still trading... it's still within the model's predictions.
But you see, that's just it! How do you tell how it's processing the data, or what it's really doing? I don't know any way to observe or alter how it processes data... why would it leave chunks of data out the first time and not the second? How do we verify that the data processing is stable?
If it's not processing the data in exactly the same way each time, then it cannot be relied on.
Widen the stop loss and the trailing stop.
Don't mess with the date range.
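As a sketch of what "widening" means in practice (the parameter names and numbers here are made up, not from the EA under discussion): stops are usually extern inputs, so they can be widened from the tester's Expert properties dialog without touching the code:

extern double StopLoss     = 50;   // in points; widened from, say, 20
extern double TrailingStop = 30;   // in points; widened from, say, 15

int start()
  {
   // for a buy, the stop sits StopLoss points below the open price
   // (entry logic and order management omitted from this sketch)
   double sl = Ask - StopLoss * Point;
   OrderSend(Symbol(), OP_BUY, 0.1, Ask, 3, sl, 0,
             "widened stops", 12345, 0, Blue);
   return(0);
  }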
That really does nothing to address the processing instability issue.
Why would the data be missing the first time? It's the same historical data... all that is updated is the most recent week. It's not updating the data from the beginning of the test...
Something is variable here...either the data or the way the data is processed. My question remains...
How can I or anyone verify how the data is processed so we know that it's stable and doing it exactly the same each time?
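One hedged suggestion for checking the first variable, the data itself: drop something like this untested sketch into the EA's init(), so every test run starts by printing a fingerprint of the bars the tester loaded. If two runs over the same date range print different lines, the underlying data changed between them:

int init()
  {
   double sum = 0.0;
   // checksum every bar the tester loaded for this chart
   for(int i = 0; i < Bars; i++)
      sum += Open[i] + High[i] + Low[i] + Close[i] + Volume[i];
   Print("bars=", Bars,
         "  first=", TimeToStr(Time[Bars-1]),
         "  last=", TimeToStr(Time[0]),
         "  checksum=", DoubleToStr(sum, 4));
   return(0);
  }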
I made one change to the code in a GGS and it ruined the result, so I undid the change, but the result didn't return to being as good as it was before the change.
Sending another letter to metaquotes.net using this link: https://www.metaquotes.net/bugtrack/
I submitted once to:
Software: MetaTrader 4
Version: 4.00
Type: Error
and once to:
Software: DataCenter
Version: 4.00
Type: Error
*****************************************************
I have not received any reply to my last three emails to support@metaquotes.ru. This is my question:
Three things must be verified to be stable for the strategy backtester to be trustworthy:
1- the data itself
2- the EA code
3- the way the platform processes the data
I have done two strategy tests on the same EA and gotten very different results each time.
I can verify that the EA code didn't change in each test.
I can assume that it used exactly the same historical data from the history center because the date range was not changed either.
How can I verify that the platform is processing the data exactly the same way in each test?
My results seem to suggest that it is not processing the data the same way each time. See this link for the details of my results:
https://www.mql5.com/en/forum/general
I have already read these articles: https://www.mql5.com/en/articles/mt4/tester/
I do not see anything in any of the articles that helps answer this question about how the platform processes the data and how I can verify its stability.
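One hedged way to probe the third point, the processing itself (the log file name is made up): have the EA write every modelled tick it receives to a CSV file, rename the file between runs, and diff the two logs. If the logs differ, the tester fed the EA a different tick sequence even though the data files were the same:

int ticks = 0;

int start()
  {
   ticks++;
   // append one line per modelled tick: counter, time, price
   int handle = FileOpen("tick_log.csv", FILE_CSV | FILE_READ | FILE_WRITE, ';');
   if(handle >= 0)
     {
      FileSeek(handle, 0, SEEK_END);
      FileWrite(handle, ticks,
                TimeToStr(TimeCurrent(), TIME_DATE | TIME_SECONDS),
                DoubleToStr(Bid, Digits));
      FileClose(handle);
     }
   return(0);
  }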