I have fitted the best optimization and forward parameters from the beginning of the year; the picture is not impressive:
...
More than 800 runs are not encouraging: the optimization results on the June history are worse than a test with the initial parameters over the same period.
If I understood your earlier remark correctly, the optimization produced worse parameter sets than the ones you had before optimization? This can happen if some constraints are imposed on the optimization, e.g. on drawdown. Otherwise something is wrong. Also check that you are not optimizing on a small interval (because of performance problems) and then running the forward test on a larger period. The proportion should be the other way round: you optimize on a few months and then test on the next month, not vice versa. Otherwise it looks as if the June optimization results were being checked over the next several months ;-).
Yes, you understood correctly, only I used the GA, and I suspect that the best variants were not found and that the good ones, including the set with the pre-optimization parameters, were discarded. I did not set any restrictions.
I optimize on June and run the forward on July and the first 12 days of August. I limit the optimization to one month of history because it takes much longer on a larger history, and I plan to reoptimize every week, each time on one month of history.
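For illustration only, here is a hypothetical MQL4-style script, not from these posts, that prints the rolling schedule described above: optimize on roughly one month of history, forward-test the following week, then shift everything by one week. The start date, the 30-day window and the six steps are assumptions.

#define WEEK  604800                              // 7 * 24 * 60 * 60 seconds
#define MONTH 2592000                             // 30 days as a rough "one month" of history

int start()
{
   datetime firstForward = StrToTime("2009.07.01");          // first out-of-sample week (assumed date)
   for (int step = 0; step < 6; step++)
   {
      datetime fwdStart = firstForward + step * WEEK;
      datetime optStart = fwdStart - MONTH;                   // one month of history before the forward week
      Print("Step ", step + 1,
            ": optimize ", TimeToStr(optStart, TIME_DATE), " - ", TimeToStr(fwdStart, TIME_DATE),
            ", forward ",  TimeToStr(fwdStart, TIME_DATE), " - ", TimeToStr(fwdStart + WEEK, TIME_DATE));
   }
   return (0);
}

Running it only prints the optimization/forward date pairs; the optimization itself would still be done in the Strategy Tester over those dates.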
That's the thing about closed bars: all four OHLC values are the same
Please give me an example from the quote archive, it's not clear what you mean...
As far as I understand, besides the open and close prices a bar also has a high and a low, which can have an influence... But the author knows better...
That's the thing about closed bars: all four OHLC values are the same
OrlandoMagic wrote:
Please give me an example from the quotes archive, it's not clear what you mean...
All the values of any non-zero bar (including the high and the low) will be the same in the tester as they came from the server online. It is not clear what exactly is unclear here.
Testing will not be correct because of this. The Expert Advisor can look at the bar's open, at its close, or at the arithmetic average, but it will have to execute trades somewhere within the whole range contained in that bar. That is why people test on every tick. If such testing is too slow, you should find out where the program is wasting time and simplify it; this can be done, for example, by commenting out blocks of the algorithm one by one. As far as I understand, testing on open prices and on control points is only used for checking an idea...
You are wrong. It all depends on the algorithm of the Expert Advisor. If the Expert Advisor works on completed bars, it makes no difference whether it received that bar from the tester or online.
Yes, there are Expert Advisors that work on every tick - you really cannot test them on open prices.
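To make the "completed bars" case concrete, here is a minimal MQL4-style sketch. It is not the author's Expert Advisor, just an assumed illustration of an EA whose decisions use only the last closed bar, so that its readings in the tester match the online chart exactly.

datetime lastBarTime = 0;                       // time of the bar we have already handled

int start()
{
   if (Time[0] == lastBarTime)                  // still inside the same bar, do nothing
      return (0);
   lastBarTime = Time[0];                       // a new bar has just opened

   double prevOpen  = Open[1];                  // OHLC of the last completed bar:
   double prevHigh  = High[1];                  // identical in the tester and online
   double prevLow   = Low[1];
   double prevClose = Close[1];

   // ... trading logic that depends only on these closed-bar values ...
   return (0);
}

Because nothing here depends on how price moved inside the current bar, testing it on open prices gives the same signals as testing it on every tick.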
OK, I'm wrong. Test it however you like.
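As an aside on the earlier suggestion to find out where the program is wasting time: besides commenting blocks out one by one, a rough timing measurement can narrow it down. The sketch below is hypothetical MQL4-style code, not taken from anyone's Expert Advisor.

int slowBlockMs = 0;                            // accumulated time spent in the suspect block
int callCount   = 0;

int start()
{
   int t0 = GetTickCount();                     // milliseconds since system start

   // ... the block suspected of being slow, e.g. a loop over all bars ...

   slowBlockMs += GetTickCount() - t0;
   callCount++;

   if (callCount % 1000 == 0)                   // report occasionally to the Experts log
      Print("suspect block: ", slowBlockMs, " ms over ", callCount, " calls");
   return (0);
}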
How should I treat this? Optimization was done from 1 May 2008 to 1 May 2009. I ran forward tests from 01.01.2008 to 01.05.2008 and from 01.05.2009 to today. The two sides give opposite pictures, so which should I believe? How will my TS behave in reality if the tests on both sides of the optimization range show opposite results? A run in the tester with the parameters obtained over the optimization range also differs from the results reported by the optimization itself. The further I go, the less confidence I have in this optimization.
Optimization results
Pass   Profit   Total trades   Profit factor   Expected payoff   Drawdown, $   Drawdown, %
25     656.40   22             3.28            29.84             176.40        13.90
Optimization range test results