Backtesting, optimizing and real trading

 
I would like to ask you some questions if I may. I'm now optimizing and backtesting a nice strategy. I already created a set file and started to optimize it from November 2009 to January 2010. When the optimization was complete, I took the best result (highest profit) and used those settings as the input file for backtesting. The trading history results were very nice (from 1000 USD to almost 4000 USD). But when I changed the dates, for example from 1st February to 14th February 2010 (using the same input parameters), the strategy lost almost all the money.

Now I'm a little confused :(


1. Would you be so kind as to tell me which history results I should use so there is some probability that the strategy will earn money in the future (let's say next month)?
2. For example, February is going to end soon. What history data should I include in optimizing & backtesting to create an input file that can be used in March?
3. Maybe I should change the optimized parameter (now it is set to Balance) and uncheck the genetic algorithm before optimization?
4. Or maybe I shouldn't take the optimization result with the best profit?

Thanks in advance,

syndrom

 
syndrom wrote >>
...Maybe also I shouldn't take the best profit optimization result?

That is often the case - minimum drawdown is often a better pointer, or highest Profit Factor better than Max Profit...
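To make that concrete, here is a hypothetical sketch (the pass list and field names are invented for illustration, not real Tester output) of sorting optimization passes by drawdown or Profit Factor instead of net profit:

```python
# Invented optimization passes -- each dict stands in for one Tester pass.
passes = [
    {"profit": 3000, "max_drawdown": 1200, "gross_profit": 4500, "gross_loss": 1500},
    {"profit": 2400, "max_drawdown": 400,  "gross_profit": 3200, "gross_loss": 800},
    {"profit": 2800, "max_drawdown": 900,  "gross_profit": 3900, "gross_loss": 1100},
]

def profit_factor(p):
    # Profit Factor = gross profit / gross loss; > 1 means wins outweigh losses
    return p["gross_profit"] / p["gross_loss"]

best_by_profit   = max(passes, key=lambda p: p["profit"])     # the "Max Profit" pick
best_by_drawdown = min(passes, key=lambda p: p["max_drawdown"])
best_by_pf       = max(passes, key=profit_factor)

print(best_by_profit["profit"])                 # 3000 -- highest profit, but deepest drawdown
print(best_by_drawdown["max_drawdown"])         # 400
print(round(profit_factor(best_by_pf), 2))      # 4.0
```

Note that the three criteria pick different passes here, which is exactly the point: "best" depends on what you rank by.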

But most likely you have a strategy that is too specific to one market pattern - it may suit trending, ranging or breakout conditions, which don't run all the time, far from it.

Many strategies won't work for any extended period in the low levels of ATR we have had for many months - just look at the ATR(20) and ATR(250) on the Daily chart of any pair of interest.
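As a rough illustration of that ATR comparison (the bar data below is invented, and a simple moving average stands in for Wilder's smoothing to keep the sketch short):

```python
# Compare a short and a long ATR from daily bars (high, low, close), oldest first.
def true_range(high, low, prev_close):
    # True Range covers gaps: the widest of today's range and the moves from prior close
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, period):
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    return sum(trs[-period:]) / period       # simple average over the last `period` bars

# Invented, steadily drifting bars just to exercise the function
bars = [(1.30 + 0.001 * i, 1.29 + 0.001 * i, 1.295 + 0.001 * i) for i in range(30)]
print(atr(bars, 5), atr(bars, 20))   # if the short ATR sits well below the long one,
                                     # volatility is contracting
```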

And see this EA https://www.mql5.com/en/code/8714 to see how types of market pattern can be tested for while the EA is running.

Good Luck

-BB-

 
I've been working on backtesting as well, and am now realizing how worthless backtesting really is. You get more accurate data if you only test a couple of months back, but that isn't enough time to determine anything (the past couple of months may have had big trends, or lots of small chops, and you can optimize a system to work with that). If you try to backtest further back, the data is less reliable (at least on smaller timeframes), so your results are less trustworthy. I'm not sure there's a great solution to that. My advice though: don't rely on indicators very much, keep running forward tests day and night (on a demo account, of course), and manage your losses (and manage your losses, and manage your losses :). As for optimization: if a strategy doesn't work with, say, a stop of 50 pips, but then works great with a stop of 55 pips, it's probably not going to work. What I mean is, if you have to optimize parameters that precisely to see a profit, you're probably fitting the system to the data, and it's unlikely to work in the future.
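A sketch of that last point (both profit tables below are invented): prefer a parameter value whose neighbours also perform well - a plateau - over an isolated spike:

```python
# Profit by stop-loss setting (pips): a spike vs. a plateau (invented numbers)
profits  = {45: -120, 50: -80, 55: 900, 60: -150, 65: -200}   # isolated spike at 55
profits2 = {45: 300,  50: 380, 55: 420, 60: 390,  65: 310}    # broad plateau around 55

def is_plateau(table, best, neighbors=1, tolerance=0.5):
    """Crude robustness check: settings adjacent to the best one must keep
    at least `tolerance` of the best profit."""
    keys = sorted(table)
    i = keys.index(best)
    span = keys[max(0, i - neighbors): i + neighbors + 1]
    return all(table[k] >= tolerance * table[best] for k in span)

best1 = max(profits, key=profits.get)     # 55, but its neighbours lose money
best2 = max(profits2, key=profits2.get)   # 55, and nearby stops also profit
print(is_plateau(profits, best1))         # False -- likely curve-fit
print(is_plateau(profits2, best2))        # True  -- more trustworthy
```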
 
nebeno:
You get more accurate data if you only test a couple months back... Then if you try to backtest further back, data is less reliable (at least on smaller timeframes)...

That really depends on the data you use. MetaQuotes data ('download' in the History Center) is indicative data. I agree that the further you go back, the sketchier it is.

But who says you have to use it? There are other data sources that are much better and non-indicative (even for a few years back and for small timeframes).


...test a couple months back, but that isn't enough time to determine anything...

You are overgeneralizing. It really depends on what you are trying to 'determine'. Running a test over that period will give you a rough estimate of an expert's performance in that period (you would have to take into consideration the differences between the Tester and Live trading to assess how 'rough' that estimate is). If this estimate is 'what you are trying to determine', then it's certainly not worthless.


...keep running forward tests day and night (demo account of course)

In this case you would also have to take into consideration the differences between Demo and Live in order to assess how rough an estimate the Demo is (of a Live account).


And as for optimization, if it doesn't work, say with a stop of 50 pips, but then works great with a stop of 55 pips, it's probably not going to work. What I mean is if you have to optimize it so precisely to see profit, you're probably fitting it to the data, and it's unlikely to work in the future.

Agreed. But your example is an extreme and obvious case of over-optimization (curve fitting). There are useful applications for optimization that do not involve curve-fitting.

 

Hi Syndrom

BB and Gordon have given good information on the ins and outs of backtesting, but I would like to add my two penn'orth about how to use your results. You should look at why the loss occurred, because it is telling you your strategy is not correct yet. I often move the start date forward one day at a time to find the worst results, then try to optimise those. If they cannot make a profit with any settings, then the strategy has a fault, but often you find the parameters can be adjusted. The decision then has to be taken: is it viable to adjust the parameters to market conditions while the EA is running, or to find an automatic way of doing it? Perhaps a good enhancement for MT5 would be the ability to step the start date of a backtest, so one could get multiple runs for rigorous testing.
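The stepping idea can be sketched outside the Tester like this (`run_backtest` is a hypothetical stand-in returning a fake profit figure; real use would launch the strategy tester once per window):

```python
from datetime import date, timedelta

def run_backtest(start, end):
    # Placeholder for launching the Tester over [start, end);
    # returns a deterministic fake profit so the sketch is runnable.
    return (start.toordinal() * 37) % 500 - 200

def stepped_runs(first_start, end, days):
    # One backtest per start date, stepped forward a day at a time
    results = []
    for i in range(days):
        start = first_start + timedelta(days=i)
        results.append((start, run_backtest(start, end)))
    return results

runs = stepped_runs(date(2010, 2, 1), date(2010, 2, 14), 7)
worst = min(runs, key=lambda r: r[1])   # the run to examine (or re-optimize) first
print(worst)
```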

 

Ruptor wrote >>

Perhaps that would be a good enhancement for MT5 to be able to step the start date of a back test so one could get multiple runs for rigorous testing.

I have been using a similar concept for a long time in MQL4. Here's the basic idea (this code is simplified):

extern int every_hours = 0;   // offset (in hours) to skip before the EA starts trading
datetime start_time; 

int init()
  {
   start_time = TimeCurrent();   // in the Tester this is the test's start date
   return(0);
  }

int start()
  {
   if (IsOptimization())
     {
      int every_secs = every_hours*60*60;             // convert from hours to seconds
      if ( TimeCurrent() < start_time + every_secs )  // skip ticks until every_hours have passed since start_time
         return(0);
     } 
  
   // your EA code here... 

   return(0);
  }

The idea is to optimize on the parameter every_hours. So, for example, set every_hours: start=0, step=24, stop=96. Run this in optimization mode and make sure the start date is a Monday. This will run the test starting on each day (every 24 hours) of that week.


Note that this example is simplified. To do this properly you need more than one 'every' parameter (because otherwise, if we attempt to do this over more than one week, ALL the passes that are 'supposed' to start at the end of the week will actually start on the Monday of the following week).
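The weekend overflow can be seen with a quick date calculation (a sketch, assuming a Monday 00:00 start):

```python
from datetime import datetime, timedelta

test_start = datetime(2010, 2, 1)   # a Monday
for every_hours in range(0, 169, 24):
    t = test_start + timedelta(hours=every_hours)
    # Offsets of 120h and 144h land on Saturday/Sunday; since the market is
    # closed then, those passes would first trade on the following Monday.
    print(every_hours, t.strftime("%A"))
```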

 
That's interesting. I haven't reached that level yet, but what was your conclusion about using it to get a set of generally optimised parameters?
 
Ruptor:
That's interesting. I haven't reached that level yet, but what was your conclusion about using it to get a set of generally optimised parameters?

I have no conclusion relating to your method because I don't use it for that at all... Actually, I don't really agree with your method (IMHO); I think it would almost certainly lead to curve-fitting.

Ruptor wrote >>

You should look at why the loss occurred because it is telling you your strategy is not correct yet. I often move the start date forward one day at a time to find the worst results then try and optimise those. If they cannot make a profit with any settings then the strategy has a fault...

My viewpoint is that an EA is profitable if it is profitable on average. I don't really care that my EAs lose once in a while, as long as on average they remain profitable. I'd even say that in most cases attempting to make an EA never lose would make its average profit smaller than if you let that same EA lose once in a while.

Regarding your method - I think looking for the worst result and optimizing the parameters on it will almost certainly lead to curve-fitting. And why do it in the first place? Don't worry too much that the EA lost in that particular case. The question that should be asked is: was the EA profitable on average?
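The "profitable on average" view can be sketched as follows (the per-window results are invented):

```python
# Invented profits from several backtest windows (e.g. stepped start dates)
window_profits = [420, -180, 310, -90, 260, 150, -40]

mean_profit = sum(window_profits) / len(window_profits)
losing_windows = sum(1 for p in window_profits if p < 0)

print(round(mean_profit, 2))   # positive on average...
print(losing_windows)          # ...despite several losing windows
```

The point is that the three losing windows do not disqualify the EA; a positive mean across many windows is the figure that matters.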