Phoenix optimization - page 4

 

Thanks Wilson, I'll take a look too.

Testing Artemos settings was pretty scary! It seems like people are finding "conditional local maxima," to use Genetic Algorithm terminology. I look forward to trying these settings.

 

Optimized Settings

Version: 5_6_04

Data: Alpari

Period: 2006-09-01 to 2006-12-02

Model Quality: 90%

I optimized this on the past three months of USDJPY history. It is interesting to see Wilson's results (Profit trades: 100% @ 30 days) in comparison to mine (79.55% @ 90 days). I wonder if this might suggest that more frequent tuning of the system yields higher profit percentages?

 
 

GBPUSD input settings for Phoenix - Tick Data

Questions from a novice:

Do there exist reliable input settings for Phoenix to run backtesting on GBPUSD?

Has anyone successfully run backtesting of Phoenix on tick data from Gain rather than timeframes? Wouldn't tick data provide the most accurate basis for backtesting?

 

Daraknor, see the attached reports from backtesting 5_6_07a. It looks like there is some error in Mode3, since trading stopped somewhere after 3 months (I was backtesting a 6-month period).

There is probably nothing wrong with the historical data, since I've also backtested Mode1 and Mode2, and those trades were executed for the whole period.

I've used default settings.

I'm impressed!

Mode1

Long wins 88%

Short wins 71%

Total wins 78%

3 out of 4 trades are winning!

Portfolio almost doubled in 6 months.

Mode2

Long wins 83%

Short wins 66%

Total wins 75%

Portfolio more than tripled! Similar winning percentage. What is nice is that, as you can see, the first 3 months were favorable, as with Mode1 and the Contest version. But even in the last 3 months, Mode2 didn't lose money; in fact, even in this unfavorable period it managed to increase the portfolio!

Mode3

Long wins 98%

Short wins 86%

Total wins 93%

Hello?!? Only one losing trade out of 10!

Most trades end break-even, so the final result is not as astronomical as in Mode2 (a gain of some 20%), but with this win/loss ratio... wow, a few more tweaks...

Man, this looks good.

Btw, I also tried Fikko's settings for Mode1. He managed to find great settings for a time when the original settings don't work for Phoenix. But if you check the first 3 months, when Phoenix with the original settings is making a nice pile of pips, Fikko's settings underperform. So we have two modes, Original and Fikko's. Now all you need to do is switch between them at the right time and you will be rich

Mario

khm... automatic switch... am I asking for too much?!? lol lol

Files:
crown1.zip  63 kb
 

Great minds think alike! I just finished posting about how I want to eventually set up an automatic switching ability in Phoenix. That might be in Phoenix 7, and one idea for switching between settings is to put a pile of settings in multiple files, and then log the accuracy of each settings file in a separate file. By tracking some market indicators for trending, channel detection, volatility, etc., we should end up with a performance profile for each, and can automatically switch settings as necessary.
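The switching idea above could be sketched roughly as follows. This is only an illustration, not Phoenix code: the regime labels, indicator thresholds, and profile fields are all hypothetical stand-ins for whatever indicators and logged accuracy files would actually be used.

```python
# Hypothetical sketch of settings switching: keep several settings
# profiles, score each against the current market regime, and pick
# the best logged performer. All names here are illustrative.

def classify_regime(trending, volatility):
    """Crude regime label from two hypothetical indicator readings."""
    if trending > 0.5:
        return "trend"
    return "range" if volatility < 0.3 else "volatile"

def pick_settings(profiles, regime):
    """Return the profile with the best logged win rate for this regime."""
    return max(profiles, key=lambda p: p["win_rate"].get(regime, 0.0))

# Toy performance log for two settings files:
profiles = [
    {"name": "original", "win_rate": {"trend": 0.78, "range": 0.55}},
    {"name": "fikko",    "win_rate": {"trend": 0.60, "range": 0.75}},
]

regime = classify_regime(trending=0.2, volatility=0.1)
chosen = pick_settings(profiles, regime)
print(chosen["name"])  # fikko
```

The point of the design is that the performance log, not a human, decides which settings file is live at any moment.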

Another way to achieve a similar result is dynamic resetting of parameters. Easier in terms of overall management, harder in terms of logical understanding. We might learn lessons from the previous method to make this happen automatically.

A variation that combines both systems would be backtesting dozens or hundreds of settings using the GA backtest feature on the previous 5-21 days, with an offset of two days. Once settings are chosen from that window, they are applied to the next two days (yesterday and the day before). If market conditions remain the same, then we have a "best foot forward" on the current market. This removes the long-term learning, but if we store the results we can start to correlate the settings ourselves, and later use them as initial selections. Every night at 6-7pm EST we would load the new settings and use them for a day.
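The rolling-window schedule described above can be sketched with simple date arithmetic. The 21-day window and 2-day offset come from the text; the function name is just for illustration.

```python
# Sketch of the rolling-window schedule: optimize over a trailing
# window that ends two days ago, then trade the chosen settings
# going forward. Window length and offset are taken from the text.

from datetime import date, timedelta

def optimization_window(today, window_days=21, offset_days=2):
    """Return (start, end) of the backtest window ending offset_days ago."""
    end = today - timedelta(days=offset_days)
    start = end - timedelta(days=window_days)
    return start, end

start, end = optimization_window(date(2007, 1, 15))
print(start, end)  # 2006-12-23 2007-01-13
```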

The second method adds the most deductive intelligence, the first method adds the most trained intelligence. Once we figure out how to do these things well as humans, we can probably write code that does it faster and automatically in software.

 

I decided to publish some of my notes on each of the variables and some of the test data. Of course, you can skip learning and just load the new optimization set file, check all of the relevant boxes (everything in signal) and fire it up.

A strategy I intend to play with is testing the signals that pick trades separate from the signals that reduce drawdown. Pick one set of high performers with a lot of trades, and then filter it as a separate step. This will give more complete results, and may take a lot less CPU time.
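The two-step idea above (pick trades first, filter drawdown second) might look like this in outline. The field names and thresholds are assumptions for the sketch, not Phoenix values.

```python
# Sketch of the two-step strategy: first rank settings purely on
# trade selection, then apply drawdown filtering as a separate pass.
# Field names and cutoffs are illustrative assumptions.

def rank_by_profit(results, min_trades=30):
    """Step 1: keep high performers that have enough trades."""
    return [r for r in sorted(results, key=lambda r: -r["profit"])
            if r["trades"] >= min_trades]

def filter_by_drawdown(results, max_dd=0.25):
    """Step 2: discard settings whose drawdown exceeds the cap."""
    return [r for r in results if r["drawdown"] <= max_dd]

results = [
    {"profit": 900, "trades": 40, "drawdown": 0.40},
    {"profit": 600, "trades": 35, "drawdown": 0.15},
    {"profit": 700, "trades": 10, "drawdown": 0.10},
]
final = filter_by_drawdown(rank_by_profit(results))
print(final)  # only the 600-profit entry survives both passes
```

Splitting the passes means the expensive profit search never has to evaluate the drawdown signals, which is where the CPU savings would come from.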

I applied two filters in the 5.7.2 development tree to cut down on signals where fast and slow are reversed, or too close together. This eliminates USDJPY from our current settings, but it cuts the optimization time down to about 1/3 when optimizing only these two values. The settings for USDJPY aren't working well anyway, so rewriting them in the standard direction wouldn't be bad, and it would speed up our optimizations looking for new values.
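The two pruning filters mentioned could be expressed as a single validity check applied before backtesting a combination. The minimum-gap threshold here is an assumption; the actual 5.7.2 values aren't stated.

```python
# Minimal sketch of the pruning filters: discard parameter pairs
# where the fast period is not below the slow one, or where the
# two are too close together. min_gap is an assumed threshold.

def valid_pair(fast, slow, min_gap=3):
    """Keep only combinations with fast < slow and a minimum separation."""
    return fast < slow and (slow - fast) >= min_gap

candidates = [(5, 20), (20, 5), (10, 11), (8, 21)]
kept = [p for p in candidates if valid_pair(*p)]
print(kept)  # [(5, 20), (8, 21)]
```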

Of course, doing a full search on 1,069,807,835,545,473,000 values would take a while; doing them in smaller segments and disabling invalid settings helps a lot. Optimizing each signal independently only takes a few hours, but doesn't create great settings. A quick scan on a few settings detailed in the notes, followed by a closer look at specific regions, helps a lot. A GA search on clumps of settings also helps, and some currencies are noted to have an affinity for specific groups of settings. All in all, we can cull our search and cull our results so we're only looking for things that will benefit us without missing the bulk of what we want.
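The reason segmenting helps so much is combinatorial: a full grid is the product of every parameter's step count, while per-signal optimization only costs the sum. The step counts below are made up purely to show the gap in scale.

```python
# Back-of-the-envelope illustration of why segmenting the search helps.
# The per-parameter step counts are hypothetical; the point is that a
# full grid is the PRODUCT of all ranges, while optimizing each
# parameter independently only costs the SUM.

from math import prod

steps_per_param = [50, 50, 30, 30, 20, 20, 10, 10]  # hypothetical

full_grid = prod(steps_per_param)     # every combination
independent = sum(steps_per_param)    # one sweep per parameter
print(full_grid, independent)  # 90000000000 220
```

This is also why independent sweeps are fast but produce mediocre settings: they ignore all interactions between parameters, which the GA and the regional scans are meant to recover.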

I will come back and write more about how to do each search in phases, combined with the techniques I mentioned in the settings thread. If anyone wants to add to my very basic notes, I'd love to see it.

 

Questions :

daraknor:
Of course, doing a full search on 1,069,807,835,545,473,000 values would take a while; doing them in smaller segments and disabling invalid settings helps a lot. Optimizing each signal independently only takes a few hours, but doesn't create great settings. A quick scan on a few settings detailed in the notes, followed by a closer look at specific regions, helps a lot. A GA search on clumps of settings also helps, and some currencies are noted to have an affinity for specific groups of settings. All in all, we can cull our search and cull our results so we're only looking for things that will benefit us without missing the bulk of what we want.

Should we wait for 5.7.2 to start? Should you or another collaborator assign separate tasks to different "willing" collaborators and split the work? Would you or that collaborator be able to retrieve and analyze the results and reassign work, etc.?

I am not sure I understand everything about your strategy and suggestions, but I know I am available 100%. To help, I need guidance and some instructions on how to do it.

 

Well, I just did a chunk of work, but I'm not sure it is right or good. Some people may have a ton more experience with these indicators used in this way, and may be able to offer insight into the parameters that I didn't get. Some people have also spent more time trying to optimize Phoenix than I have, and may have insights as well. If someone posts my notes with a list of corrections, I will be filled with joy.

Right now I don't really have a method to follow, because I have spent more time trying to get a good head start.

A simple method might be:

Phase 1: fire up the master optimized settings file on a currency other than USDJPY and check all of the boxes for signal, TP/SL, and consecutive signals. (I have sample output in the settings thread.) Run that on Nov 18 to Dec 30.

Phase 2: Take your top 20-50 results and run them on Dec 30 to current. (Don't reoptimize, just get results.) If the settings look mostly the same, then you should probably increase the variety of results you check.

Phase 3: Take all of the decently profitable settings and run them on 6+ weeks of data, Oct 7-Nov 18. These top results are likely to be profitable longer term, and not just during the time frame tested.

I am still curious about a few things and may refine the method a little more, but that is a start. Optimization is only done in Phase 1; a Phase 4 may add minor local optimization. This may not be necessary depending on how the GA system in MT4 works.
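The three phases above can be sketched as a pipeline. The `optimize` and `backtest` callables stand in for MT4's strategy tester and are placeholders, not real APIs; the date ranges are the ones from the phases.

```python
# Sketch of the three-phase method as a pipeline. `optimize` and
# `backtest` are placeholder callables standing in for MT4's
# strategy tester, not real APIs.

def phase1(optimize, period):
    """Broad optimization on the recent period; returns ranked settings."""
    return optimize(period)

def phase2(backtest, settings_list, period, top_n=50):
    """Forward-run the top results on newer data without reoptimizing."""
    results = [(s, backtest(s, period)) for s in settings_list[:top_n]]
    return [s for s, profit in results if profit > 0]

def phase3(backtest, settings_list, period):
    """Keep only settings that also survive the older 6+ week period."""
    return [s for s in settings_list if backtest(s, period) > 0]

# Toy stand-ins to show the flow:
profits = {"a": 100, "b": -20, "c": 5}
ranked = phase1(lambda p: ["a", "b", "c"], "Nov 18 - Dec 30")
survivors = phase2(lambda s, p: profits[s], ranked, "Dec 30 - current")
robust = phase3(lambda s, p: profits[s], survivors, "Oct 7 - Nov 18")
print(robust)  # ['a', 'c']
```

Each phase only narrows the candidate set; no reoptimization happens after Phase 1, which matches the method as described.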