MQL's OOP notes: the Optimystic library for on-the-fly optimization of EAs meets the fast particle swarm algorithm

23 November 2016, 15:14
Stanislav Korotky
In the previous parts we developed the Optimystic library for self-optimization of EAs on the fly (part 1, part 2) and a particle swarm algorithm (part 3), which is a more efficient optimization method than the brute force approach initially used in the library.

The next step is to embed the particle swarm into the library. Let's go back to Optimystic.mqh.

Since we're going to add a new optimization method to the library, and users are supposed to choose between the two, we need to define a new enumeration.

enum OPTIMIZATION_METHOD
{
  METHOD_BRUTE_FORCE,    // brute force
  METHOD_PARTICLE_SWARM  // particle swarm
};

And we need a method for selecting one of them.

class Optimystic
{
  public:
    virtual void setMethod(OPTIMIZATION_METHOD method) = 0;

That's it! We're done with the header file. Let's move on to the implementation in Optimystic.mq4.

First, we include the header file of the particle swarm algorithm.

#include <ParticleSwarm.mqh>

And add new declarations to the class.

class OptimysticImplementation: public Optimystic
{
  private:
    OPTIMIZATION_METHOD method;
    Swarm *swarm;

  public:
    virtual void setMethod(OPTIMIZATION_METHOD m)
    {
      method = m;
    }

The class should switch the optimization approach according to the selected method, so onTick should be updated.

    virtual void onTick()
    {
      static datetime lastBar;
      if(enabled)
      {
        if(lastBar != Time[0] && TimeCurrent() > lastOptimization + PeriodSeconds() * resetInBars)
        {
          callback.setContext(context); // virtual trading
          if(method == METHOD_PARTICLE_SWARM)
          {
            optimizeByPSO();
          }
          else
          {
            optimize();
          }
        }
      }
      callback.setContext(NULL); // real trading
      callback.trade(0, Ask, Bid);
      lastBar = Time[0];
    }

Here we call a new procedure, optimizeByPSO. Let's define it, using the already existing optimize as a draft.

    bool optimizeByPSO()
    {
      if(!callback.onBeforeOptimization()) return false;
      Print("Starting on-the-fly optimization at ", TimeToString(Time[rangeInBars]));
      
      uint trap = GetTickCount();

We record the start time to compare the performance of the new and old algorithms later.

      isOptimizationOn = true;
      if(spread == 0) spread = (int)MarketInfo(Symbol(), MODE_SPREAD);
      best = 0;
      int count = resetParameters();

We modified the function resetParameters to return the number of active parameters (as you remember, specific parameters can be disabled by calling setEnabled). Normally all parameters are enabled, so count will equal the size of the parameters array.

      int n = parameters.size();

The reason we bother with this is that we need to pass the working ranges of the parameter values to the swarm in the next lines.

      double max[], min[], step[];
      ArrayResize(max, count);
      ArrayResize(min, count);
      ArrayResize(step, count);
      for(int i = 0, j = 0; i < n; i++)
      {
        if(parameters[i].getEnabled())
        {
          max[j] = parameters[i].getMax();
          min[j] = parameters[i].getMin();
          step[j] = parameters[i].getStep();
          j++;
        }
      }
      swarm = new Swarm(count, max, min, step);

And here the problems begin. The next thing we should write is something like this:

      swarm.optimize(..., 100);

But we need a functor in place of the ellipsis. The first thought you may have is to derive the class OptimysticImplementation from the swarm's Functor interface, but this is not possible: the class is already derived from Optimystic, and MQL does not allow multiple inheritance. This is a classical dilemma in many OOP languages, because multiple inheritance is a double-edged sword.

C++ supports not only multiple inheritance but also virtual inheritance, which helps to solve the well-known diamond problem. Java does not allow multiple inheritance (a class can extend only one parent class), but allows multiple interfaces (a class can implement an arbitrary set of interfaces) as a workaround. This alleviates the problem in many cases.

When multiple inheritance is not supported, one can apply composition instead. This is a widely used technique.

In our case we need to create a class derived from Functor and include it into OptimysticImplementation as a member variable.

class SwarmFunctor: public Functor
{
  private:
    OptimysticImplementation *parent;
    
  public:
    SwarmFunctor(OptimysticImplementation *_parent)
    {
      parent = _parent;
    }
    
    virtual double calculate(const double &vector[])
    {
      return parent.calculatePS(vector);
    }
};

This class does a minimal job. Its only purpose is to accept calls from the swarm and redirect them to OptimysticImplementation, where we'll add a new function, calculatePS. It is done this way because only OptimysticImplementation holds all the information required to test the EA. We'll consider calculatePS in a few minutes, but first let's finish the composition.

Add to OptimysticImplementation:

class OptimysticImplementation: public Optimystic
{
  private:
    SwarmFunctor *functor;

And now we can return to the method optimizeByPSO, which we left halfway done. Let's create an instance of SwarmFunctor and pass it to the swarm.

      functor = new SwarmFunctor(&this);
    
      swarm.optimize(functor, 100); // TODO: 100 cycles are hardcoded - this should be adjusted
      double result[];
      bool ok = swarm.getSolution(result);
      if(ok)
      {
        for(int i = 0, j = 0; i < n; i++)
        {
          if(parameters[i].getEnabled())
          {
            parameters[i].setValue(result[j]);
            parameters[i].markAsBest();
            j++;
          }
        }
      }

      delete functor;    
      delete swarm;
      
      isOptimizationOn = false;
      callback.onAfterOptimization();
      
      Print("Done in ", DoubleToString((GetTickCount() - trap) * 1.0 / 1000, 3), " seconds");
      
      lastOptimization = TimeCurrent();
      
      return ok;
    }

When the library invokes optimizeByPSO from onTick, we create the Swarm object and a SwarmFunctor object holding a back reference to its owner (the OptimysticImplementation object), then execute Swarm's optimize, passing the SwarmFunctor as a parameter. The swarm calls SwarmFunctor's calculate with the current parameters, and the latter in turn calls our OptimysticImplementation object to test the EA with the given parameters. One picture is worth a thousand words.

Sequence diagram of EA using Optimystic powered by Particle Swarm

So we need to add calculatePS to the implementation. Most of its lines are taken from the method optimize used for straightforward optimization.

    // particle swarm worker method
    double calculatePS(const double &vector[])
    {
      int n = parameters.size();
      for(int i = 0, j = 0; i < n; i++)
      {
        if(parameters[i].getEnabled())
        {
          parameters[i].setValue(vector[j++]);
        }
      }

      if(!callback.onApplyParameters()) return 0;
        
      context.initialize(deposit, spread, commission);
      // build balance and equity curves on the fly
      for(int i = rangeInBars; i >= forwardBars; i--)
      {
        context.setBar(i);
        callback.trade(i, iOpen(Symbol(), Period(), i) + spread * Point, iOpen(Symbol(), Period(), i));
        context.activatePendingOrdersAndStops();
        context.calculateEquity();
      }
      context.closeAll();

      // calculate estimators
      context.calculateStats();
        
      double gain;
      if(estimator == ESTIMATOR_CUSTOM)
      {
        gain = callback.onCustomEstimate();
        if(gain == EMPTY_VALUE)
        {
          return 0;
        }
      }
      else
      {
        gain = context.getPerformance(estimator);
      }
        
      // if estimator is better than previous, then save current parameters
      if(gain > best && context.getPerformance(ESTIMATOR_TRADES_COUNT) > 0)
      {
        best = gain;
        bestStats = context.getStatistics();
      }
      callback.onReadyParameters();
      return gain;
    }

Now we're finally done with embedding particle swarm into Optimystic. The complete source code of the updated library is provided below.

Let's compare the performance of optimization using our example EA based on MACD. With 3 parameters being optimized, the process runs 10 times faster thanks to the particle swarm.

[brute force log]
Starting on-the-fly optimization at 2016.09.30 04:00
1000 optimization passes completed, 3.291 seconds

[particle swarm log]
Starting on-the-fly optimization at 2016.09.30 04:00
PSO[3] created: 15/3
Variants: 1000, Scheduled count: 16
PSO Processing...
PSO Finished 139 of 240 planned passes: true
Done in 0.312 seconds

You can see that instead of the full set of 1000 passes, the particle swarm invoked the EA only 139 times. The resulting parameter sets of the two methods may differ, of course, but this is also the case when you use the built-in genetic optimization.

Files:
Optimystic.mqh  12 kb
Optimystic.mq4  33 kb
macd.mq4  9 kb