Archery Algorithm (AA)

Andrey Dik

Contents

  1. Introduction
  2. Implementation of the algorithm
  3. Test results


Introduction

In a world where tasks grow ever more complex and resources ever more scarce, optimization becomes not just a necessity but a genuine algorithmic art. How do we find the best solution among the many possible ones? How do we minimize costs, increase efficiency and achieve maximum profit? These questions concern specialists in a wide range of fields, from economics to engineering, from social systems to ecology. Before solving an optimization problem, it is important to model it properly, identifying the key variables and the mathematical relationships that adequately reproduce reality. Optimization is widely used in finance and trading, helping not only to develop new investment strategies but also to improve existing ones. Despite the universality of these approaches, optimization methods can be loosely divided into two categories: deterministic and stochastic.

Deterministic methods, such as gradient descent, offer rigorous and predictable solutions by using mathematical derivatives to find the optimum, which lets them model many scenarios effectively. However, as soon as problems become nonlinear or multivariate, their effectiveness can drop dramatically. In such cases, stochastic methods come to the rescue: relying on random processes, they can find acceptable solutions under complex conditions, which makes them especially useful in volatile markets.

Combinations of deterministic and stochastic methods play a key role in modern approaches to optimization. By combining the two, analysts can build more flexible and adaptive models that account for both stable and changing conditions. This makes it possible not only to improve the quality of forecasts, but also to minimize risks, which is critical for successful investment management.

In this article, I will present a new approach to solving optimization problems: the Archery Algorithm (AA). The algorithm was developed by Fatemeh Ahmadi Zeidabadi and colleagues and published in February 2022. Inspired by archery, this method generates quasi-optimal solutions by updating the position of population members in the search space based on a randomly selected element. We will test the efficiency of AA on standard objective functions and compare the results with algorithms already known to us. By diving into the details, we will see how this approach can change the way we think about optimization and open up new horizons for solving complex problems.


Implementation of the algorithm

The Archery Algorithm (AA) is a completely new stochastic optimization method designed to find optimal solutions to optimization problems and inspired by the behavior of an archer aiming at a target. AA simulates the process of shooting arrows at a target. Each member of the population represents a potential solution to the optimization problem, and their positions in the search space are updated based on the performance of a randomly selected "target" member, similar to how archers adjust their aim depending on where they want to hit.

The population is represented as a matrix, where each row corresponds to a member (solution) and each column corresponds to a dimension of the problem. This allows for structured evaluation and updating of solutions based on their objective function values. The performance of each member is evaluated using an objective function that quantifies how good a found solution is. The results are stored in a vector, which allows the algorithm to compare the efficiency of different solutions.
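
Formally (a reconstruction from this description; the paper's own notation may differ), the population and its evaluation can be written as

$$X = \begin{bmatrix} x_{1,1} & \cdots & x_{1,m} \\ \vdots & \ddots & \vdots \\ x_{N,1} & \cdots & x_{N,m} \end{bmatrix}, \qquad F = \begin{bmatrix} F(x_1) \\ \vdots \\ F(x_N) \end{bmatrix},$$

where N is the population size, m is the number of problem dimensions, row i holds the coordinates of the i-th archer, and F(x_i) is its objective value.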

The target is divided into sections with their width corresponding to the productivity of the population members. A probability function is calculated to determine the probability of each member being selected based on its objective function value, with more efficient archers having a higher probability of being selected. A member of the population is randomly selected based on the cumulative probability, simulating an archer's target selection. This choice affects how other members' positions are updated. The algorithm updates the position of each archer in the search space using certain equations. The update depends on whether the selected archer has a better or worse objective function value than the current one. This process involves randomness to explore the search space. AA works iteratively, updating the population until a stop condition (maximum number of iterations) is reached. During this process, the algorithm keeps track of the best solution found.

The original description of AA above presents the population as a matrix and its members as vectors. However, it does not specify any operations that are unique to matrices. In practice, the algorithm performs the standard actions on search agents seen in most of the previously discussed algorithms.

It is also worth noting that the phrase "the target is divided into sections with their width corresponding to the productivity of the population members" implies the use of a roulette method for selection. In this case, the probability of choosing a sector is proportional to its width.
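
As a minimal sketch of that roulette selection (mirroring the Moving method shown later; u is the author's utility object, and the fitness values are shifted by the worst one so that all sector widths are non-negative):

double P [], C [];               // sector widths (probabilities) and their cumulative sums
ArrayResize (P, popSize);
ArrayResize (C, popSize);

double fWorst = DBL_MAX;         // worst (smallest) fitness in the population
for (int i = 0; i < popSize; i++) if (a [i].f < fWorst) fWorst = a [i].f;

double sum = 0.0;
for (int i = 0; i < popSize; i++) { P [i] = a [i].f - fWorst; sum += P [i]; }

for (int i = 0; i < popSize; i++)
{
  P [i] /= sum;                                  // normalize so the widths sum to 1
  C [i] = (i == 0) ? P [i] : C [i - 1] + P [i];  // cumulative probability
}

// "Spin the wheel": the first sector whose cumulative bound reaches r is selected
int    k = 0;
double r = u.RNDprobab ();                       // uniform random number in [0, 1]
while (k < popSize - 1 && C [k] < r) k++;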

In this way, complex formulations covering many concepts can be restated far more simply, which makes the idea easier to implement.

So, the archery algorithm is a population-based optimization method that uses the principles of target shooting to guide the search for optimal solutions. It uses randomness drawn from a normal distribution to explore and exploit the search space. Key components of the algorithm:

1. Population of agents (archers)
2. Vector of probabilities and cumulative probabilities
3. Inheritance mechanism (not present in the original version)
4. Position update mechanism
5. Training intensity parameter (I)

First, let's present the pseudocode of the algorithm:

Initialization:
    Create a population of popSize agents
    For each agent:
        Initialize a random position within the search range
        Initialize previous position and fitness

Main loop:
    Until the stop condition is reached:
        For each agent i in the population:
            Calculate the vector of probabilities P and cumulative probabilities C

            For each coordinate c:
                Select archer k using the cumulative probability

                If (random_number < inheritance_probability):
                    new_position [c] = k_archer_position [c]
                Otherwise:
                    I = rounding (1 + random_number_from_0_to_1)  // training intensity parameter
                    random_gaussian = generate_gaussian_number (mean = 0, min = -1, max = 1)
                    
                    If (k_archer_fitness > i_agent_fitness):
                        new_position [c] = previous_position [c] + random_gaussian * (k_archer_position [c] - I * previous_position [c])
                    Otherwise:
                        new_position [c] = previous_position [c] + random_gaussian * (previous_position [c] - I * k_archer_position [c])
                
                Limit new_position [c] within the search range
            
            Update the position of agent i
        
        Evaluate the fitness of all agents
        Update the best global solution
        For each agent:
            If the new fitness is better than the previous one:
                Update the previous position and fitness

Return the best solution found

Implementation features in the code:

1. The algorithm uses a probabilistic approach to select archers to learn from.
2. The inheritance mechanism allows agents to directly copy the positions of successful archers with some probability.
3. When updating positions, a Gaussian distribution is used to introduce randomness into the archers' learning process.
4. The algorithm stores the previous best positions of the agents, which allows it to have some "memory" of good decisions.
5. The implementation will include a mechanism to limit new positions within the allowed search range.
6. The training intensity parameter (I) described by the authors will be used to regulate the degree of influence of the current position on the new one.

The I parameter (training intensity) is a random variable that takes the value 1 or 2. It is defined as I = round (1 + r), where r is a random number uniformly distributed in [0, 1]. This means that I equals 1 with probability 0.5 and 2 with the same probability. The role of the I parameter in the algorithm:

1. When I = 1, the algorithm makes smaller position adjustments.
2. When I = 2, the algorithm can make more drastic changes in position.
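
In code this reduces to a single line (a sketch; u.RNDprobab () is the uniform [0,1] generator used throughout the listings below):

// MathRound (1 + r) with r uniform in [0,1] gives 1 for r < 0.5 and 2 otherwise,
// so both intensity values are equally likely
double I = MathRound (1.0 + u.RNDprobab ());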

Let's move on to the algorithm code and describe the "archer" structure, S_AA_Agent. It represents an agent in the optimization algorithm, holding a set of coordinates in the solution space and information about its efficiency in terms of the fitness function.

  • cPrev [] - the array stores the agent's previous coordinates.
  • fPrev - the variable stores the agent's previous fitness value.

The Init method prepares the agent for work by setting the initial values of its coordinates and fitness. The fPrev value is set to the minimum possible value for the double type, because the fitness has not yet been calculated.

//——————————————————————————————————————————————————————————————————————————————
struct S_AA_Agent
{
    double cPrev []; // previous coordinates
    double fPrev;    // previous fitness

    void Init (int coords)
    {
      ArrayResize (cPrev, coords);
      fPrev = -DBL_MAX;
    }
};
//——————————————————————————————————————————————————————————————————————————————

Let's take a look at the C_AO_AAm class, which implements the algorithm itself and is inherited from the C_AO class. 

  • popSize - population size.
  • inhProbab - probability of inheriting a feature from another archer.

Then the params array is initialized with a size of 2 to store the algorithm parameters: the population size and the inheritance probability.

  • SetParams - the method sets the parameters based on the values stored in the params array. It extracts the values for popSize and inhProbab, converting them to the appropriate types.
  • Init - the method initializes the algorithm by accepting the minimum and maximum search boundaries, the search step, and the number of epochs.
  • Moving and Revision - the methods are responsible for the logic of moving agents in the solution space and their revision (updating). 

S_AA_Agent agent [] - the array of agents that will be used to perform the optimization.

The C_AO_AAm class implements the optimization algorithm, while SetParams, Init, Moving and Revision manage the configuration and behavior of the algorithm during its operation. 

//——————————————————————————————————————————————————————————————————————————————
class C_AO_AAm : public C_AO
{
  public: //--------------------------------------------------------------------
  ~C_AO_AAm () { }
  C_AO_AAm ()
  {
    ao_name = "AAm";
    ao_desc = "Archery Algorithm M";
    ao_link = "https://www.mql5.com/en/articles/15782";

    popSize   = 50;    // population size
    inhProbab = 0.3;

    ArrayResize (params, 2);

    params [0].name = "popSize";   params [0].val = popSize;
    params [1].name = "inhProbab"; params [1].val = inhProbab;
  }

  void SetParams ()
  {
    popSize   = (int)params [0].val;
    inhProbab = params      [1].val;
  }

  bool Init (const double &rangeMinP  [], // minimum search range
             const double &rangeMaxP  [], // maximum search range
             const double &rangeStepP [], // step search
             const int     epochsP = 0);  // number of epochs

  void Moving ();
  void Revision ();

  //----------------------------------------------------------------------------
  double  inhProbab; //probability of inheritance

  S_AA_Agent agent [];

  private: //-------------------------------------------------------------------
};
//——————————————————————————————————————————————————————————————————————————————

The Init method in the C_AO_AAm class is responsible for initializing the optimization algorithm. It takes four parameters: the arrays of minimum and maximum search bounds, the search step, and the number of epochs, which is zero by default.

  • If the standard initialization is successful, the method resizes the agent array to match the specified popSize population size. This creates the required number of agents to be used in the algorithm.
  • In the for loop, each agent from the array is initialized using the Init method, which specifies the initial coordinates for each agent.

At the end, the method returns true, indicating successful completion of the algorithm initialization. Thus, the Init method ensures that the algorithm is prepared for operation by setting up the necessary parameters and creating agents that will participate in the optimization.

//——————————————————————————————————————————————————————————————————————————————
bool C_AO_AAm::Init (const double &rangeMinP  [],
                     const double &rangeMaxP  [],
                     const double &rangeStepP [],
                     const int     epochsP = 0)
{
  if (!StandardInit (rangeMinP, rangeMaxP, rangeStepP)) return false;

  //----------------------------------------------------------------------------
  ArrayResize (agent, popSize);
  for (int i = 0; i < popSize; i++) agent [i].Init (coords);

  return true;
}
//——————————————————————————————————————————————————————————————————————————————

The Moving method in the C_AO_AAm class is responsible for moving agents in the solution space based on their current positions and the values of the function they are optimizing. Let's break it down into parts:

  • If the method is called for the first time (revision is equal to false), then a random value is initialized within the specified boundaries of rangeMin and rangeMax for each agent and each coordinate.
  • Then, this value is adjusted using the SeInDiSp method, which ensures that the value matches the specified step.

After that, the revision flag is set to true and the method completes its work.

  • Two arrays are created next: P for probabilities and C for cumulative probabilities.
  • The worst function value F_worst is found; it is used to normalize the agents' fitness values.
  • The probabilities for each agent are then calculated and normalized so that they sum to 1.
  • C cumulative probabilities are calculated based on P probabilities.
  • For each agent and each coordinate, a partner archer (an agent) is selected based on cumulative probability.
  • If the random value is less than the specified inhProbab inheritance probability, the agent accepts the coordinate of the selected agent (ensuring the inheritance of features with a given probability).
  • Otherwise, the agent updates its position based on an equation that takes into account its previous position, the random value, and the position of the partner archer.
  • Finally, the new coordinate value is also adjusted using the SeInDiSp method.

The Moving method implements the movement of agents in the solution space taking into account their current positions and function values and uses probabilistic methods to select directions of movement and update positions.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_AAm::Moving ()
{
  //----------------------------------------------------------------------------
  if (!revision)
  {
    for (int i = 0; i < popSize; i++)
    {
      for (int c = 0; c < coords; c++)
      {
        a [i].c [c] = u.RNDfromCI (rangeMin [c], rangeMax [c]);
        a [i].c [c] = u.SeInDiSp (a [i].c [c], rangeMin [c], rangeMax [c], rangeStep [c]);
      }
    }
    revision = true;
    return;
  }

  //-------------------------------------------------------------------------
  // Calculate probability vector P and cumulative probability C
  double P [], C [];
  ArrayResize (P, popSize);
  ArrayResize (C, popSize);
  double F_worst = DBL_MAX;
  double sum = 0;

  for (int i = 0; i < popSize; i++)
  {
    if (a [i].f < F_worst) F_worst = a [i].f;
  }

  for (int i = 0; i < popSize; i++)
  {
    P [i] =  a [i].f - F_worst;
    sum += P [i];
  }

  for (int i = 0; i < popSize; i++)
  {
    P [i] /= sum;
    C [i] = (i == 0) ? P [i] : C [i - 1] + P [i];
  }

  double x;

  for (int i = 0; i < popSize; i++)
  {
    for (int c = 0; c < coords; c++)
    {
      // Select archer (k) using cumulative probability
      int k = 0;
      double r = u.RNDprobab ();
      while (k < popSize - 1 && C [k] < r) k++;

      if (u.RNDbool () < inhProbab)
      {
        x = a [k].c [c];
      }
      else
      {
        // Update position using Eq. (5) and (6)
        double I   = MathRound (1 + u.RNDprobab ());
        double rnd = u.GaussDistribution (0, -1, 1, 8);

        if (a [k].f > a [i].f)
        {
          x = agent [i].cPrev [c] + rnd * (a [k].c [c] - I * agent [i].cPrev [c]);
        }
        else
        {
          x = agent [i].cPrev [c] + rnd * (agent [i].cPrev [c] - I * a [k].c [c]);
        }
      }

      a [i].c [c] = u.SeInDiSp (x, rangeMin [c], rangeMax [c], rangeStep [c]);
    }
  }
}
//——————————————————————————————————————————————————————————————————————————————
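
A note on the random factor: u.GaussDistribution (0, -1, 1, 8) draws a normally distributed value centered at 0 and bounded to [-1, 1]; the exact implementation lives in the author's utility class in the attached archive. As a rough stand-in for readers without the archive, a clamped Box-Muller draw behaves similarly (a hypothetical helper, not the library code):

// Hypothetical substitute for a bounded Gaussian draw, NOT the library implementation
double GaussInRange (const double mean, const double lo, const double hi)
{
  double u1 = (MathRand () + 1.0) / 32768.0;   // uniform (0, 1], avoids log (0)
  double u2 = (MathRand () + 1.0) / 32768.0;
  double n  = MathSqrt (-2.0 * MathLog (u1)) * MathCos (2.0 * M_PI * u2); // standard normal
  double x  = mean + n * (hi - lo) / 8.0;      // narrow sigma, so clipping is rare
  return MathMax (lo, MathMin (hi, x));        // clamp to the requested range
}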

The Revision method in the C_AO_AAm class is responsible for updating information about the best agents in the population. The method does the following:

  • The ind variable is initialized to -1. It will store the index of the agent with the best function value.
  • The for loop iterates through all popSize agents, and if the current agent's function value a [i].f exceeds the current best value fB:
    • fB is updated to the new better value a [i].f.
    • the index of this agent is stored in the ind variable.

After the loop, if ind is not -1, the ArrayCopy function copies the coordinates of the best agent from the a array to the cB array. The second for loop also passes through all agents in the population:

  • If the current agent's function value a [i].f exceeds its previous fitness value agent [i].fPrev:
    • the agent's fPrev value is updated.
    • the agent's current coordinates are copied into the cPrev array using ArrayCopy.

The Revision method serves to update information about the best global solution, as well as to update the best positions of agents.

//——————————————————————————————————————————————————————————————————————————————
void C_AO_AAm::Revision ()
{
  //----------------------------------------------------------------------------
  int ind = -1;

  for (int i = 0; i < popSize; i++)
  {
    if (a [i].f > fB)
    {
      fB = a [i].f;
      ind = i;
    }
  }

  if (ind != -1) ArrayCopy (cB, a [ind].c, 0, 0, WHOLE_ARRAY);

  //----------------------------------------------------------------------------
  for (int i = 0; i < popSize; i++)
  {
    if (a [i].f > agent [i].fPrev)
    {
      agent [i].fPrev = a [i].f;
      ArrayCopy (agent [i].cPrev, a [i].c, 0, 0, WHOLE_ARRAY);
    }
  }
}
//——————————————————————————————————————————————————————————————————————————————
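
To put the three methods together, a driver loop of the following shape can be used (a sketch under the assumption that a [], popSize and fB are accessible as in the listings above, and that Fitness is a user-defined objective; the actual test bench ships in the article's archive):

// Hypothetical usage: alternate Moving/Revision around the fitness evaluation
C_AO_AAm ao;
ao.SetParams ();                        // apply popSize and inhProbab from params []
ao.Init (rangeMin, rangeMax, rangeStep);

for (int epoch = 0; epoch < epochs; epoch++)
{
  ao.Moving ();                         // propose new coordinates for all agents
  for (int i = 0; i < ao.popSize; i++)
    ao.a [i].f = Fitness (ao.a [i].c);  // user-defined objective (maximized)
  ao.Revision ();                       // refresh fB, cB and the agents' memory
}
Print ("best result: ", ao.fB);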


Test results

I have slightly modified the algorithm. The original algorithm does not provide for direct exchange of information between archers; the exchange occurs only indirectly, through the interaction of coordinates via the normal distribution. So I thought it necessary to add a direct exchange of such information. For this purpose, I added the additional inhProbab parameter responsible for implementing such an exchange with the given probability.

if (u.RNDbool () < inhProbab)
{
  x = a [k].c [c];
}

The results presented below correspond to the original version of the algorithm as intended by the authors.

AA|Archery Algorithm|50.0|
=============================
5 Hilly's; Func runs: 10000; result: 0.6699547926310098
25 Hilly's; Func runs: 10000; result: 0.37356238340164605
500 Hilly's; Func runs: 10000; result: 0.257542163368952
=============================
5 Forest's; Func runs: 10000; result: 0.38166669771790607
25 Forest's; Func runs: 10000; result: 0.199300365268835
500 Forest's; Func runs: 10000; result: 0.15337954055780398
=============================
5 Megacity's; Func runs: 10000; result: 0.4076923076923077
25 Megacity's; Func runs: 10000; result: 0.17907692307692308
500 Megacity's; Func runs: 10000; result: 0.10004615384615476
=============================
All score: 2.72222 (30.25%)

The algorithm scores 30.25% in the test, while my modification improves the result by more than 13 percentage points. Below are the results of the modified version:

AAm|Archery Algorithm M|50.0|0.3|
=============================
5 Hilly's; Func runs: 10000; result: 0.9353194829441194
25 Hilly's; Func runs: 10000; result: 0.6798262991897616
500 Hilly's; Func runs: 10000; result: 0.2596620178276653
=============================
5 Forest's; Func runs: 10000; result: 0.5735062785421186
25 Forest's; Func runs: 10000; result: 0.22007188891556378
500 Forest's; Func runs: 10000; result: 0.1486980566819649
=============================
5 Megacity's; Func runs: 10000; result: 0.6307692307692309
25 Megacity's; Func runs: 10000; result: 0.344
500 Megacity's; Func runs: 10000; result: 0.10193846153846249
=============================
All score: 3.89379 (43.26%)

So, I have selected the modified version of the algorithm and added it to the rating table. Below we can see the algorithm visualization. I think it is quite good. There is, of course, some spread in the results, but it is not critical and occurs only on functions with a small number of coordinates.

AAm on the Hilly test function

AAm on the Forest test function

AAm on the Megacity test function

According to the test results, the modified version of the algorithm takes 26th place.

# | AO | Description | Hilly 10p(5F) | Hilly 50p(25F) | Hilly 1000p(500F) | Hilly final | Forest 10p(5F) | Forest 50p(25F) | Forest 1000p(500F) | Forest final | Megacity (discrete) 10p(5F) | Megacity 50p(25F) | Megacity 1000p(500F) | Megacity final | Final result | % of MAX
1 | ANS | across neighbourhood search | 0.94948 | 0.84776 | 0.43857 | 2.23581 | 1.00000 | 0.92334 | 0.39988 | 2.32323 | 0.70923 | 0.63477 | 0.23091 | 1.57491 | 6.134 | 68.15
2 | CLA | code lock algorithm | 0.95345 | 0.87107 | 0.37590 | 2.20042 | 0.98942 | 0.91709 | 0.31642 | 2.22294 | 0.79692 | 0.69385 | 0.19303 | 1.68380 | 6.107 | 67.86
3 | AMOm | animal migration optimization M | 0.90358 | 0.84317 | 0.46284 | 2.20959 | 0.99001 | 0.92436 | 0.46598 | 2.38034 | 0.56769 | 0.59132 | 0.23773 | 1.39675 | 5.987 | 66.52
4 | (P+O)ES | (P+O) evolution strategies | 0.92256 | 0.88101 | 0.40021 | 2.20379 | 0.97750 | 0.87490 | 0.31945 | 2.17185 | 0.67385 | 0.62985 | 0.18634 | 1.49003 | 5.866 | 65.17
5 | CTA | comet tail algorithm | 0.95346 | 0.86319 | 0.27770 | 2.09435 | 0.99794 | 0.85740 | 0.33949 | 2.19484 | 0.88769 | 0.56431 | 0.10512 | 1.55712 | 5.846 | 64.96
6 | SDSm | stochastic diffusion search M | 0.93066 | 0.85445 | 0.39476 | 2.17988 | 0.99983 | 0.89244 | 0.19619 | 2.08846 | 0.72333 | 0.61100 | 0.10670 | 1.44103 | 5.709 | 63.44
7 | ESG | evolution of social groups | 0.99906 | 0.79654 | 0.35056 | 2.14616 | 1.00000 | 0.82863 | 0.13102 | 1.95965 | 0.82333 | 0.55300 | 0.04725 | 1.42358 | 5.529 | 61.44
8 | SIA | simulated isotropic annealing | 0.95784 | 0.84264 | 0.41465 | 2.21513 | 0.98239 | 0.79586 | 0.20507 | 1.98332 | 0.68667 | 0.49300 | 0.09053 | 1.27020 | 5.469 | 60.76
9 | ACS | artificial cooperative search | 0.75547 | 0.74744 | 0.30407 | 1.80698 | 1.00000 | 0.88861 | 0.22413 | 2.11274 | 0.69077 | 0.48185 | 0.13322 | 1.30583 | 5.226 | 58.06
10 | ASO | anarchy society optimization | 0.84872 | 0.74646 | 0.31465 | 1.90983 | 0.96148 | 0.79150 | 0.23803 | 1.99101 | 0.57077 | 0.54062 | 0.16614 | 1.27752 | 5.178 | 57.54
11 | TSEA | turtle shell evolution algorithm | 0.96798 | 0.64480 | 0.29672 | 1.90949 | 0.99449 | 0.61981 | 0.22708 | 1.84139 | 0.69077 | 0.42646 | 0.13598 | 1.25322 | 5.004 | 55.60
12 | DE | differential evolution | 0.95044 | 0.61674 | 0.30308 | 1.87026 | 0.95317 | 0.78896 | 0.16652 | 1.90865 | 0.78667 | 0.36033 | 0.02953 | 1.17653 | 4.955 | 55.06
13 | CRO | chemical reaction optimization | 0.94629 | 0.66112 | 0.29853 | 1.90593 | 0.87906 | 0.58422 | 0.21146 | 1.67473 | 0.75846 | 0.42646 | 0.12686 | 1.31178 | 4.892 | 54.36
14 | BSA | bird swarm algorithm | 0.89306 | 0.64900 | 0.26250 | 1.80455 | 0.92420 | 0.71121 | 0.24939 | 1.88479 | 0.69385 | 0.32615 | 0.10012 | 1.12012 | 4.809 | 53.44
15 | HS | harmony search | 0.86509 | 0.68782 | 0.32527 | 1.87818 | 0.99999 | 0.68002 | 0.09590 | 1.77592 | 0.62000 | 0.42267 | 0.05458 | 1.09725 | 4.751 | 52.79
16 | SSG | saplings sowing and growing | 0.77839 | 0.64925 | 0.39543 | 1.82308 | 0.85973 | 0.62467 | 0.17429 | 1.65869 | 0.64667 | 0.44133 | 0.10598 | 1.19398 | 4.676 | 51.95
17 | BCOm | bacterial chemotaxis optimization M | 0.75953 | 0.62268 | 0.31483 | 1.69704 | 0.89378 | 0.61339 | 0.22542 | 1.73259 | 0.65385 | 0.42092 | 0.14435 | 1.21912 | 4.649 | 51.65
18 | (PO)ES | (PO) evolution strategies | 0.79025 | 0.62647 | 0.42935 | 1.84606 | 0.87616 | 0.60943 | 0.19591 | 1.68151 | 0.59000 | 0.37933 | 0.11322 | 1.08255 | 4.610 | 51.22
19 | TSm | tabu search M | 0.87795 | 0.61431 | 0.29104 | 1.78330 | 0.92885 | 0.51844 | 0.19054 | 1.63783 | 0.61077 | 0.38215 | 0.12157 | 1.11449 | 4.536 | 50.40
20 | BSO | brain storm optimization | 0.93736 | 0.57616 | 0.29688 | 1.81041 | 0.93131 | 0.55866 | 0.23537 | 1.72534 | 0.55231 | 0.29077 | 0.11914 | 0.96222 | 4.498 | 49.98
21 | WOAm | whale optimization algorithm M | 0.84521 | 0.56298 | 0.26263 | 1.67081 | 0.93100 | 0.52278 | 0.16365 | 1.61743 | 0.66308 | 0.41138 | 0.11357 | 1.18803 | 4.476 | 49.74
22 | AEFA | artificial electric field algorithm | 0.87700 | 0.61753 | 0.25235 | 1.74688 | 0.92729 | 0.72698 | 0.18064 | 1.83490 | 0.66615 | 0.11631 | 0.09508 | 0.87754 | 4.459 | 49.55
23 | ACOm | ant colony optimization M | 0.88190 | 0.66127 | 0.30377 | 1.84693 | 0.85873 | 0.58680 | 0.15051 | 1.59604 | 0.59667 | 0.37333 | 0.02472 | 0.99472 | 4.438 | 49.31
24 | BFO-GA | bacterial foraging optimization - ga | 0.89150 | 0.55111 | 0.31529 | 1.75790 | 0.96982 | 0.39612 | 0.06305 | 1.42899 | 0.72667 | 0.27500 | 0.03525 | 1.03692 | 4.224 | 46.93
25 | ABHA | artificial bee hive algorithm | 0.84131 | 0.54227 | 0.26304 | 1.64663 | 0.87858 | 0.47779 | 0.17181 | 1.52818 | 0.50923 | 0.33877 | 0.10397 | 0.95197 | 4.127 | 45.85
26 | AAm | archery algorithm M | 0.93532 | 0.67983 | 0.25966 | 1.87481 | 0.57351 | 0.22007 | 0.14870 | 0.94228 | 0.63077 | 0.34400 | 0.10194 | 1.07671 | 3.894 | 43.26
27 | ASBO | adaptive social behavior optimization | 0.76331 | 0.49253 | 0.32619 | 1.58202 | 0.79546 | 0.40035 | 0.26097 | 1.45677 | 0.26462 | 0.17169 | 0.18200 | 0.61831 | 3.657 | 40.63
28 | MEC | mind evolutionary computation | 0.69533 | 0.53376 | 0.32661 | 1.55569 | 0.72464 | 0.33036 | 0.07198 | 1.12698 | 0.52500 | 0.22000 | 0.04198 | 0.78698 | 3.470 | 38.55
29 | IWO | invasive weed optimization | 0.72679 | 0.52256 | 0.33123 | 1.58058 | 0.70756 | 0.33955 | 0.07484 | 1.12196 | 0.42333 | 0.23067 | 0.04617 | 0.70017 | 3.403 | 37.81
30 | Micro-AIS | micro artificial immune system | 0.79547 | 0.51922 | 0.30861 | 1.62330 | 0.72956 | 0.36879 | 0.09398 | 1.19233 | 0.37667 | 0.15867 | 0.02802 | 0.56335 | 3.379 | 37.54
31 | COAm | cuckoo optimization algorithm M | 0.75820 | 0.48652 | 0.31369 | 1.55841 | 0.74054 | 0.28051 | 0.05599 | 1.07704 | 0.50500 | 0.17467 | 0.03380 | 0.71347 | 3.349 | 37.21
32 | SDOm | spiral dynamics optimization M | 0.74601 | 0.44623 | 0.29687 | 1.48912 | 0.70204 | 0.34678 | 0.10944 | 1.15826 | 0.42833 | 0.16767 | 0.03663 | 0.63263 | 3.280 | 36.44
33 | NMm | Nelder-Mead method M | 0.73807 | 0.50598 | 0.31342 | 1.55747 | 0.63674 | 0.28302 | 0.08221 | 1.00197 | 0.44667 | 0.18667 | 0.04028 | 0.67362 | 3.233 | 35.92
34 | FAm | firefly algorithm M | 0.58634 | 0.47228 | 0.32276 | 1.38138 | 0.68467 | 0.37439 | 0.10908 | 1.16814 | 0.28667 | 0.16467 | 0.04722 | 0.49855 | 3.048 | 33.87
35 | GSA | gravitational search algorithm | 0.64757 | 0.49197 | 0.30062 | 1.44016 | 0.53962 | 0.36353 | 0.09945 | 1.00260 | 0.32667 | 0.12200 | 0.01917 | 0.46783 | 2.911 | 32.34
36 | BFO | bacterial foraging optimization | 0.61171 | 0.43270 | 0.31318 | 1.35759 | 0.54410 | 0.21511 | 0.05676 | 0.81597 | 0.42167 | 0.13800 | 0.03195 | 0.59162 | 2.765 | 30.72
37 | ABC | artificial bee colony | 0.63377 | 0.42402 | 0.30892 | 1.36671 | 0.55103 | 0.21874 | 0.05623 | 0.82600 | 0.34000 | 0.14200 | 0.03102 | 0.51302 | 2.706 | 30.06
38 | BA | bat algorithm | 0.59761 | 0.45911 | 0.35242 | 1.40915 | 0.40321 | 0.19313 | 0.07175 | 0.66810 | 0.21000 | 0.10100 | 0.03517 | 0.34617 | 2.423 | 26.93
39 | AAA | algae adaptive algorithm | 0.50007 | 0.32040 | 0.25525 | 1.07572 | 0.37021 | 0.22284 | 0.16785 | 0.76089 | 0.27846 | 0.14800 | 0.09755 | 0.52402 | 2.361 | 26.23
40 | SA | simulated annealing | 0.55787 | 0.42177 | 0.31549 | 1.29513 | 0.34998 | 0.15259 | 0.05023 | 0.55280 | 0.31167 | 0.10033 | 0.02883 | 0.44083 | 2.289 | 25.43
41 | IWDm | intelligent water drops M | 0.54501 | 0.37897 | 0.30124 | 1.22522 | 0.46104 | 0.14704 | 0.04369 | 0.65177 | 0.25833 | 0.09700 | 0.02308 | 0.37842 | 2.255 | 25.06
42 | PSO | particle swarm optimisation | 0.59726 | 0.36923 | 0.29928 | 1.26577 | 0.37237 | 0.16324 | 0.07010 | 0.60572 | 0.25667 | 0.08000 | 0.02157 | 0.35823 | 2.230 | 24.77
43 | Boids | boids algorithm | 0.43340 | 0.30581 | 0.25425 | 0.99346 | 0.35718 | 0.20160 | 0.15708 | 0.71586 | 0.27846 | 0.14277 | 0.09834 | 0.51957 | 2.229 | 24.77
44 | MA | monkey algorithm | 0.59107 | 0.42681 | 0.31816 | 1.33604 | 0.31138 | 0.14069 | 0.06612 | 0.51819 | 0.22833 | 0.08567 | 0.02790 | 0.34190 | 2.196 | 24.40
45 | SFL | shuffled frog-leaping | 0.53925 | 0.35816 | 0.29809 | 1.19551 | 0.37141 | 0.11427 | 0.04051 | 0.52618 | 0.27167 | 0.08667 | 0.02402 | 0.38235 | 2.104 | 23.38



Summary

I have presented two versions of the algorithm: the original and a modified one, which includes minor changes yet delivers a significant improvement in performance. This article clearly demonstrates that even minor adjustments to the logic of an algorithm can lead to significant efficiency gains on various tasks. It also becomes clear that complex descriptions can make it difficult to understand how an algorithm works, which in turn hinders its improvement. Conversely, complex concepts expressed in simple language open the way to more efficient solutions.


Figure 1. Color gradation of algorithms according to the relevant tests. Results greater than or equal to 0.99 are highlighted in white


Figure 2. Histogram of algorithm testing results (scale from 0 to 100; the higher the better, where 100 is the maximum possible theoretical result; a script for calculating the rating table is included in the archive)


When the article was almost ready for publication, I had an idea that I decided to test. What if, following the authors' logic about targets and archers and using the roulette method for selection, we change the sizes of the targets themselves in inverse proportion to the quality of the solutions found? If a solution turns out to be good, it should be refined and its surroundings explored. If the result is insignificant, the search area should be expanded to identify new, potentially promising regions.


Figure 3. The number of arrows hitting the targets is directly proportional to the quality of the targets themselves, while the size of the targets is inversely proportional to their quality.

Let's look at the code that implements the idea of resizing the targets inversely to their quality.

void C_AO_AAm::Moving ()
{
  //----------------------------------------------------------------------------
  if (!revision)
  {
    for (int i = 0; i < popSize; i++)
    {
      for (int c = 0; c < coords; c++)
      {
        a [i].c [c] = u.RNDfromCI (rangeMin [c], rangeMax [c]);
        a [i].c [c] = u.SeInDiSp (a [i].c [c], rangeMin [c], rangeMax [c], rangeStep [c]);
      }
    }
    revision = true;
    return;
  }

  //-------------------------------------------------------------------------
  // Calculate probability vector P and cumulative probability C
  double P [], C [];
  ArrayResize (P, popSize);
  ArrayResize (C, popSize);
  double F_worst = DBL_MAX; // a[ArrayMaximum(a, WHOLE_ARRAY, 0, popSize)].f;
  double sum = 0;

  for (int i = 0; i < popSize; i++)
  {
    if (a [i].f < F_worst) F_worst = a [i].f;
  }

  for (int i = 0; i < popSize; i++)
  {
    P [i] =  a [i].f - F_worst; ////F_worst - a[i].f;
    sum += P [i];
  }

  for (int i = 0; i < popSize; i++)
  {
    P [i] /= sum;
    C [i] = (i == 0) ? P [i] : C [i - 1] + P [i];
  }

  double x;
  
  double maxFF = fB;
  double minFF = DBL_MAX;
  double prob1;
  double prob2;
  
  for (int i = 0; i < popSize; i++)
  {
    if (a [i].f < minFF) minFF = a [i].f;
  } 

  for (int i = 0; i < popSize; i++)
  {
    for (int c = 0; c < coords; c++)
    {
      // Select archer (k) using cumulative probability
      int k = 0;
      double r = u.RNDprobab ();
      while (k < popSize - 1 && C [k] < r) k++;

      if (u.RNDbool () < inhProbab)
      {
        x = a [k].c [c];
      }
      else
      {
        
        // Update position using Eq. (5) and (6)
        //double I   = MathRound (1 + u.RNDprobab ());
        double rnd = u.GaussDistribution (0, -1, 1, 8);
        /*
        if (a [k].f > a [i].f)
        {
          x = agent [i].cPrev [c] + rnd * (a [k].c [c] - I * agent [i].cPrev [c]);
        }
        else
        {
          x = agent [i].cPrev [c] + rnd * (agent [i].cPrev [c] - I * a [k].c [c]);
        }
        */
        
        prob1 = u.Scale (a [i].f, minFF, maxFF, 0, 1);
        prob2 = u.Scale (a [k].f, minFF, maxFF, 0, 1);
        
        x = agent [i].cPrev [c] + rnd * (a [k].c [c] - agent [i].cPrev [c]) * (1 - prob1 - prob2);  
        
      }

      a [i].c [c] = u.SeInDiSp (x, rangeMin [c], rangeMax [c], rangeStep [c]);
    }
  }
}
//——————————————————————————————————————————————————————————————————————————————

1. The commented-out section in the original version used an if-else construct to determine how to update the agent's position. This logic has been removed and replaced with a new calculation.

2. Three new lines of code:

prob1 = u.Scale(a[i].f, minFF, maxFF, 0, 1);
prob2 = u.Scale(a[k].f, minFF, maxFF, 0, 1);

x = agent[i].cPrev[c] + rnd * (a[k].c[c] - agent[i].cPrev[c]) * (1 - prob1 - prob2);

These lines introduce a new approach to calculating the updated position:

a) Two probability values (prob1 and prob2) are calculated using the Scale function, which normalizes the fitness values of agents i and k to the range from 0 to 1 based on the minimum and maximum fitness values minFF and maxFF.

b) The new position x is then calculated using these probabilities. It uses the agent's previous position agent [i].cPrev [c], the position a [k].c [c] of the selected archer, and the random factor rnd.

c) The movement is now scaled by one minus the sum of the two agents' normalized fitness values. This factor expands or contracts the target in inverse proportion to the fitness of the chosen archers: the less experienced the archers, the greater the spread of arrows, while the probability of hitting the target still follows a normal distribution.
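
A few boundary values make the behavior of this factor concrete (illustrative numbers only):

// (1 - prob1 - prob2) as a target-size factor:
// prob1 = 0.0, prob2 = 0.0  ->  +1.0   two weak archers: full-width step, broad search
// prob1 = 0.5, prob2 = 0.5  ->   0.0   an average pair: the position barely moves
// prob1 = 1.0, prob2 = 1.0  ->  -1.0   two top archers: the step flips sign, the "mutation" effect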

Now let's look at the results:

AAm|Archery Algorithm M|50.0|0.3|
=============================
5 Hilly's; Func runs: 10000; result: 0.9174358826544864
25 Hilly's; Func runs: 10000; result: 0.7087620527831496
500 Hilly's; Func runs: 10000; result: 0.42160091427958263
=============================
5 Forest's; Func runs: 10000; result: 0.9252690259821034
25 Forest's; Func runs: 10000; result: 0.7580206359203926
500 Forest's; Func runs: 10000; result: 0.353277934084795
=============================
5 Megacity's; Func runs: 10000; result: 0.6738461538461538
25 Megacity's; Func runs: 10000; result: 0.552
500 Megacity's; Func runs: 10000; result: 0.23738461538461658
=============================
All score: 5.54760 (61.64%)

The algorithm's performance has improved significantly. In the visualization below, we can see confident convergence and the identification of significant areas of the function surface.


AAm on the Hilly test function

Let's conduct another small experiment. The results above are obtained by subtracting the sum of the archers' probabilities from one.

//x = agent [i].cPrev [c] + rnd * (a [k].c [c] - agent [i].cPrev [c]) * (1 - prob1 - prob2); 
 x = agent [i].cPrev [c] + rnd * (a [k].c [c] - agent [i].cPrev [c]) * (2 - prob1 - prob2);  

The main change is that the sum is subtracted not from one but from two. Let's see how such a simple action can affect the behavior of the algorithm:

  • In the previous version, the result of this operation could be negative if the fitness of both archers was high, producing a "mutation" effect in the resulting coordinates of the new archer.
  • In the new version, the multiplier takes a value from 0 to 2.

This change makes the agents move in a more sweeping manner and explore the solution space more aggressively, since they take larger steps with each position update.
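
Side by side (assuming prob1 and prob2 lie in [0, 1]):

// variant 1: 1 - prob1 - prob2  ->  multiplier in [-1, +1], the step can reverse direction
// variant 2: 2 - prob1 - prob2  ->  multiplier in [ 0, +2], the step keeps its direction and can double in length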

Thus, as the printout of the results shows, this change improved the algorithm's convergence on medium-dimensional functions but worsened it on high-dimensional ones (marked in yellow), although overall the algorithm scored a higher final value.

AAm|Archery Algorithm M|50.0|0.3|
=============================
5 Hilly's; Func runs: 10000; result: 0.9053229410164233
25 Hilly's; Func runs: 10000; result: 0.8259118221071665
500 Hilly's; Func runs: 10000; result: 0.2631661675236262
=============================
5 Forest's; Func runs: 10000; result: 0.9714247249319152
25 Forest's; Func runs: 10000; result: 0.9091052022399436
500 Forest's; Func runs: 10000; result: 0.2847632249786224
=============================
5 Megacity's; Func runs: 10000; result: 0.7169230769230768
25 Megacity's; Func runs: 10000; result: 0.6378461538461538
500 Megacity's; Func runs: 10000; result: 0.10473846153846252
=============================
All score: 5.61920 (62.44%)

The previous result looks more practical and will remain the main variant of the modified AAm algorithm. I will present the rating table with its color gradation once again. AAm now occupies a worthy 7th place. The algorithm can be characterized as well balanced (good convergence on functions of different dimensions) and can be recommended for solving a variety of problems.


Figure 4. Color gradation of algorithms according to the relevant tests. Results greater than or equal to 0.99 are highlighted in white

AAm pros and cons:

Pros:

  1. Quite fast.
  2. Self-adaptive.
  3. Just one external parameter.
  4. Good convergence.
  5. Good scalability.
  6. Simple implementation (despite the complex description made by the authors).

Cons:

  1. Somewhat prone to getting stuck on low-dimensional functions.

Adding new algorithms to the rating table would make it increasingly difficult to read, so I decided to limit the number of participants to 45; the competition is now held in a "knockout" format. To give readers convenient access to all the articles, I have prepared an HTML file with a list of all previously reviewed algorithms, sorted by rating. The file has been included in the article archive for some time, and those who open it for the first time will find a little surprise.

The article is accompanied by an archive with the current versions of the algorithm codes. The author of the article is not responsible for the absolute accuracy in the description of canonical algorithms. Changes have been made to many of them to improve search capabilities. The conclusions and judgments presented in the articles are based on the results of the experiments.

Translated from Russian by MetaQuotes Ltd.
Original article: https://www.mql5.com/ru/articles/15782

Attached files: AAm.zip (35.89 KB)