Building Dropout in MQL5
Having discussed the theoretical aspects, I suggest moving on to the implementation of this method in our library.
To implement the Dropout algorithm, we will create a new class, CNeuronDropout, which we will include in our model as a separate layer. The new class inherits directly from the CNeuronBase neural layer base class. Its declaration might look as follows (the signatures of the overridden methods mirror those of the parent class):

class CNeuronDropout : public CNeuronBase
  {
protected:
   TYPE              m_dOutProbability;    // specified probability of dropping out neurons
   int               m_iOutNumber;         // number of neurons to be dropped out
   TYPE              m_dInitValue;         // masking vector initialization value (1/q)
   CBufferType       m_cDropOutMultiplier; // masking vector buffer

public:
                     CNeuronDropout(void);
                    ~CNeuronDropout(void);
   //--- overridden methods of the parent class
   virtual bool      Init(const CLayerDescription *description);
   virtual bool      FeedForward(CNeuronBase *prevLayer);
   virtual bool      CalcHiddenGradient(CNeuronBase *prevLayer);
   virtual bool      CalcDeltaWeights(CNeuronBase *prevLayer, bool read);
   virtual bool      UpdateWeights(int batch_size, TYPE learningRate,
                                   VECTOR &Beta, VECTOR &Lambda);
   //--- object identification method
   virtual int       Type(void) const { return defNeuronDropout; }  // layer type constant
  };
The first thing we encounter is the need for two different algorithms: one for the training process and another for testing and operation. Therefore, we need to tell the neural layer explicitly which algorithm to use in each specific case. To do this, we introduce the m_bTrain flag, which we will set to true during training and to false during testing.
To control the value of this flag, we will create an overloaded helper method, TrainMode. The version that takes a parameter sets the flag, while the parameterless version returns the current value of m_bTrain.
virtual void      TrainMode(bool flag)        { m_bTrain = flag;  }
virtual bool      TrainMode(void)       const { return m_bTrain;  }
While building the library, we have consistently relied on a mechanism of overridable methods shared by all classes. This versatile architecture allows the dispatcher class of our model to work uniformly with any neural layer, without spending time checking the layer type and branching the algorithm accordingly. To preserve this concept, we introduce the flag variable and the methods for working with it at the level of the CNeuronBase base neural layer.
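As a sketch, the addition to the base class might look as follows (only the relevant fragment is shown; the placement of the members within the class is indicative):

class CNeuronBase : public CObject
  {
protected:
   bool              m_bTrain;   // operation mode flag: true = training, false = testing
   // ... other members of the class ...

public:
   //--- methods for working with the operation mode flag
   virtual void      TrainMode(bool flag)        { m_bTrain = flag;  }
   virtual bool      TrainMode(void)       const { return m_bTrain;  }
   // ... other methods of the class ...
  };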
In the protected block of our class, we declare the following variables:
- m_dOutProbability: the specified probability of dropping out neurons
- m_iOutNumber: the number of neurons to be dropped out
- m_dInitValue: the value for initializing the masking vector; in the theoretical part of this article, we denoted this coefficient as 1/q
We also declare a static data buffer object, m_cDropOutMultiplier, for the masking vector.
The list of class methods is quite familiar. They all override the methods of the parent class.
Note that our new layer has no weight matrix. We nevertheless override the CalcDeltaWeights and UpdateWeights methods, which are responsible for propagating the error gradient to the weight matrix and updating the model parameters, in order to preserve the overall architecture of the neural layers and of the model as a whole. We cannot use the parent class methods, because the absence of the corresponding objects would lead to a critical error, while creating additional unused objects would be an irrational waste of resources. Therefore, we override the methods, but make them empty: they simply always return a positive result.
//--- stubs: the layer has no trainable parameters
virtual bool      CalcDeltaWeights(CNeuronBase *prevLayer, bool read)  { return true; }
virtual bool      UpdateWeights(int batch_size, TYPE learningRate,
                                VECTOR &Beta, VECTOR &Lambda)          { return true; }
Now let's proceed to the class methods. As always, we start with the class constructor, in which we set the default values of the variables. Using a static object for the masking vector buffer allows us to skip creating it in the constructor and deleting it in the destructor.
CNeuronDropout::CNeuronDropout(void) : m_dInitValue(1.0),
                                       m_dOutProbability(0),
                                       m_iOutNumber(0)
  {
   m_bTrain = true;
  }
Note that the value of the m_bTrain mode flag, unlike the other variables, is set in the body of the method rather than in the initialization list. This is because the variable is declared in the parent class, and the initialization list can only be used for members of the class itself.
The class destructor remains empty.
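For completeness, here is what it looks like:

CNeuronDropout::~CNeuronDropout(void)
  {
  }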
Next comes the class initialization method, CNeuronDropout::Init. In its parameters, the method receives a pointer to an object describing the neural layer being created. In the body of the method, we immediately check the validity of the received pointer, as well as the compatibility of the sizes of the created neural layer and the previous one: the only role of the Dropout layer is to mask neurons, so the size of the tensor does not change in any way.
bool CNeuronDropout::Init(const CLayerDescription *description)
  {
//--- control block: the layer size must match the size of the input data window
   if(!description || description.count <= 0 ||
      description.count != description.window)
      return false;
After successfully passing the control block, we reset the size of the input data window to zero and call the initialization method of the parent class. A zero input window tells the parent class method not to create a weight matrix and the other objects related to training the neural layer parameters. As always, we remember to check the results of the operations.
//--- calling a method of a parent class
   CLayerDescription *temp = new CLayerDescription();
   if(!temp)
      return false;
   if(!temp.Copy(description))   // a Copy method on the description class is assumed here
     {
      delete temp;
      return false;
     }
   temp.window = 0;              // zero window: no weight matrix is created
   if(!CNeuronBase::Init(temp))
     {
      delete temp;
      return false;
     }
   delete temp;
After the successful execution of the parent class method, we save the main operating parameters of the neural layer: the dropout probability, the number of neurons to exclude, and the initialization value for the masking matrix. The first parameter comes from the user, while the other two are calculated from it.
//--- calculation of coefficients
   m_dOutProbability = (TYPE)description.probability;  // user-specified value from the layer description
   if(m_dOutProbability < 0 || m_dOutProbability >= 1)
      return false;
   m_iOutNumber = (int)(m_cOutputs.Total() * m_dOutProbability);
   m_dInitValue = (TYPE)(1.0 / (1.0 - m_dOutProbability));
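To make the numbers concrete, consider a purely illustrative example: for a layer of 100 neurons with a specified dropout probability of 0.3, we get m_iOutNumber = 100 * 0.3 = 30 neurons dropped on each training pass, and m_dInitValue = 1/q = 1/(1 - 0.3) ≈ 1.4286, which is the factor applied to the remaining neurons so that the expected output of the layer stays unchanged.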
After that, we initialize the masking buffer with the initial values and set the training flag to true.
//--- initiate the masking buffer
   if(!m_cDropOutMultiplier.BufferInit(m_cOutputs.Rows(), m_cOutputs.Cols(), m_dInitValue))
      return false;
   m_bTrain = true;
//---
   return true;
  }
This completes the work on the class initialization methods, and we can proceed to the actual implementation of the Dropout algorithm.
But first, let's recall that we do not access neural layers directly from the main program, and we have now introduced a flag for the neural layer operation mode. Therefore, we need to go back to the dispatcher class of the model and add a method for changing the state of this flag.
void CNet::TrainMode(bool mode)
  {
   m_bTrainMode = mode;          // model-level copy of the flag (member variable)
   int total = m_cLayers.Total();
   for(int i = 0; i < total; i++)
     {
      CNeuronBase *temp = m_cLayers.At(i);
      if(!temp)
         continue;
      temp.TrainMode(mode);
     }
  }
In this method, we save the flag value into a model-level variable and then iterate through all the neural layers of the model in a loop, calling the same-named method of each layer.
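As a usage sketch (the training loop itself is omitted, and the object name net is illustrative):

CNet net;
//--- ... create or load the model ...
net.TrainMode(true);    // training: Dropout layers generate and apply a random mask
//--- ... run the training loop ...
net.TrainMode(false);   // testing and operation: masking is disabled
//--- ... evaluate the model on validation data ...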