Implementing functionality on the main program side
Implementing the functionality on the main program side will require some effort and some knowledge of how the process is organized. Let's start with the preparatory work. First, in our definitions file, we need to load the OpenCL program written above as a resource and assign its contents to a string variable. Here, we will also add predefined macro substitutions for the data types and the size of the local array used by the program.
#resource "opencl_program.cl" as string OCLprogram
|
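To give an idea of what this block might look like, here is a rough sketch. The macro names, the float data type, the value 64, and the cl_program helper macro are illustrative assumptions; only the OCLprogram resource variable comes from the code above.
//--- sketch: data type and local array size shared by the MQL5 and OpenCL code
#define TYPE               float   // double is an alternative if precision matters more than speed
#define LOCAL_ARRAY_SIZE   64      // size of the local work array used in the kernels
//--- sketch: prepend the same substitutions to the program text before compilation (keep in sync with the macros above)
#define cl_program "#define TYPE float\r\n#define TYPE4 float4\r\n#define LOCAL_ARRAY_SIZE 64\r\n" + OCLprogram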
When declaring kernels in the main program, the CLKernelCreate function returns a handle. To work with OpenCL technology, we will use the CMyOpenCL class, which is derived from the standard COpenCL class. The aforementioned classes implement arrays for storing handles. A specific kernel is accessed by an index in the array. To simplify working with these indices and make the program code more readable, let's add constants for the indices of all the kernels created above. To explicitly identify the kernel index in the program code, we will start all named kernel constants with def_k.
//+------------------------------------------------------------------+
|
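For illustration, the beginning of such a list might look like this. The kernel names repeat those used later in this section, while the specific index values and the def_k_SGDUpdate and def_k_MomentumUpdate names are given here only as an example.
#define def_k_PerceptronFeedForward    0
#define def_k_CalcOutputGradient       1
#define def_k_CalcHiddenGradient       2
#define def_k_CalcDeltaWeights         3
#define def_k_SGDUpdate                4
#define def_k_MomentumUpdate           5
#define def_k_AdaGradUpdate            6
#define def_k_RMSPropUpdate            7
#define def_k_AdaDeltaUpdate           8
#define def_k_AdamUpdate               9
The remaining kernels (activation functions and gradient adjustment) follow the same pattern.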
To specify parameters when calling kernels, we also use indices. However, these indices are not returned by any function; a parameter is identified by its ordinal position in the kernel's parameter list. Each kernel uses its own set of parameters, so we will define named constants for all the kernels we have created. To avoid confusion between identical parameters of different kernels, we will include a reference to the respective kernel in the constant name. For example, the parameter constants for the feed-forward kernel of the basic fully connected layer will start with def_pff.
//--- perceptron feed forward pass
|
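As an illustration, the parameter constants of the feed-forward kernel might look like this. The def_pff_weights, def_pff_outputs, and def_pff_inputs_total names appear later in this section; the def_pff_inputs name and the specific ordinal values are assumptions.
#define def_pff_inputs         0
#define def_pff_weights        1
#define def_pff_outputs        2
#define def_pff_inputs_total   3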
We will declare constants for all written kernels in a similar way.
//--- calculating the error gradient of the result layer
|
//--- calculating the error gradient of the hidden layer
|
//--- calculating the error gradient at the level of the weight matrix
|
//--- parameter optimization by stochastic gradient descent
|
//--- parameter optimization using the moment method
|
//--- parameter optimization using the AdaGrad method
|
//--- parameter optimization using the RMSProp method
|
//--- parameter optimization using the AdaDelta method
|
//--- parameter optimization using the Adam method
|
//--- activation functions
|
//--- adjusting the gradient to the derivative of the activation function
|
I intentionally provided a complete set of constants above to offer you a reference guide. It will assist in reading and understanding the code for our next steps in implementing OpenCL technology into the project.
After describing the constants, we will move on to creating classes that will be responsible for servicing OpenCL tools. We have already mentioned them multiple times. It's time to learn more about their features.
First, this is the CMyOpenCL class. It inherits from the COpenCL class of the MQL5 standard library. The standard library is well written and has sufficient functionality for organizing the work. However, I personally found one aspect inconvenient: when working with buffers for data exchange between the main program and the OpenCL context, the same approach is used as for the other process objects. When creating a buffer, we have to specify its index in the common array of buffers. This is a perfectly workable option when we know all the buffers and their number in advance. However, our case is a little more complicated.
class CMyOpenCL : public COpenCL
|
Earlier, we discussed that the number of buffers used for accumulating moments can vary depending on the chosen method for updating the weights. In addition, we cannot know in advance how many neural layers the user will use to solve their tasks. Hence, I needed a dynamic array to store the handles of the data buffers. This problem was solved by adding a small AddBufferFromArray method. The parameters of this method are similar to those of the BufferFromArray method of the parent class, except for the buffer index. The body of the method contains a loop that searches for empty cells in the buffer handle storage array. The first empty cell is used to create the buffer. When there are no free elements in the array, the method expands the array. The buffer itself is created by calling the above-mentioned parent class method.
As a result of the operations, the method returns the index of the created buffer. If errors occur during operations, the method will return the INVALID_HANDLE constant.
I'd like to point out another aspect: the method is implemented as a function template. This allows one method to be used for creating buffers from different data types.
template<typename T>
|
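A possible sketch of the described algorithm is shown below. It assumes that the parent COpenCL class exposes its m_buffers handle array and m_buffers_total counter to descendants and that the parameter list repeats that of COpenCL::BufferFromArray.
template<typename T>
int CMyOpenCL::AddBufferFromArray(T &data[], const uint data_array_offset,
                                  const uint data_array_count, const uint flags)
  {
//--- look for the first free cell in the buffer handle array
   int result = -1;
   for(int i = 0; i < m_buffers_total; i++)
      if(m_buffers[i] == INVALID_HANDLE)
        {
         result = i;
         break;
        }
//--- no free cell found: expand the handle array
   if(result < 0)
     {
      if(ArrayResize(m_buffers, m_buffers_total + 1) <= 0)
         return INVALID_HANDLE;
      result = m_buffers_total++;
      m_buffers[result] = INVALID_HANDLE;
     }
//--- create the buffer itself through the parent class method
   if(!BufferFromArray(result, data, data_array_offset, data_array_count, flags))
      return INVALID_HANDLE;
//--- return the index of the created buffer
   return result;
  }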
The method created above allows buffers to be created from arrays of any data type, but it is not applicable when working with matrices. Therefore, the method was overloaded; the algorithm remains unchanged.
int CMyOpenCL::AddBufferFromArray(MATRIX &data,
|
m_buffers[result] = INVALID_HANDLE;
|
Anticipating a bit, I want to mention that we won't always be creating buffers based on ready-made arrays. Sometimes we just need to create a buffer in the OpenCL context without duplicating it in main memory. Or, for example, a specific buffer is only used to obtain results, so there is no need to load its data into the context before performing operations. As mentioned before, copying data is an expensive operation, and we would like to minimize such operations. Therefore, it is easier to simply create a data buffer of a given size in the context without copying any data. For such cases, we will create the AddBuffer method. As you can see, its algorithm is almost identical to that of the previous methods of the class. The only difference is that this method receives the buffer size in bytes as a parameter instead of an array. At the end of the method, we call the BufferCreate method, which creates a buffer of the specified size in the OpenCL context.
int CMyOpenCL::AddBuffer(const uint size_in_bytes, const uint flags)
|
else
|
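Under the same assumptions about the parent class members, a sketch of this method might look as follows.
int CMyOpenCL::AddBuffer(const uint size_in_bytes, const uint flags)
  {
//--- find a free cell or expand the handle array, as in AddBufferFromArray
   int result = -1;
   for(int i = 0; i < m_buffers_total; i++)
      if(m_buffers[i] == INVALID_HANDLE)
        {
         result = i;
         break;
        }
   if(result < 0)
     {
      if(ArrayResize(m_buffers, m_buffers_total + 1) <= 0)
         return INVALID_HANDLE;
      result = m_buffers_total++;
      m_buffers[result] = INVALID_HANDLE;
     }
//--- create a buffer of the given size in the OpenCL context without copying data
   if(!BufferCreate(result, size_in_bytes, flags))
      return INVALID_HANDLE;
   return result;
  }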
We also created methods for reading (BufferRead) and writing (BufferWrite) data between an OpenCL context buffer and a matrix in main memory. The two methods share the same algorithm, so let's consider the data reading method as an example. In its parameters, the method receives the buffer identifier in the dynamic array of our class, a matrix for writing the data, and an offset in the context buffer.
Please do not confuse the buffer identifier in the dynamic class array and the buffer handle in the OpenCL context. The class operation is structured in such a way that we only pass the ordinal number of an element in the dynamic array of our class to the external program, which contains the handle of that buffer. As a result, when creating a buffer in the context using the class, the external program does not have direct access to the created buffer in the context. All work with the buffer should be done using class methods.
In the method body, we first check the received buffer ID against the size of our dynamic array. We then check the validity of the specified buffer handle. In addition, we check the validity of the OpenCL context and program handles. Only after all these checks have been passed successfully do we call the function for reading data from the buffer. Don't forget to check the results of the operations at every step. At the end of the method, we return the logical result of the operations.
bool CMyOpenCL::BufferRead(const int buffer_index, MATRIX &data, |
The second class that we will create and use to transfer data between the main program and the OpenCL context is the CBufferType data buffer class. The class is derived from the CObject base class. Since the parent class provides only the most basic functionality, we need to implement everything else ourselves.
In addition to creating new methods in the new class, two new variables have appeared:
- m_cOpenCL: a pointer to an object of the CMyOpenCL class
- m_myIndex: the index of the current buffer in the dynamic array for storing buffer handles in the CMyOpenCL class.
The m_mMatrix matrix for storing data has also been introduced. Here we have slightly deviated from the generally accepted rules of class design. It is usually customary to restrict access to internal variables and to build all interaction with them through class methods. Each such method limits direct access to the internal variables and takes additional time to execute its extra operations. Of course, this approach allows complete control over changes in variable states. However, in building neural models, we aim to minimize the time spent on each iteration, as even milliseconds per iteration can add up to significant overhead because of repeated calls. That is why we declared the m_mMatrix data matrix in the public section. The fact that the class will be used to store and transmit data within our own project, and that all buffers will be private or protected objects of other classes, minimizes the associated risks.
class CBufferType: public CObject
|
bool Update(uint row, uint col, TYPE value)
|
The structure of the class methods is quite diverse. Some of them are similar to matrix functions and perform the same functionality designed to work with a data matrix. Others carry out the functionality of interacting with the OpenCL context. Let's take a closer look at some of them.
In the class constructor, we will only set the initial values of the new variables. They are filled with empty values.
CBufferType::CBufferType(void) : m_myIndex(-1)
|
In the class destructor, we will perform memory cleaning operations. Here we'll clear the buffer in the context of OpenCL.
CBufferType::~CBufferType(void)
|
We have already used the BufferInit buffer initialization method in the neural layer class constructor. The main functionality of this method is to create a matrix of a specified size and populate it with initial values. The buffer size and initial values are specified in the method parameters. As part of this project, we will fill arrays with zero values during the initialization of the neural network and reset the buffers of accumulated deltas after updating the weight matrix.
bool CBufferType::BufferInit(ulong rows, ulong columns, TYPE value)
|
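A minimal sketch of such an initialization, leaving aside the handling of a previously created OpenCL buffer, might be:
bool CBufferType::BufferInit(ulong rows, ulong columns, TYPE value)
  {
//--- create a matrix of the requested size and fill it with the initial value
   if(!m_mMatrix.Init(rows, columns))
      return false;
   m_mMatrix.Fill(value);
   return true;
  }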
The next method is to create a buffer in the OpenCL context. In parameters, the method receives a pointer to an instance of the CMyOpenCL class in the context of which the buffer should be created.
The method starts with a control block. First, we check the validity of the obtained pointer - in case of receiving an invalid pointer, we delete the buffer previously created in the OpenCL context and exit the method.
bool CBufferType::BufferCreate(CMyOpenCL *opencl)
|
Then we check that it matches the previously saved pointer. If the pointers are identical and the buffer index has already been saved, we won't create a new buffer in the OpenCL context but will simply copy the data from the matrix to the data exchange buffer again. To do this, we call the BufferWrite method. This method has its own set of checks, which we will look at a bit later, and it returns a logical result of the operation. We exit the method, returning the result of this write operation.
//--- if the received pointer matches the one previously saved,
|
The subsequent code of the method will be executed only if we have not exited the method during the preceding operations. Here, we check the validity of the previously saved pointer to an instance of the CMyOpenCL class and the presence of an index in the dynamic array storing handles of data buffers. If this condition is met, we must clear the memory and delete the existing buffer using the BufferFree method before continuing operations. Only after successfully deleting the old buffer do we have the right to open a new one. Otherwise, uncontrolled use of memory resources will lead to memory shortages and corresponding consequences.
//--- checking for a previously saved pointer to the OpenCL context
|
At the end of the method, we initiate the creation of a new data buffer in the specified context. To do this, we call the AddBufferFromArray method discussed above. The index obtained in response to the call will be stored in the m_myIndex variable. If the buffer opening operation is successful, we will save the CMyOpenCL instance pointer received as input to the method before exiting.
//--- create a new buffer in the specified OpenCL context
|
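Putting the described steps together, the method could be sketched as follows. The exact parameter list of the matrix overload of AddBufferFromArray is an assumption here.
bool CBufferType::BufferCreate(CMyOpenCL *opencl)
  {
//--- an invalid pointer: release the previously created buffer and leave
   if(opencl == NULL)
     {
      BufferFree();
      return false;
     }
//--- the same context and the buffer already exists: just refresh its contents
   if(opencl == m_cOpenCL && m_myIndex >= 0)
      return BufferWrite();
//--- a different context was used before: release the old buffer first
   if(m_cOpenCL != NULL && m_myIndex >= 0)
      if(!BufferFree())
         return false;
//--- create a new buffer in the specified context and remember the pointer
   m_myIndex = opencl.AddBufferFromArray(m_mMatrix, 0, CL_MEM_READ_WRITE);
   if(m_myIndex < 0)
      return false;
   m_cOpenCL = opencl;
   return true;
  }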
In this method, we used two new methods: one for clearing the buffer and the other for writing data. The BufferFree method is responsible for clearing the buffer. The method algorithm is quite simple. First, we check for the presence of a stored pointer to an instance of the CMyOpenCL class and an index in the dynamic buffer array. If they are available, call the CMyOpenCL class buffer cleaning method and specify the buffer index to delete. If the buffer is successfully removed from the context, clear the pointer to the CMyOpenCL class instance and the buffer index variable.
It should be noted that calling this method clears memory and deletes the buffer only in the context of OpenCL. At the same time, the data matrix itself and its contents remain in RAM. We will be able to exploit this property to use OpenCL context memory more efficiently a little later.
bool CBufferType::BufferFree(void)
|
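A compact sketch of this method, built only on the operations described above:
bool CBufferType::BufferFree(void)
  {
//--- delete the buffer from the OpenCL context only if it was created there
   if(m_cOpenCL != NULL && m_myIndex >= 0)
     {
      if(!m_cOpenCL.BufferFree(m_myIndex))
         return false;
      m_cOpenCL = NULL;
      m_myIndex = -1;
     }
//--- the data matrix itself remains in RAM
   return true;
  }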
Next, I suggest considering methods for transferring information between the main program and the OpenCL context. This work is done in two similar methods: BufferRead and BufferWrite. Despite the differences in the operation directions, the algorithm of the methods is identical. At the beginning of the methods, a control block is organized that checks the validity of the pointer to an instance of the CMyOpenCL class and the presence of an index in the dynamic buffer array. And only after the control block has been successfully passed, the OpenCL context class method of the same name is called, specifying the buffer index, matrix, and offset in the OpenCL buffer.
bool CBufferType::BufferRead(void)
|
bool CBufferType::BufferWrite(void)
|
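Both methods reduce to a check and a call to the corresponding CMyOpenCL method, roughly as follows:
bool CBufferType::BufferRead(void)
  {
//--- reading is possible only if the buffer exists in the OpenCL context
   if(m_cOpenCL == NULL || m_myIndex < 0)
      return false;
   return m_cOpenCL.BufferRead(m_myIndex, m_mMatrix, 0);
  }
bool CBufferType::BufferWrite(void)
  {
   if(m_cOpenCL == NULL || m_myIndex < 0)
      return false;
   return m_cOpenCL.BufferWrite(m_myIndex, m_mMatrix, 0);
  }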
We have also created the GetIndex and SetIndex methods for getting and directly setting the buffer index in the dynamic array of buffer handles. Their code is straightforward, so I don't even move them outside the class declaration block.
We've added three GetData methods of the same name to the class. They all perform the same function which is copying matrix data into a given structure. The difference is in the data receiver. This can be a dynamic array, matrix, or another instance of the CBufferType class.
In the first case, the method parameters contain a reference to the array and a flag indicating whether the data should first be read from the OpenCL context before copying. The flag is a necessary measure. As you may have noticed when considering the method for reading data from the context, if there is no pointer to the CMyOpenCL object or no index in the dynamic buffer array, that method returns false. This would block copying data from an object that has no buffer created in the OpenCL context. The flag allows us to control this process.
At the beginning of the method, we check the flag and, if necessary, read the data from the context. Only then do we resize the receiver array and run a data-copying loop. Finally, the method returns the number of copied items.
int CBufferType::GetData(TYPE &values[], bool load = true)
|
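A sketch of the array variant, following exactly the steps just described (Rows, Cols, and Flat are standard MQL5 matrix methods):
int CBufferType::GetData(TYPE &values[], bool load = true)
  {
//--- refresh the matrix from the OpenCL context if requested
   if(load && !BufferRead())
      return -1;
//--- resize the receiver array and copy the data element by element
   ulong total = m_mMatrix.Rows() * m_mMatrix.Cols();
   if(ArrayResize(values, (int)total) < (int)total)
      return -1;
   for(ulong i = 0; i < total; i++)
      values[(int)i] = (TYPE)m_mMatrix.Flat(i);
//--- return the number of copied elements
   return (int)total;
  }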
The other two methods are built on the basis of a similar algorithm but they take into account the specifics of the receiver object.
int CBufferType::GetData(MATRIX &values, bool load = true)
|
int CBufferType::GetData(CBufferType *values, bool load = true)
|
Now that we have prepared constants and classes for working with the OpenCL context, we can continue to work on organizing the process directly in our neural network classes.
When creating methods for our neural network base class, we did not add two methods, UseOpenCL and InitOpenCL. As can be seen from the names of the methods, they are designed to initialize and control the process of working with OpenCL. The first one is used to switch the operating mode and enables and disables the use of OpenCL. The second one initializes the operation of an instance of the CMyOpenCL class.
Let's take a step back and fill these gaps. In the parameters of the UseOpenCL method, we will specify the new state as a logical value. Using a logical value to convey a binary state to enable/disable a function seems intuitive to me. It is quite logical to use true to enable the functionality and false to turn it off.
In the method body, we will organize the algorithm to branch depending on the state being set. When we receive a command to disable the functionality, we check the current pointer to the CMyOpenCL class instance stored in the m_cOpenCL variable. If the pointer is invalid, the functionality has not been initialized before, and we have nothing to disable. In this case, we just update the state of the technology usage flag and exit the method.
If the functionality was previously activated and a signal to deactivate it has now been received, we will initiate the process of cleaning up the object and deleting it. After that, we will distribute a new (empty) pointer to neural network objects, save the flag, and exit the method.
void CNet::UseOpenCL(bool value)
|
Further operations will be performed only when the OpenCL functionality is enabled. When we receive a signal to enable the use of OpenCL, we start the process of creating and initializing a new instance of the CMyOpenCL class, which is placed in a separate InitOpenCL method.
Before exiting the method, save the new flag for using OpenCL and distribute the pointer to the new object across all objects of the neural network. To do this, we will pass a new pointer into the dynamic array object storing the layers of the neural network, and from there, the pointer will be passed down the hierarchical chain to each object in the neural network.
//---
|
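Putting both branches together, the method might look like the sketch below. The m_bOpenCL flag, the m_cLayers layer collection, and its SetOpencl method are assumed names used here only to show the flow.
void CNet::UseOpenCL(bool value)
  {
//--- disable the functionality
   if(!value)
     {
      if(m_cOpenCL == NULL)
        {
         m_bOpenCL = value;               // nothing was initialized, just store the flag
         return;
        }
      m_cOpenCL.Shutdown();               // free the context, program, kernels, and buffers
      delete m_cOpenCL;
      m_cOpenCL = NULL;
      m_cLayers.SetOpencl(m_cOpenCL);     // pass the empty pointer down the object hierarchy
      m_bOpenCL = value;
      return;
     }
//--- enable the functionality
   m_bOpenCL = InitOpenCL();
   m_cLayers.SetOpencl(m_cOpenCL);
  }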
The actual process of creating a new instance of the CMyOpenCL class and initializing it is placed in a separate InitOpenCL method.
At the beginning of the method, we check for the existence of a previously saved pointer to a CMyOpenCL class object. At this point, the question arises of what to do if such an object already exists. We can continue using the previously initialized instance of the class or create a new one. Using the existing object seems less labor-intensive at this stage. However, in that case we may need an extra method to restart the functionality in the event of some kind of error, which in turn is likely to require an additional control system across the entire project code.
We chose the forced restart option. Therefore, if we have a valid pointer to a previously created instance of the CMyOpenCL class, we first delete its contents from memory and then the object itself. Only after clearing the memory do we start creating and initializing a new object. The process of creating the OpenCL context and program is implemented in the COpenCL::Initialize method. As a parameter to this method, we pass the string variable containing our program text. Remember, we loaded the program code into it from a file resource earlier.
bool CNet::InitOpenCL(void)
|
Next, let's specify the number of kernels and buffers used. Above, we have declared constants for 20 kernels, each using no more than 4 data buffers. I intentionally don't specify a large number of buffers at this stage, as thanks to our new method, the array will automatically expand when a new data buffer is created. However, the number of kernels in the program is static and does not depend on the neural network architecture.
if(!m_cOpenCL.SetKernelsCount(20))
|
After that, we will initialize all program kernels and save the handles for calling them into an array within the CMyOpenCL class object.
We are not creating all the data buffers one by one at this stage for one simple reason: their quantity depends on the architecture of the neural network and may exceed the available OpenCL context memory capacity. If it is insufficient, dynamic memory allocation can be used. This implies loading buffers as needed and subsequently freeing memory when a specific data buffer is not planned to be used. However, this approach leads to an increase in the overhead of copying data between the main memory and the OpenCL context. Therefore, its use is justified only if there is a lack of GPU memory.
The kernel creation algorithm is identical. Here are just a few examples.
if(!m_cOpenCL.KernelCreate(def_k_PerceptronFeedForward, "PerceptronFeedForward"))
|
if(!m_cOpenCL.KernelCreate(def_k_CalcOutputGradient, "CalcOutputGradient"))
|
if(!m_cOpenCL.KernelCreate(def_k_CalcHiddenGradient, "CalcHiddenGradient"))
|
if(!m_cOpenCL.KernelCreate(def_k_CalcDeltaWeights, "CalcDeltaWeights"))
|
So we have come to the stage of organizing work with the OpenCL context directly in the neural layer class. When creating many class methods, we branched the method algorithm depending on the device for performing operations. Then we created the process organization code using MQL5 and left gaps in the process organization on the OpenCL side. Let's go back and fill in these gaps.
We will start with the feed-forward pass method. We have previously discussed the organization of its operations using MQL5. Now let's look at the implementation of working with the OpenCL context.
bool CNeuronBase::FeedForward(CNeuronBase * prevLayer)
|
First, we'll check that the source data buffer, the weight matrix, and the result buffer each have a buffer index. The logic here is simple. If we receive, in the method parameters, a pointer to a data object with an existing buffer, we assume that the data has already been loaded into the OpenCL context. Above, when creating a data buffer in the CBufferType class, we immediately created a buffer in the OpenCL context. Therefore, the absence of a buffer index may indicate an error, and in such a case we terminate the method with a false result. If you use dynamic memory allocation, then at this point you will need to create all the data buffers used by this kernel in the context and copy the contents of the source data buffers into it.
else // OpenCL block
|
Then we specify the parameters for the feed-forward kernel: buffer indices for the buffer arguments and specific values for the discrete parameters.
//--- passing arguments to the kernel
|
if(!m_cOpenCL.SetArgumentBuffer(def_k_PerceptronFeedForward, def_pff_weights, |
if(!m_cOpenCL.SetArgumentBuffer(def_k_PerceptronFeedForward, def_pff_outputs,
|
if(!m_cOpenCL.SetArgument(def_k_PerceptronFeedForward, def_pff_inputs_total, |
In the NDRange array, we specify the number of parallel threads, equal to the number of neurons in the current layer, and queue the kernel for execution. Note that the Execute method does not literally start kernel execution; it only places the kernel in the execution queue. The kernel is actually launched when we try to read the results of its operation. However, we will not read the results of every kernel. Instead, we will queue the feed-forward pass through the entire network and read only the output of the last layer; this will force the whole queue of operations to execute. Thus, we reduce the amount of data transferred and the time spent on transferring it.
In the case of dynamic memory allocation, after queuing the kernel, it will be necessary to load all changes from the OpenCL context into the data matrices and delete unused buffers from the context. Note that you need to download the contents of all buffers whose data changes during the kernel operation.
//--- putting the kernel in the execution queue
|
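The queuing itself might be sketched like this. The m_cOutputs buffer name and its Total method are assumptions; Execute is the standard COpenCL method.
//--- sketch: one thread per neuron of the current layer
   uint offset[]  = {0};
   uint NDRange[] = {(uint)m_cOutputs.Total()};
   if(!m_cOpenCL.Execute(def_k_PerceptronFeedForward, 1, offset, NDRange))
      return false;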
After performing the above-described operations, we call the activation method of the required activation function class and exit the method.
It is also necessary to supplement the code for backpropagation methods. In the gradient computation kernel at the output of the neural network, three buffers are used: for target values, for the results of the last feed-forward pass, and for writing the obtained gradients. We'll check them at the beginning of the OpenCL block.
bool CNeuronBase::CalcOutputGradient(CBufferType* target, ENUM_LOSS_FUNCTION loss)
|
//--- algorithm branching depending on the operating device
|
else // OpenCL block
|
Next, we will specify their indices in our kernel parameters. We will also specify the loss function used in the kernel parameters.
//--- pass arguments to the kernel
|
The number of independent operation threads launched equals the number of neurons at the output of our model.
Start the kernel execution and complete the method.
//--- put the kernel in the execution queue
|
The process of distributing the gradient through the hidden layer to the neurons of the previous layer is divided into two sub-processes. In the first one, we adjust the error gradient by the derivative of the activation function, and in the second one, we distribute the error gradient values to the neurons of the previous layer according to their influence on the final result. We have created a separate kernel for each sub-process. The correction of the error gradient by the derivative of the activation function was placed in a separate activation function class. Therefore, in the CalcHiddenGradient method, we only have to launch the error gradient distribution kernel of the OpenCL program.
bool CNeuronBase::CalcHiddenGradient(CNeuronBase *prevLayer)
|
Again, at the beginning of the OpenCL block, we check for the availability of previously created buffers in the OpenCL context for the current kernel to work.
else // OpenCL block
|
After successfully passing the control block, we will pass the buffer handles and the number of neurons in the layer to the kernel.
//--- pass arguments to the kernel
|
The number of threads in this case will be equal to the number of neurons in the previous layer. We will write their value to the first element of the NDRange array. Let's start kernel operations.
//--- put the kernel in the execution queue
|
After propagating the error gradient across all neurons in our network based on their influence on the final result, the next step is to organize the process of updating the weight matrix. We have divided this process into two sub-processes. The weight matrix will not always be updated after every iteration. Therefore, at each iteration, we calculate the error gradient for each weight and add it to a separate buffer. Upon receiving a command from the main program, we adjust the weight matrix by the size of the batch, which gives us the average value from the accumulated error gradient.
Error gradients are accumulated in the CalcDeltaWeights method. To perform the kernel operations of this method, we need three buffers:
- the buffer of the results of the last feed-forward pass of the previous layer,
- the current layer's gradient buffer,
- the buffer for accumulating weight gradients.
bool CNeuronBase::CalcDeltaWeights(CNeuronBase *prevLayer, bool read);
|
First, as usual, we check the availability of used buffers in the OpenCL context.
else // OpenCL block
|
We pass the pointers to them to the kernel parameters.
//--- pass arguments to the kernel
|
In this case, we will use a two-dimensional task space to launch the kernel. In one dimension, we specify the number of neurons in the current layer, and in the other dimension, the number of neurons in the previous layer.
After the preparatory work is completed, we will start the kernel execution.
Then we will check the data reading flag and, if necessary, load the result of operations from the context.
And of course, do not forget to monitor the process of performing operations at every step.
//--- put the kernel in the execution queue
|
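The two-dimensional launch and the optional read-back could be sketched as follows; the m_cGradients, m_cDeltaWeights, and Outputs names are assumptions.
//--- sketch: a 2-dimensional task space - current layer neurons x previous layer neurons
   uint offset[]  = {0, 0};
   uint NDRange[] = {(uint)m_cGradients.Total(), (uint)prevLayer.Outputs().Total()};
   if(!m_cOpenCL.Execute(def_k_CalcDeltaWeights, 2, offset, NDRange))
      return false;
//--- read the accumulated gradients back into main memory only when requested
   if(read && !m_cDeltaWeights.BufferRead())
      return false;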
We are successfully moving forward in the process of creating our project. To complete the work on the fully connected neuron, we need to describe the sub-process of updating the weight matrix. In our project, we decided to implement several algorithms for updating the weights. We have created our own kernel for each algorithm for updating the weight matrix. Let's add calls to these kernels to the corresponding methods of our class.
We will start with the stochastic gradient descent method. The implementation of this method requires only two buffers: accumulated deltas and the weight matrix. We check the availability of these buffers in the OpenCL context.
bool CNeuronBase::SGDUpdate(int batch_size, TYPE learningRate, VECTOR &Lambda)
|
Then we will pass pointers to them to the kernel parameters. In addition, we need to transfer training parameters to the kernel:
- batch_size
- learningRate
- Lambda vector (regularization parameters)
//--- pass arguments to the kernel
|
Let's determine the number of threads to be launched. There will be four times fewer of them than there are elements in the weight matrix. This effect is achieved through the use of vector operations in the kernel.
Please note the algorithm for determining the number of threads. We cannot simply divide the number of elements by four, because we cannot be sure that it will always be a multiple of four. But we must be sure that the number of threads covers all the elements of the weight matrix. So we need something similar to rounding up to an integer. Instead, we use the property of integer division to discard the fractional part, in other words, to round down. To get the result we want, before dividing by the vector size, we increase the number of elements by one less than the vector size. After this small mathematical trick, the result of the integer division is the required number of threads. When using this trick, be particularly careful with the data types involved, because the desired effect is achieved only when all the variables in the operation are integers.
//--- put the kernel in the execution queue
|
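In code, the described "rounding up" trick might look like this (def_k_SGDUpdate and m_cWeights are assumed names). For example, 10 elements give (10 + 3) / 4 = 3 threads, which cover up to 12 elements.
//--- sketch: integer "rounding up" to whole float4 vectors
   int total = ((int)m_cWeights.Total() + 3) / 4;   // add (vector size - 1) before the integer division
   uint offset[]  = {0};
   uint NDRange[] = {(uint)total};
   if(!m_cOpenCL.Execute(def_k_SGDUpdate, 1, offset, NDRange))
      return false;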
After the preparatory work, we queue the kernel for execution.
In the description of the weight matrix update process using the accumulated momentum method, we have an additional buffer for storing moments and a momentum averaging coefficient. For the rest, the principles of constructing the algorithm laid down in the previous method are preserved.
bool CNeuronBase::MomentumUpdate(int batch_size, TYPE learningRate,
|
else // OpenCL block
|
//--- pass arguments to the kernel
|
We will set the number of threads to 4 times less than the number of elements in the weight matrix and start performing operations.
//--- put the kernel in the execution queue
|
Please note the constants used in kernels and their parameters. Despite the similarity of operations, a small detail or a typo with a constant can often lead to a critical error and program termination.
Let's move on to the next implementation. The AdaGrad optimization method is implemented in the AdaGradUpdate method and in the respective kernel, which we identify by the def_k_AdaGradUpdate constant. To avoid possible errors when specifying parameters, all parameter constants for this kernel start with def_adagrad_. As you can see, all constant names are intuitive and logically connected, which reduces the risk of a possible error. This naming approach is very convenient when dealing with a large number of constants.
The AdaGrad method, like the accumulated momentum method, uses a moment accumulation buffer. However, unlike the previous method, there is no averaging factor here. At this point, we are not concerned with the differences in how the parameters and buffers are used. We are only interested in their availability: the use of buffers and parameters is already described in the OpenCL program kernel, and here we organize the process of transferring data from the main program to the OpenCL context.
The algorithm for organizing the process of working with the OpenCL context in the AdaGradUpdate method is similar to that used in the methods described earlier.
- First, check for buffers in the OpenCL context.
- Then we will send pointers to buffers and optimization parameters to the kernel.
- Start kernel execution.
bool CNeuronBase::AdaGradUpdate(int batch_size, TYPE learningRate, VECTOR &Lambda)
|
else // OpenCL block
|
//--- pass arguments to the kernel
|
//--- put the kernel in the execution queue
|
The RMSProp optimization method is functionally similar to AdaGrad, but it includes a coefficient for averaging the accumulated momentum.
We're following the established framework: check the availability of OpenCL context buffers, then send pointers to buffers and optimization parameters to the kernel while also ensuring the use of the proper method and constant naming:
- RMSPropUpdate method
- def_k_RMSPropUpdate kernel constant
- def_rms_ parameter constants
After specifying the parameters, launch the kernel.
bool CNeuronBase::RMSPropUpdate(int batch_size, TYPE learningRate,
|
else // OpenCL block
|
//--- pass arguments to the kernel
|
//--- put the kernel in the execution queue
|
The developers of the AdaDelta method opted to not use a learning rate but compensated for it by introducing an additional buffer for moments with an additional averaging coefficient. Accordingly, we will use one more buffer in this kernel.
When setting kernel parameters, again, mind the naming:
- AdaDeltaUpdate method
- def_k_AdaDeltaUpdate kernel constant
- def_adadelt_ parameter constants
Furthermore, for the constructed neural network to be seamlessly portable, we need to ensure consistency in how the buffers are used by the MQL5 implementation and in the OpenCL context. When working within a single platform, changing the order in which the momentum arrays are used has no effect: whatever we call them, their contents will match the context in which they are used. However, when transferring a pre-trained neural network to another platform, we will most likely get unexpected results. At the same time, we should remember the purpose and functionality of these arrays. The moments are only used when updating the weight matrix during training and do not participate in the feed-forward pass. So, the impact of mixed-up buffers will only become apparent when attempting to continue training the neural network. This should not be neglected: if we use a neural network for a long time after building it once, we will need to refine it periodically to keep the weights relevant in our changing world.
Taking into account the above, we will pass pointers to the loaded buffers and training parameters to the kernel.
Let's calculate the number of required threads and launch the kernel.
bool CNeuronBase::AdaDeltaUpdate(int batch_size, VECTOR &Beta, VECTOR &Lambda)
|
else // OpenCL block
|
//--- pass arguments to the kernel
|
//--- put the kernel in the execution queue
|
Our description of the operations performed in the fully connected neural layer is nearing completion. One method remains to be described: the weight update method for the Adam optimization algorithm. Although it is the last on the list, it is not the least important. Like AdaDelta, the Adam method employs two momentum buffers, but unlike it, Adam brings the learning rate back.
Let's recap the main stages of our algorithm and highlight key checkpoints:
- Verify the presence of the necessary data in the OpenCL context memory.
- Pass pointers to data buffers and training parameters to the kernel. Ensure naming consistency: the AdamUpdate method, the def_k_AdamUpdate kernel constant, and the def_adam_... parameter constants.
- Monitor the consistent use of buffers between MQL5 and the OpenCL context.
- Execute the kernel.
bool CNeuronBase::AdamUpdate(int batch_size, TYPE learningRate,
|
else // OpenCL block
|
//--- pass arguments to the kernel
|
//--- put the kernel in the execution queue
|
We have completed the description of the processes of a fully connected neural layer. Now we've reached the stage where we can look at the work done and assess the initial results. In fact, we have already created enough base classes to build a small perceptron model with several fully connected layers. One of them will serve as the receiver of input data (the input layer), the last neural layer will produce the results (the output layer), and the hidden layers will be in between.