Discussing the article: "MQL5 Wizard Techniques you should know (Part 32): Regularization"


Check out the new article: MQL5 Wizard Techniques you should know (Part 32): Regularization.

Regularization is a way of penalizing the loss function in proportion to the weighting applied throughout the various layers of a neural network. We look at the significance this can have, for some of the various regularization forms, in test runs with a wizard-assembled Expert Advisor.

Regularization is another facet of machine learning algorithms that brings some sensitivity to the performance of neural networks. In training a network, there is often a tendency to over-assign weighting to some parameters at the expense of others. This 'biasing' towards particular parameters (network weights) can come to hinder the network's performance when testing is performed on out-of-sample data. This is why regularization was developed.

It essentially acts as a mechanism that slows down the convergence process by increasing (or penalizing) the result of the loss function in proportion to the magnitude of the weights used at each layer junction. This is often done by one of: Early-Stopping, Lasso (L1), Ridge (L2), Elastic-Net, or Drop-Out. Each of these forms is a little different, and we will not consider all the types, but will instead dwell on Lasso, Ridge and Drop-Out, as sketched below.
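Since the article works in MQL5, a minimal sketch of how such penalty terms could be computed may be useful. The function names and parameters below (`RegularizeLoss`, `ApplyDropOut`, `alpha`, `ratio`) are illustrative assumptions, not taken from the article's source code:

```mql5
//+------------------------------------------------------------------+
//| Illustrative sketch (not the article's code): blend of L1       |
//| (Lasso) and L2 (Ridge) penalties added to a raw loss value.     |
//| ratio = 1.0 gives pure Lasso, ratio = 0.0 pure Ridge, anything  |
//| in between is an Elastic-Net style mix.                         |
//+------------------------------------------------------------------+
double RegularizeLoss(const double raw_loss,   // unpenalized loss
                      const matrix &weights,   // one layer's weight matrix
                      const double alpha,      // penalty strength
                      const double ratio)      // L1 vs. L2 mix
  {
   double l1 = 0.0, l2 = 0.0;
   for(ulong r = 0; r < weights.Rows(); r++)
      for(ulong c = 0; c < weights.Cols(); c++)
        {
         double w = weights[r][c];
         l1 += MathAbs(w);   // sum of absolute weights
         l2 += w * w;        // sum of squared weights
        }
   return(raw_loss + alpha * (ratio * l1 + (1.0 - ratio) * 0.5 * l2));
  }
//+------------------------------------------------------------------+
//| Inverted Drop-Out: randomly zero activations during training    |
//| and scale the survivors so the expected layer output is         |
//| unchanged. Applied only while training, never in testing.       |
//+------------------------------------------------------------------+
void ApplyDropOut(vector &activations, const double drop_rate)
  {
   for(ulong i = 0; i < activations.Size(); i++)
     {
      if((double)MathRand() / 32767.0 < drop_rate)   // MathRand() spans 0..32767
         activations[i] = 0.0;
      else
         activations[i] /= (1.0 - drop_rate);
     }
  }
```

The practical difference between the two penalty terms shows up in back-propagation: the L1 term contributes a constant-magnitude gradient that drives small weights all the way to zero, while the L2 term contributes a gradient proportional to the weight itself, which is why Lasso tends to produce sparse networks and Ridge merely shrinks weights.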

MQL5 Wizard Techniques you should know - Regularization

Author: Stephen Njuki