Techniques for improving the convergence of neural networks
In the previous chapters of the book, we learned the basic principles of building and training neural networks. However, we also identified certain challenges that arise during training. We encountered local minima that can trap training before it reaches the desired result. We also discussed the issues of vanishing and exploding gradients, and touched upon the problems of co-adaptation of neurons, overfitting, and many others, which we'll discuss later.
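To make the vanishing-gradient problem from this recap concrete, here is a minimal numeric sketch. The layer count and the use of the sigmoid's maximum derivative are illustrative assumptions, not something specified in the text:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    s = sigmoid(x)
    return s * (1.0 - s)

# During backpropagation, the gradient is multiplied by the activation's
# derivative at every layer. The sigmoid's derivative never exceeds 0.25,
# so in a deep network the product shrinks exponentially.
max_deriv = sigmoid_derivative(0.0)  # 0.25, the sigmoid's steepest slope

gradient = 1.0
for _ in range(10):  # 10 hypothetical layers
    gradient *= max_deriv

print(gradient)  # ~9.5e-7: nearly vanished after just 10 layers
```

Even in this best case, the signal reaching the early layers is about a million times weaker than at the output, which is why deep sigmoid networks train their first layers extremely slowly.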
On the path of human progress, we continually strive to refine our tools and technologies, and the algorithms for training neural networks are no exception. Let's discuss methods that, even if they do not completely solve these issues, at least aim to minimize their impact on the final learning outcome.