New article: Deep neural network with Stacked RBM. Self-training, self-control

 

New article Deep neural network with Stacked RBM. Self-training, self-control has been published:

This article continues the previous articles on deep neural networks and predictor selection. Here we will cover the features of a neural network initialized by a Stacked RBM, and its implementation in the "darch" package.

Structure of a deep neural network initialized by Stacked RBM (DN_SRBM)

Recall that a DN_SRBM consists of n RBMs, where n equals the number of hidden layers of the neural network, plus the neural network itself. Training comprises two stages.

The first stage is PRE-TRAINING. Each RBM is trained sequentially, without supervision, on the input set (no targets). After this, the weights of the RBM hidden layers are transferred to the corresponding hidden layers of the neural network.
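For intuition, below is a minimal sketch of one contrastive divergence (CD-1) update for a single binary RBM, written in plain R. This is illustrative only: the "darch" package performs pre-training internally, and the names here (rbm_cd1_step, b_vis, b_hid) are hypothetical.

sigmoid <- function(x) 1 / (1 + exp(-x))

# One CD-1 update for a binary RBM (illustrative, not darch internals).
# W: visible x hidden weight matrix; b_vis, b_hid: bias vectors;
# v0: batch of binary input rows; lr: learning rate.
rbm_cd1_step <- function(W, b_vis, b_hid, v0, lr = 0.1) {
  n <- nrow(v0)
  # Positive phase: hidden activations driven by the data
  h0_prob <- sigmoid(sweep(v0 %*% W, 2, b_hid, "+"))
  h0 <- (h0_prob > matrix(runif(n * ncol(W)), n)) * 1
  # Negative phase: one Gibbs step down to the visible layer and back up
  v1_prob <- sigmoid(sweep(h0 %*% t(W), 2, b_vis, "+"))
  h1_prob <- sigmoid(sweep(v1_prob %*% W, 2, b_hid, "+"))
  # CD-1 update: data-driven minus model-driven correlations
  list(W     = W + lr * (t(v0) %*% h0_prob - t(v1_prob) %*% h1_prob) / n,
       b_vis = b_vis + lr * colMeans(v0 - v1_prob),
       b_hid = b_hid + lr * colMeans(h0_prob - h1_prob))
}

# Toy usage: 10 visible units, 5 hidden units, batch of 100 binary rows
set.seed(1)
v <- matrix(rbinom(100 * 10, 1, 0.5), 100, 10)
W <- matrix(rnorm(10 * 5, sd = 0.01), 10, 5)
m <- rbm_cd1_step(W, rep(0, 10), rep(0, 5), v)

After such updates converge for the first RBM, its hidden activations serve as inputs for training the next RBM in the stack, and the learned weights initialize the corresponding hidden layer of the network.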

The second stage is FINE-TUNING, where the neural network is trained with supervision. Detailed information about it was provided in the previous article, so we won't repeat it here. I will simply mention that, unlike the "deepnet" package used in the previous article, the "darch" package gives us wider opportunities for building and tuning the model. More details will be provided when we create the model (a brief usage sketch is given after Fig. 1). Fig. 1 shows the structure and the training process of DN_SRBM.

Fig. 1. Structure and training process of DN_SRBM
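As a rough orientation, the two stages map onto the "darch" workflow along the following lines. This is a hedged sketch assuming the darch 0.10-style interface (newDArch / preTrainDArch / fineTuneDArch); exact parameter names may differ between package versions, and X, Y here are hypothetical input and target matrices.

library(darch)

# Network: input layer, two hidden layers of 50 neurons, 2 output classes.
# Layer sizes and batchSize are illustrative choices, not recommendations.
net <- newDArch(layers = c(ncol(X), 50, 50, 2), batchSize = 50)

# Stage 1: unsupervised, layer-wise pre-training of the stacked RBMs.
# NOTE: parameter names are assumptions for darch <= 0.10.
net <- preTrainDArch(net, X, numEpoch = 10, numCD = 1)

# Stage 2: supervised fine-tuning of the whole network with backpropagation
net <- fineTuneDArch(net, trainData = X, targetData = Y, maxEpoch = 50)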

Author: Vladimir Perervenko