Discussion of article "Programming a Deep Neural Network from Scratch using MQL Language" - page 2

 
Anddy Cabrera #:

Hi Li,

I have updated the article with two demo files: one for MQL5 and one for MQL4. The DeepNeuralNetwork.mqh include can be used for both MQL5 and MQL4.


In any case, I'm attaching the files here to show how to use it.


Let me know if you have more questions.

Really very nice article. I tried the demo and noticed that only yValues[1] may give values above 0.5; yValues[0] and yValues[2] reach a maximum of about 0.2 or 0.3. Even in optimization, no more than one trade (a single sell order) will open.
 
nail sertoglu #:

Thanks for the code you shared. I have tried to understand your approach.


I have some hesitation about yValues[0], yValues[1] and yValues[2], since they are NOT changing and are always 0.33333, while _xValues[1,2,3] change with every new bar. So when trading is based on yValues I did not see ANY trade, whereas trades DO occur when the conditions are based on _xValues.

Is it my mistake, or simply a coding error in your original code?

Update the following function to return `bool` instead of `void`, and you will see that a wrong number of weights was being passed in.

bool SetWeights(double &weights[])
     {
      int numWeights=(numInput*numHiddenA)+numHiddenA+(numHiddenA*numHiddenB)+numHiddenB+(numHiddenB*numOutput)+numOutput;
      if(ArraySize(weights)!=numWeights)
        {
         printf("Bad weights length (%i), expected %i", ArraySize(weights), numWeights);
         return false;
        }

Note that you also need to update the weight size defines at the top of the file (it's not enough to update them only where you initialize the network :P )

#define SIZEI 25 // input * hidden A
#define SIZEA 25 // hidden A * hidden B
#define SIZEB 15 // hidden B * output
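
For reference, a minimal sketch of how the new bool return value could be checked, e.g. in OnInit() of the demo EA. The object name dnn and the array name weight are placeholders, and it assumes the SIZE defines already include the bias counts, as discussed further below:

int OnInit()
  {
   double weight[SIZEI+SIZEA+SIZEB];     // total number of weights expected by SetWeights()
   // ... fill the array with the candidate weights ...
   if(!dnn.SetWeights(weight))           // dnn is the DeepNeuralNetwork object of the EA
     {
      Print("Wrong number of weights - check the SIZE defines");
      return(INIT_FAILED);
     }
   return(INIT_SUCCEEDED);
  }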
 
Hi Anddy,

This is a very good article.
I have a few questions.
1) Can I use the Sigmoid and Tanh activations in combination on the way to the output of a multilayer network? (see the sketch after this list)
2) Do you need to update the weight size defines at the top of the file, as Karlis Balcers suggested?
#define SIZEI 20 // (input * hidden A)+hidden A
#define SIZEA 25 // (hidden A * hidden B)+hidden B
#define SIZEB 18 // (hidden B * output)+output
Note: SIZEI should be 20 and SIZEB should be 18 - is that correct?
3) I have attached a deep neural network diagram as described in this article, is that correct?
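
Regarding question 1: activations are usually chosen per layer, so mixing them (for example Tanh in the hidden layers and something else at the output) is possible. A minimal sketch of the two scalar functions follows; the helper names are mine and are not taken from DeepNeuralNetwork.mqh:

double Sigmoid(double x)                 // logistic function, output in (0, 1)
  {
   return(1.0/(1.0+MathExp(-x)));
  }

double HyperTan(double x)                // hyperbolic tangent, output in (-1, 1)
  {
   if(x<-20.0) return(-1.0);             // clamp to avoid overflow in MathExp
   if(x>20.0)  return(1.0);
   double e2x=MathExp(2.0*x);
   return((e2x-1.0)/(e2x+1.0));
  }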

many thanks


EADNN

 
It works like a genetic algorithm with the optimizer, selecting the values that improve the final result.
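
A sketch of that idea (the parameter names are placeholders): each network weight is declared as an input, so the strategy tester's genetic optimizer treats it as a gene, mutating the values and keeping the sets that improve the test result.

input double w0=0.0;                     // optimized by the tester (range and step set in the Inputs tab)
input double w1=0.0;
// ... one input per network weight, copied into the weight array and
// passed to SetWeights() in OnInit(), as in the demo EA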
 
 

Nice animation.

A two-layer neural network is a "shallow" neural network, not a deep one. Deep neural networks are those with more than three hidden layers; because of the peculiarities of training such networks, dedicated deep learning methods were developed.

As an example of programming in MQL the article is probably useful, and for getting acquainted with the topic of MLPs it is certainly worthwhile. As an example of applying neural networks it is incomplete and far behind the current state of the field.

As a rule, without hyperparameter optimisation a neural network does not give satisfactory quality.

I just don't understand why one would reinvent the wheel from improvised means when there is a sea of ready-made software on this topic.

 
For understanding neural networks it is good and clear.
 

Correction: there is a definition of a deep network in the article; I just hadn't seen it.

Optimising the weights of a neural network with a genetic algorithm is not literally "learning"; it is optimisation after all. Training uses completely different methods, although this way of using neural networks is also practised, and quite successfully.

To understand how a neural network works, it is important to understand how it is trained by back propagation of error. Well, now I'm just picking on you :)
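
For comparison, a minimal sketch of one gradient-descent step for a single sigmoid neuron with a squared-error loss - illustration only, this is not the training method used in the article, which relies on the tester's genetic optimization:

void TrainStep(double &w[],const double &x[],double target,double lr)
  {
   double sum=0.0;
   for(int i=0; i<ArraySize(x); i++)
      sum+=w[i]*x[i];                    // weighted sum of the inputs
   double y=1.0/(1.0+MathExp(-sum));     // sigmoid output
   double delta=(y-target)*y*(1.0-y);    // dE/dsum for E = 0.5*(y-target)^2
   for(int i=0; i<ArraySize(x); i++)
      w[i]-=lr*delta*x[i];               // adjust each weight against the gradient
  }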

Good luck

 
Many thanks to the author for the article. As someone far removed from neural networks, it helped me a great deal to grasp the essence of things and was very interesting as a starting point for studying the topic further. Thanks again!
 

Is there any way to incorporate error back propagation into this network?

For some reason there is an opinion that such a network would be more flexible and provide better inputs and outputs...

A network with error back propagation is not a completely different network, is it?