Probabilistic neural networks, packages and algorithms for MT4 - page 11

 
renegate:
Gentlemen!
So what shall we feed to the input of the neural network? What error function shall we select?


Judging by the content, not many people are interested. Many people think it's about the software....

I suggest you start with the slope of the regression line over different periods. And you can take different timeframes. :)

Error function: maximum profit.

 
klot:

Judging by the content, not many people are interested. A lot of people think it's the software....


Yeah, there are a lot of neural network threads going on. And everywhere the flooders clog any discussion with stupid requests for software and dozens of "me too" messages.

So there's nowhere to discuss it properly. Maybe there's a strictly moderated forum where they shoot down the flooders? One word off-topic gets you a warning; two get you ignored forever.

klot, you hang out in a lot of places; maybe you have a forum in mind?

 
TedBeer:
klot:

Judging by the content, not many people are interested. A lot of people think it's about the software....


Oh man!!!, so many topics about neural networks. And everywhere the flooders clog any discussion with stupid requests for software and dozens of "me too" messages.

So there's nowhere to discuss it properly. Maybe there's a strictly moderated forum where they shoot down the flooders? One word off-topic gets you a warning; two get you ignored forever.

klot, you hang out in a lot of places; maybe you have a forum in mind.


I've started a forum. That's where I'll be hanging out now :) And I'm inviting everyone for a practical discussion. I'll shoot down the flooders myself. :)

http://www.fxreal.ru/forums/index.php

I'm transferring my findings from different places there little by little.

 

An example of a simple neural network

MICROSOFT VISUAL C++ 6.0

The network learns the XOR operation.

 
/* ========================================== *
 * Filename:    bpnet.h                       *
 * Author:        James Matthews.               *
 *                                              *
 * Description:                                  *
 * This is a tiny neural network that uses      *
 * back propagation for weight adjustment.      *
 * ========================================== */
 
#include <math.h>
#include <stdlib.h>
#include <time.h>
 
#define BP_LEARNING    (float)(0.5)    // The learning coefficient.
 
class CBPNet {
    public:
        CBPNet();
        ~CBPNet() {};
 
        float Train(float, float, float);
        float Run(float, float);
 
    private:
        float m_fWeights[3][3];        // Weights for the 3 neurons.
 
        float Sigmoid(float);        // The sigmoid function.
};
 
CBPNet::CBPNet() {
    srand((unsigned)(time(NULL)));
    
    for (int i=0;i<3;i++) {
        for (int j=0;j<3;j++) {
            // rand() returns an integer in [0, RAND_MAX]
            // (32767 in Microsoft's CRT). Divide by RAND_MAX/2
            // to get a number between 0 and 2, then subtract
            // one to get a number between -1 and 1.
            m_fWeights[i][j] = (float)(rand())/(RAND_MAX/2) - 1;
        }
    }
}
 
float CBPNet::Train(float i1, float i2, float d) {
    // These are all the main variables used in the 
    // routine. Seems easier to group them all here.
    float net1, net2, i3, i4, out;
    
    // Calculate the net values for the hidden layer neurons.
    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
          i2 * m_fWeights[2][0];
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
          i2 * m_fWeights[2][1];
 
    // Use the hardlimiter function - the Sigmoid.
    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);
 
    // Now, calculate the net for the final output layer.
    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
             i4 * m_fWeights[2][2];
    out = Sigmoid(net1);
 
    // We have to calculate the deltas for the two layers.
    // Remember, we have to calculate the errors backwards
    // from the output layer to the hidden layer (thus the
    // name 'BACK-propagation').
    float deltas[3];
    
    deltas[2] = out*(1-out)*(d-out);
    deltas[1] = i4*(1-i4)*(m_fWeights[2][2])*(deltas[2]);
    deltas[0] = i3*(1-i3)*(m_fWeights[1][2])*(deltas[2]);
 
    // Now, alter the weights accordingly.
    float v1 = i1, v2 = i2;
    for(int i=0;i<3;i++) {
        // Change the values for the output layer, if necessary.
        if (i == 2) {
            v1 = i3;
            v2 = i4;
        }
                
        m_fWeights[0][i] += BP_LEARNING*1*deltas[i];
        m_fWeights[1][i] += BP_LEARNING*v1*deltas[i];
        m_fWeights[2][i] += BP_LEARNING*v2*deltas[i];
    }
 
    return out;
}
 
float CBPNet::Sigmoid(float num) {
    return (float)(1/(1+exp(-num)));
}
 
float CBPNet::Run(float i1, float i2) {
    // I just copied and pasted the code from the Train() function,
    // so see there for the necessary documentation.
    
    float net1, net2, i3, i4;
    
    net1 = 1 * m_fWeights[0][0] + i1 * m_fWeights[1][0] +
          i2 * m_fWeights[2][0];
 
    net2 = 1 * m_fWeights[0][1] + i1 * m_fWeights[1][1] +
          i2 * m_fWeights[2][1];
 
    i3 = Sigmoid(net1);
    i4 = Sigmoid(net2);
 
    net1 = 1 * m_fWeights[0][2] + i3 * m_fWeights[1][2] +
             i4 * m_fWeights[2][2];
    return Sigmoid(net1);
}
 
//---
 
#include <iostream>
#include "bpnet.h"

using namespace std;

#define BPM_ITER    2000

int main() {

    CBPNet bp;

    for (int i=0;i<BPM_ITER;i++) {
        bp.Train(0,0,0);
        bp.Train(0,1,1);
        bp.Train(1,0,1);
        bp.Train(1,1,0);
    }

    cout << "0,0 = " << bp.Run(0,0) << endl;
    cout << "0,1 = " << bp.Run(0,1) << endl;
    cout << "1,0 = " << bp.Run(1,0) << endl;
    cout << "1,1 = " << bp.Run(1,1) << endl;

    return 0;
}
Files:
bp_case.zip  41 kb
 

Another simple network!

MICROSOFT VISUAL C++ 6.0

This version lets you add layers and change the number of neurons in each layer.

Initially there are 3 layers in the source:

2 neurons at the input, two in the hidden layer and one at the output!

// Create a 3-layer neural net to solve the XOR problem, with 2 nodes in the first two layers,
// and a single node in the output layer.
CBPNet XOR( 3 /* num layers */, 2 /* inputs */,2 /* hidden */,1 /* outputs */ );

// connect the neurons up
//
//          O        - Output
//         / \
//        O   O      - Hidden
//        |\ /|
//        | X |
//        |/ \|
//        O   O      - Input
//

For example, if you put 3 neurons in the hidden layer, the result becomes more accurate:


CBPNet XOR( 3 /* num layers */, 2 /* inputs */,3 /* hidden */,1 /* outputs */ );

Increasing the number of layers also requires increasing the number of neurons in the hidden layers:

CBPNet XOR( 4 /* num layers */, 2 /* inputs */,20 /* hidden 1 */ ,5 /* hidden 2 */ ,1 /* outputs */ );

When increasing the number of layers you also need to add the corresponding calls:

// apply the bias to the hidden layer, and output layer
XOR.SetBias( POINT2D(0,1),BIAS_GLOBAL );
XOR.SetBias( POINT2D(1,1),BIAS_GLOBAL );
XOR.SetBias( POINT2D(1,1),BIAS_GLOBAL );
XOR.SetBias( POINT2D(0,2),BIAS_GLOBAL );

I managed to get a result, but the network learns much more slowly!

Files:
cftai.zip  14 kb
 
klot:
I've started a forum. That's where I'll be hanging out now :) And I'm inviting everyone for a practical discussion. I'll shoot down the flooders myself. :)
I'm gonna go register.
 

and an absolutely beautiful tutorial!

Files:
summing1.zip  48 kb
 

It works beautifully. You could also try teaching the network the multiplication table, as a form of pattern recognition.

Like: in[0]=2; in[1]=2; out_des[0]=4; etc.....

 
YuraZ:

and an absolutely beautiful tutorial!

If only someone would compile it for the non-programmers...
 
klot:
Renegate:
Gentlemen!
So, what should we feed to the input of the neural network? What error function shall we choose?


Judging by the content, not many people are interested. Many people think it's about the software....

I suggest you start with the slope of the regression line over different periods. And you can take different timeframes. :)

Error function: maximum profit.

Maybe it would be better to use, as the error function, not maximum profit but the deviation (the difference between the forecast and Close[0]).