Hybrid neural networks.

 
registred >> :

This is first-year university material; I actually went through it in high school. The only thing that matters is the teacher signal, which is essentially the type of error at the network's output.

What is important is the problem statement. How we teach it (the teacher signal, i.e. the error at the network's output) is of secondary importance.

 
rip >> :

What is important is the problem statement. How we teach it (the teacher signal, i.e. the error at the network's output) is of secondary importance.


A neural network learning to add 2+3 will have an MSE error. A neural network learning pattern recognition will have a different error. Or are you suggesting we interpret the problem statement in some other terms?
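
As an illustration (my sketch, not from the thread; the function names are made up), the two cases differ only in how the output error is measured:

    import numpy as np

    def mse(d, y):
        # mean squared error: the usual choice for a regression task such as learning 2+3=5
        return np.mean((d - y) ** 2)

    def cross_entropy(d, y, eps=1e-12):
        # cross-entropy: a common choice for pattern-recognition (classification) outputs
        y = np.clip(y, eps, 1.0 - eps)
        return -np.mean(d * np.log(y) + (1.0 - d) * np.log(1.0 - y))

    # regression-style pair: desired sum 5, network output 4.8
    print(mse(np.array([5.0]), np.array([4.8])))          # 0.04
    # classification-style pair: desired class indicator 1, network output 0.9
    print(cross_entropy(np.array([1.0]), np.array([0.9])))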

 
registred >> :


A neural network learning to add 2+3 will have an MSE error. A neural network learning pattern recognition will have a different error. Or are you suggesting we interpret the problem statement in some other terms?


The problem statement is what you are trying to do with the network. Let's take an example: the function x(t) = 4*x(t-1)*(1 - x(t-1)).

We will approximate its values up to t = 100 and t = 150 respectively: a training sample, and a test sample built as a continuation of the training sample.

X0 = 0.2; the training sample has 100 elements, t = 1..100, and the test sample has 50 elements, from t = 100 to 150.
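
A minimal sketch of how such a sample could be generated (mine, not from the attachment; the array names train and test are arbitrary):

    import numpy as np

    def logistic_map(x0=0.2, n=151):
        # x(t) = 4 * x(t-1) * (1 - x(t-1)), starting from x(0) = 0.2
        x = np.empty(n)
        x[0] = x0
        for t in range(1, n):
            x[t] = 4.0 * x[t - 1] * (1.0 - x[t - 1])
        return x

    x = logistic_map()
    train = x[:101]       # roughly t = 0..100, used to build the training pairs
    test  = x[100:]       # t = 100..150, the continuation used as the test sample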


The attached archive contains the following graphs:

learning-1.gif - training sample

test-1.gif - test sample

learning-2.gif - distribution of values in the training sample


Let's start training: the input is X(t) and the expected output is X(t+1); the network is 1-6-1. We train by gradient descent with an adaptive step.

So the training pair is {X, D}, where D = X(t+1).
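
A rough sketch of this setup (a 1-6-1 network with sigmoid hidden units and a linear output, trained on the pairs {X, D}). It reuses the hypothetical train array from the sketch above, and for brevity uses a plain fixed step instead of rip's adaptive one; an adaptive-step variant is sketched further down the thread.

    import numpy as np

    rng = np.random.default_rng(0)

    # 1-6-1 network: 1 input, 6 hidden sigmoid units, 1 linear output
    W1 = rng.normal(scale=0.5, size=(6, 1)); b1 = np.zeros((6, 1))
    W2 = rng.normal(scale=0.5, size=(1, 6)); b2 = np.zeros((1, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # training pairs {X, D}: X = x(t), D = x(t+1)
    X = train[:-1].reshape(1, -1)
    D = train[1:].reshape(1, -1)

    eta = 0.1
    for epoch in range(3375):
        # forward pass
        H = sigmoid(W1 @ X + b1)
        Y = W2 @ H + b2
        E = Y - D
        train_mse = np.mean(E ** 2)

        # backward pass: gradients of the MSE with respect to the weights
        dY = 2.0 * E / X.shape[1]
        dW2 = dY @ H.T; db2 = dY.sum(axis=1, keepdims=True)
        dH = (W2.T @ dY) * H * (1.0 - H)
        dW1 = dH @ X.T; db1 = dH.sum(axis=1, keepdims=True)

        # gradient step
        W1 -= eta * dW1; b1 -= eta * db1
        W2 -= eta * dW2; b2 -= eta * db2

    print("training MSE:", train_mse)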


In the process of training we have

MSE: 0.3549103488
Epoch: 3375

error.gif - error graph


Let's test on the test sample

Testing error
MSE: 0.7089074281

test-2.gif - test plot: the expected output and what the network model produces.

test-3.gif - graph of the test sample value distribution
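
Continuing the hypothetical sketch above (same sigmoid and weights), the test error is computed the same way on the continuation of the series:

    # test pairs taken from the continuation of the series
    Xt = test[:-1].reshape(1, -1)
    Dt = test[1:].reshape(1, -1)

    Ht = sigmoid(W1 @ Xt + b1)
    Yt = W2 @ Ht + b2
    print("testing MSE:", np.mean((Yt - Dt) ** 2))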


I.e. the goal has been reached

Files:
testu1.zip  60 kb
 
How can the learning rate be adjusted non-linearly?
 
gumgum >> :
How can the learning rate be adjusted non-linearly?

Well, in this case I used an adaptive step, which is calculated from dE/dW.
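
rip does not spell the rule out, so the following is only one common reading (a delta-bar-delta style heuristic), not necessarily what he used: the step grows while dE/dW keeps its sign and shrinks when the sign flips.

    import numpy as np

    def adapt_step(eta, grad, prev_grad, up=1.05, down=0.5):
        # my guess at an adaptive-step rule, not necessarily rip's:
        # keep growing the per-weight step while the gradient sign is stable,
        # cut it back as soon as the sign flips (overshoot)
        same_sign = np.sign(grad) == np.sign(prev_grad)
        return np.where(same_sign, eta * up, eta * down)

Inside the training loop one would keep the previous gradient and call, e.g., eta1 = adapt_step(eta1, dW1, prev_dW1) before applying the weight update.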

 
Δw = η·(dE/dW). How can this η be adjusted with an approximating third-degree polynomial?
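
This question is not answered in the thread. One way to read "adjust η with a third-degree polynomial" is a cubic line search along the gradient direction: sample the error at a few trial steps, fit a cubic, and take the step where the fitted polynomial is smallest. A sketch under that assumption (loss_along is a made-up helper that would re-run the forward pass with the weights shifted by -η·dE/dW):

    import numpy as np

    def cubic_step(loss_along, etas=(0.0, 0.05, 0.1, 0.2)):
        # loss_along(eta) is a hypothetical callback returning E(w - eta * dE/dW)
        etas = np.asarray(etas, dtype=float)
        losses = np.array([loss_along(e) for e in etas])
        coeffs = np.polyfit(etas, losses, 3)      # 4 samples, degree 3: exact fit
        grid = np.linspace(etas.min(), etas.max(), 200)
        return grid[np.argmin(np.polyval(coeffs, grid))]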
 

rip, how do you apply this function to forex? Do you calculate MSE as well?

 
registred >> :

rip, how do you apply this function to forex? Do you calculate MSE as well?


No way :) It's just one of the test functions to see if the network is working properly.

 
rip >> :

No way :) It's just one of the test functions to see if the network is working properly.


I am referring to the criterion for stopping training. What criterion do you use for that in relation to forex? In this example you used the mean squared error.

 
registred >> :


I am referring to the criterion for stopping training. What criterion do you use for that in relation to forex? In this example you used the mean squared error.


MSE
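
So the stopping criterion is the MSE itself. As a minimal sketch of what that could look like (the threshold and patience values are arbitrary, not from the thread): stop when the training MSE is small enough, or when it has stopped improving.

    def should_stop(mse_history, threshold=1e-4, patience=50):
        # stop if the latest MSE is below the threshold ...
        if mse_history[-1] < threshold:
            return True
        # ... or if it has not improved over the last `patience` epochs
        if len(mse_history) > patience:
            return min(mse_history[-patience:]) >= min(mse_history[:-patience])
        return False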