Neural network in the form of a script - page 3

 
sprite:
kombat:
Something about the logic of this script resembles a simple 4-to-2 encoder

An encoder is a non-trainable system.

The network in this script, however, is trainable, and the learning process is shown on screen dynamically from epoch to epoch.

You can see how the neuron weights in each layer change, and how the network becomes more and more accurate as it is trained.

Above are three posts where the same algorithm learned to work with three different data sets.

With an encoder, a separate encoder would be needed for each of the three data sets.
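To illustrate the contrast, here is a minimal sketch (in Python, names my own) of a 4-to-2 encoder: a fixed mapping with no adjustable parameters, which is why it cannot learn anything.

```python
# A 4-to-2 binary encoder is fixed combinational logic: it maps a
# one-hot 4-bit input to a 2-bit code. There is nothing to train.
def encoder_4to2(one_hot):
    """one_hot: list of four bits with exactly one bit set."""
    index = one_hot.index(1)            # position of the active line
    return (index >> 1) & 1, index & 1  # two output bits

# encoder_4to2([0, 0, 1, 0]) -> (1, 0)   input line 2 -> binary 10
```

A trainable network, by contrast, starts from arbitrary weights and only converges to such a mapping through training.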

I've tweaked the learning algorithm a little bit.

1) dynamically changing the number of neurons, something like a GENETIC ALGORITHM, but without selecting the best or breeding generations from them :-) and without correcting connections

2) stopping training when high accuracy of the results is reached
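The two tweaks above could be sketched roughly as follows (Python; all names hypothetical, and the accuracy function is a placeholder rather than the real network):

```python
# Hedged sketch (not the author's actual code) of the two tweaks:
# 1) grow the hidden layer when accuracy stalls -- a crude GA-like
#    mutation with no selection step and no offspring, and
# 2) stop training early once accuracy is high enough.
# train_one_epoch is a hypothetical stand-in for the real network.

def train_one_epoch(n_neurons, epoch):
    # Placeholder: pretend accuracy improves with epochs and neurons.
    return min(1.0, 0.5 + 0.01 * epoch + 0.02 * n_neurons)

def train(target_accuracy=0.95, max_epochs=300):
    n_neurons = 2
    best = 0.0
    for epoch in range(max_epochs):
        acc = train_one_epoch(n_neurons, epoch)
        if acc >= target_accuracy:   # tweak 2: early stopping
            return epoch, n_neurons, acc
        if acc <= best:              # tweak 1: stalled, add a neuron
            n_neurons += 1
        best = max(best, acc)
    return max_epochs, n_neurons, best
```

The control flow is the point here: the neuron count is mutated blindly when progress stalls, and the loop exits as soon as the accuracy threshold is met.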

 
YuraZ:

I have slightly fine-tuned the learning algorithm

1) dynamically changing the number of neurons, something like a GENETIC ALGORITHM, but without selecting the best or breeding generations from them :-) and without correcting connections

2) stopping training when high accuracy of the results is reached



And won't the finalized version be available to the public?

 
YuraZ:
sprite:
kombat:
Something about the logic of this script resembles a simple 4-to-2 encoder

An encoder is a non-trainable system.

The network in this script, however, is trainable, and the learning process is shown on screen dynamically from epoch to epoch.

You can see how the neuron weights in each layer change, and how the network becomes more and more accurate as it is trained.

Above are three posts where the same algorithm learned to work with three different data sets.

With an encoder, a separate encoder would be needed for each of the three data sets.

I've tweaked the learning algorithm a little bit.

1) dynamically changing the number of neurons, something like a GENETIC ALGORITHM, but without selecting the best or breeding generations from them :-) and without correcting connections

2) stopping training when high accuracy of the results is reached


Here we go!!! The ice has broken! :))))

 
Vinin:
YuraZ:

I have slightly fine-tuned the learning algorithm

1) dynamically changing the number of neurons, something like a GENETIC ALGORITHM, but without selecting the best or breeding generations from them :-) and without correcting connections

2) stopping training when high accuracy of the results is reached



Will there be a finalized version for the public?

Sure, I'd like to have a look. Maybe someone will add something else :)


As for accuracy, I don't think that's the goal. The aim is for the network, by the end of training, to distinguish all the training sets from one another. And that is achievable at normal accuracy.


As the experiments have shown, this network needs only 300 training epochs to learn to "think" with the sets above. And yes, you can see it visually during training: the network quickly begins to distinguish between the data sets.


It would also be interesting to see how to dynamically change the number of neurons, during testing, in an Expert Advisor that has this network built in, and to pick the number of neurons with the MT optimizer.

 
Topor:

How do you make it predict the price?

You can't. You shouldn't expect a miracle from a neural network. The forecast is given not by the NS but by the algorithm built into it; the algorithm is based on trading conditions, and the trading conditions... are determined by YOU.

 
sprite:
kombat:

Not against it, but not yet in favour of using neural networks in trading...

Likewise :) !!!

But the algorithm is working and learning :) And then we'll see :)


Interest in networks is further fueled by the EA that won the Championship with a network.

Of course, the network there was a different one. But the man did the work and got the result.


The question is not which network, but what you want to get from it. And the result was obtained not because of the NS, but because of the trading conditions, from which the NS composed a certain probabilistic forecast. The NS is in fact a filter, which can be adaptive (a self-learning NS) and therefore lags (by the length of the training period). The advantage of an NS is that it can merge the disparate components of your TS into one result and independently assign significance weights to those components (i.e., learn).

 
Vinin:
YuraZ:

I have slightly fine-tuned the learning algorithm

1) dynamically changing the number of neurons, something like a GENETIC ALGORITHM, but without selecting the best or breeding generations from them :-) and without correcting connections

2) stopping training when high accuracy of the results is reached


Won't you provide an improved version for the public?

Victor, there will be one.


By the way, the forecast accuracy has increased several times over! The trouble is that the algorithm now takes too long to train :-)))

I haven't solved this problem yet, but I'll be sure to post the code!

What I miss is timer-based code; I don't have that in MQL4!

 
Xadviser:
Topor:

How do you make it predict the price?

You can't. You shouldn't expect a miracle from a neural network. The forecast is given not by the NS but by the algorithm built into it; the algorithm is based on trading conditions, and the trading conditions... are determined by YOU.

Have you come across any decent algorithms?

Just a word or two about them.

 
Xadviser:
Topor:

How do you make it predict the price?

You can't. You shouldn't expect a miracle from a neural network. The forecast is given not by the NS but by the algorithm built into it; the algorithm is based on trading conditions, and the trading conditions... are determined by YOU.

The gurus advise predicting not the price but its change.

 
YuraZ:

It is not always necessary to normalize. Who says the network MAY and MUST work only with 0 and 1?


I can attach a simple network with an example (unfortunately I don't have the materials at hand right now; I will do it later) where a simple NN solves this problem without data preparation or normalization. Unfortunately it is not the source code.


However, the example I gave is, in a sense, already normalized.

The condition has two ranges:

1) 0-100

2) 10-30

You just need to find the ratio of the value's position in one range, which is known; in essence this is scaling.
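A minimal sketch of that scaling (Python, names my own): the value's relative position in the source range is mapped onto the target range.

```python
# Map a value's relative position in one range onto another range.
def rescale(x, src_lo, src_hi, dst_lo, dst_hi):
    ratio = (x - src_lo) / (src_hi - src_lo)  # position in [0, 1]
    return dst_lo + ratio * (dst_hi - dst_lo)

# rescale(50, 0, 100, 10, 30) -> 20.0  (midpoint maps to midpoint)
```

The same formula serves for normalization proper: pick the activation function's input range as the destination.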

Normalization is almost always needed. The data must lie within the domain of the activation function.

In the script the sigmoid's range is [-1; +1]. If you replace it with, say, an exponential ... or a square root.


http://www.statsoft.ru/home/portal/applications/NeuralNetworksAdvisor/Adv-new/ActivationFunctions.htm
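To illustrate the range caveat above (a Python sketch, assuming a tanh-style bipolar sigmoid as the [-1; +1] activation):

```python
import math

# A bipolar sigmoid (tanh here) saturates quickly, so unscaled
# inputs such as raw prices all land near +1 or -1 and become
# indistinguishable -- which is why inputs must be brought into
# the activation function's working range first.
def bipolar_sigmoid(x):
    return math.tanh(x)  # output in (-1, +1)

# bipolar_sigmoid(0.5) is about 0.46, while bipolar_sigmoid(100)
# and bipolar_sigmoid(1000) are both essentially 1.0.
```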