A quick and free library for MT4, much to the delight of neuralnetworkers - page 13

 
Figar0 >> :

The result is just about sane, and it is in use, but I can't say that this is ZZ's doing).

What comes out is not a committee, just a few nets (some number of them) selected by minimum error. But the notion that you can feed Statistica unprepared data is a delusion. Certainly, in searching for a solution you can try different things, but dumping everything into a net and letting it stew won't work; anyone here will tell you so.

And which direction is worth working in? Do tell, please, if the results of your work aren't classified.

 
Kharin >> :

Are you guys sure the library isn't "crooked"?

The situation is this: an error message pops up regularly saying that the terminal will be closed.

Read the thread. It is written in Russian:

static bool Parallel = true;

If parallel mode is not supported by the processor, the terminal may unload with an error message
 
lasso >> :

Never mind the Expert Advisor. There's no need to rewrite it and dig through it.

There is a suspicion of incorrect operation of the library itself, and it is necessary to find out if this is the case or not. Otherwise there is no point in moving on.

I have modelled your situation. Indeed, when using ann_train, the responses are identical even after training the network on ~10,000 signals. With ann_train_fast the responses are sometimes different and sometimes not.

I think there really is a problem, at least with the randomisation of the weights.

 

Try commenting out f2M_randomize_weights. If the nets' answers differ, the error is in FANN2MQL; if they are still identical, the error is in FANN.

The default randomization range is (-0.1 ... 0.1).


UPD: Checked it myself. There is randomization in both cases. One oddity though: 2 to 5 nets in a row respond identically, then comes a group of other identical nets, so there are 5 distinct answers per 16 nets. Until the error is fixed, a committee of N members needs about 3*N nets; the rest are ballast.
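That "keep one net per identical-answer group" workaround can be sketched in a few lines. This is a plain-Python illustration with made-up outputs, not FANN2MQL code; the function name and tolerance are my own:

```python
# Group committee members by their response and keep one representative
# per distinct answer, as a workaround for nets that apparently got
# initialized with identical weights. All numbers below are invented.

def distinct_members(responses, tol=1e-9):
    """responses: list of (net_id, output) pairs.
    Returns the net_ids of one representative per distinct output."""
    reps = []
    for net_id, out in responses:
        if all(abs(out - r_out) > tol for _, r_out in reps):
            reps.append((net_id, out))
    return [net_id for net_id, _ in reps]

# 16 nets, but groups of 2-5 answer alike -> only 5 distinct answers
outputs = [0.7]*3 + [-0.2]*4 + [0.1]*2 + [0.9]*5 + [-0.8]*2
print(distinct_members(list(enumerate(outputs))))  # [0, 3, 7, 9, 14]
```

With 16 nets and 5 distinct answers this picks out 5 representatives, matching the ~3*N overhead observed above.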

 
Henry_White >> :

Read the thread. It's written in Russian:

Who told you I use parallel mode?!

Why do you have to "throw poop" in a preachy tone? If you can't say what is in the stack and what exactly the error is, don't say anything. Yes, now the talk about telepaths will start, so let me preempt it: the error message was given, with its full text; the functions used are the same as in Yury Reshetov's code, and parallel mode is not used.

What could be the reason for THIS message?

 
Kharin >> :

And who told you I was using parallel mode?!

I beg your pardon ))

 
Kharin wrote >>

That's why I think your last phrase is nonsense.

You think so because you took it personally, while I was only trying to say that the library has no robust error handling and does not forgive incorrect handling of objects and pointers to them. So let's be friends! )))

Have you tried Reshetov's Expert Advisor? Does it crash the terminal too?

 
Dali wrote >>

UPD: Checked it myself. There is randomisation in both cases. One oddity though: 2 to 5 nets in a row answer identically, then comes a group of other identical ones, so there are 5 distinct answers per 16 nets. Until the error is fixed, a committee of N members needs about 3*N nets; the rest are ballast.

Noted. The randomisation is from -1 to 1, while the network profile has weights on the order of -10.0e-003 to 10.0e-003.

Example: (12, -7.35577636217311400000e-003) (13, 7.63970005423449810000e-002)

Is this correct?

.....

Yes, I saw the same thing: the output values of the first two to four nets differ from the subsequent ones.

Tried multiplying the inputs by different coefficients. This does not solve the problem.
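For what it's worth, entries in that `(index, weight)` form are easy to sanity-check mechanically. A throwaway parser, with the format assumed only from the two sample entries above (not from any documented FANN profile spec):

```python
import re

# Parse "(idx, weight)" pairs as they appear in the sample above and
# check whether every weight lies inside a given randomization range.
# The exact profile format is an assumption based on the two examples.

PAIR = re.compile(r"\((\d+),\s*(-?[\d.]+e[+-]\d+)\)", re.IGNORECASE)

def weights_in_range(text, lo=-1.0, hi=1.0):
    pairs = [(int(i), float(w)) for i, w in PAIR.findall(text)]
    return pairs, all(lo <= w <= hi for _, w in pairs)

sample = "(12, -7.35577636217311400000e-003) (13, 7.63970005423449810000e-002)"
pairs, ok = weights_in_range(sample)
print(pairs)  # both weights are on the order of 1e-2 or smaller
print(ok)     # True
```

Both sample weights fall well inside [-1, 1]; being in range says nothing by itself about whether the randomisation call actually ran, only that the stored values don't contradict it.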

 
lasso >> :

You think so because you took it personally, and I was only trying to say that the library has no robust error handling and does not forgive incorrect handling of objects and pointers to them. So let's be friends! )))

Have you tried Reshetov's Expert Advisor? Does it also crash the terminal?

I like friendship better)))

Reshetov's Expert Advisor crashed the terminal until I disabled parallel computing.

Now I am testing my Expert Advisor every which way and checking the success of each operation with Print.

I have noticed the following peculiarity: creation of a net may fail((

a = f2M_create_standard (nn_layer,nn_input,nn_hidden1,nn_hidden2,nn_output);

this line very often returns -1.

To be more precise, it returns a valid handle some number of times, and after that only -1.

The only way to get rid of it is to reboot the computer. I figured the reason was that the previous nets don't get deleted, so there's nowhere for a new one to go, and I made this piece of code:

// free the first 1024 slots, then retry creation until it succeeds
for (int i = 0; i < 1024; i++)
   ann_destroy(ann[i]);

int a = -1;
while (a < 0)
  {
   a = f2M_create_standard(nn_layer, nn_input, nn_hidden1, nn_hidden2, nn_output);
   Print(a);
  }


Well, now the first 1024 nets should definitely have been deleted! (I may be wrong.)

But again it writes -1 to the log. Until the next reboot...
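The "-1 until reboot" symptom is consistent with a fixed-size pool of network slots filling up when handles are created but never successfully destroyed. Here is a toy model of such a pool in plain Python; the class, pool size, and method names are invented for illustration and are not FANN2MQL internals:

```python
# Toy model of a fixed-size handle pool: create() returns a slot index,
# or -1 when the pool is full; destroy() frees a slot. This mimics the
# symptom described above, NOT the real library implementation.

class HandlePool:
    def __init__(self, capacity):
        self.slots = [None] * capacity

    def create(self, net):
        for i, slot in enumerate(self.slots):
            if slot is None:
                self.slots[i] = net
                return i        # valid handle
        return -1               # pool exhausted: every create fails

    def destroy(self, handle):
        if 0 <= handle < len(self.slots):
            self.slots[handle] = None

pool = HandlePool(capacity=4)
handles = [pool.create("net") for _ in range(5)]
print(handles)             # [0, 1, 2, 3, -1] - fifth create fails
pool.destroy(handles[0])   # freeing a slot makes creation work again
print(pool.create("net"))  # 0
```

If the library behaves like this, destroying only handles you actually created (never -1 or a stale index) is what matters; blindly calling ann_destroy on 1024 arbitrary indices, as in the quoted code, would be exactly the kind of incorrect handle use mentioned earlier in the thread.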

 

Well, I think I'll add FANN's oddities too...

As an experiment, I decided to brute-force train a committee of 46 nets of dimension 30/N/N/1 (i.e. ~300k iterations on each bar). One net per data channel. A time pattern is fed to the input. I played with the dimensionality, which is why I write N/N (I tried different values), and experimented a little with the layers as well.
Activation function: FANN_SIGMOID_SYMMETRIC_STEPWISE. I tried others, but the network does not converge as quickly as with this one.
I teach only 1 or -1, training at each iteration.
The numbers of positive and negative training iterations are almost equal: 132522 versus 112221, to be exact.
The data is normalised to the range [-1, 1].
Almost all the nets converge to an RMS error within 0.09 by the end of training. That is a lot, of course, but it is not the main point.

But here's the odd thing: on the test section the whole committee produced values close to -1. Imho, this is unhealthy behaviour for a neural net. Perhaps there is an algorithmic error in the library as well...

One more observation... With normal training (only on the signal sections), the committee still tends to pile up in negative values, though not as distinctly as with brute force.

Has anyone observed similar phenomena? Maybe you have some ideas?


Example of visualized input data (below):
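Since the inputs are said to be normalised to [-1, 1], here is the usual symmetric min-max mapping for reference. This is a generic sketch, not the poster's actual preprocessing:

```python
# Symmetric min-max normalization: map a sequence onto [-1, 1] so the
# minimum lands on -1 and the maximum on 1. A constant sequence is
# mapped to the midpoint 0 to avoid division by zero.

def normalize_symmetric(xs):
    lo, hi = min(xs), max(xs)
    if hi == lo:
        return [0.0 for _ in xs]
    return [2.0 * (x - lo) / (hi - lo) - 1.0 for x in xs]

prices = [1.2505, 1.2511, 1.2499, 1.2520]
print(normalize_symmetric(prices))  # min maps to -1.0, max to 1.0
```

One caveat worth checking in a setup like the one above: if each channel is normalised over its own window, the mapping shifts from bar to bar, which can by itself push a committee's outputs toward one side on out-of-sample data.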