A quick and free library for MT4, much to the delight of neural networkers - page 15

 
lasso >> :

I'll add: or in the case of a terminal crash. But a solution seems to have been found.

The question is different. Vladislav, you seem to read C++ code without "intermediaries".

Could you comment on the problem of identical responses from a committee of nets and on correct initialization of the weight values? (detailed here with more logs, and here a question on the weights)

I looked through the code. The randomisation is there. What is missing: since the standard C/C++ pseudo-random number generator is used, it is recommended to reinitialize the seed (srand(int) - this moves the reference point) before each call to rand().

int rand( void );
  Return Value 
rand returns a pseudorandom number, as described above. There is no error return.

  Remarks 
The rand function returns a pseudorandom integer in the range 0 to RAND_MAX (32767). 
Use the srand function to seed the pseudorandom-number generator before calling rand.

//------------------------------------------------

Sets a random starting point.


void srand(
   unsigned int seed 
);
  Parameters 
seed
Seed for random-number generation

  Remarks 
The srand function sets the starting point for generating a series of pseudorandom integers in the current thread. 
To reinitialize the generator, use 1 as the seed argument. Any other value for seed sets the generator to a random 
starting point. rand retrieves the pseudorandom numbers that are generated. 
Calling rand before any call to srand generates the same sequence as calling srand with seed passed as 1.
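A minimal sketch of the reseeding idea (the helper name random_weight is just for illustration, not part of f2M or FANN): seed the generator from the clock before the weights are generated; without an explicit srand the sequence starts as if srand(1) had been called, so every run produces the same "random" weights.

//------------------------------------------------

#include <cstdlib>   // rand, srand, RAND_MAX
#include <cstdio>
#include <ctime>     // time

// Illustrative helper: a pseudorandom weight in the range [-1, 1].
static double random_weight()
{
    return 2.0 * std::rand() / RAND_MAX - 1.0;
}

int main()
{
    // Move the reference point of the generator; otherwise rand() behaves
    // as if srand(1) had been called and repeats the same sequence.
    std::srand((unsigned int)std::time(0));

    for (int i = 0; i < 5; ++i)
        std::printf("%f\n", random_weight());
    return 0;
}

//------------------------------------------------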

  


 
Henry_White wrote >>

Yes. The inputs are different for each net, although this is not crucial. You can take a standard signal, e.g. the same RSI, and a single net, and still get negative values for any inputs in the brute-force run.

Initial initialisation of the weights: -1, 1.

About the profile... Do you mean the resulting file of the trained net?

No. We are talking about different things. I asked you about the outputs!? Please look into it here. That is, we have a committee of 16 nets, we initialize them with random weights, we feed the same &input_vector[] to each input, and as a result the outputs are identical! (all the logs are laid out at the link).

Here is the question!!!

........

Yes. Post the resulting file of the trained network here, or email it .... I am interested in the values of the weights. It would also be good to have a profile of the network immediately after initialization, without training. OK?

 
lasso >> :

No. We are talking about different things. I asked you about the outputs!? Please look into it here. That is, we have a committee of 16 nets, we initialize them with random weights, we feed the same &input_vector[] to each input, and as a result the outputs are identical!!! (all the logs are laid out at the link).

Here is the question!!!

........

Yes. Post the resulting file of the trained network here, or email it .... I am interested in the values of the weights. It would also be good to have a profile of the network immediately after initialization, without training. OK?

We really are talking about different things )) I understand your problem. I checked it and confirmed that yes, such an effect is present.

In my last post I wrote "Another oddity", meaning that it has nothing to do with the problem of randomising the initial weights or the identical behaviour of the committee nets on a single input vector.

I'm saying that with excessive training (although the effect is also present with normal training), even with positive convergence according to MSE, the network does not "find" the absolute minimum, or even a local one, but simply drifts to the boundary of the range, which points to a problem in the training algorithm...
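For reference, a minimal sketch of the committee check being discussed, written against the plain FANN C API rather than the f2M wrapper (the 30-30-16-1 topology just mirrors the one in this thread): build 16 nets, randomise the weights of each one, feed the same input vector and compare the outputs. If the initialisation really is independent, the outputs should differ.

//------------------------------------------------

#include <cstdio>
#include <cstdlib>
#include <ctime>
#include "fann.h"   // plain FANN C API; f2M wraps these calls

int main()
{
    const int committee_size = 16;
    std::srand((unsigned int)std::time(0));

    // One shared input vector for every net in the committee.
    fann_type input[30];
    for (int i = 0; i < 30; ++i)
        input[i] = (fann_type)(2.0 * std::rand() / RAND_MAX - 1.0);

    for (int n = 0; n < committee_size; ++n)
    {
        // 4 layers: 30 inputs, hidden layers of 30 and 16, 1 output.
        struct fann *ann = fann_create_standard(4, 30, 30, 16, 1);

        // Independent random weights in [-1, 1] for this net.
        fann_randomize_weights(ann, -1.0, 1.0);

        fann_type *out = fann_run(ann, input);
        std::printf("net %2d -> output % f\n", n, (double)out[0]);

        fann_destroy(ann);
    }
    return 0;
}

//------------------------------------------------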

 

By the way, I checked the initialisation of the initial weights (recorded immediately after creation). Everything works. The randomisation is there.

But here's a strange entry I found in the profile:

layer_sizes=31 31 17 2

And this at:

ann = f2M_create_standard (4, AnnInputs, AnnInputs, AnnInputs / 2 + 1, 1); with AnnInputs=30

For some reason the hidden layer sizes are one larger than declared. But what confuses me even more is the output layer size of "2" when "1" was declared!!!

 
Henry_White >> :

By the way, I checked the initialisation of the initial weights (recorded immediately after creation). Everything works. The randomisation is there.

But here's a strange entry I found in the profile:

layer_sizes=31 31 17 2

And this at:

ann = f2M_create_standard (4, AnnInputs, AnnInputs, AnnInputs / 2 + 1, 1); when AnnInputs=30

For some reason the hidden layer sizes are one larger than declared. But what confuses me even more is the output layer size of "2" when "1" was declared!!!

Everything is correct there. There are 4 layers in total: the input layer, 2 hidden layers, and the output layer. Each layer has a bias neuron (= 1), which is not counted in the "user" dimensions. This is taken from the FANN documentation.
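A small sketch against the plain FANN C API (not the f2M wrapper) showing where the extra neuron per layer in the saved profile comes from:

//------------------------------------------------

#include <cstdio>
#include "fann.h"

int main()
{
    // Same topology as above: 4 layers, 30 inputs, hidden 30 and 16, 1 output.
    struct fann *ann = fann_create_standard(4, 30, 30, 16, 1);

    // The "user" layer sizes, without the bias neurons: 30 30 16 1.
    unsigned int layers[4];
    fann_get_layer_array(ann, layers);
    std::printf("layers: %u %u %u %u\n", layers[0], layers[1], layers[2], layers[3]);

    // The saved profile counts one extra (bias) slot per layer, which is
    // why the file quoted above records layer_sizes=31 31 17 2.
    fann_save(ann, "ann_profile.net");

    fann_destroy(ann);
    return 0;
}

//------------------------------------------------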


Good luck.

 
Yes, I read about the bias... But I didn't see anything about it appearing in the net's profile. Maybe you're right and it really is an additional bias neuron. Anyway, that would logically explain the layer sizes being one larger... And there I was, glad that I had found a clue to the net "floating away" to the boundary of the range )))
 
Henry_White wrote >>

By the way, I checked the initialisation of the initial weights (recorded immediately after creation). Everything works. The randomisation is there.

Yes, there is randomisation of the weights. But still, I repeat:

Noted. The randomisation is from -1 to 1, yet in the network profile the weights are from -10.0e-003 to 10.0e-003

Example: (12, -7.35577636217311400000e-003) (13, 7.639700053449810000e-002)

Is this correct?

That's why I asked you to show your network profiles....
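One way to check this directly (a sketch against the plain FANN C API; with f2M one would look at the saved profile instead) is to dump the connection weights right after creation, before any training. Note that, according to the FANN documentation, a freshly created network starts with small random weights (roughly -0.1 to 0.1) unless fann_randomize_weights is called explicitly, which could account for profile values of the order of 1e-3 to 1e-2.

//------------------------------------------------

#include <cstdio>
#include <vector>
#include "fann.h"

int main()
{
    struct fann *ann = fann_create_standard(4, 30, 30, 16, 1);

    // Explicitly randomise in [-1, 1]; comment this line out to see the
    // default initialisation range instead.
    fann_randomize_weights(ann, -1.0, 1.0);

    // Pull the connections out of the net and print the first few weights
    // so the range can be compared with what the profile file shows.
    std::vector<struct fann_connection> conn(fann_get_total_connections(ann));
    fann_get_connection_array(ann, &conn[0]);

    for (unsigned int i = 0; i < conn.size() && i < 10; ++i)
        std::printf("(%u -> %u) % f\n",
                    conn[i].from_neuron, conn[i].to_neuron, (double)conn[i].weight);

    fann_destroy(ann);
    return 0;
}

//------------------------------------------------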

 
lasso >> :

Yes, there is randomisation of the weights. But still, I repeat:

Noted. The randomisation is from -1 to 1, yet in the network profile the weights are from -10.0e-003 to 10.0e-003

Example: (12, -7.35577636217311400000e-003) (13, 7.639700053449810000e-002)

Is this correct?

That's why I asked you to show your network profiles....

I checked - my values are different and spread almost evenly. Here is one of the initialisations:

connections (connected_to_neuron, weight)=(0, -9.946899414062500000000e-001) (1, -6.88415527343750000e-001) (2, 6.51367187500000000e-001) (3, -8.2067871093750000e-001) (4, 9.83703613281250000e-001) (5, -6.84936523437500000000e-001) (6, 3.6010742187500000000e-001) (7, 2.90527343750000e-001) (8, 7.546386718750000e-001) (9, -7.60314941406250000e-001) (10, -7.78137207031250000e-001) (11, 7.554321289062500000000e-001) (12, -6.61560058593750000e-001) (13, 1.657714843750000e-001) (14, 5.710449218750000e-001) (15, -1.54785156250000e-001) (16, 9.851074218750000e-002) (17, -5.269165039062500000000e-001) (18, 8.58947753906250000e-001) (19, -5.6652832031250000e-001) (20, 7.3144531250000e-001) (21, -8.80310058593750000e-001) (22, 6.823730468750000e-002)

................................................................................................................................................................................................................................

(42, -6.953735351562500000000e-001) (43, -7.0153808593750000e-001) (44, -7.38952636718750000e-001) (45, -3.44238281250000e-002) (46, -1.994018554687500000000e-001) (47, 2.73132324218750000e-001) (48, 4.53186035156250000e-001) (49, -4.709472656250000e-001) (50, -7.741699218750000e-001) (51, -9.54711914062500000000e-001) (52, 8.09509277343750000e-001) (53, 9.92370605468750000e-001) (54, -4.13391113281250000e-001) (55, 6.672973632812500000000e-001) (56, 9.59289550781250000e-001) (57, 1.0925292968750000e-001) (58, -3.02551269531250000e-001) (59, -5.29785156250000e-001) (60, 5.857543945312500000000e-001) (61, 7.999877929968750000e-001) (62, -1.11999511718750000e-001) (63, -8.0749511718750000e-001) (64, -7.08862304687500000000e-001) (65, 8.05236816406250000e-001) (66, 2.9260253906250000e-001) (67, 3.6163333300781250000e-001) (68, -2.99011230468750000e-001) (69, 6.248168945312500000000e-001) (70, -7.15759277343750000e-001) (71, -7.5720214843750000e-001) (72, -1.31774902343750000e-001) (73, 5.53894042968750000e-001) (74, -3.85009765625000000000000e-001) (75, -3.3361816406250000e-001) (76, -9.587402343750000e-001) (77, -3.70544433593750000e-001) (78, 8.2690429468750000e-001)


PS: True, I build the library myself. It is somewhat different from f2M. Although I liked the ideology of the author of f2M, and it led me to a similar approach. I only added re-initialization of the generator today - I don't know how much it affects things.


 
VladislavVG wrote >>

I checked - my values are different and spread almost evenly. Here is one of the initialisations:

Your weights are more than correct, but what are the values of the network outputs when the input vector is the same?

 
lasso >> :

Your weights are more than correct, but what are the network outputs for the same input vector?

I haven't fully tested everything yet - the optimisation is still running in the tester and I don't want to interrupt it.

I can attach the dll, mqh and EA - there are some differences from the originals; maybe that will make it quicker to get it working.