Market etiquette or good manners in a minefield - page 27

 
:-)
 

Are a hundred epochs enough if only the sign is being predicted?

One more thing: a freshly initialized network needs N epochs of training. But when the network is already trained, does each subsequent step (after the next prediction) also need N epochs, or is one enough?

 

Good question, paralocus.

I can only give recommendations. According to my experimental data, the number of training epochs for an NS with binary inputs ranges from 10 to 100 iterations, for 2 and 8 neurons in the hidden layer respectively. For analog inputs it is 300-500. Everything should be tested experimentally.

N epochs are needed each time.
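To make "N epochs each time" concrete, here is a minimal sketch of the retraining loop: after every new bar (and prediction), the network is put through the full N epochs again, not a single pass. The function names and the delta-rule trainer are placeholders for illustration, not code from the thread's attachment.

```python
# Sketch: full N-epoch retraining after every step (illustrative, not the thread's code).
N_EPOCHS = 100  # within Neutron's 10-100 range for binary inputs


def train_epoch(weights, samples, lr=0.01):
    # One pass of simple delta-rule training for a single linear neuron.
    for x, target in samples:
        y = sum(w * xi for w, xi in zip(weights, x))
        err = target - y
        for i, xi in enumerate(x):
            weights[i] += lr * err * xi
    return weights


def retrain(weights, samples):
    # The whole N epochs are repeated at every step, even on a trained net.
    for _ in range(N_EPOCHS):
        weights = train_epoch(weights, samples)
    return weights
```

With a toy sample set where the target simply equals the input, `retrain([0.0], [([1.0], 1.0), ([-1.0], -1.0)])` pulls the weight most of the way toward 1.0 over the 100 epochs.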

 

I see.

Here's the grid code:

the code didn't fit here, so it's in the attachment

To my shame, I'm still confused by a simple question: calculating the epoch length.

It seems clear: P = k*w^2/q, where k = (2...4), w is the number of synapses, and q is the number of inputs. Apparently I have some terminological confusion in my head about what to call an input and what to call a synapse.

Could you clarify it one more time? It always happens in life that the simplest things are the hardest to understand -:)

Files:
nero2.mqh  7 kb
 

It seems to be working -:)


 

A synapse (w) is what sits to the left of a neuron; an input (d) is the number of synapses at each neuron of the first (hidden) layer. For a single-neuron NS, the number of synapses equals the number of inputs. For an NS of two layers, with two neurons in the first (hidden) layer and one in the second (output) layer: w = 2d + 3. The neuron input with a constant bias of +1 counts as a regular input. For such a network with d = 100, the number of synapses is w = 2*100 + 3 = 203. The optimal training-vector length is P = k*w^2/d = 2*(2d+3)^2/d ≈ 2*(2d)^2/d = 8d = 8*100 = 800 samples.
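The sizing above can be checked with a few lines of arithmetic. This is just the formula from the post written out; the variable names are mine:

```python
# Neutron's sizing formula for the 2-hidden-neuron, 1-output network.
d = 100              # inputs per hidden neuron
w = 2 * d + 3        # each hidden neuron has d+1 weights (incl. the +1 bias input),
                     # the output neuron has 2+1, so w = 2(d+1) + 3 = 2d + 3 = 203
k = 2                # empirical factor, k = 2..4
P = k * w * w // d   # exact value 824; the post rounds via 8d to ~800 samples
print(w, P)
```

The exact value is 824, which the post rounds to 8d = 800 via the approximation (2d+3) ≈ 2d.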

 

Thank you!

Reworked the inputs to binary, and everything went much better! Now I'm running the grid in the tester with different combinations of inputs. What a great job... -:)
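For readers wondering what "binary inputs" might mean here: one common approach (my assumption, the post doesn't show its transform) is to feed the network the sign of each bar-to-bar increment instead of the raw value:

```python
# Hedged sketch: binarize a price series into the signs of its increments.
# This is one common convention, not necessarily the exact transform used in the thread.
def binarize(series):
    # +1 for an up move, -1 for a down (or flat) move
    return [1 if b > a else -1 for a, b in zip(series, series[1:])]


print(binarize([1.30, 1.31, 1.29, 1.33]))  # [1, -1, 1]
```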

 

Guten morgen,

I'd like to share my joy: the first decent result, not least thanks to some advice from Neutron earlier in the thread... The blue bars are the new data; the ordinate is in pips, the abscissa is 10,000 bars of EURUSD60.

Long positions:

Short positions are not as impressive:

Neural network, 13 inputs, no hidden layer. Genetic algorithm training

 

Neutron, looks like you were right about the 25 readiness count... -:)

Something is wrong: my network is not learning. After 100 epochs, the weights are almost the same as those the network was initialised with.

On a related note, another silly question:

Is the learning vector the same in each epoch or not?

Anyway, it turns out that the ratio of the accumulated correction to the accumulated squared correction tends to zero very quickly, so after about the 10th iteration learning practically stops.

 
YDzh wrote >>

Neural network, 13 inputs, no hidden layer. Genetic algorithm training

Awesome, YDzh!

My results are much more modest. You should put it on a demo account and see what the grid brings in.