Market etiquette or good manners in a minefield - page 30

 
FION >> :

In your code, the comparison if(NormalizeDouble(SquarHid[i,j],4) != 0.0) doesn't work.

Thank you, but there is no "division by zero" error in my code either.
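
For what it's worth, a robustness note on that check: instead of rounding with NormalizeDouble() and testing exact equality with 0.0, the denominator can be compared against a small tolerance before dividing. A minimal MQL4-style sketch, assuming the value being tested is an accumulated sum of squared corrections (the function and variable names are illustrative, not taken from the actual code):

// Apply a per-weight correction only when the accumulated denominator is safely non-zero.
void UpdateWeight(double &w, double cor, double sqr)
  {
   double eps = 1.0e-10;            // small tolerance instead of an exact != 0.0 test
   if(MathAbs(sqr) > eps)
      w += cor / sqr;
   // otherwise skip the update: the denominator carries no information yet
  }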

 
Neutron >> :

Hi, Sergei!

But Grasn said that version 14 is buggy, the engine is weak and it makes errors in differential equations. The main thing is that it can't really be used. A Mathcad distribution kit costs 100 roubles.

It's not exactly "buggy". What I was saying is that in 14 the engine is a cheaper one, and it all comes down to the old argument about which engine is better: Waterloo Maple vs SciFace Software. In reality, some things are better and some are worse; you have to look at what you need. Here is an example: there is a classic problem about the motion of three planets. Under certain initial conditions, one planet captures another onto its orbit.


Here is the solution in Mathcad 13 (it captures):


Here is the solution in Mathcad 14 (no capture):



But version 14 has a lot of advantages, a lot. I'll probably have to switch to it, because 13 simply crashes under Vista. But even 13 has plenty of bugs, so what can you say about the older versions.

 

Hi, Sergei!

Those are some nice questions you've raised, beautiful ones even. Maybe it's a matter of accuracy in the numerical method: set the precision higher and the solutions will converge...

 
Neutron, if I introduce this adjusting factor (1 - j/N), should I apply it to all weights, or can I apply it, for example, to the hidden layer weights and not to the output neuron weights? Right now I only use it for the hidden layer weights. The weights have more or less stabilized at +/-50. I use the number of epochs as N.
 
Neutron wrote >>

Hi, Sergei!

Those are some nice questions you've raised, beautiful ones even. Maybe it's a matter of accuracy in the numerical method: set the precision higher and the solutions will converge...

No, it's not about accuracy.

 
paralocus wrote >>
Neutron, if I introduce this adjusting factor (1 - j/N), should I apply it to all weights, or can I apply it, for example, to the hidden layer weights and not to the output neuron weights? Right now I only use it for the hidden layer weights. The weights have more or less stabilized at +/-50. I use the number of epochs as N.

Try it both ways. I apply it to all weights without exception, but that's due to my love of simplicity in everything, the striving for uniformity. Maybe it will work differently for you. N is the number of epochs.
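
In case it helps to see it in code, here is a minimal MQL4-style sketch of applying the factor uniformly to every weight at the end of epoch j of N (the array names and the guard against a zero denominator are illustrative assumptions, not anyone's actual code):

// End-of-epoch update scaled by the adjusting factor (1 - j/N), applied to all weights alike.
// j is the current epoch number (0..N-1), N is the total number of epochs.
void ApplyCorrections(double &W[], double &COR[], double &SQR[], int nW, int j, int N)
  {
   double k = 1.0 - (double)j / (double)N;
   for(int i = 0; i < nW; i++)
      if(SQR[i] > 0.0)
         W[i] += k * COR[i] / SQR[i];
  }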

 

One thing confuses me:

Correction in this system happens all the time, whether it is needed or not. This is all the more strange for a network with binary inputs.

That is, if the sign is guessed correctly but there is a difference between the amplitude at the network output and the test signal, the correction takes place anyway. But is it really necessary?

After all, in this case the network is not mistaken...

 
Neutron >> :

... But that's due to my love of simplicity in everything, the striving for uniformity...

Not a thread but a master class! Thanks for a lot of useful stuff! I don't use ORO (error backpropagation), but the training recommendations work great on PNN as well. Thanks again, Neutron!

 

Thank you, rsi, for your kind words. Always happy to share knowledge!

paralocus wrote >>

One thing confuses me:

Correction in this system happens all the time, whether it is needed or not. This is all the more strange for a network with binary inputs.

I.e. if the sign is guessed correctly but there is a difference between the amplitude at the network output and the test signal, the correction takes place anyway. But is it really necessary?

After all, in this case the network is not mistaken...

I have the same behaviour.

And that is correct, because the network is constantly refining the accuracy of its prediction, not just the sign...
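
To make that concrete, here is a tiny illustrative sketch (not anyone's actual code): the learning error is the full amplitude difference, so a non-zero correction is produced even when the sign of OUT already matches the sign of Test.

// The error used for learning is the amplitude difference, not a sign test.
double LearningError(double Test, double OUT)
  {
   double Qs = Test - OUT;            // non-zero even when the sign is already guessed right
   return(Qs * (1.0 - OUT * OUT));    // scaled by the derivative of the squashing function
  }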

 

Hi Neutron! Anyway, no luck with the two-layer network yet.

I wrote a single-layer perceptron with ORO and ran it all day yesterday. It behaves strangely: sometimes it learns, sometimes it doesn't, and it depends catastrophically on the number of epochs.

My results so far are as follows: 8 epochs - the network does not learn, 12 epochs - the network learns, 13 epochs - the network does not learn.

In short, I can't boast about the results yet.

In any case, I'll describe the algorithm I've implemented. See if I've missed something.


1. The perceptron has D binary inputs, one of which is a constant +1.

2. The input series (BP) is the sequence of quote increments taken over the Open prices.

3. All weights are initialised with small random values from the +/-1 range before starting.

4. The length of the training vector is calculated as P = 4 * D*D/D = 4*D.

5. The training vector is fed to the network input and the network error is calculated as Qs = Test - OUT, where Test is the value of the series at sample n+1 (i.e. the next sample) and OUT is the network output at sample n.

6. To obtain the error value at the inputs, the network error Qs is multiplied by the derivative of the squashing function: Q = Qs*(1 - OUT*OUT).

7. The correction for each weight entering the neuron is calculated and accumulated during the epoch: COR[i] += Q*D[i].

8. Separately, SQR[i] += COR[i]*COR[i] is calculated and accumulated during the entire epoch for each weight of the neuron.

9. At the end of the epoch, an individual correction is calculated for each weight and added to it: W[i] += COR[i]/SQR[i] (a sketch of one epoch follows this list).
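
Here is a compact MQL4-style sketch of one training epoch as described in points 1-9. It is only an illustration under the assumptions stated in the list: In[n][] is the vector of binary inputs with In[n][0] = +1, Test[n] is the next increment of the series, th() stands for the squashing function, and all names and sizes are made up for the example.

// One training epoch of the single-layer perceptron with ORO (error backpropagation),
// following points 1-9 above.
#define D_INPUTS 8                        // D binary inputs; In[n][0] is the constant +1

double W[D_INPUTS];                       // weights, pre-initialised with small random values (point 3)
double COR[D_INPUTS];                     // corrections accumulated over the epoch (point 7)
double SQR[D_INPUTS];                     // squared corrections accumulated over the epoch (point 8)

double th(double x)                       // squashing function, tanh expressed through MathExp()
  {
   double e = MathExp(2.0 * x);
   return((e - 1.0) / (e + 1.0));
  }

void TrainOneEpoch(double &In[][D_INPUTS], double &Test[], int P)
  {
   int n, i;
   ArrayInitialize(COR, 0.0);
   ArrayInitialize(SQR, 0.0);

   for(n = 0; n < P; n++)                 // P = 4*D training samples (point 4)
     {
      double s = 0.0;                     // forward pass
      for(i = 0; i < D_INPUTS; i++)
         s += W[i] * In[n][i];
      double OUT = th(s);

      double Qs = Test[n] - OUT;          // output error (point 5)
      double Q  = Qs * (1.0 - OUT * OUT); // error brought back to the inputs (point 6)

      for(i = 0; i < D_INPUTS; i++)
        {
         double cor = Q * In[n][i];       // points 7-8, read here as accumulating the
         COR[i] += cor;                   // per-sample correction and its square
         SQR[i] += cor * cor;
        }
     }

   for(i = 0; i < D_INPUTS; i++)          // end-of-epoch update (point 9)
      if(SQR[i] > 0.0)
         W[i] += COR[i] / SQR[i];
  }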


I tried using the coefficient (1 - j/N), as well as randomizing weights whose absolute values grew above 20. Randomizing helps more.
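
For reference, a minimal sketch of that kind of re-randomization (the threshold of 20 comes from the post above; the +/-1 range and the MathRand()-based generator are illustrative assumptions):

// Re-initialise any weight whose absolute value has grown past the limit.
void RandomizeLargeWeights(double &W[], int nW, double limit)
  {
   for(int i = 0; i < nW; i++)
      if(MathAbs(W[i]) > limit)                       // e.g. limit = 20
         W[i] = 2.0 * MathRand() / 32767.0 - 1.0;     // small random value in [-1, +1]
  }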

P.S. I corrected a mistake in the text.