How do you work with neural networks? - page 7

 
Swetten:

Let me remind you of the original statement: an NS (neural network) and a GA (genetic algorithm) are completely different things, unrelated to each other in any way.

One does not change into the other in any way.


Why not: ORO (backpropagation) <-> GA ;).

Good luck.

 

VladislavVG:

Why not: ORO <-> GA ;).

And if you remove ORO, will GA work without it?

Let me ask: what exactly would that GA be, then?

Well, well. They do teach you harshly over there.

Good luck. :)

 
LeoV:

It is not about dynamics. The dynamics can be anything - as long as the patterns that the network found during the training period work in the future. That's the problem .....)))

I meant the dynamics, as long as you feed the same pair into the net... As for the other regularities, I agree... but where do you find ones that keep working all the time? ))))
 
Swetten:

And if you remove ORO, will GA work without it?

Well, well. They do teach you harshly over there.

Good luck. :)

Explain exactly what you mean by "remove ORO" and "GA will work without it".

If you mean replacing or extending the ORO learning algorithm with GA, which is what I meant, then yes, it will work; if you mean something else, then I must have misunderstood. In many cases the conjugate gradient method will also work, and it is often much more efficient than ORO. It all depends on the functional being minimised.
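The conjugate gradient method mentioned here can be illustrated with a minimal sketch. Everything below is invented for the example (the 2-D quadratic, the matrix A and vector b); it is just a bare Fletcher-Reeves iteration with exact line search, not anyone's actual trading or NS code:

```python
# Minimal sketch: Fletcher-Reeves conjugate gradient on a 2-D quadratic
# f(x) = 0.5 * x^T A x - b^T x, whose gradient is A x - b.
# A and b are made up; the minimiser is the solution of A x = b.
A = [[2.0, 0.0], [0.0, 20.0]]
b = [2.0, -40.0]          # => minimum at x = [1.0, -2.0]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def conjugate_gradient(x, steps=10):
    g = [gi - bi for gi, bi in zip(matvec(A, x), b)]   # gradient at x
    d = [-gi for gi in g]                              # initial direction
    for _ in range(steps):
        Ad = matvec(A, d)
        alpha = dot(g, g) / dot(d, Ad)                 # exact line search
        x = [xi + alpha * di for xi, di in zip(x, d)]
        g_new = [gi + alpha * adi for gi, adi in zip(g, Ad)]
        if dot(g_new, g_new) < 1e-18:                  # converged
            return x
        beta = dot(g_new, g_new) / dot(g, g)           # Fletcher-Reeves
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

x_min = conjugate_gradient([0.0, 0.0])
print(x_min)  # converges to [1.0, -2.0]
```

On an n-dimensional quadratic, conjugate gradients reach the minimum in at most n steps, which is why the method can beat ORO-style gradient descent when the minimised functional is well behaved.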

And as for being "taught harshly", I can answer in kind: you seem to be severely undereducated ;).

Good luck with that.

PPS Can you find articles on modifying learning algorithms with GA? There are some in Russian ;).

 
VladislavVG:

Explain exactly what you mean by "remove ORO" and "GA will work without it".

That is something to ask you. And Reshetov. Otherwise, how are we to understand your "Why not: ORO <-> GA"?

What I understand personally: GA is just an optimization mechanism, an optimizer of anything. GA lets you avoid brute-force enumeration of variants.

It has nothing to do with neural networks as such. Well, it does, but only as a mechanism for searching for the optimal weights.

Therefore a GA is not, and never will be, an NS.

 

VladislavVG:

If you mean replacing or extending the ORO learning algorithm with GA, which is what I meant, then yes, it will work; if you mean something else, then I must have misunderstood. In many cases the conjugate gradient method will also work, and it is often much more efficient than ORO. It all depends on the functional being minimised.

And as for being "taught harshly", I can answer in kind: you seem to be severely undereducated ;).

I'm racking my brain trying to understand your first sentence.

Judging by the options you have listed, it could be parsed like this: "If you mean replacing the ORO learning algorithm with the use of GA, which is what I was talking about, then it will".

It will what? Work instead of the NS?

 
Swetten:

That is something to ask you. Otherwise, how are we to understand your "Why not: ORO <-> GA"?

What I understand personally: GA is just an optimization mechanism, an optimizer of anything. GA lets you avoid brute-force enumeration of variants.

It has nothing to do with neural networks as such. Well, it does, but only as a mechanism for searching for the optimal weights.

Therefore a GA is not, and never will be, an NS.

What do you mean by "optimization"? If it is just enumerating variants, that is not quite the same thing. It is MT that is confusing you.

Now about GA: it is a search method; in the case of training a network, we are searching for the minimum of some functional. This is where mistakes are often made. During network training, ORO, GA, gradient methods and annealing (there is such a method, similar in spirit to GA) all try to find an extremum. Which method is more effective depends on the functional and on the quality criterion (i.e. the criterion by which the best variants are selected). GA is the most universal of these methods. None of them guarantees finding the global extremum.
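The "annealing" mentioned here is simulated annealing. A minimal sketch of the idea, with the two-basin objective and the cooling schedule invented for the example: the method sometimes accepts uphill moves, which is how it can escape a local extremum that a pure gradient method would get stuck in.

```python
import math
import random

random.seed(7)

# Invented objective with two basins: a shallow local minimum near
# x = 1.55 and a deeper global minimum near x = -2.25.
def f(x):
    return 0.1 * (x * x - 4.0) ** 2 + x

def anneal(f, x=2.0, temp=2.0, cooling=0.998, steps=4000):
    best_x, best_f = x, f(x)
    fx = best_f
    for _ in range(steps):
        cand = x + random.gauss(0.0, 0.5)      # random neighbour
        fc = f(cand)
        # Always accept improvements; accept uphill moves with a
        # probability that shrinks as the "temperature" cools.
        if fc < fx or random.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling
    return best_x

best = anneal(f)   # started in the shallow basin at x = 2.0
print(best)        # ends up near the deeper basin around x = -2.25
```

Note that this matches the caveat above: the acceptance rule makes escaping local extrema likely, but nothing here guarantees the global one.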

Using GA you can, for example, select the network architecture at the same time, i.e. include the architecture among the optimized parameters and define a quality criterion (a fitness function, in GA terms). There are further possibilities as well. And if necessary, you can use ORO together with GA.

Good luck.
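The idea of putting the architecture itself among the optimized parameters can be sketched as follows. Everything here is a made-up toy (the XOR task, the chromosome encoding, the mutation rates), not anyone's actual package: the chromosome carries both the hidden-layer size and the weights, and the fitness function is simply the network's error.

```python
import math
import random

random.seed(3)

# Toy task and encoding, invented for the illustration.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
MAX_HIDDEN = 4
N_WEIGHTS = 3 * MAX_HIDDEN + MAX_HIDDEN + 1   # hidden w+b, output w, output b

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(chrom, x):
    n_hidden, w = chrom                        # architecture + weights together
    h = [sigmoid(w[3 * i] * x[0] + w[3 * i + 1] * x[1] + w[3 * i + 2])
         for i in range(n_hidden)]             # unused genes stay dormant
    out = sum(w[3 * MAX_HIDDEN + i] * h[i] for i in range(n_hidden))
    return sigmoid(out + w[-1])

def fitness(chrom):                            # quality criterion: MSE on XOR
    return sum((predict(chrom, x) - y) ** 2 for x, y in XOR) / len(XOR)

def random_chrom():
    return (random.randint(1, MAX_HIDDEN),
            [random.uniform(-5.0, 5.0) for _ in range(N_WEIGHTS)])

def mutate(chrom):
    n_hidden, w = chrom
    if random.random() < 0.1:                  # occasionally mutate architecture
        n_hidden = random.randint(1, MAX_HIDDEN)
    return (n_hidden, [wi + random.gauss(0.0, 0.3) for wi in w])

pop = [random_chrom() for _ in range(40)]
for _ in range(200):                           # elitist GA: keep the best half
    pop.sort(key=fitness)
    pop = pop[:20] + [mutate(random.choice(pop[:20])) for _ in range(20)]

best = min(pop, key=fitness)
```

The selection loop never looks inside `predict`; swapping a different architecture or error measure into the fitness function is exactly the flexibility being described above.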

 
Swetten:

It will what? Work instead of the NS?

The training method will not "work instead" of anything. After all, ORO does not "work instead" of the NS either. I answered in more detail above.

Good luck.

 
VladislavVG:

What do you mean by "optimization"? If it is just enumerating variants, that is not quite the same thing. It is MT that is confusing you.

Now about GA: it is a search method; in the case of training a network, we are searching for the minimum of some functional. This is where mistakes are often made. During network training, ORO, GA, gradient methods and annealing (there is such a method, similar in spirit to GA) all try to find an extremum. Which method is more effective depends on the functional and on the quality criterion (i.e. the criterion by which the best variants are selected). GA is the most universal of these methods. None of them guarantees finding the global extremum.

Using GA you can, for example, select the network architecture at the same time, i.e. include the architecture among the optimized parameters and define a quality criterion (a fitness function, in GA terms). There are further possibilities as well. And if necessary, you can use ORO together with GA.

Good luck.

Ugh! You are completely confused!

You should re-read this thread.

Reshetov claims: "It is quite possible, since we are dealing with a black box, that instead of a neural network the sealed package uses a genetic algorithm, or maybe some regression or some other extrapolation method".

https://www.mql5.com/ru/forum/108709/page4

That is what I am trying to find out: what exactly it was. And ORO was mistaken for GRNN.

 

VladislavVG:

The training method will not "work instead" of anything. After all, ORO does not "work instead" of the NS either.

That's what I've been trying to tell you. :)

And to someone else as well.

It's just that your original message was wrong.