"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 80

 

Yes!!! The topic is very interesting. I would like to cover neural network optimization using natural algorithms, taking a swarm of bees - or, as it is also called, a particle swarm - as an example. When I read the article about this method, it struck me what it resembles: it is much like OpenCL, but directly applicable to a neural network, so why not try to implement it, with an additional graphics DLL such as OpenGL, especially since the tester allows it. If anyone is interested, I can give a link to an interesting article; there is also a link to the source code there.
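For reference, the core of the particle swarm method is a simple per-particle velocity and position update over the network's weight vector. A minimal sketch in MQL5, assuming a flat weight array per particle (the names Particle and UpdateParticle and the standard coefficients w, c1, c2 are illustrative, not from the project):

// One particle swarm optimization (PSO) step over a weight vector.
// All names and constants here are illustrative.
struct Particle
  {
   double pos[];      // current weights
   double vel[];      // current velocity
   double best[];     // personal best weights
   double best_fit;   // fitness at the personal best
  };

void UpdateParticle(Particle &p,const double &gbest[],
                    const double w=0.729,const double c1=1.49,const double c2=1.49)
  {
   for(int i=0; i<ArraySize(p.pos); i++)
     {
      double r1=MathRand()/32767.0;   // uniform [0,1]
      double r2=MathRand()/32767.0;
      p.vel[i]=w*p.vel[i]
              +c1*r1*(p.best[i]-p.pos[i])    // pull toward personal best
              +c2*r2*(gbest[i]-p.pos[i]);    // pull toward swarm best
      p.pos[i]+=p.vel[i];
     }
  }

Since every particle's update is independent, the loop parallelizes naturally, which is what invites the comparison with OpenCL.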

 
MetaDriver:
But this approach does not contain even the germ of an attempt to genetically cross topologies.

I have not heard of such a thing. Would crossing different topologies lead to anything better than the evolution of the same topology?

 
gpwr:

I have not heard of such a thing. Would crossing different topologies lead to anything better than the evolution of the same topology?

Mules and hinnies are more suitable for some kinds of work than donkeys and horses. That is just a thought for now; the idea has yet to mature.
 
gpwr:

I have not heard of such a thing. Would crossing different topologies lead to anything better than the evolution of the same topology?

It happens. For example, a Kohonen layer combined with a perceptron.
 
her.human:
It seems more logical this way: inputs take their data from wherever they need, and let whoever wants take the outputs.

You can do it this way; it does not change the essence of the model. Although we would need to benchmark which option is faster.

gpwr:
Layers are needed because in some networks different layers process inputs differently and have different connections between neurons within a layer. Frankly, I do not see the practicality of the goal of building a universal network. There are many networks, each with its own nuances (the neuron functions in different layers, their connections, weight training, etc.). Describing them all in one model seems to me impossible, or inefficient. Why not create a library of different networks instead?

Each neuron can be given its own type (not to mention each layer), set through the extensible functionality of descendants of the base class CProcessing. By the way, I thought up the idea but made the implementation on my knees; I wanted to discuss it first and then implement something already thought through. I assumed it would be clear that CProcessing should have a Trening function, and that for each neuron type it can be different, depending on the derivative and who knows what else. And there can be both forward propagation and back propagation.

Along with the universal topology we get a universal learning scheme: what needs to be done to a neuron for it to learn is described in the neuron itself, and the learning process is a single standardized pass through the grid.

Have you seen anywhere, say, radial basis neurons embedded in an MLP, with everything still training normally? Here it is feasible.
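A minimal sketch of what such a polymorphic neuron might look like, assuming only what the post names (CProcessing and Trening); the derived classes, member names and exact signatures are illustrative:

// Base class: every neuron type supplies its own activation and its
// own learning rule. Only CProcessing and Trening come from the post;
// the rest is an assumption for illustration.
class CProcessing
  {
protected:
   double            m_out;    // last output, kept for the learning step
public:
   // forward pass: map the weighted input sum to the neuron output
   virtual double    Processing(double in)  { m_out=in; return(m_out); }
   // learning step: scale the incoming error by this neuron's derivative
   virtual double    Trening(double error)  { return(error); }  // identity: f'(x)=1
  };

// A sigmoid ("perceptron") neuron.
class CPerceptron : public CProcessing
  {
public:
   virtual double    Processing(double in)  { m_out=1.0/(1.0+MathExp(-in)); return(m_out); }
   virtual double    Trening(double error)  { return(error*m_out*(1.0-m_out)); }  // f'=f(1-f)
  };

// A Gaussian radial basis neuron: it can sit in the same grid next to
// perceptrons, because the universal pass only calls the virtual methods.
class CRadialBasis : public CProcessing
  {
protected:
   double            m_in;     // raw input, needed for the derivative
public:
   virtual double    Processing(double in)  { m_in=in; m_out=MathExp(-in*in); return(m_out); }
   virtual double    Trening(double error)  { return(error*(-2.0*m_in)*m_out); }  // d/dx e^(-x^2)
  };

With this arrangement the trainer never needs to know the grid type: it walks the neurons calling Processing on the forward pass and Trening on the backward pass, so an MLP with radial basis neurons embedded in it trains through the same code path.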

 

The idea of a universal description (including an exhaustive description of the topology) is promising, at least because it will make it possible (finally!) to abstract away from the very notion of a "grid type" - a rather artificial concept, if you think about it.

I am not against giving some specific topologies specific names (purely for convenience). But only as long as these names do not begin to hypnotize, creating barriers to perceiving the essence and interfering with cross-breeding ideas. And that happens at every step, in all areas of life.

 

Reading your comments on the topic, I came to a conclusion: I want to see a beta of the project. When will it be? And an article with a detailed description.

 
Urain:
...
So there may be feedback connections somewhere?
 
GKS:

Reading your comments on the topic, I came to a conclusion: I want to see a beta of the project. When will it be? And an article with a detailed description.

For now it is a working discussion; time will tell.

MetaDriver:

The idea of a universal description (including an exhaustive description of the topology) is promising, at least because it will make it possible (finally!) to abstract away from the very notion of a "grid type" - a rather artificial concept, if you think about it.

I am not against giving some specific topologies specific names (purely for convenience). But only as long as these names do not begin to hypnotize, creating barriers to perceiving the essence and interfering with cross-breeding ideas. And that happens at every step, in all areas of life.

It would be possible to make bootstrap templates by name, e.g. an "MLP" template that sets the type of every neuron to "perceptron".
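A sketch of such a named template, assuming the CProcessing/CPerceptron hierarchy sketched above (the function name and flat array layout are illustrative):

// Illustrative "MLP" bootstrap template: fills a flat neuron array
// with perceptrons only. The caller owns the objects and must delete
// them later.
void TemplateMLP(CProcessing *&neurons[],const int &layer_sizes[])
  {
   int total=0;
   for(int l=0; l<ArraySize(layer_sizes); l++)
      total+=layer_sizes[l];
   ArrayResize(neurons,total);
   for(int i=0; i<total; i++)
      neurons[i]=new CPerceptron();  // "MLP" = every neuron is a perceptron
  }

A hybrid template would differ only in which constructor is called at each index, which is exactly the "radial basis neurons embedded in an MLP" case mentioned earlier.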

 
GKS:

Reading your comments on the topic, I came to a conclusion: I want to see a beta of the project. When will it be? And an article with a detailed description.

Heh. I am already beginning to feel a little guilty, despite a mild accompanying irritation.

Author, write moar!

;)