"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 70

 

How I escaped in time :))) ...

yu-sha:

Thanks for the lib :)

 
TheXpert:

How I escaped in time :))) ...

I haven't posted everything yet, I'm still looking for the good stuff, it'll be more fun later :)
 
TheXpert:

How I escaped in time :))) ...

Thanks for the lib :)

Not for the sake of advertising, but for a good cause: https://www.mql5.com/ru/code/712 - a native XML parser.

I have been using it for a long time and have fixed all the bugs.

XML parser
  • votes: 7
  • 2011.11.29
  • yu-sha
  • www.mql5.com
A library for parsing XML documents, implemented in MQL5 without third-party libraries.
 
I should have fixed all the bugs in my system, but I will not be able to fix them later:

Not for the sake of advertising, but for a good cause: https://www.mql5.com/ru/code/712 - a native XML parser.

I have been using it for a long time and have fixed all the bugs.

Yes, I have already downloaded it, but I just ran it and there was no reaction, so I left it for later to sort out.

I am going to finish working through the literature on adaptive control systems, and then I will get to it.

 
yu-sha:

Tomorrow, from my work computer, I will copy over my work on storing network prototypes, setting up training tasks, and storing the solutions found

???

 
Urain:

Well, the GPU actually entered my model at the stage of NS computation. If you read carefully what I wrote earlier, you will have noticed that in my model of a universal network the processing itself is divided into layers: neurons are combined into layers not formally (by membership) but actually (the layer has memory, while a neuron does not; a neuron only needs to be an informational entity telling the layer where to take data from and where to put it). So the parallelism is defined by the very structure of the engine - the information inside a layer is processed in parallel. I have already trained an NS with a GA, and the biggest performance loss was precisely in computing the NS (especially on large networks). And, as a bit of advertising, I can say that for the UGA proposed by joo, training an NS is a piece of cake.

But if the FF calculations can be parallelized as well (and for the GA the NS is part of the FF), then I am all for it. Although I do not think it will be a simple task: the layers perform simple operations, while computing the FF may involve a rather complex sequence of steps.

What remains, then, is: the idea of a universal engine for any topology, the idea of a universal initialization method for any topology, and the GA as a universal trainer for all of this.

We can stop there for now, imho.
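
To make the layer-centric idea concrete, here is a minimal sketch (class and method names such as CLayer, Init and Calculate are assumptions for illustration, not the project's actual code): the layer owns all the memory, a neuron is only an index into the layer's arrays, and each iteration of the per-neuron loop touches only its own weights and its own output cell, which is exactly what makes it easy to parallelize (OpenCL, cloud agents, and so on).

class CLayer
  {
private:
   int               m_count_in;    // number of inputs to the layer
   int               m_count_out;   // number of neurons in the layer
   double            m_in[];        // input values, shared by all neurons of the layer
   double            m_w[];         // weights, row-major: m_count_out x m_count_in
   double            m_out[];       // output values, one per neuron
public:
   void              Init(const int n_in,const int n_out)
     {
      m_count_in =n_in;
      m_count_out=n_out;
      ArrayResize(m_in,n_in);
      ArrayResize(m_w,n_in*n_out);
      ArrayResize(m_out,n_out);
     }
   void              SetInputs(const double &src[])
     {
      ArrayCopy(m_in,src,0,0,m_count_in);
     }
   // The outer loop is the parallel part: iteration n reads only row n of the
   // weights and writes only m_out[n], so all iterations are independent.
   void              Calculate()
     {
      for(int n=0;n<m_count_out;n++)
        {
         double s=0.0;
         for(int i=0;i<m_count_in;i++)
            s+=m_w[n*m_count_in+i]*m_in[i];
         m_out[n]=1.0/(1.0+MathExp(-s));   // sigmoid activation, just as an example
        }
     }
   double            GetOutput(const int n) const { return(m_out[n]); }
  };

A full network is then just an ordered set of such layers, and a GA only needs to fill m_w[] of every layer from a chromosome.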

The standard GA and the cloud would help to parallelize the FF calculation. Especially since Renat promised:


Renat (Admin), 2011.10.18 10:50
The current state of affairs with the runtime speed of MQL5 programs is quite good, plus we are preparing a new version of the compiler with optimizations enabled, which will give a multifold increase in speed.

In parallel with the neural network development, we will also expand the functionality of agents to support mathematical calculations and the exchange of large volumes of data (files).

But as the saying goes, you wait three years for what has been promised.

So, for the time being, joo's algorithm can be optimized specifically for neural networks - it will work even faster. I hope Andrey will not mind.

 
her.human:

???

a) parsing: XmlParser

b) https://www.mql5.com/ru/forum/4956/page32#comment_110831

We will move on from there as more questions come up.

 
yu-sha:

a) parsing: XmlParser

b) https://www.mql5.com/ru/forum/4956/page32#comment_110831

We will move on from there as more questions come up.

Can you give me a small example of how to use it in MT5?
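
A minimal illustrative sketch of what such usage might look like in an MT5 script. The file reading below uses ordinary MQL5 file functions; the include name, the CXmlDocument class, its CreateFromText method and the file name network.xml are assumptions made only for illustration - the real interface should be taken from the library source at https://www.mql5.com/ru/code/712.

#include <XmlParser.mqh>                      // assumed include name

void OnStart()
  {
   // read the whole XML file from MQL5\Files as plain text (standard MQL5 calls)
   int h=FileOpen("network.xml",FILE_READ|FILE_TXT|FILE_ANSI);
   if(h==INVALID_HANDLE)
     {
      Print("cannot open network.xml, error ",GetLastError());
      return;
     }
   string text="";
   while(!FileIsEnding(h))
      text+=FileReadString(h)+"\r\n";
   FileClose(h);

   // hand the text to the parser; class and method names are assumed here
   CXmlDocument doc;
   if(!doc.CreateFromText(text))
     {
      Print("XML parsing failed");
      return;
     }
   // ... walk the document tree from here: layers, neurons, links ...
  }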

 

If we treat network training as a micro level (independent cycles of array processing in the GA, calculation of individual neurons of the network, etc.) and a macro level (the whole FF), then there are no questions or problems with the first one - everything parallelizes perfectly and will work fine on a GPU.

But there is a problem with the macro level. First of all, I suspect it is not feasible because of the limits on the amount of information that can be processed on the GPU. This could be worked around by using the built-in tester and the cloud (each macro level would be handed to a separate agent and processed there at the micro level - if the host allows it, of course). But we have no tools to control the tester from the outside in order to use an external GA.

So we will have to limit ourselves to acceleration at the micro level. That acceleration will still be very decent, since both the nets and the GA abound in calculations that are independent of each other.

As for UGA itself, apart from adapting it for OpenCL there is practically nothing left to improve (perhaps some code fragments, but that will not make much difference - thanks to everyone who took part in the algorithm discussion in the article's thread). The only thing left is to try to select UGA settings specifically for network training.
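
To make the micro level concrete, here is a minimal sketch of an OpenCL kernel held as an MQL5 string, in which every work item computes exactly one neuron; no item reads another item's output, which is why this part parallelizes so cleanly. The kernel name and layout are illustrative; on the host side it would be compiled and run through MQL5's OpenCL functions (CLContextCreate, CLProgramCreate, CLKernelCreate, CLExecute).

// Returns the source of a kernel where work item n computes neuron n of a layer.
string LayerForwardKernel()
  {
   string src;
   src ="__kernel void layer_forward(__global const float *in,  \r\n";
   src+="                            __global const float *w,   \r\n";
   src+="                            __global float *out,       \r\n";
   src+="                            const int n_in)            \r\n";
   src+="{                                                      \r\n";
   src+="   int n=get_global_id(0);     /* this item's neuron */\r\n";
   src+="   float s=0.0f;                                       \r\n";
   src+="   for(int i=0;i<n_in;i++)                             \r\n";
   src+="      s+=w[n*n_in+i]*in[i];    /* weighted sum */      \r\n";
   src+="   out[n]=tanh(s);   /* nothing shared between items */\r\n";
   src+="}";
   return(src);
  }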

 
joo:

If we treat network training as a micro level (independent cycles of array processing in the GA, calculation of individual neurons of the network, etc.) and a macro level (the whole FF), then there are no questions or problems with the first one - everything parallelizes perfectly and will work fine on a GPU.

But there is a problem with the macro level. First of all, I suspect it is not feasible because of the limits on the amount of information that can be processed on the GPU. This could be worked around by using the built-in tester and the cloud (each macro level would be handed to a separate agent and processed there at the micro level - if the host allows it, of course). But we have no tools to control the tester from the outside in order to use an external GA.

So we will have to limit ourselves to acceleration at the micro level. That acceleration will still be very decent, since both the nets and the GA abound in calculations that are independent of each other.

As for UGA itself, apart from adapting it for OpenCL there is practically nothing left to improve (perhaps some code fragments, but that will not make much difference - thanks to everyone who took part in the algorithm discussion in the article's thread). The only thing left is to try to select UGA settings specifically for network training.


Modern GPUs have at least 1 GB of RAM.

I can hardly imagine a training sample larger than that.

The macro level is fine by me - I have checked it )

For the GA it is reasonable to use the following architecture: the GA itself on the CPU, and the heavy FF computations on the GPU.
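
As a sketch of that split (all names here are hypothetical, and error checks are omitted): the GA loop and its genetic operators stay on the CPU, while one CLExecute call evaluates the fitness of the whole population on the GPU, assuming a kernel named "ff_batch" that writes one fitness value per chromosome. The training sample would likewise be written into a read-only GPU buffer once, which is where the remark about GPU memory above comes in.

void RunGaOnGpu(const string cl_source,const int pop_size,const int genes)
  {
   int ctx=CLContextCreate(CL_USE_GPU_ONLY);                 // pick a GPU device
   int prg=CLProgramCreate(ctx,cl_source);
   int krn=CLKernelCreate(prg,"ff_batch");                   // hypothetical kernel name
   int buf_pop=CLBufferCreate(ctx,(uint)(pop_size*genes*sizeof(float)),CL_MEM_READ_ONLY);
   int buf_fit=CLBufferCreate(ctx,(uint)(pop_size*sizeof(float)),CL_MEM_WRITE_ONLY);

   float population[];                                       // chromosomes, row-major
   float fitness[];
   ArrayResize(population,pop_size*genes);
   ArrayResize(fitness,pop_size);
   // ... fill 'population' with random genes on the CPU here ...

   CLSetKernelArgMem(krn,0,buf_pop);
   CLSetKernelArgMem(krn,1,buf_fit);
   int n_genes=genes;
   CLSetKernelArg(krn,2,n_genes);                            // chromosome length

   uint offset[1]={0};
   uint work[1];
   work[0]=(uint)pop_size;                                   // one work item per chromosome

   for(int epoch=0;epoch<100;epoch++)
     {
      CLBufferWrite(buf_pop,population);                     // send the whole population once
      CLExecute(krn,1,offset,work);                          // all FFs evaluated in parallel
      CLBufferRead(buf_fit,fitness);                         // fitness values back to the CPU
      // ... selection, crossover, mutation on the CPU (e.g. UGA) update 'population' ...
     }

   CLBufferFree(buf_pop);
   CLBufferFree(buf_fit);
   CLKernelFree(krn);
   CLProgramFree(prg);                                       // release GPU resources
   CLContextFree(ctx);
  }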