"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 74

 
yu-sha:

~6 MBytes.

Attached ))

And now tell me, please: how are you going to train it?

Training it is no problem at all; I wrote above that 100,000 parameters is quite a manageable task for UGA. I won't say it's trivial, but it's feasible.

And the size is quite normal; no need to worry about that any more. As they say, the question is closed.

 
Urain:

Training it is no problem at all; I wrote above that 100,000 parameters is quite a manageable task for UGA. I won't say it's trivial, but it's feasible.

And the size is quite normal; no need to worry about that any more. As they say, the question is closed.

Even if the discreteness of each parameter is set to 0.1, the total number of combinations in an exhaustive search is 10^100,000.

You have a very rosy idea of GA.
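For reference, the arithmetic behind that figure, assuming each parameter is searched over a range of width 1 (so a step of 0.1 gives 10 distinct values per parameter):

N = \underbrace{10 \times 10 \times \cdots \times 10}_{100\,000\ \text{parameters}} = 10^{100\,000}

A different range width only changes the base, not the astronomical order of magnitude.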

 
yu-sha:

Even if the discreteness of each parameter is set to 0.1, the total number of combinations in an exhaustive search is 10^100,000.

You have a very rosy idea of GA.

Mine are not rosy ideas but practical knowledge from using this algorithm. UGA is not a binary algorithm that has to partition the search space into graphs.

UGA searches all dimensions in parallel, automatically reducing its step as it goes, which lets it reach a robust result in a reasonable time; a network needs no more than that for training, since anything beyond that point is just overfitting. Usually the result is reached in 10,000-50,000 FF runs, regardless of the number of parameters.
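To make the description concrete, here is a minimal C++ sketch of a real-coded GA whose mutation step shrinks automatically as generations pass. This is not UGA itself; the population size, the decay factor 0.99, and the toy fitness function are placeholder choices of mine:

#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Toy fitness to maximize; a stand-in for "NN error on the training sample".
double fitness(const std::vector<double> &w) {
    double s = 0.0;
    for (double x : w) s -= x * x;              // optimum at w = 0
    return s;
}

int main() {
    const int kDim = 100, kPop = 50, kBudget = 10000;   // ~10,000 FF runs, as above
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> init(-1.0, 1.0);
    std::normal_distribution<double> gauss(0.0, 1.0);

    std::vector<std::vector<double>> pop(kPop, std::vector<double>(kDim));
    for (auto &ind : pop)
        for (double &w : ind) w = init(rng);

    double step = 1.0;                          // mutation scale, shrinks over time
    for (int evals = 0; evals < kBudget; evals += kPop) {
        std::vector<double> fit(kPop);
        for (int i = 0; i < kPop; ++i) fit[i] = fitness(pop[i]);

        std::vector<int> idx(kPop);             // rank individuals, best first
        std::iota(idx.begin(), idx.end(), 0);
        std::sort(idx.begin(), idx.end(),
                  [&](int a, int b) { return fit[a] > fit[b]; });

        // Replace the worst half with mutated copies of the best half;
        // every coordinate is perturbed at once, i.e. the search moves
        // through all dimensions in parallel rather than bit by bit.
        for (int i = 0; i < kPop / 2; ++i) {
            pop[idx[kPop / 2 + i]] = pop[idx[i]];
            for (double &w : pop[idx[kPop / 2 + i]]) w += step * gauss(rng);
        }
        step *= 0.99;                           // automatic step reduction
    }
    return 0;
}

Note that the fixed quantity in the sketch is the FF evaluation budget, not the dimensionality, which is exactly the point being made above.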

 
yu-sha:

~6 MBytes

A 100x1000x1 network, fully connected.

Attached ))

And now tell me, please: how are you going to train it???

I know of only one kind of network that can be this large, but such networks are not trained and do not need to be stored: they are built in a single pass over the training sample and simply memorize it.

Such a network could not be trained with GA by all the computers of all the sci-fi movies combined: the dimensionality of the search space is 100,000.

Or rather, I believe that such a network would simply memorize the training sample, and instead of generalization you would get a snapshot of the entire history.

You have to be more careful with your choice of architecture. :)

It's clear that no one needs such a network (it's useless). That's why we are talking about a free (scalable) architecture.

For the sake of experiment, I'll try to train it with a GA on my weak N450 hardware. What should I train it on: how many examples, what error target, and so on?

P.S. While it trains, I'll study your code.
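For scale, the disputed dimensionality is easy to recompute. A small C++ check for a fully connected 100x1000x1 network; the thread does not say how the ~6 MB attachment encodes the weights, so only the parameter count below is firm:

#include <cstdio>

int main() {
    const long in = 100, hidden = 1000, out = 1;
    long weights = in * hidden + hidden * out;   // 101,000 connections
    long biases  = hidden + out;                 // 1,001 more if biases are used
    std::printf("weights: %ld, with biases: %ld\n", weights, weights + biases);
    std::printf("as 8-byte doubles: ~%.2f MB\n",
                (weights + biases) * 8.0 / (1024.0 * 1024.0));
    return 0;
}

It prints 101,000 weights (102,001 parameters with biases), matching the claim of a roughly 100,000-dimensional search space; as raw doubles that is under 1 MB, so the rest of the attachment's size presumably comes from its storage format.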

 
her.human:

It's clear that no one needs such a network (it's useless). That's why we are talking about a free (scalable) architecture.

For the sake of experiment, I'll try to train it with a GA on my weak N450 hardware. What should I train it on: how many examples, what error target, and so on?

Which GA are you going to train it with?
 
Urain:

Mine are not rosy ideas but practical knowledge from using this algorithm. UGA is not a binary algorithm that has to partition the search space into graphs.

UGA searches all dimensions in parallel, automatically reducing its step as it goes, which lets it reach a robust result in a reasonable time; a network needs no more than that for training, since anything beyond that point is just overfitting. Usually the result is reached in 10,000-50,000 FF runs, regardless of the number of parameters.

Acknowledged. The part highlighted in bold is the key: a robust result (not necessarily the absolute maximum).

The main thing is that it is possible to train networks of huge size at all. Whether those huge networks are needed is left to the conscience of each individual. :)

 
joo:

Acknowledged. The part highlighted in bold is the key: a robust result (not necessarily the absolute maximum).

The main thing is that it is possible to train networks of huge size at all. Whether those huge networks are needed is left to the conscience of each individual. :)

Well, so as not to paint an entirely rosy picture, I should add that although the number of FF runs needed to reach a robust solution barely grows, the time to find the result does increase, because the algorithm has to handle parameter arrays that are orders of magnitude larger and therefore performs more operations. But, first, that time grows only linearly; and second, in tests the main stumbling block has always been the speed of the FF, and in particular the speed of the NN inside the FF, so a serious GPU acceleration of the NN should also speed up the GA's overall search for a solution.
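One way to formalize that trade-off (my notation, not from the thread): with D parameters, a roughly fixed FF budget N_FF of 10,000-50,000 runs, a per-run fitness cost t_FF, and a per-individual GA overhead that is linear in D,

T_{\text{total}} \approx N_{\text{FF}} \cdot (t_{\text{FF}} + c \cdot D)

Since t_FF (the NN pass over the sample) normally dominates the c * D term, accelerating the NN on the GPU cuts T_total almost proportionally.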

 
Urain:
Which GA are you going to train it with?

The point is not which one. I'm just wondering: will a GA cope on such weak hardware?

A lightweight version of joo's algorithm.

 
her.human:

The point is not which one. I'm just wondering: will a GA cope on such weak hardware?

A lightweight version of joo's algorithm.

At some point I meant to write a tester for training a small network with a GA, like the one I drew above (6 weights, 3 neurons, the XOR problem), but I can't get around to it :)
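In case someone wants to pick that tester up, a C++ sketch of the fitness side. My assumptions: tanh hidden units, a linear output, and three bias terms added on top of the six weights (the drawing referenced above may not have had biases):

#include <cmath>
#include <cstdio>
#include <vector>

// 2-2-1 network: 6 connection weights plus 3 biases packed into one vector,
// so a GA can treat the whole net as a single point in parameter space.
double forward(const std::vector<double> &p, double x1, double x2) {
    double h1 = std::tanh(p[0] * x1 + p[1] * x2 + p[6]);
    double h2 = std::tanh(p[2] * x1 + p[3] * x2 + p[7]);
    return p[4] * h1 + p[5] * h2 + p[8];
}

// Fitness for the GA: negative MSE over the four XOR patterns.
double fitness(const std::vector<double> &p) {
    const double X[4][2] = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    const double Y[4]    = { 0,      1,      1,      0    };
    double mse = 0.0;
    for (int i = 0; i < 4; ++i) {
        double e = forward(p, X[i][0], X[i][1]) - Y[i];
        mse += e * e / 4.0;
    }
    return -mse;
}

int main() {
    // A hand-made near-solution (an OR-like unit minus an AND-like unit),
    // shown only to demonstrate the encoding; the GA would search for p itself.
    std::vector<double> p = {4, 4, 4, 4, 0.53, -0.53, -2, -6, -0.02};
    std::printf("fitness: %g\n", fitness(p));
    return 0;
}

Plugging this fitness into the GA loop sketched earlier gives exactly the kind of tester described.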
 
her.human:

The point is not which one. I'm just wondering: will a GA cope on such weak hardware?

A lightweight version of joo's algorithm.

Well, you can estimate it roughly: run the FF once over the history, measure the time, and multiply by 10,000. That gives a fairly realistic picture of what you'd get if you launched the training.


And by the way... what is there to lighten in it? :)
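A sketch of that estimate in C++ (runFitnessOnHistory is a placeholder name for whatever one FF pass actually does):

#include <chrono>
#include <cstdio>

// Placeholder: one fitness-function pass, i.e. the net run over the whole history.
void runFitnessOnHistory() { /* ... */ }

int main() {
    auto t0 = std::chrono::steady_clock::now();
    runFitnessOnHistory();                         // time a single FF run
    auto t1 = std::chrono::steady_clock::now();
    double oneRun = std::chrono::duration<double>(t1 - t0).count();
    // 10,000 runs is the lower end of the budget quoted earlier for UGA.
    std::printf("one FF run: %.3f s, ~10,000 runs: %.1f s\n", oneRun, oneRun * 1e4);
    return 0;
}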