"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 69

 
Urain:

Just don't disappear: post your work as you go, ask questions, and maybe others will join in.

What's in the works I'll post as soon as it's ready.

I have a C++ implementation. All I need to do is rearrange and upgrade a couple of things.

 
ivandurak:

They could at least spit at me, but no, they just ignore me. I asked you for advice - yes or no.

If (YES) I go off to read some smart books;

else point me in the right direction and give me a kick to get going;

Nikolay, throw the clustering books from a while back in here - I'm too lazy to rummage through the library again.
 
ivandurak:

Good afternoon. This is not quite on topic.

I set myself a task: choose an adaptive time window for the current moment instead of fixing it in the parameters as, say, 10 bars, and then run back through the history to determine which cluster the selected window belongs to. Can neural networks handle that, or is it easier to do it some other way? And if it's no trouble, please point me to a book on networks, strictly at the "sausage level".

I don't know about the "sausage level"; as they say, you're welcome to what we have.

Clustering and classification are exactly the tasks that networks solve best.

Approximation comes out worse, though still acceptable, and extrapolation is a real struggle. Although it all depends on the implementation.

 
Urain:

I don't know about the "sausage level"; as they say, you're welcome to what we have.

Clustering and classification are exactly the tasks that networks solve best.

Approximation comes out worse, though still acceptable, and extrapolation is a real struggle. Although it all depends on the implementation.

Much obliged, I'll go off to gnaw the granite of science.
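Since clustering is named above as the task networks solve best, a minimal sketch may help: the classic network approach to clustering is a one-layer Kohonen map, where each neuron's weight vector is a cluster centre and a window of bars is assigned to the nearest centre. The C++ below is illustrative only (KohonenLayer, Classify and Train are invented names, not part of the engine discussed in this thread):

```cpp
// Minimal sketch: a one-layer Kohonen map clustering fixed-length windows.
// All names are illustrative; this is not the thread's engine.
#include <cstdlib>
#include <vector>

struct KohonenLayer
{
    std::vector< std::vector<double> > w;   // w[k] = centre of cluster k

    KohonenLayer(int clusters, int window)
        : w(clusters, std::vector<double>(window))
    {
        for (size_t k = 0; k < w.size(); ++k)
            for (size_t i = 0; i < w[k].size(); ++i)
                w[k][i] = (double)std::rand() / RAND_MAX;  // random centres
    }

    // index of the nearest centre (squared Euclidean distance)
    int Classify(const std::vector<double> &in) const
    {
        int best = 0; double bestD = 1e300;
        for (size_t k = 0; k < w.size(); ++k)
        {
            double d = 0;
            for (size_t i = 0; i < in.size(); ++i)
                d += (in[i] - w[k][i]) * (in[i] - w[k][i]);
            if (d < bestD) { bestD = d; best = (int)k; }
        }
        return best;
    }

    // one winner-takes-all step: pull the winning centre towards the sample
    void Train(const std::vector<double> &in, double rate)
    {
        int k = Classify(in);
        for (size_t i = 0; i < in.size(); ++i)
            w[k][i] += rate * (in[i] - w[k][i]);
    }
};

int main()
{
    KohonenLayer net(3, 10);              // 3 clusters, 10-bar window
    std::vector<double> window(10, 0.5);  // e.g. normalized closes
    for (int pass = 0; pass < 100; ++pass)
        net.Train(window, 0.1);
    return net.Classify(window);          // cluster of the current window
}
```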
 
yu-sha:
...

learning is an external process with respect to the network itself

...

Then, for completeness, I will add: learning is not only an external process but also, in essence, an insider one, since it often has access not only to the weights but also to intermediate calculation data and to properties of the topology.

And it is because of this property that the learning process is often counted among the network's internal processes.

So we arrive at the point that the network must reveal its internal information to the learning process while hiding it from the environment entirely.

The logical move in this situation is to wrap the network itself in a training shell when necessary,

so we have an external network object that has the methods:

  • initialization
  • workflow
  • learning

The workflow method gets the net as it is, while the learning method gets the net wrapped in a learning shell. The logical continuation of encapsulation would be to replace the two methods, workflow and learning, with a single net method taking a selection flag.
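A rough C++ illustration of this encapsulation idea; the identifiers (INet, LearnShell, NetObject, Mode) and the flag mechanics are my assumptions, not the project's actual interfaces:

```cpp
// Sketch of the "network wrapped in a learning shell" idea.
// INet, LearnShell, NetObject and Mode are illustrative names only.
struct INet                          // the bare network: computation only
{
    virtual void Calc() = 0;         // forward pass over already-set inputs
    virtual ~INet() {}
};

struct LearnShell : INet             // same interface, plus insider access
{
    INet &net;
    explicit LearnShell(INet &n) : net(n) {}

    void Calc()
    {
        net.Calc();                  // ordinary forward pass
        Adjust();                    // the shell may read weights, intermediate
    }                                // data and topology hidden from the outside

    void Adjust() { /* update the weights here */ }
};

struct NetObject                     // the single external network object
{
    enum Mode { WORK, LEARN };

    INet      &net;
    LearnShell shell;

    explicit NetObject(INet &n) : net(n), shell(n) {}

    void Init() { /* build the topology, set the weights */ }

    // one entry point with a selection flag instead of two separate methods
    void Run(Mode m)
    {
        if (m == LEARN) shell.Calc();
        else            net.Calc();
    }
};
```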

 
Urain:

so we have an external network object that has the methods:

  • initialization
  • workflow
  • learning

The workflow method gets the net as it is, while the learning method gets the net wrapped in a learning shell. The logical continuation of encapsulation would be to replace the two methods, workflow and learning, with a single net method taking a selection flag.

In the most general case, the network must have a single run() method.

It computes the output neurons and assumes that the inputs have already been initialized.

The "teacher" is a separate object that is initialized with the training parameters and controls the object being trained.

It's a good idea to have a validator that checks whether the object can be trained by the given method.

But all of this is difficult to formalize in the general case.

That's why, for the end user, it is possible to provide standard rigid constructions of the form Network + FitnessFunction + Teacher and allow only certain parameters to be set, for example the number of neurons in a layer.

the network must reveal its internal information to the learning process while hiding it from the environment

I agree. Some, but not all, learning methods need almost full access to the network's internals.
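A minimal C++ sketch of that rigid construction, with a run()-only network, a teacher carrying the training parameters, and a validator; every identifier here is an assumption for illustration, not the engine's real API:

```cpp
// Sketch of the rigid Network + FitnessFunction + Teacher construction.
// All identifiers are illustrative, not the engine's actual API.
#include <vector>

struct Network
{
    std::vector<double> weights;
    std::vector<double> in, out;

    // the single entry point: inputs are assumed to be initialized already
    void Run() { /* compute the output neurons */ }
};

struct FitnessFunction
{
    double Evaluate(Network &net)
    {
        net.Run();
        return 0.0;                       // score the outputs here
    }
};

struct Teacher
{
    int    epochs;                        // training parameters live here,
    double rate;                          // not inside the network

    Teacher(int e, double r) : epochs(e), rate(r) {}

    // validator: can this object be trained by this method at all?
    bool CanTrain(const Network &net) const { return !net.weights.empty(); }

    void Train(Network &net, FitnessFunction &ff)
    {
        if (!CanTrain(net)) return;
        for (int e = 0; e < epochs; ++e)
            ff.Evaluate(net);             // adjust net.weights by the score
    }
};
```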

 

yu-sha:

...

Some, but not all, learning methods need almost full access to the network's internals.

That's the catch: some methods require that the network not only be opened up to them but also be structured correctly for that method.

In other words, the method itself is written for a particular network. What's the point of implementing such methods inside a universal engine?

Better to let Andrew code all of that. For a universal engine I see one universal training method - GA.

In the remainder we have: the idea of a universal engine for any topology, the idea of a universal method of initialization for any topology, and GA as a universal tutor for all this.

On the plus side, it is easy to implement new types of neurons, whether standard ones not yet described or non-standard ones.

On the minus side, there is only one training method.

If someone can figure out how to fit other training methods into all this, that would be great, but for now it will be like this.
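What makes GA universal in this sense is that it needs nothing from the network except a flat weight vector and a fitness score - the topology never enters the algorithm. A hedged C++ sketch (Genome, Fitness and Evolve are invented names, and the fitness body is a toy stand-in):

```cpp
// Sketch: GA as the universal trainer. It sees only a flat weight vector
// (the genome) and a fitness value; the topology never enters the algorithm.
#include <algorithm>
#include <cstdlib>
#include <vector>

typedef std::vector<double> Genome;

// toy stand-in: in reality, load g into the network and score it on history
double Fitness(const Genome &g)
{
    double s = 0;
    for (size_t i = 0; i < g.size(); ++i) s -= g[i] * g[i];
    return s;
}

Genome Mutate(Genome g, double step)
{
    for (size_t i = 0; i < g.size(); ++i)
        g[i] += step * (2.0 * std::rand() / RAND_MAX - 1.0);
    return g;
}

Genome Evolve(int nWeights, int popSize, int generations)
{
    std::vector<Genome> pop(popSize, Genome(nWeights, 0.0));
    for (int gen = 0; gen < generations; ++gen)
    {
        // best genomes first
        std::sort(pop.begin(), pop.end(),
                  [](const Genome &a, const Genome &b)
                  { return Fitness(a) > Fitness(b); });
        // the worse half is replaced by mutated copies of the better half
        for (int i = popSize / 2; i < popSize; ++i)
            pop[i] = Mutate(pop[i - popSize / 2], 0.1);
    }
    return pop[0];                        // best genome of the final sort
}
```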

 
Urain:

In the remainder we have: the idea of a universal engine for any topology, the idea of a universal method of initialization for any topology, and GA as a universal tutor for all this.

On the plus side, it is easy to implement new types of neurons, whether standard ones not yet described or non-standard ones.

On the minus side, there is only one training method.

Thinking along the same lines, I came to approximately the same conclusion )).

Since GA becomes the main learning algorithm, there is an urgent need for parallel computing.

This is where GPUs come in.

Parallel Computing in MetaTrader 5 Using Standard Tools
  • 2010.11.24
  • Andrew
  • www.mql5.com
Time has been of constant value throughout human history, and we strive not to waste it. This article explains how to speed up your Expert Advisor if your computer has a multi-core processor, and the described method requires no knowledge of any language other than MQL5.
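The population structure of a GA parallelizes naturally even without a GPU, since each genome's fitness is independent of the others. A C++ sketch with std::thread (this thread-based split is my own illustration; the article above achieves parallelism differently, with MetaTrader's standard tools):

```cpp
// Sketch: evaluating a GA population's fitness in parallel. Each genome
// is independent, so threads only write to their own slots of `fit`.
#include <thread>
#include <vector>

typedef std::vector<double> Genome;

// toy stand-in: in reality this runs the network over history
double Fitness(const Genome &g)
{
    double s = 0;
    for (size_t i = 0; i < g.size(); ++i) s -= g[i] * g[i];
    return s;
}

void EvaluatePopulation(const std::vector<Genome> &pop,
                        std::vector<double> &fit)
{
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 2;                               // fallback if unknown
    fit.assign(pop.size(), 0.0);

    std::vector<std::thread> pool;
    for (unsigned t = 0; t < n; ++t)
        pool.emplace_back([&, t]()
        {
            for (size_t i = t; i < pop.size(); i += n)   // strided split
                fit[i] = Fitness(pop[i]);
        });
    for (size_t t = 0; t < pool.size(); ++t)
        pool[t].join();
}
```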
 
yu-sha:

Thinking along the same lines, I came to approximately the same conclusion ))

And since GA becomes the main learning algorithm, there is an urgent need for parallel computing.

That's where GPUs come in.

If you read carefully what I wrote earlier, you will notice that in my model of a universal network the processing itself is divided into layers; that is, neurons are combined into layers not formally (by ownership) but actually: the layer has memory while a neuron doesn't, and a neuron remains just an informational entity supplying the layer with information about where from and where to. So parallelism is laid down in the very structure of the engine - the information inside a layer is processed in parallel.

I have already done GA-trained neural networks, and the biggest performance loss was precisely in computing the network (especially on large networks). And, as a bit of advertising, I can say that for the UGA proposed by joo, training a neural network is a piece of cake.

But if the calculation of the FF can be parallelized as well (and for GA the network itself is part of the FF), then I'm all for it. Although I don't think it will be a simple task: the layers perform simple operations, while computing the FF may involve a fairly complex sequence.
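A rough C++ sketch of the layer-owns-the-memory idea described above: the layer holds the input and output buffers, a "neuron" is only a description of where to read and which weights to apply, so all outputs of a layer can be computed in parallel (OpenMP is used here purely for brevity; the struct names are illustrative):

```cpp
// Sketch: the layer owns the memory; a "neuron" is only a description of
// where to read and which weights to apply, so all outputs of one layer
// can be computed in parallel (OpenMP used here purely for brevity).
#include <cmath>
#include <vector>

struct Neuron                       // pure description, no memory of its own
{
    std::vector<int>    src;        // indices into the layer's input buffer
    std::vector<double> w;          // matching weights
};

struct Layer
{
    std::vector<double> in, out;    // the layer, not the neuron, holds memory
    std::vector<Neuron> cells;

    void Calc()
    {
        out.resize(cells.size());
        #pragma omp parallel for    // neurons within a layer are independent
        for (int n = 0; n < (int)cells.size(); ++n)
        {
            double s = 0;
            for (size_t i = 0; i < cells[n].src.size(); ++i)
                s += cells[n].w[i] * in[cells[n].src[i]];
            out[n] = std::tanh(s);  // activation
        }
    }
};
```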

 

If anyone manages to get through these, tell me what the gist is.

Combining adaptive control and deterministic chaos approaches to build efficient autonomous control systems

Autonomous adaptive control method.

Logical adaptive control automata with a finite number of inputs

In short, you can search here http://www.aac-lab.com/rus/

Adaptive Control Methods Group
  • www.aac-lab.com
A definition of the living: that which contains a control system within itself is alive! (A. A. Zhdanov) The group's main line of scientific work is research into the possibility of building adaptive control systems on bionic foundations. The concept for building such systems, developed by the department's staff, has been named the method...