"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 49

 

If MetaQuotes does not show any activity in the next couple of weeks, the project may be wound up, or moved to a commercial footing and to another venue.

For without MetaQuotes' oversight, the project loses its meaning as open source.

 
Vladix:

In general, as far as I can tell, we have a kind of deadlock: the people here are mostly independent operators, each capable of solving any problem on his own (from teaching himself a language to describing complex trading logic). A swan, a crayfish and a pike, for all their individual strengths, can of course be harnessed to one cart, but that was only enough for 50 pages of active discussion in this forum thread...

The point now is that the project needs a leader who:

  • first, is interested in the ultimate goal of the project;
  • second, can divide the project into stages, tasks and subtasks that any programmer in this thread could specify and complete in a reasonable time. It is also desirable to make the tasks and subtasks context-independent, i.e. abstracted from the rest of the code as much as possible;
  • third, keeps a finger on the pulse of the project: knowing which parts are ready and to what extent, and whether the solved subtasks can be integrated into a complete solution.
The ideal candidate would probably be someone from MetaQuotes with similar experience; it would also be an opportunity to test the TeamWox system on the MQL community, especially since Renat mentioned it once before.

All in all, what he said is true. Each of us could do this project on his own.

But as usual the devil is in the details.

From the material of these 50 pages of brainstorming we can conclude that there are ideas, and that a fairly sensible plan of attack can be made from them.

Although most of us are individualists, nobody is clearly resisting teamwork. After all, teamwork allows tasks to be parallelized, which speeds up the entire project.

And here come the details: teamwork in the classical sense assumes that a performer receives a task and completes it within a specified time. Then project progress could be planned from a single center and tasks distributed to performers. In reality, the performers are busy with their own affairs and cannot devote all their time to the project. Hence imbalances in the project's development are inevitable.

I think the way out could be a bulletin board where the manager posts tasks and performers take what they can handle, reporting progress and deadlines. If the terms of reference are clearly formalized, the project will be finished before it starts :)

And one more detail: it would be nice to have a list of commonly used names for variables and methods. Not that it is fundamental, but standardization will make things easier. Such a list is of course hard to compile, but some general naming principles can be worked out (or borrowed).

 
TheXpert:

If MetaQuotes doesn't show any activity in the next couple of weeks, the project can be scrapped, or moved to a commercial footing and elsewhere.

For without MetaQuotes' control, the project is meaningless as open source.

What you say is true.

PS: at least two of us are capable of doing everything on our own.

PPS: and as you correctly said, custom development is already commercial development.

Since time is being spent and only one person ends up with the source code, the conclusion is simple.

 

Okay, while we're looking for Santa Claus, I'll lay out all the material I've dug up in my head; maybe it can be turned into at least some terms of reference.


Grid engine
1. Grid initialization
2. Grid working stroke (forward pass)
3. Grid learning

1) The grid topology can be specified by binary fields;
more details here: http://cgm.computergraphics.ru/content/view/25 , section 7, "Direct coding".

Separation into grammar-based or direct coding is already a superstructure over the initialization method; in the end everything reduces to direct coding anyway.
So the topologies themselves (the lion's share of the difficulty in specifying a network) reduce to writing methods that build a direct-coding table.
The article says that backward links cannot be specified, but if a separate link matrix is created for each rank of delay operator, the problem disappears (although that matrix will be full, not triangular as at zero delay).
It follows that the superstructure over the direct-coding method must know which delay ranks the network uses.
Neuron types must also be specified in the superstructure (I haven't worked this point out yet; I'm not sure whether neuron types should be rigidly written and overloaded, or specified by some more flexible method).
We could settle on overloading rigid types for now and, if a soft-coding method appears, add it as one of the overloads.
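The direct-coding idea above (one binary link matrix per delay rank: triangular at rank 0, full at higher ranks so backward links become representable) can be sketched roughly like this. This is C++-style code close to MQL5 syntax; all names and the exact layout are my assumptions, not the project's.

```cpp
#include <vector>
#include <cstddef>

// Direct coding: one binary connection matrix per delay rank.
// Rank 0 holds links within the current iteration and stays triangular
// (a neuron may only read neurons computed before it); rank k > 0 holds
// links to outputs delayed by k iterations and may be a full matrix,
// which is what makes backward (recurrent) links expressible.
struct TopologyDirect
  {
   std::size_t                                neurons;
   // link[rank][to][from] == 1 means neuron "to" reads "from" at that delay
   std::vector<std::vector<std::vector<int>>> link;

   TopologyDirect(std::size_t n, std::size_t maxRank)
      : neurons(n),
        link(maxRank + 1,
             std::vector<std::vector<int>>(n, std::vector<int>(n, 0))) {}

   // Forward link inside the current iteration: only allowed "downward"
   bool AddForward(std::size_t from, std::size_t to)
     {
      if(from >= to) return false;        // keep the rank-0 matrix triangular
      link[0][to][from] = 1;
      return true;
     }

   // Backward/recurrent link through a delay of "rank" iterations
   bool AddDelayed(std::size_t from, std::size_t to, std::size_t rank)
     {
      if(rank == 0 || rank >= link.size()) return false;
      link[rank][to][from] = 1;           // full matrix: any direction allowed
      return true;
     }
  };
```

The superstructure (grammar-based or otherwise) would then only need to fill such tables, which matches the point that everything reduces to direct coding in the end.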

2) The working stroke is determined by the prescribed links (using data aggregation) and by the neuron types; I laid this out on page 5. There should be 4 data arrays outside the grid: Grid Inputs, Neuron Outputs, Weights, Grid Outputs. External access to Grid Inputs and Grid Outputs is needed for feeding examples and for working use of the grid. External access to Weights is needed for training. External access to Neuron Outputs is needed for transferring the calculation to the GPU. In principle, I think the data arrays should be external from the start, and these external data should then be aggregated into the network.
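A minimal sketch of that layout, assuming the four arrays live outside and are aggregated into the grid by pointer (C++-style, MQL5-like; the single fully connected pass and all names are illustrative assumptions, not the project's design):

```cpp
#include <vector>
#include <cstddef>

// The four data arrays live OUTSIDE the grid and are aggregated into it
// by pointer, so a trainer can mutate Weights and a GPU kernel can take
// NeuronOutputs without any copying. Names are hypothetical.
struct GridData
  {
   std::vector<double> gridInputs;     // examples / live data go here
   std::vector<double> neuronOutputs;  // handed to the GPU for calculation
   std::vector<double> weights;        // the trainer mutates these
   std::vector<double> gridOutputs;    // read back by the caller
  };

class Grid
  {
   GridData *d;                        // aggregation, not ownership
public:
   Grid() : d(0) {}
   void Aggregate(GridData *data) { d = data; }

   // Minimal working stroke: one fully connected pass,
   // weights laid out as weights[o*inputs + i], identity activation.
   void Calculate()
     {
      const std::size_t in  = d->gridInputs.size();
      const std::size_t out = d->gridOutputs.size();
      for(std::size_t o = 0; o < out; ++o)
        {
         double s = 0.0;
         for(std::size_t i = 0; i < in; ++i)
            s += d->weights[o * in + i] * d->gridInputs[i];
         d->neuronOutputs[o] = s;
         d->gridOutputs[o]   = s;
        }
     }
  };
```

Because the grid only holds a pointer, the same GridData can be shared with a GA trainer (through Weights) or a GPU dispatcher (through Neuron Outputs) without the grid knowing about either.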

3) Learning. I lean towards training with a GA, as a method that does not depend on the network topology; I suggest taking it as the basis and, where possible or necessary, overloading it with more suitable methods.

Those are the three tasks on the agenda.
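Why a GA suits any topology: it only ever sees a flat weight vector and a fitness score, never the network structure. A minimal sketch of one generation (keep the best half, refill with mutated copies); everything here is an illustrative assumption, not the project's trainer:

```cpp
#include <vector>
#include <cstdlib>
#include <algorithm>

// A candidate solution: the grid's Weights array plus its score.
struct Individual
  {
   std::vector<double> genes;
   double              fitness;
  };

double Rand01() { return std::rand() / (double)RAND_MAX; }

// One GA generation: evaluate, sort best-first, keep the top half
// untouched (elitism), refill the bottom half with mutated clones.
// The fitness functor is the ONLY place the network is ever touched,
// which is what makes the method topology-independent.
template<typename Fitness>
void Evolve(std::vector<Individual> &pop, Fitness f, double step)
  {
   for(std::size_t i = 0; i < pop.size(); ++i)
      pop[i].fitness = f(pop[i].genes);
   std::sort(pop.begin(), pop.end(),
             [](const Individual &a, const Individual &b)
               { return a.fitness > b.fitness; });        // maximize
   const std::size_t half = pop.size() / 2;
   for(std::size_t i = half; i < pop.size(); ++i)
     {
      pop[i] = pop[i - half];                             // clone a parent
      for(std::size_t g = 0; g < pop[i].genes.size(); ++g)
         pop[i].genes[g] += step * (2.0 * Rand01() - 1.0); // mutate
     }
  }
```

Swapping in backpropagation or any other trainer would then be an "overload" that replaces Evolve while keeping the same Weights array.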

A layer is a union of neurons that do not depend on one another within the same iteration and have the same type.


 

Separation of work is actually very realistic.

For example, there is the IEvolvable interface: an interface to the grid from the genetics side. So, for example, you and Andrei could quietly work away at the genetics, tied exclusively to this interface.

 

By the way, this is where multiple inheritance would really come in handy.
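What multiple inheritance would buy here: one grid class could present different facets to different teams (genetics, working stroke, GPU). In C++ this is direct; in MQL5, with single class inheritance, the same effect needs a chain of abstract bases or aggregation. A small illustration with hypothetical facet names:

```cpp
// One class, two independent contracts. The genetics team links against
// IEvolvableFacet, the workflow team against IComputableFacet; neither
// needs to know the other exists. All names are illustrative.
class IEvolvableFacet
  {
public:
   virtual ~IEvolvableFacet() {}
   virtual double Fitness() = 0;     // seen by the genetics team
  };

class IComputableFacet
  {
public:
   virtual ~IComputableFacet() {}
   virtual void Calculate() = 0;     // seen by the workflow team
  };

// With multiple inheritance the grid implements both contracts at once.
class Net : public IEvolvableFacet, public IComputableFacet
  {
   double out;
public:
   Net() : out(0.0) {}
   void   Calculate() { out = 42.0; }
   double Fitness()   { return out; }
  };
```

Without multiple inheritance, one of the two contracts has to be reached through a member object and forwarding methods, which is exactly the extra plumbing the remark is about.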

________________

Agreed, I'll try to write the interfaces today.

By the way, gpwr could be the project manager. I can take on part of it.

In principle, the project can be started.

 
Ugh. It's going downhill.
 

This is a reminder to myself and others about the types of data binding.

//+------------------------------------------------------------------+
//| Example of Association, Aggregation, Composition                 |
//+------------------------------------------------------------------+
/*
   * Association denotes a link between objects. Aggregation and composition are special cases of association.
   * Aggregation assumes that the objects are connected by a "part-of" relationship.
     Aggregation can be multiple,
     i.e. the same object can simultaneously be aggregated into several classes or objects.
   * Composition is a stricter variant of aggregation. In addition to the part-of requirement,
     the "part" may not belong to different "owners" at the same time, and it ends its existence together with its owner.
*/
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class Base
  {
public:
                     Base(void){};
                    ~Base(void){};
   int               a;
  };
//+------------------------------------------------------------------+

class A_Association
  {
public:
                     A_Association(void){};
                    ~A_Association(void){};
   void              Association(Base *a_){};
   // With association, the data of the linked object
   // are accessible through the object pointer only inside the method
   // to which the pointer was passed.
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class A_Aggregation
  {
   Base             *a;
public:
                     A_Aggregation(void){};
                    ~A_Aggregation(void){};
   void              Aggregation(Base *a_){a=a_;};
   // With aggregation, the data of the linked object
   // are accessible through the object pointer in any method of the class.
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class A_Composition
  {
   Base             *a;
public:
                     A_Composition(void){ a=new Base();};
                    ~A_Composition(void){delete a;};
   // With composition, the object becomes part of the class.
  };
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
void OnStart()
  {
   Base a; 
   A_Association b;
   b.Association(GetPointer(a));
  }
 

There is a nuance in the working-stroke task: since the data processing methods depend on the neuron type, they should be part of an object of that neuron type.

The nuance is in what to consider a layer. With the formulation I gave, it would be difficult to organize the calculation on the GPU.

If we settle on TheXpert's formulation, there would be problems keeping the GPU loaded.

On the whole, I lean towards the compromise (combined) formulation; it has fewer problems, although it inherits the GPU-loading problem.

A layer is a union of neurons that do not depend on one another within the same iteration and have the same type.

What are your thoughts?

PS: are there any arguments against it?
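That compromise layer definition can be sketched as a leveling pass over the rank-0 direct-coding matrix: a neuron's level is the longest same-iteration path leading to it, and grouping by (level, type) gives the batches a GPU could run in parallel. A hypothetical illustration, not project code:

```cpp
#include <vector>
#include <cstddef>

// Partition neurons into layers per the compromise definition:
// a layer is a set of neurons that do not feed each other within the
// current iteration AND share the same type. link[to][from] is the
// rank-0 direct-coding matrix (triangular: from < to), type[i] is an
// assumed neuron-type id. Returns a layer id per neuron; neurons with
// equal ids can be computed together on the GPU.
std::vector<int> LayerOf(const std::vector<std::vector<int>> &link,
                         const std::vector<int> &type)
  {
   const std::size_t n = link.size();
   std::vector<int> level(n, 0);
   // Triangular matrix means one forward sweep computes longest paths.
   for(std::size_t to = 0; to < n; ++to)
      for(std::size_t from = 0; from < to; ++from)
         if(link[to][from] && level[from] + 1 > level[to])
            level[to] = level[from] + 1;
   // Pack (level, type) pairs into distinct layer ids.
   std::vector<int> layer(n);
   for(std::size_t i = 0; i < n; ++i)
      layer[i] = level[i] * 100 + type[i];   // assumes < 100 neuron types
   return layer;
  }
```

Two independent neurons of different types land in different layers here, which is exactly the GPU-loading cost the compromise formulation inherits: more, smaller batches.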

 
Urain:

There is a nuance in the working-stroke task: since the data processing methods depend on the neuron type, they should be part of an object of that neuron type.

1) The nuance is in what to consider a layer. With the formulation I gave, it would be difficult to organize the calculation on the GPU.

2) If we settle on TheXpert's formulation, there would be problems keeping the GPU loaded.

1) Why?

2) Why?