"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 21

 
sergeev:

Andrew's (TheXpert's) sentiments about the utopianism of this idea should be put aside until a verdict is announced by the hired specialist, the project administrator and the final council of participants. In the meantime, the theme remains in force.

And as a consequence, you will have to adjust your goals to achieve at least something similar.

Damn, you don't seem to understand me at all. I'm generally on the down-to-earth, pessimistic side.

Yes, the library should be universal, like a Lego set - you can assemble anything you want.

That's not a problem! But train each one separately. You can design for functionality, but joint training is either hell or a clumsy monster that will spend an hour learning XOR.

Yes, the library should be easy to use, which calls for a template Expert Advisor. Easy enough that even a non-programmer can use it.

Theoretically it is possible to create a couple of template EAs and automate their data feed; that is for simplification and for non-programmers. But you will still have to prepare the data yourself.

Yes, the library should have a universal interface both on the input and the output, so that you can connect anything to it, from indicator values to...

It's already universal! An array of doubles -- what could be more universal? And the main thing in a committee is that the dimensions match when docking.

 
Urain:
Are we talking about the same thing? The target function is the one that computes the network's output error.
The target function is what we are aiming for. In the standard case -- minimizing the root-mean-square error of the output relative to the benchmark.
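
For reference, that standard objective written out (notation mine, not from the thread: y_k are the network outputs, d_k the benchmark values, N the number of outputs):

    E_{RMS} = \sqrt{ \frac{1}{N} \sum_{k=1}^{N} (y_k - d_k)^2 }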
 
TheXpert:
....

It's already universal! An array of doubles -- what could be more universal? And the main thing in a committee is that the dimensions match when docking.

That's what I'm saying. And for greater versatility, GA fits just right:

The engine algorithm is simplified for clarity:

We want to add an MLP network, so we ask how many weights it will have given 20 values at the input, 10 neurons in the first hidden layer, 10 in the second, and 1 at the output.

It tells us: 244.

We want to add one more network (of whatever kind), so we ask ..... again and it answers: 542.

so 244+542=786.

We also want to optimize SL and TP -- two more parameters -- so 786+2=788.

We also want to optimize MACD; it has two parameters, so 788+2=790.

Ok, resize the array to 790.

and voila! We will optimize 790 parameters in GA.

and then you can add network types and anything else as much as you like, keeping to the common interface standard.

something like this.
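
A minimal sketch of that flat-array scheme, with an assumed module interface (names are mine, not the project's; C++ for illustration, since MQL5 is close to it):

    #include <cstdio>
    #include <utility>
    #include <vector>

    // Assumed interface: every optimizable thing reports its parameter count
    // and later receives its slice of the flat GA vector.
    struct IModule {
       virtual int  ParamCount() const = 0;
       virtual void SetParams(const double *p) = 0;
       virtual ~IModule() {}
    };

    // An MLP derives its weight count from its layer sizes.
    struct MLP : IModule {
       std::vector<int> layers;                      // e.g. {20, 10, 10, 1}
       explicit MLP(std::vector<int> l) : layers(std::move(l)) {}
       int ParamCount() const override {
          int n = 0;
          for (size_t i = 1; i < layers.size(); ++i)
             n += (layers[i - 1] + 1) * layers[i];   // +1: one bias per neuron
          return n;
       }
       void SetParams(const double *) override { /* copy the weights in */ }
    };

    // Plain trade parameters (SL, TP, indicator settings) fit the same interface.
    struct Scalars : IModule {
       int n;
       explicit Scalars(int count) : n(count) {}
       int  ParamCount() const override { return n; }
       void SetParams(const double *) override { /* copy the values in */ }
    };

    int main() {
       std::vector<IModule *> modules;
       modules.push_back(new MLP({20, 10, 10, 1}));  // first net
       modules.push_back(new Scalars(2));            // SL and TP
       modules.push_back(new Scalars(2));            // MACD parameters
       int total = 0;
       for (size_t i = 0; i < modules.size(); ++i)
          total += modules[i]->ParamCount();
       std::printf("GA chromosome length: %d\n", total);  // resize the GA array to this
       // On each GA pass: walk the flat double array and hand every module
       // the slice it asked for, in registration order.
       for (size_t i = 0; i < modules.size(); ++i)
          delete modules[i];
       return 0;
    }

The exact weight counts will differ from the numbers quoted above; the point is only that every module answers the same question and the GA sees one flat array.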

 
Urain:

Let's make a graph engine, make a universal network (a couple of variants), and then invite an expert to explain how to implement the algorithms of such networks in it and to come up with a unification of the learning algorithms.

We can make it even simpler.

In this situation we go from the particular to the general, attempting to abstract towards universal models.

1. Draw (on paper, plus a verbal algorithm of the mathematical model) the networks we can implement (their topologies and the training methods for them).
2. Find common docking points in the drawn models, in order to create abstract engine classes.

We need to look at as many models as possible, so that we can pull the basic building blocks out of them.

This abstraction should be viewed through the lens of human-language concepts ("create", "learn", "correct the error"). First, this will make the model intuitive for the ordinary user; second, such concept-functions can easily be extended to new topologies and methods.
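
For illustration only -- a minimal C++ sketch under assumed names, not the project's actual classes -- those docking points might look like this:

    #include <vector>

    // The docking points, phrased as the human-language concepts.
    class INeuralNet {
    public:
       // "create": build the net from a topology description
       virtual void Create(const std::vector<int> &topology) = 0;
       // "run": inputs in, outputs out; when docking nets in a committee,
       // one net's output size must match the next net's input size
       virtual std::vector<double> Forward(const std::vector<double> &in) = 0;
       // "learn" / "correct the error": adjust weights towards the benchmark
       virtual void Learn(const std::vector<double> &in,
                          const std::vector<double> &target) = 0;
       virtual ~INeuralNet() {}
    };

An MLP, a Kohonen map or a recurrent net would each implement these points in its own way; the engine only ever talks to the base class.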

 
Another imho. It is unlikely you will find a specialist consultant from outside who meets your requirements. At best, you will get bogged down in negotiations with specialists at your own level who are trying to sell their knowledge, overstating their level to match what you demanded during the negotiations. If there is a budget, no matter what, it is more efficient to divide it among yourselves at the end of the project, either equally or not, based on a subjective evaluation by MetaQuotes.
 
sergeev:

We can make it even simpler.

In this situation we go from the particular to the general, attempting to abstract towards universal models.

1. Draw (on paper, plus a verbal algorithm of the mathematical model) the networks we can implement (their topologies and the training methods for them).
2. Find common docking points in the drawn models, in order to create abstract engine classes.

We need to look at as many models as possible, so that we can pull the basic building blocks out of them.

This abstraction should be viewed through the lens of human-language concepts ("create", "learn", "correct the error"). First, this will make the model intuitive for the ordinary user; second, such concept-functions can easily be extended to new topologies and methods.


I drew a universal neuron on page 12, but something is missing there.

Namely, the memory-receiving cells in the activator.

But I'm not going to deal with training methods. Let the mathematicians come up with those :o)

 
Mischek:
Another imho. It is unlikely that you will find a specialist consultant from outside who meets your requirements. If there is a budget, no matter what, it is more efficient to divide it among yourselves at the end of the project, either equally or not, based on a subjective evaluation by MetaQuotes.

Hold off on the budget; I personally chose the first option in the poll, and the brains here are drying up not for the money.

And as for an outside specialist, it depends on where you look; at the very least, it should be a mathematician by education.

Not someone mathematics-adjacent, but a mathematician.

 
Urain:

Hold off on the budget; I personally chose the first option in the poll, and the brains here are drying up not for the money.

And as for a specialist from outside, it depends on where you look; at the very least, it should be a mathematician by education.

Not someone mathematics-adjacent, but a mathematician.

First try to formulate a general, or nearly general, consensus on the requirements for the specialist.
 

sergeev:

2. Find common docking points in the drawn models to create abstract engine classes.

I drew it and posted sample code. All of the simple models map onto these entities.
 
TheXpert:
By the way, Vladimir, would you like to set out your view on networks in more detail?

In my opinion, networks are divided into modeling and classifying ones. Modeling networks try to predict the next price on the basis of some input data, for example past prices. Such model networks cannot be applied to the market, IMHO. Classifying networks try to classify the input data, i.e. Buy/Sell/Hold, or trend/flat, or something else. That is what I am interested in.

The most promising classifying network, in my opinion, is the SVM with a proper transformation of the input data. I would say that the network itself is not as important as the transformation of the input data; that is, instead of an SVM we could use something else, such as an RBN.

For the last two years I have been working on networks modeled on the brain (by the way, MLP and most other networks have nothing in common with the brain). The brain has several layers that transform the input data (sound, image, etc.), followed by some sort of classification engine like an SVM. The transformation of data in the brain is usually done by filtering it and reducing its dimensionality. The filter features are trained without a teacher, by Hebbian competitive learning or other self-learning methods. Classification of the transformed data is done with a teacher (feedback). I will write more details later.

MLP
Generalized MLP
Modular networks
Self-organizing maps
Neural gas
Competitive learning - promising
Hebbian - promising
FFCPA
Radial basis networks
LSTM
Time lagged recurrent
Partially recurrent
Wavelet networks
Fully recurrent
Neuro-fuzzy
Support Vector Machines - promising
Custom architectures - promising
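
A minimal sketch of the unsupervised filter training mentioned above, using Oja's Hebbian rule (toy data and all names are mine; C++ for illustration):

    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    // Oja's rule: w += eta * y * (x - y * w), with y = w . x.
    // On zero-mean data the weight vector converges to the first principal
    // component: a dimensionality-reducing "filter" learned without a teacher.
    int main() {
       std::srand(42);
       const int    dim = 2;
       const double eta = 0.01;                       // learning rate
       std::vector<double> w(dim, 0.1);               // initial filter weights
       for (int t = 0; t < 20000; ++t) {
          // toy zero-mean input, stretched along the (1, 1) direction
          double s = (std::rand() / (double)RAND_MAX - 0.5) * 2.0;
          double n = (std::rand() / (double)RAND_MAX - 0.5) * 0.2;
          double x[2] = { s + n, s - n };
          double y = w[0] * x[0] + w[1] * x[1];       // neuron output
          for (int i = 0; i < dim; ++i)
             w[i] += eta * y * (x[i] - y * w[i]);     // Hebbian growth + weight decay
       }
       std::printf("learned filter: (%.3f, %.3f)\n", w[0], w[1]);  // ~(0.707, 0.707)
       return 0;
    }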