"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 62

 

Tomorrow I will copy over here, from my work computer, my work on storing network prototypes, setting training goals, and storing the solutions found.

Everything in xml

I think the resource cost of parsing xml files is greatly exaggerated.

Don't forget that it's a one-time procedure.

Moreover, writing a native xml parser in MQL5 is a trivial task compared with the complexity of the neural network project itself.

 
Urain:

The first way.

The second alternative was discarded (I don't remember exactly where, somewhere in the first pages), because in the future even people who don't know what F7 is will be counted in the "user" category.

Besides, this engine is meant to be easily extensible, and anyone who does know what F7 is for can add another network type for himself, or invent his own.

I have only one question here, owing to my lack of competence in network typology.

Can any type of network be unambiguously defined by a connection table? I mean, can we create a universal abstract network whose type simply follows from a given connection table? In other words, a truly universal network?

If the answer is yes, then the network type is specified by the network configuration editor BEFORE the intermediate representation is created, and no modification of the universal library is required. I mean it will never be required (barring bugs), whatever the network structure. All that would remain is optimizing it and building up the library of nonlinear transfer functions, training methods, and so on.

If the answer is no, then please point me to a reference on the exceptions that do not fit this approach.

--

If the xml representation of the network description is thoroughly thought out and completely abstracted from the mql implementation (which is the right approach), then the two alternatives are not contradictory. Not only can both be implemented, they can also be combined with each other if necessary.

 
MetaDriver:
...

The answer is not binary.

On the one hand, the answer is no: the connection table by itself does not define the neuron types.

On the other hand, the answer is yes: the types can be specified in numerical form (you create an object of a particular type, inherited from a common ancestor, via a switch).

So, taken together, a parameter array plus a link table will do.

Then again, even the configuration editor has parameters (the number of layers, the number of neurons in each layer, the neuron types in each layer), and that is before any links are created.
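A minimal C++ sketch of that switch-based factory, assuming invented names throughout (the engine itself would be MQL5, whose object model is close to C++):

```cpp
#include <cmath>
#include <memory>

// Numeric type codes: this is the "numerical form" of the neuron type.
enum NeuronType { LINEAR = 0, SIGMOID = 1, TANH = 2 };

// The common ancestor of all neuron types.
struct Neuron {
    virtual double Activate(double x) const = 0;
    virtual ~Neuron() {}
};

struct LinearNeuron : Neuron {
    double Activate(double x) const override { return x; }
};
struct SigmoidNeuron : Neuron {
    double Activate(double x) const override { return 1.0 / (1.0 + std::exp(-x)); }
};
struct TanhNeuron : Neuron {
    double Activate(double x) const override { return std::tanh(x); }
};

// The switch: a numeric code from the parameter array selects the concrete class.
std::unique_ptr<Neuron> CreateNeuron(int typeCode) {
    switch (typeCode) {
        case LINEAR:  return std::unique_ptr<Neuron>(new LinearNeuron());
        case SIGMOID: return std::unique_ptr<Neuron>(new SigmoidNeuron());
        case TANH:    return std::unique_ptr<Neuron>(new TanhNeuron());
        default:      return nullptr;   // unknown code: invalid configuration
    }
}

// One row of the link table: source neuron, target neuron, weight.
struct Link { int from; int to; double weight; };
```

A parameter array such as `{LINEAR, SIGMOID, TANH}` plus an array of `Link` rows would then fully specify the network, which is the "parameter array and link table" combination described above.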

 
MetaDriver:

In other words, a truly universal network?

For feed-forward networks, yes. For the others, you have to look at the topology.
 
TheXpert:
For feed-forward networks, yes. For the others, you have to look at the topology.

The topology is set by the connection table... isn't it?

 
MetaDriver:

The topology is set by the connection table... isn't it?

And the functionality of the parts to be linked.
 
TheXpert:
And the functionality of the parts to be linked.

Okay. Let's go into a little more detail here.

Can this functionality be specified by a finite (small) table? What is the difference between neurons of different types (apart from their activation functions)?

 
MetaDriver:

Okay. Let's go into a little more detail here.

Can this functionality be specified by a finite (small) table? What is the difference between neurons of different types (apart from their activation functions)?

Strictly speaking, no.

First, a simple case. Suppose we have linear, sigmoid, and tanh neurons. If we want to add a new activation type, we have to extend the enumeration of activation types.

OK, we could live with that. But then, why would the output layer of, say, a Kohonen network need an attribute for any activation function at all? That is superfluous, redundant information.

Second, this list is theoretically unlimited.

Third, each network may have peculiarities in how it works and how it is built. For example, a Kohonen network (SOM) may have a neighborhood-function setting and a flag for whether to output all results or only the leader (zeroing out all non-leaders).

In logic models, for example, the configurable parameters sit inside the activation function. Does this also go into the general model?

For an MLP layer, it could be a flag for the presence of a bias (unit) neuron.
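As a sketch of why those type-specific peculiarities resist a single flat table: in an object model they naturally live in derived configuration classes (C++ pseudocode with invented names, not project code):

```cpp
#include <string>

// Common part of any layer description.
struct LayerConfig {
    int neuronCount = 0;
};

// Kohonen (SOM) layer: neighborhood function and winner-take-all flag,
// but no activation-function attribute at all.
struct SomLayerConfig : LayerConfig {
    std::string neighborhoodFunc = "gaussian";  // illustrative default
    bool winnerTakeAll = false;  // output only the leader, zero the rest
};

// MLP layer: an activation type plus the bias (unit) neuron presence flag.
struct MlpLayerConfig : LayerConfig {
    int  activationType = 0;  // index into an open-ended activation list
    bool hasBias = true;
};
```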

____________________________

By the way, xml is much easier to check for validity than a binary representation. And saving/restoring is essentially not time-critical.

 
TheXpert:

1. Strictly speaking, no.

...

2. By the way, xml is much easier to check for validity than a binary representation. And saving/restoring is essentially not time-critical.

1. Why not? My idea is roughly this: create a universal "element base" from which a neural network of any type can be "soldered together" (and which can be extended). The elements of this base are given exact, unambiguous definitions (formulations), with pseudocode if necessary, but not in the form of mql code, so as to stay decoupled from the implementations, which can be improved over time. Once the abstract element base is created (if that is possible), we can move on to an xml file format capable of describing all the links between the elements of a network. After the xml description is approved, the project can easily be parallelized: separately write

1) the element implementations => the output is a library of components.

2) the network type/structure configurator(s) => the output is a graphical, step-by-step, or any other configurator that saves the configuration to an xml file.

3) the translator(s) into mql code => the output is either (1) a super-duper self-configuring mql neural network that takes an xml file as a parameter, or (2) a compiler into a specific rigid mql-based network.

Something like this. Seems to make sense.

2.
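The xml format from point 1 might end up looking something like this; every element and attribute name below is invented, purely to show that layers, per-layer parameters, and the link table all fit naturally:

```xml
<!-- Hypothetical sketch, not an agreed project schema -->
<network type="mlp">
  <layers>
    <layer id="0" neurons="3" activation="linear"/>
    <layer id="1" neurons="4" activation="sigmoid" bias="true"/>
    <layer id="2" neurons="1" activation="tanh"/>
  </layers>
  <links>
    <!-- "layer:neuron" addressing; full layer-to-layer connectivity
         could also get a shorthand element -->
    <link from="0:0" to="1:0" weight="0.5"/>
    <link from="0:1" to="1:2" weight="-0.3"/>
  </links>
</network>
```

A file like this is straightforward to validate against a schema, which is the point about xml validity made earlier in the thread.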

 
TheXpert:
...

By the way, xml is much easier to check for validity than binary representation. And saving/restoring is essentially not time-critical.

What do you see as the difficulty of checking the validity of links in a binary representation?

Give me an example in which cases it is difficult.

I can't even imagine a topology where, with links represented as a binary table, it would be hard to check the validity of any link.
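For what it's worth, the basic validity checks on a binary link table are indeed short. A sketch, assuming neurons are numbered in feed-forward order (all names here are invented):

```cpp
#include <vector>

// One row of the link table.
struct Link { int from; int to; };

// A table is valid for a feed-forward network if every index is in range
// and every link goes from a lower-numbered neuron to a higher-numbered one
// (which rules out self-links and backward links).
bool IsValidFeedForward(const std::vector<Link>& links, int neuronCount) {
    for (const Link& l : links) {
        if (l.from < 0 || l.from >= neuronCount) return false;
        if (l.to   < 0 || l.to   >= neuronCount) return false;
        if (l.from >= l.to) return false;
    }
    return true;
}
```

Recurrent topologies would need a different ordering rule, but the range checks stay the same.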