"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 12

 
NeuroOpenSource:
There would have to be something to code first. We are still waiting for the project administrator to discuss the implementation plan.

In fact, as an option, you could register on that same SourceForge and get started...

But it is probably better to wait until MetaQuotes brings up the right environment.

 
By the way, it would be nice to think about parallelization. Only I have no idea how it could be implemented.
 
TheXpert:
By the way, it would be nice to think about parallelization. Only I have no idea how it could be implemented.
Are you talking about parallelizing NS training? There is no way to do that, except perhaps to parallelize a committee of networks somehow.
 
TheXpert:
By the way, it would be nice to think about parallelization. Only I have no idea how it could be implemented.

Andrew, I checked your message about recursion, and you're right: recursion is about 1.5 times slower and limited in depth, so loops win unambiguously.
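The loop-vs-recursion trade-off above can be sketched with a toy chain of layers (the layer values here are made up for illustration): both traversals compute the same result, but the recursive version pays a function-call per layer and is capped by the interpreter's stack depth, while the loop is not.

```python
# Toy "network" as a chain of (weight, bias) layers applied to a scalar input.
layers = [(0.5, 1.0), (2.0, -1.0), (1.5, 0.5)]

def forward_recursive(x, i=0):
    """Recursive pass: each call processes one layer, then recurses deeper."""
    if i == len(layers):
        return x
    w, b = layers[i]
    return forward_recursive(w * x + b, i + 1)

def forward_loop(x):
    """Iterative pass: same arithmetic, but no call overhead and no depth limit."""
    for w, b in layers:
        x = w * x + b
    return x
```

For a deep network the recursive variant would hit Python's recursion limit (`sys.getrecursionlimit()`), which mirrors the depth limitation reported above.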


Don't worry about parallelization: until there is an API for the graphics processor it isn't worth a thing. Neural networks are too simple a task to be worth parallelizing across cores (a thread call alone takes hundreds of clock cycles).

Unless MQ gives us the necessary API, at least for neural networks; but for that we first need to formalize a universal network. So first the project, and then we knock on their door asking for a GPU API.

 

I drew an explanatory picture to the code on page 5:

The yellow ones are linked memory cells (more precisely, they are one and the same memory cell, just referenced from different memory objects);

the rest are associations between memory objects: red is next, green is prev, blue is side.

The direction of an arrow indicates the direction in which the pointer is passed.

The zero cell of the delay operator's memory is used as the output memory cell.


The memory is linked in reverse order, because one input has exactly one output (from which it receives data), but not the other way around.

Well, it is probably intuitively clear that circles are inputs, squares are weights, and triangles are the delay operator (the zero cell of the delay operator Z is the neuron's output).

P.S. The scheme is final, I will not change it any more :o)
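The linked-cell arrangement described above might be sketched as follows. This is only an illustrative model, not the project's code; the field names `next`/`prev`/`side` are taken from the red/green/blue links in the diagram, and the shared one-slot buffer stands in for the "one and the same memory cell referenced from different memory objects".

```python
class MemoryCell:
    """Toy model of a linked memory cell from the diagram (names assumed)."""
    def __init__(self, data=None):
        self.data = data   # reference to a shared memory object
        self.next = None   # red link
        self.prev = None   # green link
        self.side = None   # blue link

# Two "yellow" cells referring to one and the same memory object:
shared = [0.0]                 # a one-slot buffer, e.g. the delay operator's zero cell
out_cell = MemoryCell(shared)  # used as the neuron's output cell
in_cell = MemoryCell(shared)   # the next neuron's input aliases the same object
in_cell.prev = out_cell        # memory is linked in reverse order

out_cell.data[0] = 3.14        # writing through one cell is visible through the other
```

Because both cells hold a reference to the same object, a value written at the output is immediately readable at the linked input, with no copying.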

 

1. Imho, the main thing is to specialize in trading, and here it is important to systematize the preprocessing of input data. Both discrete signals (for example, a breakout of a certain extremum, or a price change beyond a certain threshold) and continuous signals (for example, the distance between two MAs, or the price change over a period of time) should be reduced to elementary price-action signals. Of course, the NS can derive discrete signals from continuous ones by itself, but then it turns into black-boxing, and that is not always wanted. In general, the inputs need a separate class in which a virtual method is defined to calculate the price-action signal, so that users can choose from those already written or write their own.
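The proposed input class with a virtual price-action method might look like this. This is a sketch under assumptions: the class and method names (`PriceActionInput`, `calc`) are invented for illustration, and the two subclasses mirror the discrete/continuous examples from the paragraph above.

```python
from abc import ABC, abstractmethod

class PriceActionInput(ABC):
    """Base input class: users pick a ready-made signal or subclass it
    and override calc() (the 'virtual method' from the proposal)."""
    @abstractmethod
    def calc(self, prices):
        """Return the signal value for a list of prices (newest last)."""

class BreakoutSignal(PriceActionInput):
    """Discrete signal: did the last price break the prior N-bar high?"""
    def __init__(self, period):
        self.period = period
    def calc(self, prices):
        window = prices[-self.period - 1:-1]   # the N bars before the last one
        return 1.0 if prices[-1] > max(window) else 0.0

class MaDistanceSignal(PriceActionInput):
    """Continuous signal: distance between a fast and a slow moving average."""
    def __init__(self, fast, slow):
        self.fast, self.slow = fast, slow
    def calc(self, prices):
        ma = lambda n: sum(prices[-n:]) / n    # simple MA over the last n prices
        return ma(self.fast) - ma(self.slow)
```

The NS engine would then only ever see a uniform list of `PriceActionInput` objects, regardless of whether a given signal is discrete or continuous.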

2. A standard EA is in fact work with elementary price-action signals plus Boolean algebra over them (and/or/not). Therefore, a standard Expert Advisor can also be described as an NS of a certain topology with certain weights. Perhaps a good option would be an automatic converter of standard EAs into NS (that is hard to do :)), or at least letting users design an NS from Boolean-logic templates. The point is to give a starting point for creating a logical NS, rather than just a set of layers with some topology.

We should also be able to add elements of Boolean algebra to an already constructed NS. For example, we have built a TS and want to check the influence of a simple filter on it, say, buy only when the price is above MA200 and sell vice versa. Of course, we can add a new input and re-train the network, and so on. Or we can simply attach this filter as Boolean logic and check its effect on the result.

I.e. it is about combining intuitive, human-readable Boolean logic with the NS at different stages of TS design.

3. It makes sense to provide the ability to fix some connections so that they do not participate in further training. I.e. there is a backbone of the system that should not be rebuilt often, and there is a more frequently adapted part. Re-training all the weights only increases curve-fitting.
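Fixing connections is straightforward to implement with a freeze mask applied during the update step. A minimal sketch, assuming plain SGD (the function name and the flat weight list are illustrative, not the project's API):

```python
def sgd_step(weights, grads, frozen, lr=0.1):
    """One gradient step that skips 'frozen' (backbone) connections.
    frozen[i] is True for weights excluded from further training."""
    return [w if f else w - lr * g
            for w, g, f in zip(weights, grads, frozen)]

weights = [0.5, -0.2, 0.8]
grads   = [1.0,  1.0, 1.0]
frozen  = [True, False, True]   # fix the backbone, adapt only the middle weight
weights = sgd_step(weights, grads, frozen)
```

The backbone weights pass through untouched on every step, while the adaptive part keeps learning; the same mask idea works for any gradient-based trainer.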

4. It would be nice if the fixed backbone from point 3 could be selected automatically. I.e., for example, we have a test section. It is divided into N parts. The NS is trained sequentially on each of them, but in such a way that some part of the NS stays fixed (is not re-trained on each part).

I.e. the point is to build a robust NS that requires minimal adjustment to the current market.
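The "divide the test section into N parts" step might be sketched as below. This is only the partitioning half of the scheme, under the assumption that the parts should be contiguous and as equal as possible; the sequential retraining on each part would then reuse the freeze mask from point 3.

```python
def split_parts(sample, n):
    """Split a test section into n contiguous parts (sizes differ by at most 1)."""
    k, r = divmod(len(sample), n)
    parts, start = [], 0
    for i in range(n):
        size = k + (1 if i < r else 0)   # distribute the remainder over the first parts
        parts.append(sample[start:start + size])
        start += size
    return parts
```

Training would then loop over `split_parts(section, N)`, re-adapting only the non-fixed weights on each part; connections whose values survive all N parts unchanged are candidates for the automatic backbone.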

5. The target function of training is not individual forecasting successes (trades), but characteristics of the equity curve. For example the profit factor, or a user-defined metric.
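As a concrete example of such an equity-curve metric, the profit factor is gross profit divided by gross loss over the trade results:

```python
def profit_factor(trades):
    """Gross profit divided by gross loss over a list of trade results."""
    gains  = sum(t for t in trades if t > 0)
    losses = -sum(t for t in trades if t < 0)
    return float('inf') if losses == 0 else gains / losses
```

A user-defined target would simply be another function of the same trade list (or of the equity curve built from it), plugged in as the training objective.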

6. A visual interface for NS design is needed for all these and other possibilities.

 

Just for a start, one could look at an object-oriented model like http://www.basegroup.ru/library/analysis/neural/fastneuralnet/.

What makes sense to do by analogy, and what the disadvantages are given the specifics of trading and MQL5. Or take a more advanced open-source model, so as not to reinvent the wheel :)

BaseGroup.ru :: NeuralBase - a neural network in 5 minutes
A component library intended for the software implementation of neural networks. As examples, components are provided that implement two neural network paradigms: a recurrent neural network (in this case a Hopfield network) and a multilayer neural network trained by the back-propagation algorithm.
 
Avals:

1. Imho, the main thing is to specialize in trading, and here it is important to systematize the preprocessing of input data. Both discrete signals (for example, a breakout of a certain extremum, or a price change beyond a certain threshold) and continuous signals (for example, the distance between two MAs, or the price change over a period of time) should be reduced to elementary price-action signals. Of course, the NS can derive discrete signals from continuous ones by itself, but then it turns into black-boxing, and that is not always wanted. In general, the inputs need a separate class in which a virtual method is defined to calculate the price-action signal, so that users can choose from those already written or write their own.

Yeah, I wrote about that, though in a slightly different way. Can you give a small, simple example of what a price-action input would look like?

We should also be able to add elements of Boolean algebra to an already constructed NS. For example, we have built a TS and want to check the influence of a simple filter on it, say, buy only when the price is above MA200 and sell vice versa. Of course, we can add a new input and re-train the network, and so on. Or we can simply attach this filter as Boolean logic and check its effect on the result.

What's the point of feeding it into the net? It's easy and simple to check without the net...

I.e. it's about combining intuitive Boolean logic with the NS at different stages of TS design.

What's the point of the NS then? Imho, an NS should in any case be regarded as a black box that converts inputs into outputs.

3. It makes sense to provide the ability to fix some connections so that they do not participate in further training. I.e. there is a backbone of the system that should not be rebuilt often, and there is a more frequently adapted part. To retrain all the weights is to increase curve-fitting.

And on what principle? Well, the topology can be changed (heh :) ). The number of neurons can be reduced. But training only manually chosen synapses...

4. It would be nice if the fixed backbone from point 3 could be selected automatically. I.e., for example, we have a test section. It is divided into N parts. The NS is trained sequentially on each part, while some part of the NS remains fixed (is not re-trained on each part).

Yes, just check how it works on the stitched-together segments. You can't get rid of curve-fitting with such machinations anyway.

That is, the point is to build a robust NS that requires minimal tweaking to the current market.

5. The target function of training is not individual successes in prediction (trades), but characteristics of the equity curve. The profit factor, for example, or a user-defined metric.

It doesn't work that way. Not all entries can be carried over into the strategy. After all, the NS is separate from the TS; they are independent. But it may be done at the level where the inputs/outputs of the training sample are formed.

_________________________________

Or you should set your goals differently in general.

 
TheXpert:

Yeah, I wrote about that, though in a slightly different way. Can you give me a small, simple example of how a price-action input would look?

bool F1(int period){
   // breakout of the prior N-bar high; note that iHighest() returns a bar
   // index, so the price must be read back through High[]
   return(High[0] > High[iHighest(NULL,0,MODE_HIGH,period,1)]);
}

double F2(int ma1P,int ma2P){
   // distance between two MAs (parameters assumed: simple MA on close, current bar)
   return(iMA(NULL,0,ma1P,0,MODE_SMA,PRICE_CLOSE,0) - iMA(NULL,0,ma2P,0,MODE_SMA,PRICE_CLOSE,0));
}

TheXpert:

And why feed it into the network? It's easy and simple to check without the net...

What's the point of the NS then? Imho, an NS should in any case be regarded as a black box that converts inputs into outputs.

To combine the NS with conventional logic. The NS may solve only part of the trading problem, for example filtering, and the result of its work will be buy, sell, or don't trade at all. But the entry and exit points, as well as other filters, may be contained in ordinary Boolean logic. How do we properly train an NS that cannot work on its own (generate profits or predict price changes) without the Boolean part? I.e. each step of NS training must include running the Boolean-logic algorithm.

TheXpert:

And on what principle? Well, the topology can be changed (hehe :) ). The number of neurons can be reduced. But training only manually chosen synapses...

Why not? :) For example, a committee of several NSs, each essentially solving a different problem. They don't have to be trained on exactly the same samples or with the same periodicity, do they? Different tasks require different training periods.

There is, for example, an NS that is quite good at identifying trend/flat. We can use it in various projects and, for example, attach it to other NSs, but without retraining it every time the others need training.

TheXpert:

Just check how it works on the stitched-together segments. You can't remove the curve-fitting with these machinations anyway.

I think this is the only way for NS :)

TheXpert:

After all, the NS is separate from the TS; they are generally independent. But it can be done at the level where the inputs/outputs of the training sample are formed.

And why can't they be combined? Then the NS solves only a part of the problems, but it is trained as a part of the whole system. And in general, combine the process of optimizing the strategy with training the NS that is part of it, because it is essentially the same thing. I.e., for example, on each tester run the NS is trained on the test sample. Of course, this makes sense only if the NS is not an independent price predictor))) but just a part of the system's overall logic.
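The "train the NS inside the tester" idea can be sketched as below. This is a heavily simplified, assumed setup: the NS part is reduced to one trainable threshold, the "tester run" is a toy backtest, and the optimizer is a seeded random search that keeps the best full-system result (not anything the thread specifies).

```python
import random

def run_strategy(threshold, signals, returns):
    """Toy 'whole system': trade the bar's return only when the NS signal
    clears the trainable threshold; the sum is the system's total equity."""
    return sum(r for s, r in zip(signals, returns) if s > threshold)

def train_inside_tester(signals, returns, iters=200, seed=1):
    """Each 'tester run' evaluates the full system and keeps the best
    parameter -- optimization and NS training as a single process."""
    rng = random.Random(seed)
    best_t, best_eq = 0.0, run_strategy(0.0, signals, returns)
    for _ in range(iters):
        t = rng.uniform(-1, 1)
        eq = run_strategy(t, signals, returns)
        if eq > best_eq:
            best_t, best_eq = t, eq
    return best_t, best_eq
```

Because the objective is the whole system's equity rather than the NS's own prediction error, the NS is trained only in the role it actually plays inside the strategy, which is the point being argued above.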

 

No. It doesn't work like that.

Talking like this turns into demagogy. We can be talking about different things without understanding each other, while calling them roughly the same thing.