"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 46

 

Redesigned it so that when an object is created through the constructor with parameters, a new sequence is initialized at the end of the existing sequence.

It will work a little slower, but the sequence will be unique every time.

When creating through the standard constructor, you also need to call Srand(), and the sequence will be the same every time, cycling around in a circle.

  uint st=GetTickCount();
  CRandm *rnd=new CRandm((uint)TimeLocal(),10000); // sequence length
  for(int i=0;i<1000000;i++)                       // number of calls
    {
     rnd.Rand();
    }
  delete rnd;
  Print("time=",GetTickCount()-st);
Files:
Randm.mqh  5 kb
 

I suggest you take a look at the virtual orders part (it emulates opening/closing orders on the market; the commands are almost the same as for real orders).

It will come in handy when you need to run a model on historical data (outside of the strategy tester).

What have I forgotten? What can be improved, added, or changed? The library is very basic.

Only buy (type 0) and sell (type 1) operations have been implemented so far.

Files:
C_Orders.mqh  15 kb
 

Lecture 1 here https://www.mql5.com/ru/forum/4956/page23

Lecture 2 here https://www.mql5.com/ru/forum/4956/page34

Lecture 3 here https://www.mql5.com/ru/forum/4956/page36

Lecture 4. Application of visual cortex organization to time series transformation

So our task is to build a neural network that classifies price patterns according to the principles of brain functioning. The network can be divided into two modules: a module that transforms the input information (prices), and a classification module that can be built on any known principle (for example, a Support Vector Machine).

In the previous lecture I described the HMAX model of the visual cortex as an example of biological information transformation. The big drawback of this model is that the weights (receptive fields) in it are not trained, but simply taken from biological measurements of brain neurons. For example, measuring the receptive fields of simple neurons in V1 (S1) is done as shown in the video here: https://www.youtube.com/watch?v=Cw5PKV9Rj3o (the clicks you hear are neuron pulses). In our case of time-series quotes, the "receptive fields" of the neurons that transform the information are not known in advance, so we will have to find them ourselves. For example, the first layer of price transformation (the S1 layer) can be built as follows, by analogy with the S1 layer of the visual cortex.

In this example, our S1 layer has 6 sublayers, numbered 0-5. Neurons (circles) of the same sublayer have the same input weights (receptive fields), shown conventionally in the rectangles on the left. Each sublayer of neurons has its own receptive field. We know in advance that if there is a sublayer with receptive field w_0,k where k=0...5 (direction "up"), then there must be a sublayer with receptive field -w_0,k (direction "down"). Let the sublayers with positive fields be numbered with even numbers (0, 2, 4) and their negative counterparts with odd numbers (1, 3, 5). In addition, we know that the S1 sublayers must have receptive fields of different sizes. Let the field size grow from sublayer 0 (and its negative counterpart 1) to sublayer 4 (and its negative counterpart 5). The inputs of all neurons in the same column are fed the same prices (for example, the inputs of the neurons in the first column are fed prices x_0...x_5). The inputs of the neurons in the next column are fed prices shifted by 1 bar (x_1...x_6), and so on. So our S1 layer consists of neurons with receptive fields of different directions (up, down), sizes, and positions in time; a minimal sketch of one sublayer's response is given below.
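
Here is a minimal MQL5 sketch of how one S1 sublayer's response could be computed; the function and variable names are illustrative assumptions, not taken from the attached code:

  // One S1 sublayer: every column (neuron) shares the receptive field w[];
  // column c sees the price window x[c] .. x[c+m-1], shifted by 1 bar per column.
  void S1SublayerResponse(const double &x[],  // input price series
                          const double &w[],  // shared receptive field (size m)
                          double &out[])      // one output per column
    {
     int m=ArraySize(w);
     int cols=ArraySize(x)-m+1;
     ArrayResize(out,cols);
     for(int c=0;c<cols;c++)
       {
        double y=0.0;
        for(int i=0;i<m;i++)
           y+=w[i]*x[c+i];                    // y = SUM(w_i*x_(c+i))
        out[c]=y;
       }
    }

A "down" sublayer uses -w[], so its response is simply the negated response of its "up" counterpart.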

The input weights of the S1-layer neurons are trained for only one column of neurons from the different sublayers (it does not matter which one); the weights are then copied to the remaining neurons of each sublayer. Training is done without a teacher, by feeding different price patterns to the inputs of the S1 layer and changing the weights by some rule. There are many rules for self-training of neuron weights, well described here:

Miller, K. D., and MacKay, D. J. C. (1994). The role of constraints in Hebbian learning. Neural Computation, 6, 100-126.

The first rule of neuron self-learning was postulated by Hebb in 1949 (https://en.wikipedia.org/wiki/Hebbian_theory). The rule reads: "If a neuron receives an input signal from another neuron and both are highly active, the weight between the neurons must be strengthened." Mathematically, it is written as follows:

dw_i = mu*x_i*y,

where dw_i is the increment of weight w_i, x_i is the value at the i-th input, y is the neuron output, and mu is the learning rate. We will use Oja's rule (https://en.wikipedia.org/wiki/Oja's_rule), which belongs to the class of competitive learning rules:

dw_i = mu*y*(x_i - y*w_i/a),

where dw_i is the increment of weight w_i, x_i is the price at the i-th input, y is the output of the neuron computed as y = SUM(w_i*x_i, i=0..m-1), mu is the learning rate, and a is a parameter (the sum of squared weights SUM(w_i^2, i=0..m-1) tends to a). The advantage of this rule is that it automatically finds the weights as principal eigenvectors of the price quotes. In other words, Oja's rule reproduces Principal Component Analysis (PCA). This is consistent with the biological assumption that the receptive fields of the S1 layer of the visual cortex are the principal eigenvectors of visual information. The attached C++ code automatically trains the weights of 32 S1 sublayers with EURUSD M5 quotes as input, mu=1, a=1. These weights, as functions of the input index, are shown below.

The weights of the first two sublayers (0 and 1) are shown in red; they have only two non-zero values, -0.707 and +0.707. The weights of sublayers 2 and 3 are shown in orange; they have 4 non-zero values. And so on.
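
For reference, a minimal MQL5 sketch of a single Oja's-rule update step as written above; the function name is an illustrative assumption, not part of the attached BrainPower code:

  // One Oja's-rule update: dw_i = mu*y*(x_i - y*w_i/a)
  void OjaUpdate(double &w[],const double &x[],double mu,double a)
    {
     int m=ArraySize(w);
     double y=0.0;
     for(int i=0;i<m;i++)                // y = SUM(w_i*x_i, i=0..m-1)
        y+=w[i]*x[i];
     for(int i=0;i<m;i++)
        w[i]+=mu*y*(x[i]-y*w[i]/a);      // keeps SUM(w_i^2) tending to a
    }

Called repeatedly over many price patterns, the weight vector converges to the principal eigenvector of the input correlation, which is exactly the PCA behaviour mentioned above.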

To use the attached code, you need to install the Boost and CImg (http://cimg.sourceforge.net/) libraries. I haven't got to the higher layers (C1, S2, C2) and probably won't for a long time. Those who have read my previous lectures should understand how all the HMAX layers work, and can complete the price-quote transformation module. In the next (last) lecture I will talk about SparseNets.

Files:
BrainPower.zip  907 kb
 


Thanks for the library. Can I get a brief instruction on how to use it?
 
Graff:
Thanks for the library. Can I get a brief instruction on how to use it?

Actually, there is nothing much to tell.

Before each run over the history, the following should be called so that the order history is cleared:

void Initialise(int MaxPossibleCountOrd, double Spread, double Point_);

and then, according to your trading strategy, call the appropriate commands:

int    OrderOpen        (int Type, double Volume, int Time,double Price,double SL, double TP);
void   Possible_SL_or_TP(int Time, double PriceHigh,double PriceLow);
void   OrderClose       (int Ticket, int Time,double Price);
void   OrderCloseAll    (int Time, double   Price);
int    ProfitTradeCount ();
int    TotalPipsProfit  ();
int    LossSeriesCount  ();
int    ProfitSeriesCount();

The spread is fixed; it is set during initialization. For a floating spread you would have to add the appropriate functionality. For me it is simply not needed - just set the maximum possible spread for the instrument and that's it.
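
A minimal usage sketch under the signatures listed above, assuming they are callable as shown; the parameter meanings, the 50-point stops, and the price arrays are my assumptions, not documentation of C_Orders.mqh:

  // Hypothetical run over prepared history arrays (one element per bar).
  void RunOnHistory(const double &open[],const double &high[],const double &low[])
    {
     int bars=ArraySize(open);
     Initialise(100,2.0,_Point);             // max order count, spread, point size (assumed meanings)
     int ticket=-1;
     for(int t=0;t<bars;t++)
       {
        Possible_SL_or_TP(t,high[t],low[t]); // let virtual SL/TP fire inside bar t
        if(ticket<0)                         // strategy stub: open a single buy (Type 0)
           ticket=OrderOpen(0,1.0,t,open[t],open[t]-50*_Point,open[t]+50*_Point);
       }
     OrderCloseAll(bars-1,open[bars-1]);
     Print("profitable trades: ",ProfitTradeCount(),", total pips: ",TotalPipsProfit());
    }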

 
joo:

Actually, there is nothing much to tell.

Before each run over the history, the following should be called so that the order history is cleared:

and then, according to your trading strategy, call the appropriate commands:

The spread is fixed; it is set during initialization. For a floating spread you would have to add the appropriate functionality. For me it is simply not needed - just set the maximum possible spread for the instrument and that's it.

Thanks, I will try it tonight. The analogue for MT4 (https://www.mql5.com/ru/forum/124013) might be of some help.
 

I'm sorry, but it seems I miscalculated. At the moment I have almost no time to work on the project.

And although the desire to participate is huge, I simply lack the time and energy, which is too bad.

I will be able to join after my exams (mid-December). By the way, one of my courses is directly related to neural networks.

 

By the way, there are two target functions: one for logistic regression and one for ordinary regression.

Classification is a special case of logistic regression.

The target function for logistic regression (the standard cross-entropy form): E = -SUM(t_k*ln(y_k) + (1-t_k)*ln(1-y_k), k), where y_k are outputs and t_k are targets.

And for ordinary regression (the standard squared-error form): E = 1/2*SUM((y_k - t_k)^2, k).

True, their derivatives are similar; maybe that is why the distinction is usually passed over in silence.

The first is used with a sigmoid output layer in classification problems.

The second is used with a linear output in prediction problems.
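
To make the "similar derivatives" remark concrete, here is the standard derivation (my addition, in LaTeX), differentiating each target function with respect to the pre-activation z of an output neuron:

E_{log} = -t\ln y - (1-t)\ln(1-y),\quad y=\sigma(z):\qquad \frac{\partial E_{log}}{\partial z} = \frac{y-t}{y(1-y)}\cdot y(1-y) = y - t

E_{reg} = \tfrac{1}{2}(y-t)^2,\quad y=z:\qquad \frac{\partial E_{reg}}{\partial z} = y - t

So at the output the two gradients coincide exactly, which is why the backpropagation code often looks identical for both cases.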

 
TheXpert:

I'm sorry, but it seems I miscalculated. At the moment I have almost no time to work on the project.

And although the desire to participate is huge, I simply lack the time and energy, which is too bad.

I will be able to join after my exams (mid-December). By the way, one of my courses is directly related to neural networks.

You've all hidden away in your corners, comrades. Not good.
 
Mischek:
You've all hidden away in your corners, comrades. Not good.

I'm not hiding away. It's just that over the last week I've noticed the frequency of outright nonsense increasing.

I'd rather sit on the sidelines in this state.