Finding a set of indicators to feed to the inputs of a neural network. Discussion. A tool for evaluating the results.

 

Greetings to all forum participants and visitors.


I'd like to invite everyone interested to discuss and search for the set of indicators best suited for feeding into neural network inputs.

The profitability of the network in my program can be used as the evaluation tool; I can also post an MQL4 Expert Advisor with a trained network. Within reasonable limits, of course.


I have a self-written (in Java) perceptron with an arbitrary number of layers and an arbitrary number of neurons in each layer. I train it with the genetic algorithm from the JGAP library (http://jgap.sourceforge.net/).

The number of neurons in the first layer equals the number of inputs, the second layer has an arbitrary number of neurons, and the third layer has one neuron. The network produces trading signals (output > 0.5 - buy, output < -0.5 - sell). The signals are processed by a self-written trade tester that, on a signal from the network, reverses the position (or enters the market if no position is open). The fitness function of the genetic algorithm is the resulting profit. In my opinion, this approach minimizes possible errors and brings the training as close as possible to real trading. I export the trained network into an MQL4 Expert Advisor and test it in the MT4 strategy tester. I form the inputs for the neural network in an MT4 indicator and dump them to a file. The indicator and the Expert Advisor are generated by the program and written to files (less confusion and fewer errors).
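
For illustration only, here is a minimal sketch of how such a forward pass and signal thresholding could look in Java. The class name, the tanh activation, the absence of bias terms and the flat layer-by-layer weight layout are my assumptions for the sketch, not necessarily how the actual perceptron is implemented.

// Minimal sketch of a feed-forward pass and signal thresholding.
// Assumptions: tanh activation, no bias terms, weights stored layer by layer
// in one flat array (the kind of vector a genetic algorithm would supply).
public class PerceptronSketch {
    private final int[] layers;      // e.g. {8, 10, 1}
    private final double[] weights;  // flat weight vector

    public PerceptronSketch(int[] layers, double[] weights) {
        this.layers = layers;
        this.weights = weights;
    }

    // Propagates the inputs through every layer and returns the single output.
    public double output(double[] inputs) {
        double[] current = inputs;
        int w = 0;
        for (int l = 1; l < layers.length; l++) {
            double[] next = new double[layers[l]];
            for (int j = 0; j < layers[l]; j++) {
                double sum = 0.0;
                for (int i = 0; i < layers[l - 1]; i++) {
                    sum += current[i] * weights[w++];
                }
                next[j] = Math.tanh(sum);
            }
            current = next;
        }
        return current[0];
    }

    // Converts the raw output into a trade signal: +1 buy, -1 sell, 0 do nothing.
    public int signal(double[] inputs, double threshold) {
        double out = output(inputs);
        if (out > threshold) return 1;
        if (out < -threshold) return -1;
        return 0;
    }
}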

In my experience, 4-layer networks do not give more profit (usually less) than 3-layer ones, but they take longer to train.
I trained an 8-10-1 network for 4 days on a Core 2 Quad 2.3 GHz: 10 parallel threads with different initial populations competing to see "who has the most profit". In 4 days about 4000 generations passed, with 200 chromosomes in each population. The maximum profit was obtained in the first 2000 generations; beyond that the profit did not increase. The biggest increase in profit was in the first 100 generations.

I checked the results of this network in the MT4 strategy tester and found that the network's output almost never reaches the +-0.5 threshold, so the trade signal is not triggered. The reason is unclear: I verified the correctness of the export to MQL4 (with the same input values the network gives the same output in Java and in MQL4; perhaps I should have checked against the whole incoming stream rather than a few random values). I lowered the threshold to 0.4 and it seems to work... Then I discovered that the Expert Advisor cannot always open a position in one go: my Expert Advisor trades on bar close, and the price has time to move before the next bar. On the training period (I train on 1-08-09 to 1-10-09) the profit in MT4 was less than in my tester; on the out-of-sample period in MT4 (1-10-09 to 1-11-09) the network was profitable. I looked at where the unprofitable entries occur and got the impression that the data fed into the neural network carries insufficient information...

My neural network inputs (k = 100):

a[0] = (iMA(NULL, 0, 13, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k;
a[1] = (iMA(NULL, 0, 21, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k;
a[2] = (iMA(NULL, 0, 34, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k;
a[3] = (iMA(NULL, 0, 55, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k;
a[4] = (iMA(NULL, 0, 89, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k;
a[5] = (iMA(NULL, 0, 144, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k * 0.9;
a[6] = (iMA(NULL, 0, 233, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k * 0.8;
a[7] = (iMA(NULL, 0, 377, 0, MODE_EMA, PRICE_CLOSE, i) - Close[i]) * k * 0.6;
I understand how the indicators work, but I don't understand them and the market well enough to choose a minimal set of indicators by myself...

I searched the forum and found this (unfortunately, I don't remember the author of the post I took the idea from):

a[0] = (iMA(NULL, 0, 3, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 5, 0, MODE_EMA, PRICE_CLOSE, i)) * 200;
a[1] = (iMA(NULL, 0, 5, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 8, 0, MODE_EMA, PRICE_CLOSE, i)) * 200;
a[2] = (iMA(NULL, 0, 8, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 13, 0, MODE_EMA, PRICE_CLOSE, i)) * 200;
a[3] = (iMA(NULL, 0, 13, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 21, 0, MODE_EMA, PRICE_CLOSE, i)) * 150;
a[4] = (iMA(NULL, 0, 21, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 34, 0, MODE_EMA, PRICE_CLOSE, i)) * 150;
a[5] = (iMA(NULL, 0, 34, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 55, 0, MODE_EMA, PRICE_CLOSE, i)) * 150;
a[6] = (iMA(NULL, 0, 55, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 89, 0, MODE_EMA, PRICE_CLOSE, i)) * 140;
a[7] = (iMA(NULL, 0, 89, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 144, 0, MODE_EMA, PRICE_CLOSE, i)) * 130;
a[8] = (iMA(NULL, 0, 144, 0, MODE_EMA, PRICE_CLOSE, i) - iMA(NULL, 0, 233, 0, MODE_EMA, PRICE_CLOSE, i)) * 120;
Trained in 10 threads, 200 generations, population size 200, on the period 1-08-09 to 1-10-09 (results are from my Java tester):
9-10-1 network: profit 10521
9-20-1 network: profit 10434
9-30-1 network: profit 10361
9-50-1 network: profit 10059
The result is good, but it seemed better with the previous version... I'll have to run it again with the previous inputs (I didn't save the results of the last training).

The additional multipliers are needed to scale the values into the range from -1 to +1.
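
For example (illustrative numbers of my own, assuming a EURUSD-like quote), a difference of 0.0020 between an EMA and the close becomes 0.0020 * 100 = 0.2 with k = 100, so a typical input stays roughly within the -1 to +1 range.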


Having reread what I have written, I started thinking about a possible error in the export of the trained neural network to the Expert Advisor; it needs to be tested again.

P.S. I am now writing a recurrent neural network of arbitrary structure (as I understand it, a recurrent network takes into account not only the value itself but also its slope).

 

My problem is close to yours; the approach is different.

I am even less of a geneticist than an engineer, but I understand it roughly.

Let me ask a few questions. A target function that simply maximizes profit is not the best option. If I understood your example correctly, you are searching over a set of systems, in your case 9; at a rough estimate, with stops, take-profits and a bit more imagination, even a single system (here, a single indicator) has at least two or three hundred possible variants. We won't capture all of those variants within your framework; say, CCI or RSI will fall out of view. Suppose we do end up with a magic indicator for the current moment, let it be mov(20) - mov(10) greater or less than 0. How do we estimate, at least roughly, how long that favorable situation will last?

 
iliarr >> :

I have a self-written (in Java) perceptron with an arbitrary number of layers and neurons in each layer. I train it with the genetic algorithm from the JGAP library (http://jgap.sourceforge.net/).

How do you do the training? Please describe the training algorithm.

If I remember correctly, JGAP is just a genetic algorithm library and nothing more; it has nothing to do with NS as such. Perhaps the question is how the genome is formed and whether the fitness function is chosen correctly.

 

There are many pitfalls in training a network with a GA that you are unlikely to be able to solve... At the very least, the question "How long will the network keep working as well as it did in training?" does not seem answerable to me when training with a GA.

I recommend moving to the standard approach of solving a prediction problem with a NN.

 
rip >> :

How do you conduct the training? Please describe the training algorithm.

If I remember correctly, JGAP is just a genetic algorithm library and nothing more; it has nothing to do with NS as such. Perhaps the question is how the genome is formed and whether the fitness function is chosen correctly.

JGAP is a library implementing a genetic algorithm. To me it is a black box which has to be given a target function that depends on a vector of a certain length; the genetic algorithm from this library then selects the values of this vector so that the target function is maximal. My target function returns the profit from a pass over history by a trading emulator that executes the signals of the neural network. The vector whose values the genetic algorithm selects defines the weights of the neural network's neurons.
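
To make the "black box" relationship concrete, here is a rough sketch of how such a JGAP fitness function could be wired up. Only the FitnessFunction/IChromosome hook comes from JGAP itself; the ProfitFitness class, the reuse of the PerceptronSketch class from the earlier sketch, the simplified one-lot reversal emulator and the constant shift to keep the fitness non-negative are my own illustrative assumptions, not the author's actual code.

// Sketch: a JGAP fitness function that scores a chromosome (the flat weight
// vector) by the profit of a simulated pass over history.
import org.jgap.FitnessFunction;
import org.jgap.IChromosome;

public class ProfitFitness extends FitnessFunction {
    private final double[][] inputs;  // pre-computed indicator values per bar
    private final double[] prices;    // close prices for the trade emulator
    private final int[] layers;       // e.g. {9, 10, 1}

    public ProfitFitness(double[][] inputs, double[] prices, int[] layers) {
        this.inputs = inputs;
        this.prices = prices;
        this.layers = layers;
    }

    @Override
    protected double evaluate(IChromosome chromosome) {
        // Unpack the chromosome into a flat weight vector (DoubleGene alleles assumed).
        double[] weights = new double[chromosome.size()];
        for (int i = 0; i < weights.length; i++) {
            weights[i] = (Double) chromosome.getGene(i).getAllele();
        }
        PerceptronSketch net = new PerceptronSketch(layers, weights);

        // Simplified emulator: reverse (or open) a one-lot position on each signal.
        double profit = 0.0;
        int position = 0;        // +1 long, -1 short, 0 flat
        double entryPrice = 0.0;
        for (int bar = 0; bar < inputs.length; bar++) {
            int signal = net.signal(inputs[bar], 0.5);
            if (signal != 0 && signal != position) {
                if (position != 0) {
                    profit += position * (prices[bar] - entryPrice);
                }
                position = signal;
                entryPrice = prices[bar];
            }
        }
        // JGAP works with non-negative fitness values, so shift the profit
        // by an arbitrary constant for this sketch.
        return Math.max(0.0, profit + 10000.0);
    }
}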

ivandurak >> :

My problem is close to yours; the approach is different.

I am less of a geneticist than an engineer, but I understand the gist of it.

Let me ask a few questions. A target function that simply maximizes profit is not the best option. If I understood your example correctly, you are searching over a set of systems, in your case 9; at a rough estimate, with stops, take-profits and a bit more imagination, even a single system (here, a single indicator) has at least two or three hundred possible variants. We won't capture all of those variants within your framework; say, CCI or RSI will fall out of view. Suppose we do end up with a magic indicator for the current moment, let it be mov(20) - mov(10) greater or less than 0. How do we estimate, at least roughly, how long that favorable situation will last?

Unfortunately, I don't have enough computing resources to search for optimal indicators (a 9-10-1 network in 10 threads, 200 generations, population size 200, on 1-08-09 to 1-10-09 takes more than an hour to train, even though the number of neuron weights, i.e. the length of the vector the genetic algorithm fits, is only 181).

I need a set of indicators that reflect the market situation. The indicators should be simple, preferably the standard MT4 ones (a matter of implementation and possible errors); there are only about 30 of them. The indicators don't have to show where the market will go; I need them so that the network gets as much information as possible out of the price movements.


I like it here... As I think about how and what to write and how to formulate it, more understanding appears... Thanks to the forum and to you, ivandurak. :)

 
iliarr >> :

JGAP is a library implementing a genetic algorithm. To me it is a black box which has to be given a target function that depends on a vector of a certain length; the genetic algorithm from this library then selects the values of this vector so that the target function is maximal. My target function returns the profit from a pass over history by a trading emulator that executes the signals of the neural network. The vector whose values the genetic algorithm selects defines the weights of the neural network's neurons.

That's exactly what I mean...


How do you form the vector that you then pass to JGAP: is it just a vector of weight values, or are the weights encoded in some way?

And what is the target function? Let me give an example: suppose we take as the error E[i](t) = D[i](t) - Y[i](t), where E is the error, D is the expected output value, Y is the value obtained when the training sample X is fed in, i is the neuron index, and t is the epoch number. If instead we take E[i](t) = sign(D[i](t) - Y[i](t)) * (D[i](t) - Y[i](t))^2, on a number of tasks the result is much better. Say, if we form a series reflecting the attractors of classical dynamical systems (Lorenz, Hénon, Rössler, ...), the network can even be trained to approximate such data, not deeply, but still.
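
A tiny self-contained illustration (my own hypothetical class, just to show the effect on small and large residuals):

// The two per-neuron error variants mentioned above, for a single output value.
public final class ErrorVariants {
    // Plain error: E = D - Y.
    static double linearError(double expected, double actual) {
        return expected - actual;
    }

    // Signed squared error: E = sign(D - Y) * (D - Y)^2.
    static double signedSquaredError(double expected, double actual) {
        double e = expected - actual;
        return Math.signum(e) * e * e;
    }

    public static void main(String[] args) {
        // Small residuals are damped, large ones amplified,
        // while the sign (direction of the correction) is preserved.
        System.out.println(linearError(1.0, 0.8));          // approx.  0.2
        System.out.println(signedSquaredError(1.0, 0.8));   // approx.  0.04
        System.out.println(linearError(-1.0, 0.5));         //         -1.5
        System.out.println(signedSquaredError(-1.0, 0.5));  //         -2.25
    }
}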


I haven't tried it :) because I don't think it will work :)

 
With a design like this, you can achieve a near-vertical equity curve, as long as there is no slippage. Are you going to address the issue of overtraining the neural network?
 
IlyaA >> :
With a design like this, you can achieve a near-vertical equity curve, as long as there is no slippage. Are you going to address the issue of overtraining the neural network?

And there might not be any overtraining at all... If the author plots the error on a test sample, you can tell right away what is happening with overtraining.

 
rip >> :

And there might not be any overtraining at all... If the author plots the error on a test sample, you can tell right away what is happening with overtraining.


I agree. He is working with a black box, so overtraining is very likely. Dear iliarr, could you publish a graph of the training?
 
iliarr >>:.............

You shouldn't be relying on the movings. Or rather, you should not be using only moving averages. Try experimenting with a set of indicators of different types; ideally each indicator's algorithm should be radically different from the others. Then the network will receive more information.

One more point.

You are using a reversal trading system based on the NN's signals. That is exactly the same as a standard moving-average Expert Advisor, no better and no worse.

Look instead for a way to determine the SL and TP sizes with the help of the NN, and for ways to manage open positions. You could even open positions at random.


StatBars wrote :>>

There are many pitfalls in training a network with a GA that you are unlikely to be able to solve... At the very least, the question "How long will the network keep working as well as it did in training?" does not seem answerable to me at all when training with a GA.

I recommend moving to the standard approach of solving a prediction problem with a NN.

A GA is just an optimization tool (a screwdriver for the machine). With minimal changes you can use it or any other optimization algorithm (screwdriver).

 

Hello

I have always been interested in learning about NS, but as soon as I start reading the literature on the subject my head starts to boil, and in the end I still cannot even understand what a NS is.

Could you give a simple example (on the fingers, so to speak) explaining what it is?

Thank you