Expert Advisor for an article. Testing for all comers. - page 4

 
Reshetov:


I have improved the Expert Advisor so that it calculates the probability of a future short position from the trading signals. Accordingly, if the probability is higher than 0.5, we open a short position; otherwise we open a long one.



I have made the Take Profit and Stop Loss fixed, i.e. they no longer float with the signals. This is necessary to be able to apply money management (MM).
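A minimal sketch of what that entry rule could look like in MQL4 (the OrderSend details here are an illustration only, not necessarily the EA's actual code; result, lots, sl and mn are the names used elsewhere in this thread):

   // Hypothetical entry: probability of a short above 0.5 -> sell, otherwise buy,
   // with symmetric fixed Stop Loss and Take Profit of sl points
   if (result > 0.5)
      OrderSend(Symbol(), OP_SELL, lots, Bid, 3, Bid + sl * Point, Bid - sl * Point, "", mn, 0, Red);
   else
      OrderSend(Symbol(), OP_BUY, lots, Ask, 3, Ask - sl * Point, Ask + sl * Point, "", mn, 0, Blue);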

Here is the picture: the first 136 trades are the optimization, the rest are out of sample (OOS).


Since this thread is of no interest to traders and only flooders come here to discuss the personality of the topic starter, I am not attaching the modified code of the Expert Advisor.


Yuri, don't listen to them. Please post the modified EA.
 
Andru80:
Yuri, don't listen to them. Please post the modified EA.

When the article is written, the Expert Advisor will follow. Let's see what's what... With some experience, you don't even need tests to evaluate the idea. We will wait.
 
Figar0:

When the article is written, the Expert Advisor will follow. Let's see what's what... With some experience, you don't even need tests to evaluate the idea. We will wait.

I am posting the Expert Advisor before the article, so that everyone can test it and give their opinion (though they will probably just start discussing my personality again).


The point is not even the Expert Advisor itself, but the anti-fitting algorithm I have added to it.

Now we no longer have to dig deep into the test results hunting for successful forward tests; it is enough to look through the top rows of the results, where they should be, quite possibly in the very top row.

Test results and optimization results may differ considerably, because anti-fitting is enabled only during optimization and disabled the rest of the time.
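In MQL4 such a switch is usually based on the built-in IsOptimization() flag; a minimal sketch, with applyAntiFitting() as a hypothetical placeholder for the undisclosed algorithm:

   // Run the anti-fitting logic only while the tester is optimizing;
   // ordinary testing and live trading skip it entirely
   if (IsOptimization())
      applyAntiFitting();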

Here is the chart for the top row of test results (the first 404 deals are the optimization, the rest are forward tests):


The compiled Expert Advisor is attached (I will open its source code in the article).

The input parameters are:

x0, x1, x2, x3, x4, x5, x6, x7 - adjustable from 0 to 100 in increments of 1. Optimized.

sl - Stop Loss and Take Profit in pips. For example, for EURUSD on H1 with five-digit quotes you can take from 100 to 1000 in increments of 50 (for four digits, remove one zero from all the numbers). Optimized.

lots - volume in lots; set at least 1 lot for optimization. Not optimized.

mn - magic number. Not optimized.

d - number of decimal places in the lot size, i.e. if 0.01 lots is allowed, set d = 2. Not optimized.
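For reference, the list above corresponds to extern declarations along these lines (a sketch: the names follow the list, the default values are assumptions):

extern int    x0 = 0;      // x0..x7: adjustable 0..100, step 1, optimized
extern int    x1 = 0;
extern int    x2 = 0;
extern int    x3 = 0;
extern int    x4 = 0;
extern int    x5 = 0;
extern int    x6 = 0;
extern int    x7 = 0;
extern int    sl = 300;    // Stop Loss and Take Profit in pips, optimized
extern double lots = 1.0;  // volume in lots, not optimized
extern int    mn = 777;    // magic number, not optimized
extern int    d = 2;       // decimal places in the lot size, not optimized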

Optimization is by Profit Factor; the optimization results are sorted by this parameter as well.

Just in case, the ZIP archive also contains a file with the EA settings as I have them set up.

Files:
rnn_v3_1.ex4  7 kb
rnn_v3.zip  1 kb
 

Interesting result!

Yura, do you use the data of the instrument to which the EA is attached in the analysis?

 
Neutron:

Interesting result!

Yura, do you use the data of the instrument to which the EA is attached in the analysis?

Yes, i.e. other instruments are not analysed. The technical analysis is conducted on open prices only.


Basically, the entries are nothing sophisticated, so I can show the code fragments where all of this is calculated:


   // Read the indicator values
   double a1 = input(9, 0);
   double a2 = input(9, 1);
   double a3 = input(9, 2);

   // Calculate the probability of a trading signal for a short position
   double result = getProbability(a1, a2, a3);

   ...

// Indicator readings, must be in the range from 0 to 1
// p - indicator period
// shift - offset back into history, in indicator periods
double input(int p, int shift) { 
  return (iRSI(Symbol(), 0, p, PRICE_OPEN, p * shift) / 100.0);
}
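getProbability itself is not shown (its code is to be opened in the article), but with eight optimized weights x0..x7 in [0, 100] and three inputs in [0, 1], one natural reading - speculation, not the author's published code - is a trilinear interpolation over the corners of the unit cube:

// Speculative reconstruction: interpolate the eight weights x0..x7
// by the three indicator readings and scale the result to [0, 1]
double getProbability(double a1, double a2, double a3) {
   return (((1 - a1) * (1 - a2) * (1 - a3) * x0
          + (1 - a1) * (1 - a2) * a3       * x1
          + (1 - a1) * a2       * (1 - a3) * x2
          + (1 - a1) * a2       * a3       * x3
          + a1       * (1 - a2) * (1 - a3) * x4
          + a1       * (1 - a2) * a3       * x5
          + a1       * a2       * (1 - a3) * x6
          + a1       * a2       * a3       * x7) / 100.0);
}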
 

Thank you.

Anti-fitting... If you recall the neural network classics, anti-fitting comes down to finding the minimum prediction error on the forward data. This can be accomplished by gathering statistics over a hundred or so independent runs, retraining the network on each run. This is understandable, but very difficult to implement due to the finite length of the initial time series and its non-stationarity.

Another way is to get an estimate of the optimal length of the training sample. As far as I know, no one has solved this problem in the general case yet...

Did you manage it?

 
Neutron:

Another way is to get an estimate of the optimal length of the training sample. As far as I know, no one has solved this problem in the general case yet...

You don't have to solve it in general. It's enough to find the right length of the training sample for the particular case, and that is very easy to do.

And it's not anti-fitting; it's second-level fitting. For n-level fitting there must be n+1 samples, the last one being a control sample.

That is exactly the sample Reshetov never has.

 

How simple!

For every point of the curve, statistics have to be gathered. And where are they to come from, if the price series changes completely within a month?

 

No, it doesn't make any sense.

In Yuri's case, training the NS amounts to simply enumerating the weights over the stretch of history I set... right? If so, then the analysis is simply the result of the trades at each iteration. And the statement "...we no longer have to dig deep into the test results hunting for successful forward tests; it is enough to look through the top rows of the results, where they should be, quite possibly in the very top row..." is incorrect - successful forwards may be anywhere, and we must search everywhere.

 
Neutron:

Thank you.

Anti-fitting... If you recall the neural network classics, anti-fitting comes down to finding the minimum prediction error on the forward data. This can be accomplished by gathering statistics over a hundred or so independent runs, retraining the network on each run. This is understandable, but very difficult to implement due to the finite length of the initial time series and its non-stationarity.

Another way is to get an estimate of the optimal length of the training sample. As far as I know, no one has solved this problem in the general case yet...

Did you manage it?

As for trading systems, only fitting to history is possible. Neural networks have two problems at once: fitting and overtraining. And overtraining occurs even on stationary data.

But with neural networks that have adequate inputs it is a bit easier, whether we deal with a neural network package or a self-made net, if we use the method presented by Leonid Velichkovski: divide the training examples into two samples, a training one and a test one; during learning, calculate the results for both samples but train the net on the training sample only. As soon as the results on the test sample stop improving, the net has been trained sufficiently and we should stop at that point. This way we obtain the most adequately trained net - we will not do better and can easily do worse. In this case we do not care exactly how much the network is overtrained or fitted, since we have an extremum on the test sample.
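A minimal sketch of that early-stopping loop (trainOneEpoch() and evaluate() are hypothetical placeholders for the package- or net-specific calls):

// Train on the training sample only, watch the test sample,
// and stop as soon as the test result stops improving
double best = -1.0e308;
for (int epoch = 0; epoch < 10000; epoch++) {
   trainOneEpoch();                  // weights are updated on training data only
   double testResult = evaluate();   // measured on the test sample, never trained on
   if (testResult <= best) break;    // no further improvement - stop training here
   best = testResult;
}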

It is much more difficult with the genetic algorithm built into the terminal, as we cannot calculate the results on the forward sections and stop the optimization in time. Because of this, you have to search for forwards manually in the optimization results sorted by the indicators.

I followed a slightly different path:

1. I found the cause of neural network overtraining and eliminated it. I.e. if the training examples are stationary and contain no contradictions, my algorithm does not overtrain, though it can easily undertrain if the number of examples is insufficient for interpolation.

2. The fitting was more complicated. But that problem also turned out to be solvable, since fitting mainly depends on the quality of learning, i.e. on the adequacy of the system of rewards and punishments; otherwise the algorithm follows the path of least resistance and learns in the way it finds easiest rather than in the way that is actually adequate. So I eliminated these very paths of least resistance, i.e. I forced the genetic algorithm to evaluate the quality of the calculated outcome probabilities rather than the number of pips (a sketch of such a criterion follows after this list).

3. Non-stationarity. This could not be overcome at all, since it depends only on the market, not on the algorithm. It is the only thing that genuinely hinders forecast quality. But once the other training drawbacks are eliminated, its impact is no longer great enough to prevent earning.
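One plausible way to make an optimizer score the quality of the probability calculation rather than the number of pips (point 2 above) is a log-likelihood style fitness; a sketch under that assumption, not necessarily the actual criterion used:

// Hypothetical fitness: reward well-calibrated probabilities instead of pips.
// p[i] is the predicted probability of a short; win[i] is 1 if a short
// would actually have won on trade i, otherwise 0.
double probabilityFitness(double &p[], int &win[], int n) {
   double score = 0.0;
   for (int i = 0; i < n; i++) {
      double q = MathMax(0.000001, MathMin(0.999999, p[i])); // guard against log(0)
      if (win[i] == 1) score += MathLog(q);
      else             score += MathLog(1.0 - q);
   }
   return (score / n); // closer to 0 is better; pips never enter the score
}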