What to feed to the input of the neural network? Your ideas... - page 11

 

Tick = Transaction = (Time, Price, Volume)

Volume confirms that the Price is not pulled out of thin air. There is NO real volume in Forex, so the Price is "informational" (drawn). That's why nothing works... below H4 for sure, where sudden volatility is high.

In stock markets it is Volume that explains volatility.

 
I made a mistake when testing: in the export script I specified exporting only one piece of data, but in the Expert Advisor I forgot to duplicate this rule for the input. As a result, the tester's chart shows logical gibberish, but... if you increase the threshold (filter) on transactions, you can get a profit on both back and forward tests.

I randomly chose 2 months of forward tests, the first November 2021, the second July 2022. I trained before each one, repeating the "erroneous" actions. The first sets from the optimised list give a positive outcome not only for these months, but also do not drain (they hold flat) until the end of 2022. In general, these lines, this unapologetic approach and outright mockery of neural networks and common sense cause pain to the professionals here; don't be angry. And we continue on, trying it out for a couple more months.
 
Ivan Butko #:
I made a mistake when testing: in the export script I specified exporting only one piece of data, but in the Expert Advisor I forgot to duplicate this rule for the input. As a result, the tester's chart shows logical gibberish, but... if you increase the threshold (filter) on transactions, you can get a profit on both back and forward tests.

I randomly chose 2 months of forward tests, the first November 2021, the second July 2022. I trained before each one, repeating the "erroneous" actions. The first sets from the optimised list give a positive outcome not only for these months, but also do not drain (they hold flat) until the end of 2022. In general, these lines, this unapologetic approach and outright mockery of neural networks and common sense cause pain to the professionals here; don't be angry. And we continue on. I'll try it for a couple more months.

You could also iterate over the seed. In theory, there is a value that is optimal for the next couple of years.

 
Maxim Kuznetsov #:

You could also iterate over the seed. In theory, there is a value that is optimal for the next couple of years.

Please clarify what is meant. I found the word seed in the perceptron script from the Brazilian's article on the multilayer perceptron, where it seeds the random number generator:

// Sets the MQL5 random number generator seed (for reproducible runs)
void seed(int seed=-1)
  {
   if(seed!=-1)
      _RandomSeed=seed;
  }

 
I don't like predicting N steps ahead, but sometimes it hits. It could be explored in that direction. Below, the prediction is on the left and the actual outcome on the right.

 
Ivan Butko #:
I don't like predicting N steps ahead, but sometimes it hits. It could be explored in that direction. Below, the prediction is on the left and the actual outcome on the right.

Most likely that "sometimes" = 50%. But check it; it might be your gold mine.
 

I tried feeding in the difference between Close[1] and the extrema over every N H1 candles, with N doubling each time (24, 48, 96, 192, 384, 768, 1536, 3072).
That is, from today's extrema, ........ back to half a year.

Training - a year. From 2021 to 2022.

Forward from 2022 to today.

There are 16 values in total.
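A minimal Python sketch of how such inputs could be built (my own reconstruction, not the author's script; function and variable names are mine). Each look-back window contributes the distance from the last close to the window's high and to its low, giving 8 windows x 2 extrema = 16 values:

```python
def extremum_features(highs, lows, close_1):
    """highs/lows: lists ordered oldest->newest; close_1: last close (Close[1])."""
    windows = [24, 48, 96, 192, 384, 768, 1536, 3072]  # H1 candles, doubling
    feats = []
    for n in windows:
        n = min(n, len(highs))                    # clamp if history is short
        feats.append(close_1 - max(highs[-n:]))   # distance to window high (<= 0)
        feats.append(close_1 - min(lows[-n:]))    # distance to window low  (>= 0)
    return feats
```

The output is the 16-element input vector; in practice it would still need scaling before being fed to the network.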

The result is interesting because, for the first time, the balance curve tried to go up on the forward. Before that, it could hold for a couple of months at most, but still went down over the distance. The NeuroPro program was used.

At the same time, I did not feed the network a value corresponding to the previous entry in the previous sample (Close[1] - Close[2]), i.e. there are no current price values among the predictors.



Although the chart is terrible, it at least gives reason to believe that working with extrema can yield some results if done properly.


UPD

Alexey, who helped run the neural network from the documentation: no results so far, just a chaotic picture. Either it is not suited to prices, or it needs to be refined and prepared differently somehow.

 

What the neural network input needs to be fed...

I'm sorry.

 
Interesting point: the genetic algorithm in the strategy tester is normally meant to find profitable trading. Accordingly, it searches for parameter values at which trading over the selected period is better.

I tried to set the task differently: open a trade whenever the neural network output equals the next price +/- n points, and close it immediately. Only the two previous closing prices are used as input. The optimised parameters are the weights. But there is no neural network in the usual sense: we just multiply these two inputs by their weights. Adding positive and negative values makes the output number "wiggle" in the neighbourhood of the next price, and the smaller we set the parameter n, the closer the output must match the next price +/- n.
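A toy Python illustration of the idea above (my own sketch, not the actual EA): the "network" is just a weighted sum of the two previous closes, and the tester's genetic optimiser would search for weights that keep the output within n points of the next price as often as possible.

```python
def predict(w1, w2, c1, c2):
    """Degenerate 'network': weighted sum of the two previous closes."""
    return w1 * c1 + w2 * c2

def hit(w1, w2, c1, c2, next_close, n_points, point=0.0001):
    """True if the output lands within +/- n points of the next close."""
    return abs(predict(w1, w2, c1, c2) - next_close) <= n_points * point
```

Counting `hit(...)` over a history is the quantity the optimiser would effectively maximise; shrinking n_points forces the output to track the next price ever more tightly.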

As a result, the number of trades increased during optimisation. That is, the ordinary tester, limited in the number of optimisable parameters, began to "follow the price" closer and closer.

So what is all this for: we wait for the end of optimisation, choose the set with the largest number of trades and add another condition: when the forecast flies away by "many" points, open a trade in that direction.

Just an observation; it needs further testing.
 
A couple of months ago, I tried a different approach:

I pick a spot on the chart with a long-term downward trend, and take it from beginning to end.

I optimise only BUY trades
SELL trades are switched off.

At the end, I select the top set in the category of either "Max Complex Solution" or "Max Recovery Factor".

I run the forward with this BUY set: a year and a half of stable growth. And the balance rises beautifully, shaking only slightly, as if after an unsuccessful entry you re-enter at a better price.

The idea is this: training a neural network (with backpropagation) or optimising (in the tester) SELL on a falling chart, or BUY on a rising chart, is overfitting.

This position is confirmed by training on EURUSD for 2020, after which the trend reverses exactly at the new year. All the top sets in the optimiser, and the trained models of a genuine (backprop) neural network, fail.

And if you train on 2021, the vast majority of sets or models handle almost all of 2022, until the long-term trend reversal in November or somewhere in autumn.

// ---------

About entry ideas:

With a genuine (backprop-trained) neural network I tried marking up the chart as follows: we look ahead and, from the last closing price, measure the maximum amplitude of the subsequent up and down closing-price moves.

Whichever side first goes beyond 100 pips, the label is made in that direction (an analogue of a stop loss on the opposite side that was not hit).
That is, we loop through the forward data, and if the 3rd, 5th or 10th closing price is more than 100 pips away and above the current price, the whole set of variables going into the network input is labelled 1; if the opposite is true, then -1.
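A hedged Python reconstruction of that labelling rule (names and the exact tie-breaking are my assumptions): scan forward from the current close, and the first side whose closing-price move exceeds the threshold sets the label.

```python
def label(closes, i, threshold=0.0100):
    """Label bar i: 1 if price first rises >= threshold (100 pips for a
    4-digit quote), -1 if it first falls >= threshold, 0 if neither
    happens within the remaining data."""
    base = closes[i]
    for c in closes[i + 1:]:
        if c - base >= threshold:
            return 1
        if base - c >= threshold:
            return -1
    return 0
```

Each input vector built at bar i would then be paired with `label(closes, i)` as the training target.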

As input data, I gave the full amplitude of price movement, so as not to ignore any move (feeding only closing prices essentially loses some information about the chart; it becomes neutered). The encoding is as follows:

First shadow - body - second shadow.
Accordingly, if the candle is up, the first shadow is a downward movement, so its size carries a "-" sign.
Then I converted all values to the range from -1.0 to 1.0.
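A small Python sketch of that encoding under my own assumptions (in particular, I assume the path O->L->H->C for an up candle and O->H->L->C for a down candle, and scale by the largest absolute segment; the post does not specify these details):

```python
def candle_segments(o, h, l, c):
    """Decompose one candle into three signed moves:
    first shadow, body/range, second shadow."""
    if c >= o:                         # up candle: dips to the low first
        return [l - o, h - l, c - h]   # down, up, down
    return [h - o, l - h, c - l]       # down candle: pops to the high first

def normalize(values):
    """Scale a flat list of segments into [-1.0, 1.0]."""
    m = max(abs(v) for v in values) or 1.0
    return [v / m for v in values]
```

The signs fall out naturally: for an up candle the first segment (open down to the low) is negative, matching the rule described above.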

As a result, with an MLP (NeuroPro) and a 10-10-10 architecture, the forward ran for 4-5 months with a thin, rising balance line and, most importantly, a high frequency of trades.

With NeuroPro I have the best one-off result so far. In 99.9% of cases it overfits as much as it can.

// -----

In parallel, I am playing with neural networks in the optimiser.

I wrote a convolutional neural network (CNN) with pooling in procedural style. As it turned out, it can be done without loops over arrays.

Since the optimiser is limited in the number of external variables and their step of change, we have to economise.

The current build is CNN-MLP: 8 filters, pooling of size 2, then 4 outputs, which are passed to a fully connected MLP layer of 4 neurons with a sigmoid activation function (or tanh, I don't remember).
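For readers unfamiliar with the pipeline, here is a rough Python sketch of such a forward pass (the author's version is procedural MQL5 without loops; this loop-based version, the max pooling, and all sizes in the test are my assumptions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def conv1d(x, kernels):
    """Slide each kernel over x; each output row has len(x)-len(k)+1 values."""
    out = []
    for k in kernels:
        row = [sum(k[j] * x[i + j] for j in range(len(k)))
               for i in range(len(x) - len(k) + 1)]
        out.append(row)
    return out

def maxpool(row, size=2):
    """Keep the max of each non-overlapping window of `size` values."""
    return [max(row[i:i + size]) for i in range(0, len(row), size)]

def forward(x, kernels, w, b):
    """CNN -> pooling -> fully connected sigmoid layer."""
    pooled = [maxpool(r) for r in conv1d(x, kernels)]
    flat = [v for r in pooled for v in r]          # flatten pooled maps
    return [sigmoid(sum(wi * f for wi, f in zip(wr, flat)) + bi)
            for wr, bi in zip(w, b)]
```

In the optimiser-based setup, every element of `kernels`, `w` and `b` would be an external input parameter for the genetic algorithm to mutate, which is exactly why the parameter count has to be kept small.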

Now I want to add LSTM, that super-duper technology for forgetting or keeping each layer's state. Googled it: complicated stuff, but it can be implemented with procedural MQL5 methods.

We will get a CNN-LSTM-MLP architecture on a genetic algorithm (the optimiser).

I know that any neural network with error backpropagation (real learning) loses its meaning when weight selection is handed over to the optimiser.
But here I am just curious to turn it over in my hands.