What to feed to the input of the neural network? Your ideas... - page 29

 
I have tried all the MLP and RNN architectures: up to 5 layers, up to 20 neurons per layer. The MT5 optimiser does not allow more.

The result is curious: the more layers, the worse; the more neurons, the worse.

The record: 1 input, 1 layer, 1 neuron turned out to be the best.

I don't understand it at all.

That is, 1 input is enough for even 1 neuron, multiplying that input by a single weight, to maintain stable trading for the longest time.

Well, "stable" in the sense of the best of the worst: sets with a stable forward test even appear in the top lines of the optimiser.

Apparently, this is where we need to dig: locally, within a small but cosy "cat box".
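The "record" configuration above is small enough to write down in full. Below is a minimal sketch of a one-input, one-neuron model, with a grid search over its single weight standing in for the MT5 optimiser's parameter sweep. The toy data, target, and all names are assumptions for illustration, not the poster's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=500)                      # one input feature per bar
y = np.sign(x + 0.1 * rng.normal(size=500))   # toy target: next-bar direction

def signal(w, x):
    """The whole 'network': one weight times one input, squashed."""
    return np.tanh(w * x)

def fitness(w):
    """Share of bars where the signal's sign matches the target."""
    return np.mean(np.sign(signal(w, x)) == y)

# Optimiser-style sweep over the single weight, as MT5 would do.
grid = np.linspace(-3, 3, 121)
best_w = grid[np.argmax([fitness(w) for w in grid])]
print(best_w, fitness(best_w))
```

With only one parameter there is almost nothing to overfit, which is one plausible reading of why such a model holds up longest on the forward test.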
 
Ivan Butko #:
The record: 1 input, 1 layer, 1 neuron turned out to be the best. I don't get it at all.

The main factor is the number of variables

 
Ivan Butko #:

I don't get it at all.

Bias-variance tradeoff, ML basics.
 
Maxim Dmitrievsky #:
Bias-variance tradeoff, ML basics.

I tried the opposite: I overfitted it to the extreme, almost point to point, so that it would lose steadily on the forward test, and then reversed the positions. Yes, the draining stopped, but it turned into a flat because of the spread paid on a huge number of trades. One failure offsets another.

 
Ivan Butko #:
I overfitted it to the extreme ... the draining stopped, but it turned into a flat.

That's how it's done. First a large network is trained, then layers and neurons are pruned away towards some optimum, minimising the bias on new data. In parallel, the variance (error spread) on the training data increases. If nothing works at all, the problem is in the data.

That is, make it decent on the training data and acceptable on the test data. A compromise. If you don't like the result, you have to change the data. And so it goes, round and round, indefinitely.
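The "train big, then discard" procedure can be sketched with magnitude pruning of a linear model standing in for dropping layers and neurons: fit a model with many weights, zero out the smallest ones, and check the error on held-out data. The data, sizes, and helper names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 30
X = rng.normal(size=(n, d))
true_w = np.zeros(d)
true_w[:3] = [1.5, -2.0, 0.7]                 # only 3 of 30 inputs matter
y = X @ true_w + 0.5 * rng.normal(size=n)
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

w = np.linalg.lstsq(X_tr, y_tr, rcond=None)[0]   # dense "big" model

def val_mse(w):
    """Error on held-out data, the quantity being minimised by pruning."""
    return np.mean((X_va @ w - y_va) ** 2)

def prune(w, keep):
    """Keep only the `keep` largest-magnitude weights, zero the rest."""
    out = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-keep:]
    out[idx] = w[idx]
    return out

pruned = prune(w, 3)
print(val_mse(w), val_mse(pruned))
```

Pruning here recovers exactly the three inputs that carry signal; the discarded weights were fitting noise, which is the "variance on the training data" part of the compromise.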
 
Try volatility (an std indicator). It will do better on new data, because it is always roughly the same, depending on the time. A difference will only appear if, on new data with the same volatility, the market moves in a different direction on average. Then you can also add a time filter to find when it behaves the same.
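A sketch of the suggested feature, assuming "std indicator" means a rolling standard deviation (here of returns; the window length and the generated prices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(3)
prices = 100 + np.cumsum(rng.normal(0, 0.5, 300))  # synthetic price path
returns = np.diff(prices)

def rolling_std(x, window):
    """Standard deviation of the last `window` values at each step."""
    return np.array([x[i - window:i].std() for i in range(window, len(x) + 1)])

vol = rolling_std(returns, 20)
print(vol[:5])
```

Unlike raw price, this feature stays in roughly the same range on new data, which is the stationarity argument being made above.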
 
Maxim Dmitrievsky #:
Try volatility (std indicator).

Thanks for the advice.

I had an idea to normalise the data (cut off the tail) to one decimal place (.0), supposedly to create stationarity, because the network reacts to these small digits and, because of them, simply memorises the "path" of the price.
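The truncation idea can be sketched in a couple of lines (the prices below are made up; `np.floor` is used rather than rounding so the tail is genuinely cut off):

```python
import numpy as np

# Illustrative: drop everything past the first decimal so the network
# cannot key on the fine digits of the price.
prices = np.array([1.23456, 1.23789, 1.24012, 1.23901])
coarse = np.floor(prices * 10) / 10   # "cut off the tail" to one decimal
print(coarse)
```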

 
Ivan Butko #:
I had an idea to normalise the data (cut off the tail) to one decimal place, to create stationarity.

It won't help much; it's similar to adding random noise to the features. In some cases it will be a little better, because the model overfits less, but it is no panacea at all.
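The noise comparison can be sketched as a simple augmentation step: jitter the inputs so the model cannot memorise exact values. The noise scale and the number of copies are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def augment(X, sigma=0.01, copies=5):
    """Return the original rows plus `copies` noisy duplicates of them."""
    noisy = [X + sigma * rng.normal(size=X.shape) for _ in range(copies)]
    return np.vstack([X] + noisy)

X = rng.normal(size=(100, 4))
X_aug = augment(X)
print(X_aug.shape)
```

Both truncation and noise blur the fine structure of the inputs; as noted above, that regularises but does not fix unpredictable data.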
 
Thinking logically, you need some kind of long-term trend indicator plus volatility. The trend shows in which direction to open deals, and volatility pinpoints the moments. You won't come up with anything else out of indicators.

And different trends must be included in the training so that it learns to distinguish between them.
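A sketch of combining the two: a long moving-average slope gives the direction, and trades are allowed only when rolling volatility is elevated. The window lengths, the median threshold, and the synthetic prices are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
prices = 100 + np.cumsum(rng.normal(0.02, 0.5, 500))  # synthetic trending path

def sma(x, n):
    """Simple moving average over a window of n values."""
    return np.convolve(x, np.ones(n) / n, mode="valid")

trend = np.sign(np.diff(sma(prices, 50)))             # long-MA direction
vol = np.array([prices[i - 20:i].std() for i in range(50, len(prices))])
sig = np.where(vol > np.median(vol), trend, 0)        # trade only when volatile
print(sig[:10])
```

The output is +1/-1 where a trade is taken in the trend's direction, and 0 where volatility is too low to act.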
 

You can input anything you want:

time of day, day of the week, moon phases, and so on.

A decent network will sort the useful data from the useless by itself.

The main thing is what to train it on!

Supervised learning is not a good fit here. Networks with backpropagation are simply useless.