Machine learning in trading: theory, models, practice and algo-trading - page 325

 
Renat Akhtyamov:

Yeah. That's cool.

Still, not bad on the whole.


Well, it's a simple neuron; it can't absorb that much information, but it's interesting.
 
Maxim Dmitrievsky:

Well, it's a simple neuron; it can't absorb that much information, but it's interesting.
Can I see a schematic diagram, even on a piece of paper? What goes into the inputs, what's in the middle, and where the output is. I've already read a lot of books (noise suppression, recognition, etc.), but I can't get it to work on the market.
 
Yuriy Asaulenko:
Can I see a schematic diagram, even on a piece of paper? What goes into the inputs, what's in the middle, and where the output is. I've already read a lot of books (noise suppression, recognition, etc.), but I can't get it to work on the market.


https://c.mql5.com/3/126/RNN_MT5.zip

There's an Expert Advisor and a description.

 
Maxim Dmitrievsky:


https://c.mql5.com/3/126/RNN_MT5.zip

There's an Expert Advisor and a description.

Thank you. It's similar to a neural-network noise suppressor. That's exactly what I was afraid of.

Suppose 10 samples of the price series + say 5 predictors - 10*5 = 50 inputs + their derivatives - another 50, plus 20 more inputs of additional information. That's 120 inputs at a minimum. (Sad.)

Computing such matrices for a multilayer network, on the 1m timeframe, and even inside a candlestick, is very difficult.
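Yuriy's input count can be made concrete with a small sketch. This is only an illustration of the arithmetic (the predictor names, shapes, and the use of first differences as "derivatives" are assumptions, not anything from the attached EA):

```python
import numpy as np

# Hypothetical illustration of the input count above:
# 10 samples for each of 5 predictors = 50 inputs,
# their first differences ("derivatives") = another 50, plus 20 extra inputs.
N_SAMPLES = 10      # lookback window per predictor
N_PREDICTORS = 5    # e.g. price + 4 indicators (placeholder)
N_EXTRA = 20        # additional-information inputs (placeholder)

def build_input_vector(predictors, extra):
    """predictors: array of shape (N_PREDICTORS, N_SAMPLES + 1);
    the +1 lets us take 10 first differences alongside 10 raw values."""
    raw = predictors[:, 1:]                # 5 x 10 = 50 raw inputs
    deriv = np.diff(predictors, axis=1)    # 5 x 10 = 50 difference inputs
    return np.concatenate([raw.ravel(), deriv.ravel(), extra])

rng = np.random.default_rng(0)
x = build_input_vector(rng.normal(size=(N_PREDICTORS, N_SAMPLES + 1)),
                       rng.normal(size=N_EXTRA))
print(x.shape)  # (120,) -- 50 + 50 + 20, matching the count above
```

The point stands regardless of the placeholders: the input dimension grows multiplicatively with lookback depth and predictor count, which is why 120 is already the minimum here.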

 
Maxim Dmitrievsky:


And what does it mean when the results start to go into turbulence during genetic optimization? :) the graph should improve over time

I don't recommend optimizing for maximum balance. Optimization failures indicate problems with "edge" parameters. Try to find out how much the parameters in the failure zone differ from the variants slightly to the left in the picture.

Optimization works very simply: it moves toward increasing the optimization criterion while occasionally checking previously unexplored zones "just in case", so as not to miss (if there is one) a path to even better results.

But in any case you should be careful with such optimization results. There should be a gradual upward shift of the cloud of variants, with only rare variants in the lower zone of the graph.
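The search behaviour Andrey describes - mostly climbing toward a better criterion, occasionally probing unexplored zones - can be sketched in a few lines. This is a toy 1-D stand-in, not the tester's actual genetic algorithm; the criterion function and all parameters are made up:

```python
import random

def criterion(p):
    """Made-up smooth optimization criterion with its peak at p = 3."""
    return -(p - 3.0) ** 2

def optimize(steps=200, explore_prob=0.1, seed=1):
    rng = random.Random(seed)
    best_p = rng.uniform(-10, 10)
    best_v = criterion(best_p)
    for _ in range(steps):
        if rng.random() < explore_prob:
            cand = rng.uniform(-10, 10)        # probe an unexplored zone "just in case"
        else:
            cand = best_p + rng.gauss(0, 0.5)  # local move near the current best
        v = criterion(cand)
        if v > best_v:                         # keep only improvements
            best_p, best_v = cand, v
    return best_p, best_v

p, v = optimize()
print(round(p, 2), round(v, 4))
```

On a smooth criterion like this the cloud of tried variants drifts upward, which is exactly the healthy picture described above; persistent failures near certain parameter values are the "edge" warning sign.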

 
Yuriy Asaulenko:

Thank you. It's similar to a neural-network noise suppressor. That's exactly what I was afraid of.

Suppose 10 samples of the price series + say 5 predictors - 10*5 = 50 inputs + their derivatives - another 50, plus 20 more inputs of additional information. That's 120 inputs at a minimum. (Sad.)

Computing such matrices for a multilayer network, on the 1m timeframe, and even inside a candlestick, is very difficult.


Yes, when you increase the number of input parameters there will be trouble :) so look toward improving the logical core rather than the number of layers
 
Andrey Dik:

I don't recommend optimizing for maximum balance. Optimization failures indicate problems with "edge" parameters. Try to find out how much the parameters in the failure zone differ from the variants slightly to the left in the picture.

Optimization works very simply: it moves toward increasing the optimization criterion while occasionally checking previously unexplored zones "just in case", so as not to miss (if there is one) a path to even better results.

But in any case you should be careful with such optimization results. There should be a gradual upward shift of the cloud of variants, with only rare variants in the lower zone of the graph.


I'm rewriting the bot's logic and will check it again.
 
Andrey Dik:

I strongly recommend against optimizing for maximum balance. Optimization failures indicate problems at the "edge" parameters.


I completely agree - optimization is an extremely dangerous thing: I'm proudly standing on top as a millionaire while my deposit is plummeting - that is the usual result of optimization.

The balance curves shown above look normal - but each is a single line. And why a single line with random input? If the input is random, then the balance line should also be random, enclosed in confidence intervals!

A surrogate for confidence intervals could be the three-dimensional (color) graph from the optimization. If the balance curve is accompanied by that graph and the three-dimensional graph is roughly the same color throughout, the grail is close. Otherwise these graphs are not worth a penny.
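SanSanych's confidence-interval idea is easy to simulate. A single backtest balance curve is one draw from a distribution; under random entries you can generate that whole distribution and read off a band. Everything below (symmetric ±1 trade outcomes, trade counts, seed) is a made-up toy market, not his proposal:

```python
import numpy as np

# Monte Carlo band for random-entry balance curves: each trade is a symmetric
# +/-1 coin flip, so every balance path is a random walk. A real strategy's
# curve should escape this band to mean anything.
rng = np.random.default_rng(42)
n_paths, n_trades = 1000, 500
pnl = rng.choice([-1.0, 1.0], size=(n_paths, n_trades))
balance = pnl.cumsum(axis=1)                       # 1000 balance curves

# 95% confidence band of the random-entry balance at each trade number
lo, hi = np.percentile(balance, [2.5, 97.5], axis=0)
print(f"band after {n_trades} trades: [{lo[-1]:.0f}, {hi[-1]:.0f}]")
```

A curve that merely wanders inside such a band is indistinguishable from random entries - which is exactly the point about a single balance line being worthless on its own.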

 
SanSanych Fomenko:


I completely agree - optimization is an extremely dangerous thing: I'm proudly standing on top as a millionaire while my deposit is plummeting.

The balance curves shown above look normal - but each is a single line. And why a single line with random input? If the input is random, then the balance line should also be random, enclosed in confidence intervals!

A surrogate for confidence intervals could be the three-dimensional (color) graph from the optimization. If the balance curve is accompanied by that graph and the three-dimensional graph is roughly the same color throughout, the grail is close. Otherwise these graphs are not worth a penny.


In our case we're just adjusting the neuron's weights via the optimizer, that's all... what difference does it make whether it's trained through its own learning logic or via the optimizer... And in terms of speed, I think training is much faster in the cloud through the optimizer.

1000% in 2 months - is that bad or what? :) I improved my logic a little.

But it's true that the bulk of the profit was made in April. Since mid-May it's been an even trend.
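Fitting a neuron's weights with a generic optimizer instead of backpropagation, as Maxim describes, looks like this in miniature. The data, the hard-threshold neuron, and the random-search optimizer are all stand-ins for the EA plus strategy-tester setup, not his actual code:

```python
import numpy as np

# Toy version of "train the neuron via the optimizer": treat accuracy on
# synthetic direction labels as the optimization criterion and search weight
# space directly, no gradients involved.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)             # target "trade direction"

def neuron(w, X):
    return (X @ w > 0).astype(float)           # hard-threshold neuron

def accuracy(w):
    return float((neuron(w, X) == y).mean())   # optimization criterion

best_w = rng.normal(size=3)
best_acc = accuracy(best_w)
for _ in range(2000):                          # random search = crude optimizer
    cand = best_w + rng.normal(scale=0.2, size=3)
    acc = accuracy(cand)
    if acc > best_acc:
        best_w, best_acc = cand, acc
print(f"fit accuracy: {best_acc:.2f}")
```

The trade-off Maxim alludes to is real: a tester/cloud optimizer evaluates a whole EA per candidate and parallelizes well, while gradient methods need a differentiable model but converge in far fewer evaluations.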


 

This thread is huge.

Can someone give me a hint...

I have charts of the movements of several currency pairs. How can I use machine learning to select parameters (lot, direction) for opening/closing orders so that the result is positive as often as possible?

That is, what exactly should I do, and how should I train the program?