Discussion of article "Practical application of neural networks in trading (Part 2). Computer vision"

 

New article Practical application of neural networks in trading (Part 2). Computer vision has been published:

Computer vision allows a neural network to be trained on the visual representation of the price chart and indicators. This approach makes it possible to work with a whole set of technical indicators at once, since their values do not need to be fed to the network in numerical form.

Before preparing an array of images, define the purpose of your neural network. Ideally, it would be trained on pivots, which would require screenshots taken at the last extreme bar. However, that experiment showed no practical value, so we will use a different set of images. Later you can experiment with other arrays, including the one mentioned above; this may provide further evidence of how well neural networks solve image-based classification tasks. The network responses obtained on a continuous time series require additional optimization.

Let us not complicate the experiment and focus on two categories of images:

  • Buy - when the price moves up or when the price has reached the daily low
  • Sell - when the price moves down or when the price has reached the daily high

[Example "Buy" screenshots: Buy, Buy1, Buy2, Buy3]

For training purposes, movement in either direction is defined as the price reaching new extreme values in the direction of the trend; a chart screenshot is taken at each of these moments. Trend reversal moments also matter for training, so a screenshot is likewise taken when the price reaches the daily high or low.
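
A minimal sketch of this image-classification setup (not the article's exact code; the folder names, image size and Keras/TensorFlow usage here are assumptions):

import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (96, 192)        # assumed screenshot size; adjust to your chart images
DATA_DIR = "train_images"   # hypothetical folder with Buy/ and Sell/ subfolders

train_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    DATA_DIR, validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

# Small convolutional classifier: Buy vs Sell from raw screenshot pixels
model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)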

Author: Andrey Dibrov

 

Very nice article!
I enjoyed reading it word by word (in translation), as I also ran a similar CNN-based market prediction experiment with PyTorch.

Thanks for sharing your hard work! :)

 

It would be interesting to analyse a pure ZigZag (at times when the last extremum has formed, of course).

Many patterns could emerge. Probably.

 
Great article!
 
Andrey Khatimlianskii:

It would be interesting to analyse pure ZigZag (at times when the last extremum has formed, of course).

Many patterns could emerge. Probably.

The only question is why computer vision is needed here. A ZigZag can easily be formalised into plain numbers :)
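
For illustration, a minimal sketch of such a numerical formalisation (the reversal threshold and details are hypothetical, not any specific ZigZag implementation):

# Mark a pivot each time the price reverses from the last extreme by more than `threshold`
def zigzag_pivots(prices, threshold=0.005):
    pivots = []                          # list of (index, price, 'high'/'low')
    last_i, last_p = 0, prices[0]        # running extreme of the current leg
    direction = 0                        # 0 = undefined, +1 = up leg, -1 = down leg
    for i, p in enumerate(prices[1:], start=1):
        if direction >= 0 and p > last_p:
            last_i, last_p, direction = i, p, 1          # extend the up leg
        elif direction <= 0 and p < last_p:
            last_i, last_p, direction = i, p, -1         # extend the down leg
        elif direction == 1 and p < last_p * (1 - threshold):
            pivots.append((last_i, last_p, "high"))      # up leg ended: fix the high
            last_i, last_p, direction = i, p, -1
        elif direction == -1 and p > last_p * (1 + threshold):
            pivots.append((last_i, last_p, "low"))       # down leg ended: fix the low
            last_i, last_p, direction = i, p, 1
    return pivots

print(zigzag_pivots([1.10, 1.12, 1.11, 1.13, 1.09, 1.08, 1.115], threshold=0.01))
# -> [(3, 1.13, 'high'), (5, 1.08, 'low')]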

 
Aleksey Mavrin:

The only question is why computer vision is needed here. A ZigZag can easily be formalised into plain numbers :)

Exactly the same as bars with indicators, which the author considered. Only the ZZ is "cleaner".

 
Andrey Khatimlianskii:

Exactly the same as the bars with indicators that the author considered. Only the ZZ is "cleaner".

I agree. Just idle tinkering, for lack of anything better to do.

 

The point is not to formalise something into "numbers". The ZigZag is a problematic indicator in general... Specifically, it lags in real-time dynamics and does not tell you anything...

https://youtu.be/mcQH-OqC0Bs, https://youtu.be/XL5n4X0Jdd8

Practical application of neural networks in trading. Python (Part 2). Computer vision
  • 2021.01.27
  • www.youtube.com
The stages of preparing the convolutional neural network are described in detail in the article https://www.mql5.com/ru/articles/8668. The video shows the process of preparing the chart images...
 
Andrey Dibrov:

The point is not to formalise something into "numbers". The ZigZag is a problematic indicator in general... Specifically, it lags in real-time dynamics and does not tell you anything...

https://youtu.be/mcQH-OqC0Bs, https://youtu.be/XL5n4X0Jdd8

Andrey, I don't want to criticise or "belittle" your work in any way; the work itself is substantial and useful, if only for the opportunity it gives to learn.

Still, it very much seems that the practical usefulness of such a problem statement is doubtful. After all, the essence of convolutional networks is to detect and extract entities from unformalised input data, formalise them and pass them on to a fully connected layer for classification or elsewhere. So the output of the convolutional part will mainly be the same formalised entities that are already formalised in these indicators; the specific representation is a secondary issue. I don't understand what a convolutional network can find beyond that. Your experiment could verify this if you compared the approach, on the same data, with a classical network fed the indicator values directly as input. I'm sure the convolutional network will simply take longer to train, and the metrics will be no better. Maybe I am wrong and there is something beyond my understanding; then it would be interesting to see a rebuttal.
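
As an illustration of the suggested baseline, a minimal sketch of such a classical fully connected network in Keras, fed indicator values directly (the data below is random and purely stands in for real indicator readings):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# X: one row of indicator readings per bar, y: 1 = Buy, 0 = Sell (dummy data)
X = np.random.rand(1000, 8).astype("float32")     # e.g. 8 indicator values per bar
y = np.random.randint(0, 2, size=(1000,))

mlp = models.Sequential([
    layers.Input(shape=(8,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
mlp.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
mlp.fit(X, y, validation_split=0.2, epochs=10, batch_size=32)

Comparing its validation metrics and training time against the convolutional variant on the same labels would answer the question directly.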

 
Aleksey Mavrin:

Andrey, I don't want to criticise or "belittle" your work in any way; the work itself is substantial and useful, if only for the opportunity it gives to learn.

Still, it very much seems that the practical usefulness of such a problem statement is doubtful. After all, the essence of convolutional networks is to detect and extract entities from unformalised input data, formalise them and pass them on to a fully connected layer for classification or elsewhere. So the output of the convolutional part will mainly be the same formalised entities that are already formalised in these indicators; the specific representation is a secondary issue. I don't understand what a convolutional network can find beyond that. Your experiment could verify this if you compared the approach, on the same data, with a classical network fed the indicator values directly as input. I'm sure the convolutional network will simply take longer to train, and the metrics will be no better. Maybe I am wrong and there is something beyond my understanding; then it would be interesting to see a rebuttal.

I completely agree with this, and I have compared them. But the task in this experiment was quite different - to do without a numerical representation of the indicators used. I mentioned this in the introduction to the article. By the way, you can do without indicators at all... and the results will also be positive. I am now preparing it for launch in live trading; let's see what it shows in practice...

 
Andrey Dibrov:

I completely agree with this, and I have compared them. But the task in this experiment was quite different - to do without a numerical representation of the indicators used. I mentioned this in the introduction to the article. By the way, you can do without indicators at all... and the results will also be positive. I am now preparing it for launch in live trading; let's see what it shows in practice...

I see. At the same time, I will note one plus: this approach fits into the so-called "universalisation of AI systems", i.e. when the same solutions/architectures are used to solve different problems.

I believe it also partly removes the need for data preprocessing: you feed the primary source to the "network" in raw form, and it digests it itself - normalises it and so on.
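
As a small illustration of that point (hypothetical features, assuming Keras), the normalisation can indeed live inside the network itself:

import numpy as np
from tensorflow.keras import layers, models

raw = np.random.rand(1000, 8).astype("float32") * 100.0   # un-scaled "raw" features
norm = layers.Normalization()
norm.adapt(raw)                      # the layer learns mean/variance from the raw data

model = models.Sequential([
    layers.Input(shape=(8,)),
    norm,                            # normalisation happens inside the model
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])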

Thanks for the answer. Good luck with your work :)