Machine learning in trading: theory, models, practice and algo-trading - page 353

 
Yuriy Asaulenko:
If we are talking about minutes, the market is statistically homogeneous, i.e. the statistics change little (are stable) from week to week and from month to month. Below that I don't know, I haven't studied the question. As far as I remember, you are working on 1 minute.


The base timeframe is 15 min, or OHLC on minutes, but it is still 15 min. I would like to use ticks, but optimization is very slow; someone is now writing a tester in C++ for me, which will be much faster.

In principle, if you re-optimize often on short periods, then you can work on minutes. This is not the Grail, of course, but you can earn something.


 
Has anyone tried implementing Q-learning or other reinforcement learning algorithms? Perhaps someone here has? I am interested in the statistics: how well do they cope with the task of managing a deposit? I found a couple of articles on the subject, but their conclusions are rather vague and ambiguous.
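Not an answer to the statistics question, but for anyone wondering what the core of it looks like, below is a minimal sketch of tabular Q-learning in Python. The state/action discretization, the placeholder environment and the reward here are my own hypothetical assumptions for illustration, not a tested trading setup.

```python
import numpy as np

# Hypothetical discretization: a handful of market states and three actions.
N_STATES, N_ACTIONS = 10, 3   # actions: 0 = hold, 1 = buy, 2 = sell
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

Q = np.zeros((N_STATES, N_ACTIONS))
rng = np.random.default_rng(0)

def step(state, action):
    """Placeholder environment: returns (next_state, reward).
    In a real system this would come from market data / a tester."""
    return rng.integers(N_STATES), rng.normal()

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    if rng.random() < EPSILON:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # classic Q-learning update rule
    Q[state, action] += ALPHA * (reward + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state
```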
 
As promised, I have published in my blog a brief report on the task of NS recognition of an MA crossing - NEURAL NETWORKS AND THE MOVING AVERAGE
 
Yuriy Asaulenko:
As promised, I have published in my blog a brief report on the task of NS recognition of an MA crossing - NEURAL NETWORKS AND THE MOVING AVERAGE

In the news, articles, etc., they talk about the achievements of neural networks - for example, that they can tell kittens from puppies. But apparently those are very expensive commercial or experimental networks, which ordinary traders can neither afford nor develop.

And can the NS available to us (e.g. from R or ALGLIB) distinguish primitive things like triangles, squares and circles from each other? Like in educational games for 2-3 year old children.

It seems to me that materials on this subject could go into a new thread https://www.mql5.com/ru/forum/192779 so that, if there is a result and an opportunity to repeat the experiment, it can be found and reproduced; here, across 350 pages, it is already difficult to find anything...
How smart are the neural networks available to traders?
  • 2017.05.16
  • www.mql5.com
 
elibrarius:

In the news, articles, etc., they talk about the achievements of neural networks - for example, that they can tell kittens from puppies. But apparently those are very expensive commercial or experimental networks, which ordinary traders can neither afford nor develop.

And can NS (for example from R or ALGLIB) distinguish primitive things, such as triangles, squares and circles, from each other? Like in educational games for 2-3 year old children.

And a really difficult variant - three-dimensional figures...

I don't know about ALGLIB, but in R you can. You need an input matrix of about 16x16, i.e. 256 input neurons (well, maybe a little fewer). Ready implementations for similar problems can be found on the web.

With rotation it is also possible, but the NS will have to be deeper and more complicated. Personally, I'll pass)).
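To make the 16x16 idea concrete, here is a minimal sketch in Python (the thread talks about R, but the scheme is the same): synthetic 16x16 rasters of squares and circles fed into a plain MLP with 256 inputs. The shape generators are my own assumption for illustration, not anything from R or ALGLIB.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

def square(size=16):
    img = np.zeros((size, size))
    a = rng.integers(2, 6)                 # half-side of the square
    c = rng.integers(a + 1, size - a - 1)  # random center, kept inside the grid
    img[c - a:c + a, c - a:c + a] = 1.0
    return img

def circle(size=16):
    img = np.zeros((size, size))
    r = rng.integers(3, 6)                 # radius
    c = rng.integers(r + 1, size - r - 1)  # random center
    y, x = np.ogrid[:size, :size]
    img[(x - c) ** 2 + (y - c) ** 2 <= r ** 2] = 1.0
    return img

# 256 inputs per example - exactly the "16x16 matrix" mentioned above
X = np.array([square().ravel() for _ in range(500)] +
             [circle().ravel() for _ in range(500)])
y = np.array([0] * 500 + [1] * 500)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```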

Although, for building a trading system (TS), I do not see an application for your tasks in this form.

 
Yuriy Asaulenko:

Although, for building a trading system (TS), I do not see an application for your tasks in this form.

I just want to make sure that the networks available to us can handle simple tasks before applying them to more complex trading tasks.
 
elibrarius:
I just want to make sure that the networks available to us can handle simple tasks before applying them to more complex trading tasks.
After experimenting with the MA, at this stage I see the main problem of using NS in a TS as the preparation of the data fed to the NS inputs. Raw data is probably indigestible for most NS.
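A typical first step of that preparation, sketched below under my own assumption about the input (a plain close-price series): convert raw prices to log returns and standardize them, so the NS sees values on a comparable scale instead of non-stationary price levels.

```python
import numpy as np

def prepare_inputs(close, window=20):
    """Turn a raw close-price series into standardized log returns.
    Raw prices are non-stationary; returns scaled to roughly N(0, 1)
    are far easier for most NS to digest."""
    close = np.asarray(close, dtype=float)
    r = np.diff(np.log(close))             # log returns
    # NB: in a real TS the mean/std must be estimated on past data only,
    # otherwise future information leaks into the inputs
    z = (r - r.mean()) / r.std()           # z-score standardization
    # stack the last `window` returns as one input vector per example
    X = np.array([z[i - window:i] for i in range(window, len(z))])
    return X

# usage with synthetic prices
prices = 100 * np.exp(np.cumsum(np.random.default_rng(2).normal(0, 0.01, 1000)))
X = prepare_inputs(prices)
print(X.shape)   # (n_examples, 20)
```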
 
elibrarius:
I just want to make sure that the networks available to us can handle simple tasks before applying them to more complex trading tasks.

Squares and circles can be recognized by the most classic MLP without any problems. If you go a little deeper into the subject, you will meet the classic task of handwritten digit recognition, MNIST: an ordinary MLP flies to 97% with a breeze, then creaks out another half percent, and after that the dancing with a tambourine begins. In fact, the same pattern shows up in many ML tasks: the battle is mostly not for an adequate result, but for the 3rd-5th decimal place of the score.
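The point is easy to check without downloading MNIST itself: a sketch on scikit-learn's bundled 8x8 digits set (a small stand-in for MNIST), where a plain MLP also lands in the high nineties.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 8x8 handwritten digits, a small stand-in for MNIST
X, y = load_digits(return_X_y=True)
X = X / 16.0   # pixel values are 0..16; scale to 0..1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))   # typically around 0.97
```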

 
Yuriy Asaulenko:
After experimenting with the MA, at this stage I see the main problem of using NS in a TS as the preparation of the data fed to the NS inputs. Raw data is probably indigestible for most NS.

This thread was started mainly to solve exactly that problem. But it is much more useful to convince yourself of the importance of this question from your own experience. You are at the beginning of a difficult but interesting path.

Good luck

P.S. In R, and through it also in Python, all the most sophisticated neural networks known today are available. You just need to learn how to use them.

 
SanSanych Fomenko:


Yeah and screw it.

Simply take the most basic thing - a random forest. Usually the result of training is classes. In reality the algorithm gives the probability of a class, from which we then derive the class. Usually the probability is just split in half for two classes.

What if you divide it into classes like this: 0.0-0.1 is one class and 0.9-1.0 is the other class? And the gap between 0.1 and 0.9 means staying out of the market?

That's what I saw in the article.
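A minimal sketch of that three-zone scheme with a random forest; the 0.1/0.9 cut-offs are the ones proposed above, and the data here is synthetic for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:1500], y[:1500])

p = rf.predict_proba(X[1500:])[:, 1]   # probability of class 1

# three zones instead of the usual 0.5 split:
signal = np.where(p <= 0.1, -1,          # confident class 0: e.g. sell
         np.where(p >= 0.9, 1, 0))       # confident class 1: e.g. buy; otherwise flat
print("fraction out of market:", np.mean(signal == 0))
```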

This is solved more correctly and elegantly in CORElearn::calibrate.

And has been for quite some time now.

Good luck
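I can't vouch for the exact interface of CORElearn::calibrate (an R package), but the same idea of calibrating the forest's raw probabilities before thresholding them is available in Python as sklearn's CalibratedClassifierCV; a sketch of that analogue:

```python
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# wrap the forest so its probability outputs are calibrated (isotonic regression)
cal = CalibratedClassifierCV(RandomForestClassifier(n_estimators=200, random_state=0),
                             method="isotonic", cv=5)
cal.fit(X, y)
p = cal.predict_proba(X)[:, 1]   # calibrated probabilities to threshold on
```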