Machine learning in trading: theory, models, practice and algo-trading - page 1962

 
Ivan_Invanov:
Did you use the demo?

In the tester.

I tried it; there's an example for MT4:

https://github.com/darden1/tradingrrl

 
Maxim Dmitrievsky:

In the tester.

I tried it; there's an example for MT4:

https://github.com/darden1/tradingrrl

The tester doesn't give reliable information. I'm talking about EAs without neural networks here, but I don't see much difference in that respect. Every strategy, and even the same strategy with different parameters, correlates differently with the tester's results, so I can't even estimate it approximately.

 
Ivan_Invanov:

The tester doesn't give reliable information. I'm talking about EAs without neural networks here, but I don't see much difference in that respect. Every strategy, and even the same strategy with different parameters, correlates differently with the tester's results, so I can't even estimate it approximately.

The tester gives reliable information

 
Maxim Dmitrievsky:

The tester gives reliable information

Well, check it out on the demo.

 
mytarmailS:

Maybe we should write to him and ask him to send the code? It seems to be his )

I've read it twice myself and still don't understand how it works, how the memory is arranged there, or how he beat the properties of the "super-precision" layer... Anyway, the article isn't great, but the thing itself is good; the author just needs to work on his explanations... So if you figure it out, at least write up how it works.

I'm too lazy to debunk yet another myth ) someone tweaked something there and loudly announced that there is no overfitting. Let's see.

 
Maxim Dmitrievsky:

too lazy to debunk yet another myth ) someone tweaked something there and loudly announced that there is no overfitting. Let's see...

Well, the article shows that the man understands algorithms.

You should try it; experiment is the criterion of truth.
 
mytarmailS:

Well, the article shows that the man understands algorithms

You should try it; experiment is the criterion of truth.

The scheme is simple. We feed in a feature vector, the network outputs a signal in the range [0; 1]. We open a trade and close it on the next iteration. Say we decide that above 0.5 means buy, below means sell. If the previous deal closed in the red, we give the network a penalty through a separate input; if it closed in the black, we reward it. And so on at every iteration. As a result, it adjusts its weights in the course of trading. How his picture shows it trading in the plus right from the start is nonsense ), since it's dumb at the beginning. Most likely he trained it first and then retrained it on the same area.
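Just to pin down the control flow described above, here is a minimal sketch in Python with a toy one-neuron "network", synthetic prices and a crude weight nudge; the names and the update rule are my own assumptions for illustration, not the article's RRL gradient:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "network": a single sigmoid neuron over the feature vector plus
    # one extra input that carries the reward/penalty for the last deal.
    n_features = 5
    w = rng.normal(scale=0.1, size=n_features + 2)   # features + reward input + bias
    lr = 0.01

    def signal(features, last_reward):
        x = np.concatenate([features, [last_reward, 1.0]])
        return 1.0 / (1.0 + np.exp(-w @ x)), x

    prices = np.cumsum(rng.normal(size=500)) + 100.0  # synthetic price series
    last_reward, position, entry = 0.0, 0, 0.0
    last_x = np.zeros(n_features + 2)
    pnl = 0.0

    for t in range(n_features, len(prices) - 1):
        features = np.diff(prices[t - n_features:t + 1])  # recent returns as features

        # Close the previous deal and turn its result into a reward or penalty.
        if position != 0:
            profit = position * (prices[t] - entry)
            pnl += profit
            last_reward = 1.0 if profit > 0 else -1.0    # in the black / in the red
            # Crude online nudge toward (profit) or away from (loss) the last decision;
            # an assumption for the sketch, not the RRL update from the article.
            w += lr * last_reward * position * last_x

        # New decision: above 0.5 buy, below 0.5 sell; close on the next iteration.
        s, last_x = signal(features, last_reward)
        position = 1 if s > 0.5 else -1
        entry = prices[t]

    print("final PnL:", round(pnl, 2))

The only point of the sketch is the loop itself: signal, trade, then the result of the previous deal fed back in as a penalty or reward on the next step, adjusting the weights in the course of trading.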

 

The results in the pictures don't impress me...

Maxim Dmitrievsky:

the scheme is simple.

If it's so simple, then why are there two or three types of networks there?

Why does he write that classical RL won't work?

Why is the memory there?


I'm a total ignoramus in RL, but it seems to me that it's not that simple.

 

several D-neurons (like a small network)

error, % = 45.10948905109489

goodbye )

I emailed the author of the network the cuts and my indignation.