Machine learning in trading: theory, models, practice and algo-trading - page 1393
The classic version won't work, unfortunately... it also all comes down to the features/targets.
Don't be dramatic. A classical NN has been working for more than a year. There are advances with NN regression too. I don't know about RL, but I have no problems with the classics. Formulate the task properly, not like "I want a roast bird", and you won't have any problems. You don't have to predict the price of a candle.)
I'm talking about my own experience.
I don't know exactly how to use RL on the market, but the topic is very interesting. If I don't make some progress in a week or so, I'll quit. That's all.
Do you understand the difference between supervised learning and reinforcement learning? They are completely different approaches; the only thing they have in common is that an NN is used as an approximator.
Of course I understand.) What does that have to do with it? I'm talking about the end result, not the principles. If the result is the same, there is no point in a more complex solution, and it makes no difference what principles lie behind them.
Too abstract... a different principle means a different approach to solving the problem, and different results.
In general, people have dedicated their lives to this, Sutton for example, so "quick" mastering/application is out of the question. There is some very complicated stuff in there.
Judging by your article, it is not such a difficult thing that it takes long to master.
Before the first training, the target is set randomly; then after each training cycle, if it brought a profit it is kept, and if a loss, it is changed.
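A minimal sketch of that keep-or-flip target rule. This is my own illustration, not the poster's code: the function names, the per-bar profit measure, and the flip rule are all assumptions about what "changed if it brought a loss" means.

```python
import random

def evaluate_profit(targets, returns):
    # Hypothetical profit measure: a target of 1 means "go long" on that
    # bar, 0 means "go short"; profit is the return taken in that direction.
    return sum(r if t == 1 else -r for t, r in zip(targets, returns))

def refine_targets(returns, cycles=10, seed=0):
    """Start with random per-bar targets, then after each 'training'
    cycle keep a label if it was profitable and flip it if it lost."""
    rng = random.Random(seed)
    targets = [rng.choice([0, 1]) for _ in returns]
    for _ in range(cycles):
        for i, r in enumerate(returns):
            gain = r if targets[i] == 1 else -r
            if gain < 0:          # this label lost money on its bar...
                targets[i] ^= 1   # ...so change it
    return targets
```

Note that under this rule the labels converge after a single cycle to the sign of each bar's return, i.e. the ideal labels in hindsight, which is arguably why the approach looks simpler than RL proper: the hard part is a model that can actually predict those labels out of sample.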
Your results with RL are no better and no worse than others'. What does your approach have to do with it? What matters is the result, and the results are about the same as a supervised MLP predicting trade entries. Even if yours are a bit better, that doesn't change anything significantly. Applying RL needs to give a qualitative leap.
As for results: I haven't seen anything in this thread even close to mine.
The only results I've seen are from fxsaber, and those aren't ML in the full sense of the word.
I don't need to remind you about the back-of-the-napkin backtests.
I don't take it as criticism; I'm just saying it's a very complex approach, and statements like "I'll do it in a couple of weeks and everything will be fine" amuse me.
No one here wrote even about such a seemingly simple thing, nor about RL in general, the ALGLIB forests, etc., until I brought up the topic.
So what are we even talking about... you see only that "random target" and can't think of how to attach something more complicated to it. Looking at something ready-made and saying it's easy is always easy; improving it is not.
There's just chatter about how smart everyone is, while in fact only the obvious neural network settings get discussed, never the complex approaches.
Asaulenko fed 20 returns into the network and is happy... isn't that funny?