Machine learning in trading: theory, models, practice and algo-trading - page 1211
My trading robot always ends up in profit. I think it always recovers eventually, I just don't know when, which is why it trades one order at a time. So I need to know when the robot will probably perform worse than usual... because the risks are limited, my profit is purely a question of time... and sometimes you have to wait a week... And if there had been no such drawdowns during that week, you could have earned a lot more...
Martingale and drawdown are two inseparable friends,
and no matter how you juggle trend / flat, it will always be that way.
PS
Can you give me a link to the book, please?
Esteemed forum members, could you tell me (I'm too lazy to read 1200 pages): has anyone here tried to implement machine learning based on the results of trading on the closed orders of an Expert Advisor?
https://www.mql5.com/ru/code/22710
The preliminary results (I haven't built all the predictors yet) of creating a model that identifies profitable models (class 1) were not bad. Here is the breakdown: on the y-axis, profit on the independent sample; on the x-axis, class 1 (TP+FP) and class 0 (TN+FN).
The target was a profit of 2000; it hasn't been reached so far, but only 3 models out of 960 ended up in the loss zone, which is not a bad achievement.
Contingency table:
The average financial result without classification is 1318.83; after classification it is 2221.04 for class 1 and 1188.66 for class 0, so the average financial result of the selected models increased by 68%, which is not bad.
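For what it's worth, the bookkeeping behind those numbers is easy to reproduce. Below is a minimal sketch (Python with pandas, which the post doesn't actually name) of a contingency table of actual vs. predicted profitability and the mean financial result per predicted class; the column names and toy values are my assumptions, not the author's data.

```python
import pandas as pd

df = pd.DataFrame({
    "profit":    [2500, -300, 1800, 900, -50, 3100],  # toy per-model results
    "actual":    [1, 0, 1, 1, 0, 1],                  # 1 = truly profitable
    "predicted": [1, 0, 0, 1, 1, 1],                  # classifier output
})

# Contingency table: rows = actual class, columns = predicted class
print(pd.crosstab(df["actual"], df["predicted"]))

# Mean financial result, overall and per predicted class; the post's 68%
# figure is mean(profit | predicted == 1) / mean(profit) - 1
print("unclassified mean:", df["profit"].mean())
print(df.groupby("predicted")["profit"].mean())
```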
However, it is not yet clear whether this approach will carry over to models built on other data.
Training logloss: surprisingly, the test sample (the one the model is automatically selected on, not the training sample) and the independent (exam) sample, Logloss_e, converge almost perfectly.
So does Recall.
Precision surprised me: I usually use it by default to select the model, but here there was effectively nothing to tune, because it hit 1 on the very first tree.
The metrics differing so little between test and exam also surprises me a lot: the delta is very small.
From the graphs it is of course clear that the model is overfitted and training could have been stopped at 3500 trees, or even earlier, but I didn't tune the model; everything was run with essentially default settings.
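The post doesn't say which boosting library was used, so the following is only a sketch with scikit-learn's GradientBoostingClassifier on synthetic data. It reproduces the same bookkeeping: per-tree logloss on the test (selection) and exam (independent) samples, the tree count where training could have been stopped, and the test/exam precision and recall delta.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import log_loss, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the three samples described in the post
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_test, X_exam, y_test, y_exam = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

model = GradientBoostingClassifier(n_estimators=500, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)

def staged_logloss(m, Xs, ys):
    """Logloss after each added tree, via staged predictions."""
    return [log_loss(ys, p) for p in m.staged_predict_proba(Xs)]

ll_test = staged_logloss(model, X_test, y_test)
ll_exam = staged_logloss(model, X_exam, y_exam)
best_n = int(np.argmin(ll_test)) + 1   # test-sample optimum, the analogue
print("could stop at tree:", best_n)   # of "~3500 trees" in the post

for name, Xs, ys in [("test", X_test, y_test), ("exam", X_exam, y_exam)]:
    pred = model.predict(Xs)
    print(f"{name}: precision={precision_score(ys, pred):.3f} "
          f"recall={recall_score(ys, pred):.3f}")
```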
There's a mistake somewhere; test and train are never that even. Or it's a grail, and then share :D
It's not a grail. There are another 100k models and the results on them are not as good: it weeded out 70% of the losing models, but it weeded out profitable ones too.
I think it's the effect of a closed system, i.e. a kind of stationarity: the models are similar to one another, and I simply managed to describe their features well, hence the small discrepancy between the results.
I'm finishing the planned predictors, and here's a thought: maybe I should immediately remove the models I would never choose myself (large drawdowns, a strong imbalance between buys and sells, a very narrow probability distribution, etc.). That would reduce the information about obviously bad models, but put more emphasis on choosing among the hypothetically good ones (of course, a model that is good on the test sample may still perform badly on the exam). So I don't know whether to cut the sample or not; what do you think?
I will also probably give up bare profit as the target and instead select models by a set of criteria; alas, this will reduce the number of "1" labels, but maybe deeper relationships will emerge that let me evaluate a model from its test results. A sketch of such a pre-filter and composite target follows below.
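Here is a minimal sketch of what such a pre-filter and multi-criteria target might look like; the column names and thresholds are illustrative assumptions, not the author's actual criteria.

```python
import pandas as pd

models = pd.DataFrame({                          # toy per-model statistics
    "profit":        [2500, -300, 1800, 4000],
    "profit_factor": [1.6, 0.8, 1.2, 1.9],
    "max_drawdown":  [0.10, 0.40, 0.12, 0.08],   # fraction of balance
    "buys":          [120, 300, 20, 90],
    "sells":         [110, 40, 200, 95],
    "proba_std":     [0.12, 0.01, 0.09, 0.15],   # spread of model's outputs
})

def prefilter(df: pd.DataFrame) -> pd.DataFrame:
    """Drop models a human would reject outright before training."""
    balance = df["buys"] / (df["buys"] + df["sells"])
    keep = (
        (df["max_drawdown"] < 0.30)      # no large drawdowns
        & balance.between(0.3, 0.7)      # no strong buy/sell imbalance
        & (df["proba_std"] > 0.05)       # probability distribution not degenerate
    )
    return df[keep]

def composite_target(df: pd.DataFrame) -> pd.Series:
    """Label 1 only when several criteria hold, not bare profit alone."""
    return ((df["profit"] > 2000)
            & (df["profit_factor"] > 1.3)
            & (df["max_drawdown"] < 0.15)).astype(int)

filtered = prefilter(models)
filtered = filtered.assign(target=composite_target(filtered))
print(filtered)
```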
Please advise (I'm too lazy to read 1200 pages): has anyone tried to implement machine learning based on the results of trading on the closed orders of an Expert Advisor?
There's no need to read this thread, believe me, you'll only clutter your mind. Try starting right away with this: Random Forests Predict Trends. It is an excellent introductory piece on applying ML to algo trading. ML in general is a very broad subject; in essence it is an extension of classical statistics, mostly with heuristics and engineering tricks, so it is not so much a science as technogenic shamanism, which on the one hand is interesting, but on the other is fraught with speculation and abuse. Once a trader has gone deep into feature generation, he has most likely forgotten what he originally set out to do: ML is a bottomless pit you don't come back from. Besides, you need a good mathematical background, at least a bachelor's degree in a technical specialty, to really understand what you are doing rather than blindly fiddling with the parameters of libraries and packages.
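In the spirit of that article, here is a minimal sketch of the basic idea: a random forest classifying the sign of the next move from lagged returns. The synthetic price series, lag count, and split are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 2000)) + 100   # stand-in for close prices
returns = np.diff(prices)

LAGS = 5
# Each row holds LAGS past returns; the target is the sign of the next one
X = np.column_stack([returns[i:len(returns) - LAGS + i] for i in range(LAGS)])
y = (returns[LAGS:] > 0).astype(int)               # 1 = next move up

split = int(len(X) * 0.8)                          # honor time order, no shuffle
rf = RandomForestClassifier(n_estimators=300, min_samples_leaf=20, random_state=0)
rf.fit(X[:split], y[:split])
print("out-of-sample accuracy:", rf.score(X[split:], y[split:]))
```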
It's not a grail... it filters out the completely unprofitable models well (only 2% slip through), but it also cut too many profitable ones. ... So I don't know whether to cut the sample or not; what do you think?
Well, of course, if a model is obviously hopeless, you can remove it.