Machine learning in trading: theory, models, practice and algo-trading - page 489
I'm having a blast too. I tried it on random samples and the results are amazing. I haven't built the TS (trading system) out of it yet.
Maxim says it takes a long time to train; mine takes about 23 hours. But even if I only have to do it once every 3 months, that's nothing).
And 3 months is exactly as long as it holds up; after that, don't even bother testing.
I haven't gotten into such a jungle yet. My Expert Advisor isn't complicated: I optimized it for 12 hours and then forgot about it. Today I ran it with the same settings.
Yes, the forward is lousy. I get 6% losing trades on the forward (random sampling). The network is 5 layers of 50 neurons.
What's yours?
Today I decided to check my perceptron-based network. Optimized up to May/early June 2016, EURUSD, spread 15 pips.
The tail itself is the forward part.
The result still baffles me.
You need walk-forward; you can't optimize it the usual way. The forward will always be bad (or random) in that case, depending on which market phase you land in. I already have a pile of versions of such systems that look like billionaires on the backtest but work like a coin flip on the forward). This is what's called overfitting.
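In other words, re-optimize on a rolling window and judge the system only on the slice that follows each window. A minimal sketch of the splitting logic in Python; the window lengths are arbitrary assumptions, not recommendations:

```python
# Minimal walk-forward split sketch: optimize on `train_bars`,
# evaluate on the next `test_bars`, then slide forward.

def walk_forward_splits(n_bars, train_bars=5000, test_bars=1000):
    """Yield (train, test) index ranges covering the history."""
    start = 0
    while start + train_bars + test_bars <= n_bars:
        train = range(start, start + train_bars)
        test = range(start + train_bars, start + train_bars + test_bars)
        yield train, test
        start += test_bars  # slide forward by one test window

# Example: 20,000 bars of history
for train, test in walk_forward_splits(20000):
    print(f"optimize on {train.start}..{train.stop - 1}, "
          f"forward-test on {test.start}..{test.stop - 1}")
```

Every bar ends up forward-tested exactly once, so the equity curve you stitch together from the test slices is entirely out-of-sample.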
Three layers, each with 9 neurons. The picture showed a very long stretch, from 2004 to 2016. I chose a long history to see whether the result stays stable over the whole interval. On the one hand the drawdown is largest on the forward; on the other, the robot starts to earn in the second half of the forward.
Let's see in half a year.
Try continuously checking the NS error as new data arrives (on a test sample); if the error has grown by a given percentage, retrain the NS automatically, and so on over the whole backtest period. This requires fast training, but then you don't need a large training set either. In short, use the NS as an internal optimizer.
I'm trying to write an article based on this scheme; maybe I'll finish it soon.
With respect.
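A minimal sketch of that scheme in Python, assuming scikit-learn and precomputed feature/label arrays X and y; the error-growth threshold, window sizes, and network shape are placeholder assumptions, not the article's actual parameters:

```python
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Sketch of "NS as an internal optimizer": score each new data window
# out-of-sample and retrain automatically when the error degrades.

def backtest_with_retraining(X, y, train_size=2000, step=250, max_growth=0.2):
    model = MLPRegressor(hidden_layer_sizes=(50, 50), max_iter=500)
    model.fit(X[:train_size], y[:train_size])
    base_err = mean_squared_error(y[:train_size], model.predict(X[:train_size]))

    for start in range(train_size, len(X) - step, step):
        X_new, y_new = X[start:start + step], y[start:start + step]
        err = mean_squared_error(y_new, model.predict(X_new))
        if err > base_err * (1 + max_growth):
            # Error grew by more than the given %: retrain on recent data.
            model.fit(X[start - train_size + step:start + step],
                      y[start - train_size + step:start + step])
            base_err = mean_squared_error(y_new, model.predict(X_new))
    return model
```

Note that each window is scored before it is ever allowed into a training set, so the model is never judged on data it was fitted to.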
Please write a description of the mathematical apparatus for dummies. I started reading, but I couldn't make any sense of what Sugeno and Mamdani are).
Something like in the article about the naive Bayes classifier: https://www.mql5.com/ru/articles/3264
Will there be fast optimization in the article? I would like to have a look.
Sincerely.
Yes, via random forests; very fast.
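To illustrate the speed claim, a random forest can serve as a drop-in replacement for the neural net in the retraining sketch above; a small example with synthetic data (scikit-learn assumed, sizes and parameters purely illustrative):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Forests fit fast and need no feature scaling, which is what makes
# the retrain-on-error-growth loop above cheap to run in a backtest.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(10_000, 20))      # synthetic features
y_train = (X_train[:, 0] > 0).astype(int)    # synthetic labels

model = RandomForestClassifier(n_estimators=100, n_jobs=-1)
model.fit(X_train, y_train)                  # fits in well under a second
print(model.score(X_train, y_train))
```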
There's plenty of info on the internet :) There are 7 stages, they're quite lengthy to describe, and I gave links. Mamdani and Sugeno differ only in the logical inference (non-linear vs. linear).
I just don't see the point in copying the same thing here.
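For anyone who, like the poster above, is stuck on the terms: in a Mamdani system a rule's consequent is a fuzzy set that must be defuzzified, while in a (first-order) Sugeno system the consequent is a function of the inputs, typically linear, and the output is just a weighted average of rule outputs. A toy one-input, two-rule sketch in plain Python; all membership functions and coefficients are invented for illustration:

```python
import numpy as np

# Toy one-input, two-rule comparison of Mamdani vs Sugeno inference.

def mu_low(x):   # "x is LOW": triangular membership on [0, 10]
    return max(0.0, 1.0 - x / 10.0)

def mu_high(x):  # "x is HIGH"
    return max(0.0, x / 10.0)

def sugeno(x):
    # Sugeno consequents are linear functions of the input:
    #   rule 1: if x is LOW  then y = 0.2*x + 1
    #   rule 2: if x is HIGH then y = 0.8*x + 3
    w1, w2 = mu_low(x), mu_high(x)
    y1, y2 = 0.2 * x + 1.0, 0.8 * x + 3.0
    return (w1 * y1 + w2 * y2) / (w1 + w2)   # weighted average

def mamdani(x):
    # Mamdani consequents are fuzzy sets over the output universe;
    # clip each by its rule strength, aggregate, then defuzzify (centroid).
    y = np.linspace(0.0, 12.0, 1201)
    small = np.maximum(0.0, 1.0 - y / 6.0)    # "y is SMALL"
    large = np.maximum(0.0, (y - 4.0) / 8.0)  # "y is LARGE"
    agg = np.maximum(np.minimum(mu_low(x), small),
                     np.minimum(mu_high(x), large))
    return (y * agg).sum() / agg.sum()        # centroid defuzzification

for x in (2.0, 5.0, 8.0):
    print(f"x={x}: sugeno={sugeno(x):.2f}, mamdani={mamdani(x):.2f}")
```

The Sugeno output is cheap to compute and easy to fit (its consequent coefficients can be tuned by least squares, which is what ANFIS-style training exploits), while Mamdani needs a numerical defuzzification step.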