Machine learning in trading: theory, models, practice and algo-trading - page 182
the fourth and final part of the article about RNeat is out...
I daresay at least one person here would be interested in reading it
http://gekkoquant.com/
Yes, it is interesting.
I am testing it on M30 quotes. Getting a good result requires tuning the genetic-algorithm parameters.
But it is a very promising model.
Good luck
mytarmailS:
Maybe you should just keep it simple. If you find the crowd, you'll find the big guys...
I think everything "easy" has already been picked clean before us. And you use the word "crowd" too loosely: as I understand it, it means only small and medium speculators. Besides speculators there are also investors, hedgers, and all sorts of money transfers between countries; many positions are therefore opposite to each other and cannot form a crowd. It's not clear how, or why, one would look for the actions of small speculators alone.
You have to give up looking for any kind of graphic patterns or indicators.
Definitely, everyone goes through it, but not everyone outgrows it. It's the same in life: most people never outgrow the teenage way of thinking.
An interesting solution was mentioned here: https://www.mql5.com/ru/forum/96886/page2#comment_2866637
However, you can look for patterns in how big players move their bids, how those bids are executed, how the price behaves after large markups or icebergs, etc. In stocks this may be difficult because there are too many ECNs plus dark pools.
Interesting topic, thanks.
"...only small and medium speculators, and besides speculators there are also investors and hedgers and all kinds of money transfers between countries, many positions..."
It's all a crowd... imho, you and me included, as unpleasant as that may be...
If your "crowd" includes the big players and the smart money, then playing against such a crowd is futile.
Again, on the subject of class division, here is a typical example with the network: the first signal is correct, then come two wrong signals, and then the current one. The fourth signal differs from the second and third; since the second and third were false, they need to be inverted. The last signal, since it differs from the previous two, must also be inverted, and then we proceed according to plan. Yes, at the moment of the first buy signal we take a loss, but the second buy signal belongs to the same class as the previous losing one, so it gets reversed; and the last sell signal differs from the previous two, which means it belongs to a different class, and if those signals were losers, this one should be a winner. The main thing is for the division to be stable: even if the network makes a mistake and starts mirroring signals, what matters is that it does so consistently.
So it's like this....
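The inversion idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual system: `adjust_signal` and `run` are made-up names, and the rule is simply "flip the trade direction for any class whose raw signals have lost money so far", which only helps if the classifier's split is stable, exactly as the post says.

```python
# Sketch of the signal-inversion idea: the classifier's split need not be
# "right", only stable. Track the cumulative P&L of each predicted class's
# raw signals and flip the direction for classes that have been losing.

def adjust_signal(raw_signal, cls, class_pnl):
    """Flip raw_signal (+1 buy / -1 sell) if class `cls` has lost so far."""
    if class_pnl.get(cls, 0.0) < 0:
        return -raw_signal
    return raw_signal

def run(signals):
    """signals: list of (raw_signal, cls, pnl_of_raw_signal) tuples."""
    class_pnl = {}   # cumulative P&L of raw signals, keyed by class label
    out = []
    for raw, cls, pnl in signals:
        out.append(adjust_signal(raw, cls, class_pnl))
        class_pnl[cls] = class_pnl.get(cls, 0.0) + pnl   # update after acting
    return out
```

On the example from the post: a losing buy in class A, then another class-A buy (flipped, because A has been losing), then a first class-B sell (taken as-is).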
I tried to make a working model with rneat a couple of months ago, but it didn't work out: that model overfits too. The first generations may do a little well on OOS, but the longer the training runs, the weaker the correlation between the in-sample and OOS results. And the right moment to stop training is quite difficult to catch; even cross-validation won't help there.
Concerning the example in the article - my result was completely different than the author's. The model in OOS was trading on the plus side for about a year, and then lost 20% of balance and stopped trading. The result is positive, but not "5 years in profit" like author's. The possible use of predictors that the author uses (the U.S. government indices) constantly overdraws and should not be trusted. So this article is doubtful.
I will try it tomorrow too....
I also need to try it with my own data, but with a lot of predictors it must be a long process...
Did training take long for you?
Sarcasm with a hint. The drawing is by hand. There's no problem getting a machine to split the space like that, or in an even fancier way. The main thing is for it to keep working in the future...
The first two graphs are really simple; any model would be able to divide the space that way. But, in my opinion, it's impossible to find predictors that group into two target classes so easily.
The third chart is more realistic for forex, but here the models will be completely stumped.
I wanted to find an example with two forex indicators, train a model, and draw a map of the space partitioning, but I couldn't: two indicators are too few.
It's easier to show an example like this: http://playground.tensorflow.org , where you can see such charts for neural networks. The "islands of classes" like those on your third graph will not have clean circular borders in the model; there will be bridges between them, branches off in different directions, etc.
It's easy to draw class boundaries by hand, but models do a much worse job. That's why I like your picture: it's hard to find predictors, a target, and a model for which everything works out so nicely.
I should try SVM; if it divides areas of the same class in space that nicely, great. Thanks for the tips.
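A quick toy check of the SVM idea. The data here is synthetic (two Gaussian "islands" of one class inside uniform background noise), not real forex indicators, and the scikit-learn parameters are just plausible defaults: with only two features, an RBF-kernel SVM can carve out island-like class regions, though the boundaries it actually learns are rarely the clean circles one draws by hand.

```python
# Toy 2-feature classification: can an RBF SVM carve out "islands" of one
# class inside a background of the other? Synthetic data, fixed seed.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
island1 = rng.normal([2.0, 2.0], 0.3, size=(50, 2))     # class-1 island
island2 = rng.normal([-2.0, -1.0], 0.3, size=(50, 2))   # class-1 island
background = rng.uniform(-4.0, 4.0, size=(100, 2))      # class-0 noise
X = np.vstack([island1, island2, background])
y = np.array([1] * 100 + [0] * 100)

# RBF kernel with moderate gamma/C: enough flexibility for closed regions
clf = SVC(kernel='rbf', gamma=1.0, C=10.0).fit(X, y)
```

`clf.predict([[2, 2]])` should land in an island (class 1) and a far corner like `[-3.5, 3.5]` in the background (class 0); plotting `clf.predict` over a grid of the two features gives exactly the kind of partition map discussed above.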