Machine learning in trading: theory, models, practice and algo-trading - page 3011
First you have to realise that the model is full of rubbish inside...
If you decompose a trained tree-based model into its constituent rules and the statistics on those rules, like this:
[image: example rules with their statistics]
and analyse how the rule error (err) depends on the frequency of the rule's occurrence in the sample, we obtain:
[image: rule error vs. frequency of occurrence]
We are then interested in the region where the rules work very well, but occur so rarely that it makes sense to doubt the authenticity of the statistics on them, because 10-30 observations is not statistics.
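The decomposition described above can be sketched for a single tree (a forest would just loop over its trees). This is a hedged illustration using sklearn on synthetic data; the 30-observation cutoff and the error threshold are taken from the discussion, everything else is a stand-in:

```python
# Sketch: treat each leaf of a fitted decision tree as one conjunctive rule,
# and record each rule's error rate vs. how often it fires on the sample.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)

leaf_of = tree.apply(X)                 # leaf id = one rule per sample
stats = []
for leaf in np.unique(leaf_of):
    mask = leaf_of == leaf
    freq = int(mask.sum())              # how many samples the rule covers
    pred = tree.predict(X[mask])
    err = float((pred != y[mask]).mean())   # rule error on covered samples
    stats.append((leaf, freq, err))

# The interesting region: rules with very low error but tiny coverage --
# e.g. fewer than ~30 observations, too few to trust the statistics.
rare_good = [s for s in stats if s[2] < 0.1 and s[1] < 30]
print(len(stats), len(rare_good))
```

Plotting `freq` against `err` for all rules would reproduce the kind of scatter the post refers to.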
Finally, what I've been saying for years has started to reach the masses! :)
Has anyone else tried adding volumes to training? Are the results the same? Or do they give you improvements?
I've noticed that models seem to like volumes (the kind shown under the chart) when they are run through indicators.
I haven't dug deep in this direction, just observations.
You have completely misunderstood my post: there is no such thing as "hope" here; either there is a numerical estimate of a feature's suitability or there is not. And there is a numerical estimate of the feature's suitability in the future.
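One way to read "a numerical estimate of a feature's suitability in the future" is to score each feature on an early window and re-score it on a later window, keeping only features whose usefulness persists. The scoring function below is my own illustration, not the poster's actual method:

```python
# Hedged sketch: score each feature on a "past" half of the sample and on a
# "future" half; a feature is suitable if the score holds up out of time.
import numpy as np

rng = np.random.default_rng(0)
n = 4_000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # only feature 0 matters

def feature_score(x, y):
    """|difference in mean target| between the top and bottom halves of x."""
    med = np.median(x)
    return abs(y[x > med].mean() - y[x <= med].mean())

half = n // 2
for j in range(X.shape[1]):
    past = feature_score(X[:half, j], y[:half])
    future = feature_score(X[half:, j], y[half:])
    print(j, round(past, 3), round(future, 3))
```

On this synthetic data only feature 0 scores highly in both windows; the noise features score near zero in each.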
Interesting. About the future specifically, will you reveal the secrets?
Train: 5k
Validation: 60k
Model training: 1-3 seconds
Rule extraction: 5-10 seconds
Checking each rule (20-30k rules) against the 60k validation set: 1-2 minutes
Of course it's all approximate and depends on the number of features and the data.
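A hedged sketch of what that validation step might look like: each extracted rule is represented as a list of threshold conditions and checked against a 60k-row validation sample. The rule representation and toy data are my own stand-ins, not the poster's pipeline:

```python
# Sketch: evaluate extracted rules on a validation sample, recording how often
# each rule fires and its hit rate there.
import numpy as np

rng = np.random.default_rng(0)
X_val = rng.normal(size=(60_000, 5))          # stand-in validation features
y_val = (X_val[:, 0] + 0.1 * rng.normal(size=60_000) > 0).astype(int)

def rule_mask(X, conditions):
    """Rows where every (feature, threshold, is_greater) condition holds."""
    mask = np.ones(len(X), dtype=bool)
    for feat, thr, gt in conditions:
        mask &= X[:, feat] > thr if gt else X[:, feat] <= thr
    return mask

# a couple of toy rules; real ones would come from the trained forest
rules = [
    [(0, 0.5, True)],                          # x0 > 0.5
    [(0, -0.5, False), (1, 0.0, True)],        # x0 <= -0.5 and x1 > 0
]
for conds in rules:
    m = rule_mask(X_val, conds)
    hit_rate = y_val[m].mean() if m.any() else float("nan")
    print(int(m.sum()), round(hit_rate, 3))
```

Since each rule reduces to boolean masks over NumPy arrays, 20-30k rules on 60k rows is cheap per rule, consistent with the 1-2 minute total quoted above.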
What kind of model is this?
Does the rule estimation algorithm run on a single core?
A forest.
On one core.
It's finally happened, what I've been saying for years is starting to reach the masses! :)
I don't think anyone still understands what you're saying.)
He's got it all clear and simple, like Occam's razor.
That test was with real volumes from CME for EURUSD: cumulative volume, delta, divergence and convergence over 100 bars. 400 columns in total, plus 5 more of some kind.
Without changing any model settings, I simply deleted the 405 columns with CME data (price deltas and zigzags remained, 115 columns in total) and got slightly better results. I.e. it turns out the volumes are sometimes selected in splits, but they turn out to be noise on OOS. And training is 3.5 times slower with them.
For comparison, I left the charts with volumes above and without volumes below.
I hoped that the volumes from CME would bring additional information/regularities that would improve learning. But as you can see, the models without volumes are a bit better, even though the charts are very similar.
This was my 2nd approach to CME (I tried it 3 years ago) and again unsuccessful.
It turns out that everything is taken into account in the price.
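The ablation described in this post can be sketched as follows. Everything here is a synthetic stand-in (random "price" and "volume" columns, not the CME dataset), and the model settings are illustrative:

```python
# Sketch: train the same model with and without the "volume" columns and
# compare out-of-sample accuracy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000
price_feats = rng.normal(size=(n, 10))         # e.g. price deltas, zigzags
volume_feats = rng.normal(size=(n, 10))        # e.g. CME volume, delta
y = (price_feats[:, 0] > 0).astype(int)        # signal lives in price only

X_full = np.hstack([price_feats, volume_feats])
Xf_tr, Xf_te, Xp_tr, Xp_te, y_tr, y_te = train_test_split(
    X_full, price_feats, y, test_size=0.5, random_state=0)

scores = {}
for name, Xtr, Xte in [("with volumes", Xf_tr, Xf_te),
                       ("without volumes", Xp_tr, Xp_te)]:
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, y_tr)
    scores[name] = clf.score(Xte, y_te)
    print(name, round(scores[name], 3))
```

When the extra columns carry no signal, as constructed here, the two scores come out close, which mirrors the "everything is taken into account in the price" conclusion.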
Has anyone else tried adding volumes to training? Are the results the same? Or do they give you improvements?
I did 3 more tests without volumes and compared them to the ones I did with volumes, this time also changing the model's hyperparameters.
4 tests in total: in 3 of them OOS is better without volumes, and in 1 it is worse. I.e. sometimes volumes add a little bit, but overall everything is within the margin of error. You can achieve more by brute-forcing hyperparameters than by adding volumes. They give neither significant improvements nor significant deteriorations.
I expected more from volumes.
Interesting. About the future specifically, will you reveal the secrets?
I've already written about it a few times.
A forest.
On one core.
What sampling percentage are you using?
It seems to me that a forest is of little use if it considers only half the predictors at each split.
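The per-split sampling being discussed corresponds, in sklearn terms, to the `max_features` parameter of a random forest: how many predictors are considered at each split. A small illustrative sketch (the values are arbitrary, not the poster's settings):

```python
# Sketch: compare forests that consider half, sqrt(n_features), or all
# predictors at each split.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=4,
                           random_state=0)
for mf in (0.5, "sqrt", 1.0):   # half, sqrt(n_features), all predictors
    clf = RandomForestClassifier(n_estimators=50, max_features=mf,
                                 random_state=0).fit(X, y)
    print(mf, round(clf.score(X, y), 3))
```

Smaller `max_features` decorrelates the trees at the cost of weaker individual splits; which setting wins depends on the data.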
I don't think anyone understands what you're saying yet. )
With him everything is clear and simple without words, on the principle of Occam's razor.
No, it's just that when I came to this thread and started talking about extracting rules from the trees and evaluating them, the idea was laughed at.
Now I have made the next step: creating the conditions for potentially high-quality rules to emerge, by evaluating quantum segments of a predictor, and again I face total misunderstanding.
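A hedged sketch of one way to read "evaluating quantum segments of a predictor": split the predictor into quantile bins and check, per bin, how far the target rate deviates from the overall base rate; well-populated bins with a strong deviation are candidates for building rules. The binning scheme and the deviation measure are my own illustration, not the author's exact method:

```python
# Sketch: quantile "segments" of one predictor, scored by how much the target
# rate inside each segment deviates from the base rate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                         # one predictor
y = (np.abs(x) > 1.5).astype(int)              # nonlinear relation to target

base_rate = y.mean()
edges = np.quantile(x, np.linspace(0, 1, 11))  # 10 quantile segments
seg = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, 9)

for s in range(10):
    m = seg == s
    print(s, int(m.sum()), round(y[m].mean() - base_rate, 3))
```

On this synthetic data the extreme segments show a large positive deviation while the middle ones sit below the base rate, which is the kind of signal such a screening step would surface.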