Machine learning in trading: theory, models, practice and algo-trading - page 3010
That test used real CME volume data for EURUSD: cumulative volume, delta, divergence and convergence over 100 bars. In total 400 columns, plus 5 more of some kind.
Without changing any model settings, I simply deleted the 405 columns with CME data (the price deltas and zigzags remained), leaving 115 columns, and got slightly better results. So it turns out that the volumes are sometimes selected in splits, but on OOS they prove to be noise. And with them, training slows down 3.5 times.
For comparison, the charts with volumes are above and the ones without volumes below.
I had hoped that the CME volumes would bring additional information/regularities that would improve learning. But as you can see, the models without volumes are a bit better, even though the charts are very similar.
This was my second attempt at CME data (I tried it 3 years ago) and again unsuccessful.
It turns out that everything is already accounted for in the price.
Has anyone else tried adding volumes to training? Are your results the same, or do volumes give you an improvement?
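The ablation described above can be sketched roughly like this. This is a minimal sketch on synthetic data: the column names, the OOS split, and the gradient-boosting model are placeholder assumptions, not the actual feature set or model settings from the post.

```python
# Sketch of the feature ablation described above: train the same model twice,
# once with the "volume" columns and once without, and compare OOS accuracy.
# All names and data here are hypothetical stand-ins.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# toy frame: price-based features plus "volume" features that are pure noise here
df = pd.DataFrame({
    "price_delta": rng.normal(size=n),
    "zigzag": rng.normal(size=n),
    "cum_volume": rng.normal(size=n),    # stands in for the CME volume columns
    "volume_delta": rng.normal(size=n),
})
y = (df["price_delta"] + 0.1 * rng.normal(size=n) > 0).astype(int)

train, test = np.arange(n // 2), np.arange(n // 2, n)  # simple OOS split
volume_cols = ["cum_volume", "volume_delta"]

def oos_score(cols):
    # fit on the first half, score out-of-sample on the second half
    m = GradientBoostingClassifier(random_state=0)
    m.fit(df.loc[train, cols], y[train])
    return accuracy_score(y[test], m.predict(df.loc[test, cols]))

print("with volumes   :", oos_score(list(df.columns)))
print("without volumes:", oos_score([c for c in df.columns if c not in volume_cols]))
```

If the noise columns truly carry no signal, the two OOS scores should be close, with the smaller feature set also training noticeably faster.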
Have you tried our market? It seems to be less efficient.
Or grain futures; there may be some seasonal cycles there.
I had hoped that the CME volumes would carry additional information/regularities that would improve learning. But as you can see, the models without volumes are slightly better...
What kind of model do you get?
You completely misunderstood my post: there is no such thing as "hope". Either there is a numerical estimate of a feature's fitness or there is not. And there is a numerical estimate of the feature's fitness in the future.
The "teacher" is the set of features and labels, not what you wrote :) or rather, it's the person, or the algorithm, that generates that data 😀
I see you have an irrepressible desire to spit in my direction, but shouldn't you save your saliva? Or are you just marking territory?
First you have to realise that the model is full of rubbish inside...
If you decompose a trained tree model into the rules inside it and the statistics on those rules, and then analyse the dependence of a rule's error (err) on the frequency of its occurrence in the sample, the interesting region is the one where the rules work very well but occur so rarely that it makes sense to doubt the statistics on them, because 10-30 observations is not statistics.
To me, that is the road to overfitting. Refining the rules inside the model means refining what the model has "seen".
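The decomposition described above can be sketched as follows. This is a minimal sketch on synthetic data: a single sklearn decision tree stands in for the tree model, each leaf is treated as a "rule", and the error and frequency thresholds are assumptions for illustration, not values from the post.

```python
# Sketch: decompose a fitted tree into per-leaf "rules" and tabulate each
# rule's error rate against how often it fires in the sample.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 4))
y = (X[:, 0] + 0.5 * rng.normal(size=5000) > 0).astype(int)

tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X, y)
leaf = tree.apply(X)   # leaf id = which "rule" fired for each sample
pred = tree.predict(X)

stats = []
for lid in np.unique(leaf):
    mask = leaf == lid
    freq = int(mask.sum())                # how often the rule occurs
    err = (pred[mask] != y[mask]).mean()  # rule error on that subset
    stats.append((lid, freq, err))

# rules that look great but fire rarely are the suspicious region:
# with only 10-30 observations the error estimate is not trustworthy
suspicious = [(lid, f, e) for lid, f, e in stats if e < 0.05 and f < 30]
print(f"{len(stats)} rules total, {len(suspicious)} low-error but rare")
```

Plotting err against freq for all rules would reproduce the kind of scatter discussed above, with the low-error, low-frequency corner being the one to distrust.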
I see you have an irrepressible desire to spit in my direction, but shouldn't you save your saliva? Or are you just marking territory?
It's a normal reaction to nonsense; these are the basics of ML.
And stated with such aplomb, too: blah-blah, blah-blah.
Well, it's exactly the same in fiction: they come up with a lot and don't have time to write it all down :) and then the editorial board, in the person of Aleksey Nikolayev, cuts everything down.
A good editor is a good thing.
The main thing is that he doesn't forbid publishing what already works, even if he doesn't understand how it works.