Machine learning in trading: theory, models, practice and algo-trading - page 1760
I guess I'm not ready to answer specifically. I do not see a fundamental difference between low and high timeframes.
The difference is that trends on the lower timeframes make up the trends on the higher ones. The staircase is high-frequency, and the up-down swings are low-frequency. The parameters of the stairs are the same, and so are the algorithms for those parameters.
The goal is to reduce the lag. If, working from 15-minute bars, I can get the lag under one hour, I'll already be in profit...
Just another song about lag: https://www.mql5.com/ru/forum/224374/page19#comment_13522664
Thanks for the dialog. I went to bed. )
A thought occurred: an archiver can be used as a test for randomness. Random data will not be compressed by an archiver.
We need to test the quotes: different sizes, several samples. Try decomposing the series into components and see whether compression improves.
FLAC compresses worse, although it seemed better suited for this. FLAC was also interesting because it uses approximation + noise.
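A minimal sketch of the "archiver as a randomness test" idea, assuming zlib's DEFLATE stands in for a generic archiver (the posts don't name one). The data here is synthetic for illustration, not real quotes:

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    # Compressed size / original size; near 1.0 means "looks random",
    # well below 1.0 means the archiver found exploitable structure.
    return len(zlib.compress(data, 9)) / len(data)

random_bytes = os.urandom(100_000)             # genuinely random input
patterned    = (b"updown" * 20_000)[:100_000]  # highly repetitive "staircase"

print(compression_ratio(random_bytes))  # close to 1.0: nothing to exploit
print(compression_ratio(patterned))     # far below 1.0: patterns compress well
```

For a real test you would serialize the quote series (or its components after decomposition) to bytes and compare the ratios across different sample sizes, as the post suggests.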
Try compressing a record of the digits of pi: it is definitely not random, but will the archiver "guess" that? )
Pi should be at the level of random, the link is
And what is the point of transforming the series in different ways? It's clear why random data is the appropriate baseline. But it's not clear which way to work: invent a new kind of compression that doesn't reduce the quality of the original, or apply the methods used inside compression to the series, and then what do we look at? Today fxsaber proposed a problem in probabilistic logic for trading))))
This is a kind of randomness test. If there are no dependencies, the series will not compress. If there are, it will compress; and the more patterns there are, the more it compresses.
I looked it up; there are valid points in that thread which I agree with, but the idea is not clearly spelled out. Different TFs are different degrees of thinning, so we are looking at thinned series at different frequencies. And after applying different averages, we isolate waves of different scales. Given that waves do form in the price series, we see them. But the price series is formed by waves of different frequencies: the highest-frequency ones have an amplitude smaller than the spread, and the low-frequency ones cannot be predicted with sufficient accuracy. Still, if we correctly decompose the series, separate out the high-frequency components, and take into account the parameters of the low-frequency ones, there may be something in it; at least I haven't seen any papers on the subject.
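A small sketch of the "different TFs are different degrees of thinning" point, with illustrative synthetic data (the `m15`/`h1` names and the wave periods are assumptions, not from the thread):

```python
import math

# 960 M15 bars: a slow "daily" wave (period 96 bars) plus a fast wave (period 4 bars)
m15 = [math.sin(2 * math.pi * i / 96) + 0.1 * math.sin(2 * math.pi * i / 4)
       for i in range(960)]

# Thinning: an H1 chart keeps every 4th M15 close
h1 = m15[::4]

def sma(series, n):
    # Simple moving average; lags the input by roughly (n - 1) / 2 bars
    return [sum(series[i - n + 1:i + 1]) / n for i in range(n - 1, len(series))]

smooth_m15 = sma(m15, 16)  # 16 M15 bars = a 4-hour window
smooth_h1  = sma(h1, 4)    # the same 4-hour window seen as 4 H1 bars
```

The 16-bar average spans exactly four cycles of the fast wave, so it cancels the high-frequency component and leaves the slow one, which is the decomposition-by-averaging the post describes.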
quote:
...Although the approach is classical :-) We have some data; based on a shaky theory we may use a polynomial description, interpolate, check it, and use the polynomial's roots for extrapolation...
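As a minimal sketch of the polynomial idea in the quote: fit the unique polynomial through a few recent points and evaluate it one step ahead (plain Lagrange evaluation here; the root-based extrapolation step the quote mentions is not specified, so this substitutes direct evaluation). The data points are illustrative:

```python
def lagrange_eval(xs, ys, x):
    # Value at x of the unique polynomial passing through the points (xs, ys)
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Points sampled from y = x^2; extrapolating one step ahead recovers 3^2 exactly.
print(lagrange_eval([0, 1, 2], [0, 1, 4], 3))  # 9.0
```

This works perfectly on noiseless polynomial data; on a noisy price series the fitted polynomial explodes outside the sample, which is exactly why the quote calls the underlying theory "shaky".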
Maybe then go in the other direction and use it as a tester for identifying patterns; but then inside we'd need the logic for separating the patterns, and there is already something for that. I like the idea with pi))))