Machine learning in trading: theory, models, practice and algo-trading - page 1764
1) Yes, it's an ordinary ZigZag; from the point of view of the ML model it makes no difference which ZigZag variant you use.
2) Yes, exactly. The ML model effectively builds a kind of DF (digital filter), and like all filters it lags, so the prediction is always better in the middle of the window than at the edges.
1. There may be predictors tied to the ZigZag. How do you choose the ZigZag period? Does the result change when the period changes?
2. It is understandable that a trend is more likely to continue than to end... Do the predictors have memory of past values?
3. How can this be traded if the pivot point cannot be determined?
4. Maybe look only for points that can generate income? For example, my target is "the start of the current ZigZag segment will be overlapped by the start of the next ZigZag segment", i.e. the entry and classification decision is made on the next bar, after the ZigZag vector changes direction. If the segment is long, price usually overlaps the entry point, and a trailing stop is used.
5. An investor password for the account you get your quotes from would be needed, so the same quotes can be obtained. Did you manage to extract anything? And a question: have you decompressed back and compared the series at the places where compression showed a big change?
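A minimal sketch of the ZigZag referred to in the questions above, assuming a simple absolute-threshold reversal rule (real ZigZag indicators usually expose depth/deviation parameters in points or percent; the function name and threshold here are illustrative):

```python
def zigzag(prices, threshold):
    """Return indices of ZigZag swing points: a new pivot is fixed
    when price reverses by at least `threshold` from the last extreme."""
    pivots = [0]          # first point is always a pivot
    direction = 0         # 0 = undecided, +1 = up leg, -1 = down leg
    ext_i = 0             # index of the current running extreme
    for i in range(1, len(prices)):
        p = prices[i]
        if direction == 0:
            if p >= prices[0] + threshold:
                direction, ext_i = 1, i
            elif p <= prices[0] - threshold:
                direction, ext_i = -1, i
        elif direction == 1:
            if p > prices[ext_i]:
                ext_i = i                     # new high on the up leg
            elif p <= prices[ext_i] - threshold:
                pivots.append(ext_i)          # up leg confirmed, reverse
                direction, ext_i = -1, i
        else:
            if p < prices[ext_i]:
                ext_i = i                     # new low on the down leg
            elif p >= prices[ext_i] + threshold:
                pivots.append(ext_i)          # down leg confirmed, reverse
                direction, ext_i = 1, i
    pivots.append(ext_i)                      # last (unconfirmed) extreme
    return pivots

print(zigzag([0, 1, 5, 4, 3, 6, 7, 1], 2))   # → [0, 2, 4, 6, 7]
```

Note that the last pivot is only provisional until price reverses by the threshold, which is exactly why the pivot point "cannot be determined" in real time.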
It turns out that compression depends on the distribution. A normal series is compressed by a factor of 5, and the range bars by a factor of 15. If price and a random series have the same distribution, the compression level is almost the same.
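The distribution dependence is easy to reproduce. A sketch, assuming Python's stdlib `lzma` as a stand-in for 7z (7z's default codec is LZMA), comparing a fair-coin +/-1 sequence against a heavily biased one (the bias probabilities are illustrative, not taken from the thread):

```python
import lzma
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ratios = {}

# "fair": maximum-entropy +/-1 increments
# "biased": mostly-one-direction increments, as bars on a strong trend might give
for name, p in [("fair", [0.5, 0.5]), ("biased", [0.05, 0.95])]:
    seq = rng.choice([-1, 1], size=n, p=p).astype(np.int16)
    raw = seq.tobytes()                          # 2 bytes per sample, like `short`
    ratios[name] = len(raw) / len(lzma.compress(raw))

print(ratios)   # the biased series compresses far more than the fair one
```

The ratio tracks the entropy of the distribution: lower-entropy increments compress more, which matches the observation that the compression level depends on the distribution rather than on whether the series is price or random.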
I don't understand; range bars will of course compress more, since they are normalized to the open/close price. In general it's like FLAC or simple image compression: the working range is strictly limited, the colors are known, sound runs from 20 Hz to 20 kHz, and what gets encoded is the repetition of changes. Whatever you feed to the archiver's input is what it encodes and compresses. Have you tried ticks?
It would be good to decompress and compare where the changes were; if the sections are at least similar, then the work makes sense.
If you don't get hung up on Forex... Russian exchanges (MOEX, FORTS), for example, provide more information on quotes: the ratio of open-interest volume and a table of all deals. A year ago I got interested in "all trades". I made an indicator showing all deals as a cumulative balance. It let me observe interesting things:
You can often see significant discrepancies between price increments and balance increments on all trades (which logically shouldn't happen). You can observe significant volume being loaded at a flattening price, and then comes the payoff for greed! Since closing Buy positions is done with Sell deals, this does not actually hinder the visualization; what matters here is the preponderance of open interest. My point is that such additional information at the input of the NN wouldn't hurt. :)
Valeriy Yastremskiy:
It would be good to decompress and compare where the changes were; if the sections are at least similar, then the work makes sense.
No changes: 7z compression is lossless.
Strange. What is the data volume, kilobytes or MB? The data is definitely compressible, but it's still strange. Text files compress nearly 100 percent of the time, of course, but sound and images usually lose. There is a misunderstanding somewhere. By the way, is the input one of the bar prices, or all 4 plus the time? Look in the archiver's documentation for how to find the start and end of a compressed section.
For Renko bars: I take ticks, break them into Renko bricks with a _Point step, differentiate, get a +/-1 sequence, save it to binary as short, and compress it with 7z. One bit would suffice for storage, but I use 2 bytes (short), hence the compression.
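The pipeline described above can be sketched end to end. A minimal version, assuming Python's stdlib `lzma` in place of 7z (same LZMA codec) and a synthetic random walk in place of real ticks; `renko_dirs` and the step value are illustrative names, not the author's code:

```python
import lzma
import numpy as np

def renko_dirs(prices, step):
    """Turn a price series into the +/-1 brick-direction sequence:
    emit a brick every time price moves `step` from the last brick edge."""
    dirs = []
    anchor = prices[0]
    for p in prices[1:]:
        while p - anchor >= step:
            dirs.append(1)
            anchor += step
        while anchor - p >= step:
            dirs.append(-1)
            anchor -= step
    return np.array(dirs, dtype=np.int16)    # stored as `short`, 2 bytes per brick

# synthetic random-walk "ticks" stand in for real quotes
rng = np.random.default_rng(0)
ticks = np.cumsum(rng.normal(0.0, 1.0, 100_000))

raw = renko_dirs(ticks, step=1.0).tobytes()
packed = lzma.compress(raw)                  # 7z's default codec is LZMA
restored = lzma.decompress(packed)           # lossless round trip
print(len(raw), len(packed), restored == raw)
```

Since each 2-byte short carries at most 1 bit of information, a compression factor above 16 would indicate structure beyond the storage redundancy; anything at or below it is explained by the encoding alone.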
And how do you extract the regularities of the series from this? The wrong patterns are being compressed: they are too small, and on top of that the data is reduced to a dichotomy. That's why everything comes back lossless. Without differentiation, do you also get a full match after archiving and back? Those would be crutches. We need to work out the start and end markers inside the archive and pull out the compressible sections; simply comparing the original with the unzipped file isn't quite correct.
Not a bad article on feature extraction
https://towardsdatascience.com/optimize-data-science-models-with-feature-engineering-cluster-analysis-metrics-development-and-4be15489667a