Neutron, have you come across this book?
http://www.logobook.ru/prod_show.php?object_uid=11150699
No. Unfortunately, I haven't got my hands on it. I guess you could ask Mathemata.
Obviously, I'll have to get into the business of collecting ticks after all.
And rightly so!
Apparently I still haven't got to the bottom of why the question was "not rhetorical". Drop another hint sometime.
Look at the distribution function of the first differences of the series of transactions (the entry/exit points, red squares in Fig. 2); it is these that need to be forecast, not the series of quote extrema:
You can see that it is close to a "shelf", so there is no "alignment" hassle; all that remains is to bring it into the +/-1 range. That is what I wanted to draw your attention to.
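Purely to illustrate what is meant here (the function name and the choice of 2H as the divisor are my assumptions), preparing the network input could look like this:

```python
import numpy as np

def nn_inputs(transactions, H):
    diffs = np.diff(transactions)  # first differences of the series of transactions
    return diffs / (2.0 * H)       # a typical leg is of order 2H, so this lands roughly in [-1, 1]
```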
1. How can H be < 2 if the minimum possible H = 1 (i.e. one point)? And if H is taken in quote-price units, how can it be > 2?
2. So far I have normalized the data fed to the network input by twice the volatility of the series under study; how should it be done now? I assume the same way (all this confusion was inspired by the 2H dispute) -:)
However, I still prefer to ask.
Look at the Kagi constructions (in black):
1. Hvol is the average length of the zig-zag shoulders. By construction it cannot be less than H (see the vertex-formation condition in Fig. 2) and can be greater than any predetermined value. For a Wiener process (a random time series) its mean tends to 2H, or to 2 if we pass to values normalized by the breakdown step. For a market series this value differs from 2, and that difference is a measure of the departure from arbitrage-freeness on the chosen trading horizon H. To get a better feel for this, take Mathcad, sum the moduli of the lengths of the sides of the Kagi construction and divide the result by the breakdown step H; you will get a value a little less than 2 (a sketch of this calculation is given just below this list). By the way, if you subtract two from this value and multiply the modulus of the result by H, you get an estimate of the average return (points per transaction) of this strategy for the chosen instrument and the given trading horizon H.
2. In this case, normalization by H or 2H (see Fig. 1) is sufficient to prepare the input data (the series of transactions) for the NN.
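For anyone who wants to repeat the exercise outside Mathcad, here is a rough sketch of the calculation from item 1. It is my own reading of the construction, not code from the dissertation: the start-up of the first leg is simplified, and the averaging over the number of sides (which comes up a few posts below) is already built in through mean().

```python
import numpy as np

def kagi_vertices(prices, H):
    """Kagi vertices: local extrema confirmed by a reversal of at least H."""
    extremum = prices[0]
    direction = 0                       # 0 = not yet decided, +1 = up leg, -1 = down leg
    vertices = [extremum]
    for p in prices[1:]:
        if direction == 0:
            if abs(p - extremum) >= H:  # the first move of size H fixes the initial direction
                direction = 1 if p > extremum else -1
                extremum = p
        elif direction == 1:
            if p > extremum:
                extremum = p            # extend the up leg
            elif extremum - p >= H:     # reversal: the running maximum becomes a vertex
                vertices.append(extremum)
                direction, extremum = -1, p
        else:
            if p < extremum:
                extremum = p            # extend the down leg
            elif p - extremum >= H:     # reversal: the running minimum becomes a vertex
                vertices.append(extremum)
                direction, extremum = 1, p
    vertices.append(extremum)           # close the last, unconfirmed leg
    return np.array(vertices)

def h_volatility(prices, H):
    legs = np.abs(np.diff(kagi_vertices(prices, H)))  # moduli of the Kagi sides
    hvol = legs.mean() / H                            # tends to 2 for a Wiener process
    return hvol, abs(hvol - 2.0) * H                  # (normalized H-volatility, rough points-per-trade estimate)

# sanity check on a synthetic random walk: the first number should come out close to 2
prices = np.cumsum(np.random.normal(0.0, 1.0, 200_000))
print(h_volatility(prices, H=10.0))
```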
... and a consilium of mathematicians with knives in the back alley.
Yeah, as long as it's not coitus :-)
Here is how the series of transactions built from EURJPY ticks looks for H = 10 pips, without reference to the time axis:
Agree, colleagues, the eye has something to cling to... Pastukhov himself states that the best TS for this kind of price-series breakdown is the sign-variable one (always flipping the position). On a large sample it gives a statistically significant advantage over any other TS. Pastukhov's supervisor once mentioned that there is no more profitable TS on the right-hand side of the quote series (i.e. not on history)... Although I agree with grasn's comment that the strategy's real profitability is small (compared with the dealing centre's commission). Probably for this reason the author of the dissertation, in the last pages of his work, speculates about the possible usefulness of identifying significant patterns in the series of transactions.
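To get a feel for the "always flipping" idea without reproducing Pastukhov's formal notation, one can run a back-of-envelope check on the transaction series itself. This is my own sketch (the function name and sign convention are assumptions): it measures the average points per trade when each position is opened in the direction of the last H-breakout, and when it is opened against it; which of the two comes out positive depends on whether the normalized H-volatility sits above or below 2.

```python
import numpy as np

def flip_strategy_pips(transactions):
    """transactions: prices at the entry/exit points (the 'red squares')."""
    moves = np.diff(transactions)
    with_breakout = np.sign(moves[:-1]) * moves[1:]  # hold in the direction of the last breakout
    against_breakout = -with_breakout                # the mirror, always-reversing variant
    return with_breakout.mean(), against_breakout.mean()
```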
In general, if someone took the trouble to strictly prove the assertion that such a partitioning of the quote series is the most profitable of all possible TA constructions, then we could once and for all forget about all sorts of Fibo levels, support/resistance lines, alligators, head-and-shoulders, etc., and concentrate our efforts on the main direction of the assault.
To get a better feel for this, take Mathcad, sum the moduli of the lengths of the sides of the Kagi construction and divide the result by the breakdown step H. You will get a value a little less than 2. By the way, if you subtract two from this value and multiply the modulus of the result by H, you get an estimate of the average return (points per trade) of this strategy for the chosen instrument and the given trading horizon H.
The moduli of the lengths of the Kagi sides are the first differences of the Kagi series. I don't have those yet, so I summed the moduli of the first differences of the series of transactions instead (the moduli of the Kagi series should on average be 2H larger). The result came out to a bit over 500.
Here First is the series of first differences of the transactions, and 21 is the threshold H (in this case equal to 7 spreads, i.e. 21 points).
I forgot to tell you: take the average. Divide the whole thing by the number of segments.
Yes, the result is 1.015.
At this point I suggest we (temporarily) set aside the vertical breakdown and return to the NN itself, and build a two-layer one. I am not ready to process the data yet, as data suitable for processing is practically non-existent.
By the way, gentlemen, a question for everyone:
Where can I find tick history (for experiments... -:))?
And this is actually an interesting question. If there were no transactions in a given second, there will be no price data for that second, right? So the tick history will differ considerably from one brokerage company to another - am I right?
Of course. Every brokerage company has its own filter, and the settings of these filters differ from one company to another. Besides, the policies and conditions of some brokerage companies differ as well. All this variety of ways of honestly relieving people of their money is reflected in the quotes the brokerage companies give their clients. In some brokerage companies not all clients even get the same quotes; that is, they (i.e. we) are also divided into groups. Someone gets filter A, someone filter B, and someone the interbank feed... depending on the size of the deposit and on how the DC rates the trading skills of its owner.
In general, if someone took the trouble to strictly prove the assertion that such a partitioning of the quote series is the most profitable of all possible TA constructions, then we could once and for all forget about all sorts of Fibo levels, support/resistance lines, alligators, head-and-shoulders, etc., and concentrate our efforts on the main direction of the assault.
I gave up on all that nonsense a long time ago! Maybe someone needs a mathematical justification for the unprofitability of the well-known approaches, but I don't. The mass of loss-making practice is enough to see that there is no point in looking for a black cat in a dark room, especially since it isn't there (and never was). The question, it seems to me, should be posed in a slightly different plane:
To what extent is the historical data on a segment of length N (and no more) suitable for predicting the next one? Why is one segment predicted well while another is not predicted at all? Identify the areas with more than 50% predictability, work on them, and sit on our hands the rest of the time.
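If it helps to make that concrete, here is one crude way to score it. This is entirely my own illustration (the window length, the naive "same sign as the previous move" predictor and the 50% bar are placeholders): slide a window along the first differences, count how often the naive guess hits, and keep the stretches that clear the threshold.

```python
import numpy as np

def predictable_windows(diffs, window=100, threshold=0.5):
    """Indices of windows where the naive sign predictor beats the threshold."""
    hits = (np.sign(diffs[1:]) == np.sign(diffs[:-1])).astype(float)
    hit_rate = np.convolve(hits, np.ones(window) / window, mode="valid")
    return np.where(hit_rate > threshold)[0], hit_rate
```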
Yes, it came out to 1.015.
Where can I find a tick history (for experiments... -:))?
It should have come out to about 2. Did you perhaps divide the sum by 2H instead of H?
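Just to spell out the arithmetic behind that question (the numbers below are purely illustrative): if the average modulus of a leg is about 2H, then dividing by N*H gives a value near 2, while dividing by N*2H gives a value near 1, which is exactly the neighbourhood of 1.015.

```python
H = 21                     # threshold from the example above, in points
mean_leg = 2 * H           # what a random walk gives on average
print(mean_leg / H)        # -> 2.0, the expected normalised value
print(mean_leg / (2 * H))  # -> 1.0, what dividing by 2H produces instead
```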
I attached a file with EURUSD ticks for half a year. File format: Date, Time, Seconds (since 1970), Ask, Bid.
P.S. In the post above I gave a way of estimating the profitability of a TS based on the Kagi partition. Let me clarify that this is an estimate of the maximum return; in reality it will be lower because of the inevitable errors in forecasting the expected movement of the series of transactions.
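For whoever picks up the attached file, a minimal reader for the stated format (Date, Time, seconds since 1970, Ask, Bid) might look like the sketch below; the comma separator and column order are my assumptions about the file, so adjust if it turns out otherwise.

```python
import csv

def read_ticks(path, delimiter=","):
    """Yield (unix_seconds, ask, bid) tuples from a Date,Time,Seconds,Ask,Bid file."""
    with open(path, newline="") as f:
        for row in csv.reader(f, delimiter=delimiter):
            date, time_, secs, ask, bid = row[:5]
            yield int(secs), float(ask), float(bid)
```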
It should have been 2. Did you divide the sum by 2H instead of H?
No, I divide by H:
My result depends on the size of H. Maybe it's the data, or maybe it's a mistake of mine that I cannot find yet. Maybe it's because I'm not using a Kagi breakdown, but a breakdown into a series of transactions? Have a look at my listing, please, if you have the time; it is pretty straightforward.
By the way, look what a characteristic picture I get for the distribution of the first differences of the series of transactions if I do a vertical breakdown of the minutes with a threshold of one spread (3 points):
Thanks for the ticks!
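For reference, a picture of that kind can be reproduced in a couple of lines (my own sketch; the bin count is arbitrary):

```python
import numpy as np

def first_difference_histogram(transactions, bins=50):
    """Histogram of the first differences of a transaction series."""
    return np.histogram(np.diff(transactions), bins=bins)  # (counts, bin edges)
```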
Maybe the point is that I'm not using a Kagi breakdown, but a breakdown into a series of transactions?
The evaluation should be done for a Kagi breakdown, not a series of transactions.
I did the Kagi. Now it is just over 2 (2.153); it gets closer to 2 or further away depending on H, but it is always just over 2.