a trading strategy based on Elliott Wave Theory - page 224
by the value H, positive or negative, depending on the sign of the price difference at the end and the beginning of the interval. It is calculated as (price value at the end of the interval - price value at the beginning of the interval)/H, and therefore it can only take the values 1 or -1.
If we examine tick charts with small values of H, we can see that the price can change by 2H, 3H or more per tick. What is the value of the price change in this case?
Yes, there is an inaccuracy here. In delta modulation this phenomenon is called "slope overload", and it is regarded as a drawback. In principle the formula itself is correct. It then reads as follows:
The price change is the interval over which the price changes by the value H, positive or negative, depending on the sign of the price difference at the end and the beginning of the interval. It is calculated as (price value at the end of the interval - price value at the beginning of the interval)/H, and therefore it can only take integer values, positive or negative.
Correspondingly, in the case of 2H, 3H, etc. it will be 2, 3, etc.
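As a rough illustration, here is a minimal sketch (in Python; the helper name and the use of rounding are my own assumptions, not something from the thread) of measuring each interval's price change in whole units of H, including the 2H, 3H, ... cases discussed above:

```python
import numpy as np

def h_units_change(prices, H):
    """Price change of each interval expressed in whole units of H.

    Returns signed integers: +1/-1 for an ordinary step, +-2, +-3, ...
    when the price moves by 2H, 3H, ... within a single interval
    (the "slope overload" situation mentioned above).
    """
    prices = np.asarray(prices, dtype=float)
    diffs = np.diff(prices)                  # price at end minus price at start of each interval
    return np.round(diffs / H).astype(int)   # number of H-steps, sign included

# Example: with H = 5 points, a 12-point move counts as roughly 2 steps.
print(h_units_change([100, 105, 117, 112], H=5))   # -> [ 1  2 -1]
```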
Thanks to solandr's tip I downloaded the tick archive from the website. There is no need to send it a second time.
Thank you.
counter-directional price spikes?
Looking at the figure.
I consider the segment from Open[i-1] to Open[i] (in terms of the function's values) to be a counter-directional jump in price relative to the segment from Open[i-2] to Open[i-1]. The criterion for a counter-directional jump is that the following inequality holds:
(Open[k]-Open[k-1])*(Open[k+1]-Open[k]) < 0.
The segment from Open[i] to Open[i+1] is co-directional relative to the segment from Open[i-1] to Open[i]. The criterion for a co-directional jump is that the following inequality holds:
(Open[k]-Open[k-1])*(Open[k+1]-Open[k]) > 0.
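A small sketch of this criterion (my own Python, with assumed names; not the poster's code): the sign of the product of two adjacent increments of Open[] separates counter-directional jumps from co-directional ones.

```python
import numpy as np

def jump_directions(open_values):
    """Classify each jump relative to the previous one.

    Returns -1 for a counter-directional jump, +1 for a co-directional one,
    and 0 if one of the adjacent increments is zero.
    """
    x = np.asarray(open_values, dtype=float)
    d = np.diff(x)                               # adjacent increments Open[k] - Open[k-1]
    return np.sign(d[:-1] * d[1:]).astype(int)

# Example: the second jump reverses direction, the third continues it.
print(jump_directions([1.0, 3.0, 2.0, 1.0]))     # -> [-1  1]
```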
Now let us turn to what Pastukhov writes in his paper:
Vt and Ut are, in essence, the number of all jumps multiplied by H;
Nt and Mt are, in essence, simply the sum of all counter-directional jumps.
So the statement I gave in the post above,
FAC = 1 - 2/H, is correct.
to Grans
Sergei, look at the price behavior and tell me: is this a trend market or a pullback market?
Right! It is impossible to answer unambiguously - the question is not correctly posed. At TF=1 this is a trending market: indeed, the sum, over any part of the time series, of the products of adjacent jumps is always positive, and a position should be opened in the direction of the price movement. At TF=50, on the contrary, we see a pronounced flat: indeed, the sum, over any segment of the time series (TF=50), of the products of adjacent jumps is always negative, and a position should be opened against the previous direction of the price.
Now a few words about how "long" the sum should be. I have already written about the statistical results. The conclusion is unambiguous: the number of terms in the sum must be at least 100. In that case the fluctuations of the obtained results will not exceed 10%, which is probably sufficient accuracy for practical purposes.
Now, pay attention! Let's look at your drawing from the previous post. What your eye picks out as a trend should contain about a hundred intervals to be identified reliably. If this section is divided into 100 intervals, the TF will be 100 times smaller than the one on which you "highlighted" the trend with your eyes. And it is by no means guaranteed that on a TF 100 times smaller there will not be a flat! Remember the example with the cosine. But it will be a reliable flat on which you can make money. Think about this apparent paradox.
Now let's divide your "trend" not into 100 intervals but into, say, 10. Oh! Indeed, the sum of the products of neighbouring price jumps is positive - TREND! Except that the identification error is at the level of 30%. This is what is called a "stochastic trend".
That's it, I can't explain it any other way.
If the FAC is positive, we have a case of a deterministic trend; if the FAC is negative, we have a deterministic flat, or in other words, a pullback in price behavior.
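A hedged sketch of the rule described above (the function and its defaults are my own reading of the post, not code from the thread): sum the products of adjacent increments over a window of at least 100 terms and read the sign as trend (+1) or flat (-1).

```python
import numpy as np

def trend_or_flat(prices, window=100):
    """Sign of the sum of products of adjacent increments over the last `window` terms."""
    x = np.asarray(prices, dtype=float)
    d = np.diff(x)[-(window + 1):]       # the last window+1 increments
    s = np.sum(d[:-1] * d[1:])           # sum of products of adjacent increments
    return int(np.sign(s))

# On a smooth ramp adjacent increments agree in sign -> trend (+1);
# on a saw-tooth series they alternate -> flat (-1).
ramp = np.arange(200, dtype=float)
saw = np.array([1.0, -1.0] * 100)
print(trend_or_flat(ramp), trend_or_flat(saw))   # -> 1 -1
```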
Sergey, thank you for your patient explanation. You may be right in your reasoning; I think it will take some time for some of us to change our views on this problem. I will only note the following from my point of view:
I do not see any paradox. I do not divide the series into intervals: in my view this operation has no justification and introduces a large error. I analyse the series as a whole, without using any window.
I don't need a trend. I am satisfied with the strength of the relationship between samples; that is generally sufficient for forecasting (I gave examples earlier).
I don't pick out the trend by eye. If I need to know reliably whether there is a trend or not, I use an additional criterion, and the conclusions drawn from this criterion are then confirmed by eye. That is more precise.
The sin() function has a statistic value of 2.127. For it, the "no trend" criterion lies in the range (0; 1.9), and the statistic falls into this range almost immediately. In my approach this can be classified as a state close to a "flat".
Pastukhov's transformations, in a sense, "coarsen" the series and are aimed at quite a different use. I do not see any convincing arguments in favour of using these transformations for trend detection by any method, including autocorrelation.
A trend detection methodology should not have any input parameters, and you have two: the first is the window size, the second is the parameters of the kagi, renko, ... constructions. Only the original series! Everything is in it!
...Then the statement I made in the post above:
FAC=1-2/H, is correct...
I confess I even hesitated a little about whether I was right, but I quickly came to my senses. I suggest you do the same.
OK, to hell with it, the Hurst H - not everyone understands its calculation algorithms anyway - let's look at the FAC. As I understand it, this is the autocorrelation function. The formulas can probably be found here or in textbooks.
I've looked at the textbook and at the FAC implementation in Statistica. I constructed three series of data: the first is 1, -1, 1, -1, etc.; the second is 2, -2, 2, -2, etc.; and the third is 2, -1, 2, -1, etc. Their H-volatilities are 1, 2 and 1.5, respectively. The FAC value calculated in Statistica for lag = 1 is -0.995 for all three series, which is quite natural given the meaning of autocorrelation. For lag = 2 it would be 0.993, and so on.
Note that the three series are completely different in H-volatility and identical in FAC (for the same lag). Either your FAC is not the same as the conventional one, or there is an error in your reasoning.
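The same check can be repeated without Statistica; a quick sketch (my own Python, using the ordinary centred lag-1 autocorrelation) gives values close to -1 for all three series, whatever their H-volatility:

```python
import numpy as np

def lag1_autocorr(x):
    """Conventional (centred) lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

for series in ([1, -1] * 50, [2, -2] * 50, [2, -1] * 50):
    print(lag1_autocorr(series))   # all three are close to -1 (about -0.99 for 100 samples)
```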
North Wind, I apply the FAC to the series of first differences, and you to the original series. Hence the difference.
Of course, if two series X and Y are given, then the correlation coefficient is calculated by the formula:
r = SUM(X*Y)/SUM(X^2).
If we now pass to the definition of the autocorrelation coefficient, we have:
r = SUM(X[i]*X[i-1])/SUM(X[i]^2),
and passing from this to the first differences, we get:
r = SUM{(X[i]-X[i-1])*(X[i+1]-X[i])}/SUM{(X[i]-X[i-1])^2},
or, to a first approximation:
r = SUM{sign((X[i]-X[i-1])*(X[i+1]-X[i]))}/N, where N is the summation window.
Which is, in fact, what was asserted.
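For comparison, a minimal sketch (names are mine) of the two quantities discussed in this post: the autocorrelation of the first differences and its sign-based approximation r = SUM{sign((X[i]-X[i-1])*(X[i+1]-X[i]))}/N.

```python
import numpy as np

def fac_first_diff(x):
    """Lag-1 autocorrelation of the first differences, as in the formula above."""
    d = np.diff(np.asarray(x, dtype=float))
    return float(np.sum(d[:-1] * d[1:]) / np.sum(d * d))

def fac_sign_approx(x):
    """Sign-based approximation: the average sign of products of adjacent differences."""
    d = np.diff(np.asarray(x, dtype=float))
    return float(np.mean(np.sign(d[:-1] * d[1:])))

zigzag = [1.0, -1.0] * 50
print(fac_first_diff(zigzag), fac_sign_approx(zigzag))   # both are close to -1
```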
...North Wind, I apply the FAC to the series of first differences, and you to the original series. Hence the difference.
Of course, if two series X and Y are given, then the correlation coefficient is calculated by the formula...
Please note: the example I gave, "1, -1, 1, -1, ...", converts to "-2, 2, -2, 2, ..." in the form of first differences. The FAC(lag=1) for these series is identical in value and fully corresponds to the theoretical notion that for such series the correlation with the previous value is close to -1. At the same time, the H-volatility of these series is different, i.e. it turns out that your formula is not quite correct.
No two series X and Y were used here; the three series of data were simply calculated independently of each other as a test.
You calculate the FAC for the series "3, -3, 3, -3, ..." and "1, -1, 1, -1, ..." and show me the result; I calculate the H-volatility. Then we compare.
For the first series:
FAC = SUM{sign((X[i]-X[i-1])*(X[i+1]-X[i]))}/N = {sign((-3-3)*(3-(-3))) + sign((3-(-3))*(-3-3)) + ...}/N = {-1 + (-1) + ... + (-1)}/N = -1,
H-volatility (denote it by H):
H = (sum of the absolute values of all price movements)/(number of movement reversals) = h*N/N = 1*h, where h = 3.
The FAC, by its meaning, is dimensionless. H-volatility has the dimension of amplitude, so for comparison we normalize it by h.
I asserted FAC = 1 - 2/H; we have -1 = 1 - 2/1 = -1, i.e. the identity holds.
For the second series:
FAC = -1,
H = h*N/N = 1*h, where h = 1.
We have -1 = 1 - 2/1 = -1, i.e. the identity holds.
Which is what was to be proved.
you show me the result, I calculate the H-volatility. Then we compare.
For the first series:
FAC = SUM{sign((X[i]-X[i-1])*(X[i+1]-X[i]))}/N = {sign((-3-3)*(3-(-3))) + sign((3-(-3))*(-3-3)) + ...}/N = {-1 + (-1) + ... + (-1)}/N = -1,
Yes, I agree. Only I did it differently, but the result is the same.
H-volatility (denote it by H):
H = (sum of the absolute values of all price movements)/(number of movement reversals) = h*N/N = 1*h, where h = 3.
The FAC, by its meaning, is dimensionless.
Yes, that's what I think too.
H-volatility has the dimension of amplitude, so for comparison we normalize it by h.
But this is exactly where the error is: everything is already expressed in terms of h, so there is no need to normalize.
I asserted FAC = 1 - 2/H; we have -1 = 1 - 2/1 = -1, i.e. the identity holds.
By means of this "normalization" you reduce all cases to the single case where H = 1. Besides, instead of the "direct" formula for calculating H-volatility you seem to use your own formula, which is erroneous, and therefore your result is also in error.
For the second series:
FAC = -1,
H = h*N/N = 1*h, where h = 1.
We have -1 = 1 - 2/1 = -1, i.e. the identity holds.
Which is what was to be proved.
Let's calculate the FAC and the H-volatility for another series, e.g. 3, -1, 3, -1, etc. I claim that the FAC will be -1 and the H-volatility 2. The H-partitioning is done at h = 1. No differences need to be taken, the series is used as is.
By the way, here is another interesting example of a series: 1, 2, -3, 1, 2, -3, ... What do you think will happen?
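For anyone who wants to look at the closing examples numerically, here is a small sketch (my own; it only computes the conventional lag-1 autocorrelation and leaves the H-volatility aside, since that depends on the exact kagi/renko partitioning convention used in the thread):

```python
import numpy as np

def lag1_autocorr(x):
    """Conventional (centred) lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

print(lag1_autocorr([3, -1] * 100))      # the alternating series 3, -1, 3, -1, ...
print(lag1_autocorr([1, 2, -3] * 100))   # the "interesting" series 1, 2, -3, 1, 2, -3, ...
```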