Sorry again for the late replies. I wish I had looked earlier - it would have saved you time, but I hope it was not wasted. I did not suggest reading the Hurst criterion from a moving average - I suggested taking the algorithm from the standard delivery and substituting what you need in place of the moving-average values. In the algorithm that you posted (I have not looked at the latest one yet) there is one variable - the median of the sample. How do you see it? If the channel runs horizontally, that is fine and you get what you need, but in the general case it is not. That is, you have to take the difference between the actual price and the projection of that price on each bar. Let me try to be more specific: if you approximate the closing price with a moving average, then on each bar you should take the difference between the moving-average value and the closing price. If it is a non-linear regression, then the value of that regression; if linear, then the value of the regression line - but in every case per bar. That is why I wrote that you must at least have an array of projections - each bar has its own projection. Then you can make the estimate: take not the whole sample but only part of it, build the interval, and if everything is still inside the range, take the whole sample and build the projection into the future (extrapolation).
Good luck and good trends.
And this is a general approach, both for linear and non-linear approximations.
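In code, roughly, the idea looks like this (just a sketch using a linear regression as the approximating function; for a moving average or a non-linear regression you would substitute their per-bar values in place of the projection):

```python
import numpy as np

def residuals_from_projection(close: np.ndarray) -> np.ndarray:
    """Difference between the actual close and its projection on each bar.

    Here the projection is the per-bar value of a linear regression fitted
    to the sample; for a moving average or a non-linear regression the
    projection array would simply be replaced by those values.
    """
    x = np.arange(len(close))
    slope, intercept = np.polyfit(x, close, 1)   # linear approximation
    projection = slope * x + intercept           # one projection per bar
    return close - projection                    # residual on each bar
```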
So, as far as I understand it, the algorithm for calculating the Hurst parameter by your methodology should be as follows:
1. We take a sample of points for which we want to obtain the Hurst parameter. For clarity, let it be a sample from 0 to N.
2. We successively take a part of the sample from 0 to M, where 0 < M <= N. That is, in theory we have N sub-samples with the following ranges: 0-1, 0-2, 0-3, 0-4, ..., 0-(N-1), 0-N.
3. For each sub-sample we construct a linear regression channel. We obtain an array of channels and their projections into the future.
4. We calculate the difference between the closing price of bar M and the projection onto this bar of the linear regression channel constructed for the sample 0-(M-1). That is, when calculating the difference we take the linear regression projection built on the PAST, not including the current bar? Right?
5. From the resulting array of differences we determine the RMS (S).
6. We find R as the difference between the maximal and minimal values of the sample.
7. We calculate the Hurst parameter.
So, do I understand correctly how to calculate the Hurst parameter, or not?
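In code, the way I read these steps would be roughly the following (a sketch only: I assume R is the spread of the residual array from step 5, and I use the textbook estimate H = log(R/S) / log(N/2), which may differ from the exact formula in the book):

```python
import numpy as np

def hurst_from_projections(close: np.ndarray) -> float:
    """Sketch of steps 1-7: for every M the linear regression is fitted to
    bars 0..M-1 and extrapolated onto bar M; the residual is close[M] minus
    that projection. S is the RMS of the residuals, R their spread."""
    residuals = []
    for m in range(2, len(close)):                # need at least 2 points to fit
        x = np.arange(m)
        slope, intercept = np.polyfit(x, close[:m], 1)
        projection = slope * m + intercept        # projection onto bar M
        residuals.append(close[m] - projection)
    residuals = np.asarray(residuals)
    s = residuals.std()                           # step 5: RMS of differences
    r = residuals.max() - residuals.min()         # step 6: range (one reading)
    n = len(residuals)
    return np.log(r / s) / np.log(n / 2)          # step 7: assumed normalization
```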
If I understand your idea correctly, it seems to me a VERY IMPORTANT addition to the method of calculating the Hurst parameter given by the formula in the book. No emphasis is placed on this aspect of the calculation there.
Good luck and good trends.
By this, do you mean that the concept of a level (its stated value) is meaningful only for the current moment in time? And that after some time the levels will naturally change, because the channel along which the price is moving will have covered some distance and the confidence interval boundaries will be situated in other places? Or did you mean something more by this phrase? For example, the speed at which the price reached the level? Or perhaps you meant the Hurst parameter calculation? That is, if the price has almost reached a level but the Hurst parameter indicates a continuation of the trend, the level will be broken, even if not at once? Perhaps this is especially relevant for the levels inside the confidence interval.
Vladislav, what width of confidence interval do you take specifically for the Hurst calculation, and also for the general search for the optimal sample?
90%
95%
99%
99.9%
Or do you consistently set different widths of the confidence interval in your general search for the optimal sample? For example, do you first search with a 90% interval and find one sample, then search with 95% and find another, and so on up to 99.9%?
Or have you found, on the basis of experiments, that samples obtained for confidence intervals wider than 95%, for example, are of little use for prediction and should be discarded in the analysis?
Or are you only guided by the requirement that subsequently constructed intervals should be smaller than the initial one constructed on 2/3 of the sample?
But still, when building the first interval, you should set its width, right?
And one more question about the sequence of calculations (the total calculation time). I understand that when searching for a linear regression channel we should take samples starting from the current moment and going back into the past. Suppose we have found a set of samples that meet the convergence requirements. But there are still uncounted bars, so we keep counting and obtain samples that fall outside the interval. What criterion could then be taken to say that further calculations are meaningless and the enumeration of samples can be stopped? At first glance it seems sufficient to process an additional number of bars equal to the number of bars in the longest successful sample. Or do you have other options? For example, is it enough to count only 30% of the longest sample, or some other number of bars? Or do you evaluate the entire price array for the last half year regardless of the results, and then estimate the calculated errors for approximating the price series by functions of other orders - for example, the quadratic one you have already mentioned?
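In code, my own proposed stopping rule would look something like this (just a sketch of that option; the names and the is_valid_sample check are made up for illustration):

```python
def enumerate_samples(max_bars: int, is_valid_sample) -> list[int]:
    """Enumerate sample lengths from the current bar back into the past and
    stop once an extra number of bars equal to the longest successful sample
    has been processed without extending it."""
    valid_lengths = []
    longest_valid = 0
    length = 2
    while length <= max_bars:
        if is_valid_sample(length):               # convergence conditions hold
            valid_lengths.append(length)
            longest_valid = length
        # stop after counting as many additional bars as there are in the
        # longest successful sample found so far
        if longest_valid and length >= 2 * longest_valid:
            break
        length += 1
    return valid_lengths
```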
Please tell me, do you use any other functions for approximation? For example, harmonic, exponential, logarithmic, power functions (higher than second order), etc.? Or, as applied to the Forex market, are just two functions - linear and quadratic - sufficient for successful trading in this market?
Of course you can.
By this do you mean that the concept of a level (its declared value) is meaningful only for the current moment in time? And that after some time the values of the levels will naturally change, because the channel along which the price moves covers some distance and the confidence interval boundaries will be located in other places in the future.
Right. Coincidence of the pivot zone with one of the pivot levels considerably increases the accuracy of calculation.
And what width of the confidence interval do you take for the Hurst calculation and for the general search for an optimal sample?
I consider the sample true until the 99% confidence interval is broken. I also take the 90% and 95% levels into account - that is often the end of a pullback and the restoration of the trend.
But anyway, when you build the first interval, you must set its width, right?
Absolutely - in standard deviations - the most universal way.
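For reference, assuming normally distributed deviations, the levels quoted above correspond roughly to these multiples of the standard deviation (scipy is used purely for illustration):

```python
from scipy.stats import norm

# two-sided confidence level -> half-width of the interval in sigmas
for level in (0.90, 0.95, 0.99, 0.999):
    k = norm.ppf((1 + level) / 2)
    print(f"{level:.1%} interval ~ +/- {k:.3f} sigma")
# 90.0% ~ 1.645, 95.0% ~ 1.960, 99.0% ~ 2.576, 99.9% ~ 3.291
```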
Tell me please, do you use any other functions for approximation? For example, harmonic, exponential, logarithmic, power (higher than second order), etc.? Or, if applied to the Forex market, are just two functions - linear and quadratic - sufficient for successful trading?
No - harmonic functions are a special case of the quadratic form. As for the rest, see the considerations about the potentiality of the price field - and not only with respect to the Forex market, but everywhere the price field is potential, that is, where the profit does not depend on the price trajectory but only on the difference between the opening and closing prices of the position.
Regarding the criteria - methodologically, I wrote: the price trajectory minimizes the potential energy functional. For more details, see .....
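To illustrate the potentiality statement (a standard vector-calculus fact, not something specific to this method): for a potential field the work integral depends only on the endpoints, just as the profit here depends only on the opening and closing prices and not on the trajectory between them:

```latex
\vec{F} = -\nabla U
\;\;\Longrightarrow\;\;
\int_{A}^{B} \vec{F}\cdot d\vec{l} = U(A) - U(B)
\quad\text{(path-independent)}
```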
Good luck and good trends.
Now my script finds linear regression channels that satisfy the irreducibility principle, i.e. the RMS over the whole channel sample is less than the RMS over 2/3 of the sample, and the condition that the last 1/3 of the sample stays within the 99% confidence interval (everything according to your recommendations). But now a small technical question has arisen. Since several "true" channels operate at the current moment, such channels have scatter regions, as everywhere in statistics. That is, suppose one of the "true" channels is a linear regression channel built on a sample reaching from the current moment back 200 bars on H1. If the sample is varied, say within 190-210 bars, the two conditions above remain fully satisfied. We look at the RMS values for these samples and select the smallest. According to your strategy, this channel is applicable for the forecast.
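For reference, the check my script performs on each candidate sample looks roughly like this (a sketch; I take the 99% band as about 2.576 standard deviations around the regression built on the first 2/3, which may not match your implementation exactly):

```python
import numpy as np

Z99 = 2.576  # ~99% two-sided interval in standard deviations (assumption)

def channel_is_true(close: np.ndarray) -> bool:
    """One reading of the two conditions: (1) RMS about the regression over
    the whole sample is less than the RMS over the first 2/3 of the sample;
    (2) the last 1/3 stays inside the ~99% band of the 2/3 channel."""
    n = len(close)
    x = np.arange(n, dtype=float)
    m = 2 * n // 3                                    # end of the 2/3 part

    # regression and RMS on the first 2/3 of the sample
    s23, i23 = np.polyfit(x[:m], close[:m], 1)
    rms_23 = np.sqrt(np.mean((close[:m] - (s23 * x[:m] + i23)) ** 2))

    # regression and RMS on the whole sample
    s_full, i_full = np.polyfit(x, close, 1)
    rms_full = np.sqrt(np.mean((close - (s_full * x + i_full)) ** 2))

    cond_convergence = rms_full < rms_23              # condition 1
    deviations = np.abs(close[m:] - (s23 * x[m:] + i23))
    cond_containment = np.all(deviations <= Z99 * rms_23)  # condition 2
    return bool(cond_convergence and cond_containment)
```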
Then we move to another timeframe, for example M15, and try to find a similar channel there. We get the following result: the optimal channel (with the minimum RMS) on M15 turns out to be a linear regression channel built not on a sample of 800 bars (200*4), as would be natural, but on a sample of 640 bars! That is, the time domain gives me a sample spread of up to 25% (that is the maximum - usually it is less). Because of this there are also differences of about 5-10 points at the current moment in the determination of the confidence interval boundaries themselves. Since we take the average bar price (O+H+L+C)/4 as the sample and do not perform any pattern analysis, the optimal channel plotted for the same time interval on different timeframes should be the same, right? Or is that not so, and in this case we should also apply statistical methods of parameter estimation? And does the time interval of the optimal channel also have its own variance, which could explain this divergence of the samples for the optimal channel on different timeframes?
Accordingly, I have a question: what do you do in this situation? What do you rely on in your calculations? For example, do you take the channel built on the smaller timeframe as the basis for decision-making, or do you additionally estimate the confidence interval limits by averaging the channel limits obtained on different timeframes? That is, if you calculate the same channel on 4 timeframes (M5, M15, M30 and H1), the averaged estimate of the confidence interval limits for that channel should be roughly twice as reliable, and you could rely on it more than on a calculation from a single timeframe? Or perhaps you have another approach? Or maybe in this situation you do not average anything at all and simply look for the nearest suitable Murray level, as you have already mentioned?
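(My "twice as reliable" guess is based on the standard error of the mean: averaging four independent, equally noisy estimates of a boundary halves the error - assuming the per-timeframe estimates really are independent, which is an idealisation:)

```latex
\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}
\qquad\Longrightarrow\qquad
n = 4 \;\Rightarrow\; \sigma_{\bar{x}} = \frac{\sigma}{2}
```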
On what timeframe do you perform the main calculations? You said that your programme calculates data for half a year in 30-40 seconds. I assume that timeframe should not be smaller than H1? Is it so?
Vladislav, please advise on the std_dev[][] array. As far as I understand, this array has dimension Nx2, where N is the number of calculated channels. The cell values could be as follows:
std_dev[n][0] - RMS value for 2/3 of sample in channel n
std_dev[n][1] - RMS value for the whole sample of channel n (RMS for projection)
Or am I mistaken, and this array contains something else? For example, there could be a third cell std_dev[n][2] containing the starting bar number of the sample.
By the way, what other variants can be used to build projections besides the standard one, where the projection repeats the function taken as the approximating function plus confidence interval boundaries that repeat its shape? What else can be devised in this area? I could imagine, for example, that a projection could be built from data obtained several bars ago. That even seems more reasonable, because if we build the projection only from the current moment, then as the price approaches the reversal zone it destroys some of the channels that formed that zone several bars earlier, and the remaining channels shift their interval boundaries into the undershoot zone. In other words, if we see a reversal zone and the price is close to it, one of the channels considered "true" will stop satisfying one of the two conditions. How do you handle this problem? Do you also use a forecast made several bars ago when analysing the current situation?