Volumes, volatility and Hurst index - page 3

 

Regarding dimensionality, which changes over time: normalised figures can be used, most conveniently in the range 0 to 1 within a non-temporal window. Some time ago I posted several indicators that normalise volume, ATR and standard deviation. One variant of the indicator uses a purely rectangular window (a la Stochastic); the other uses an adaptive channel.
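As a minimal sketch of the rectangular-window (Stochastic-style) variant: each value is rescaled by the min/max of the last few samples. The function name, window length and data below are illustrative, not the posted indicators.

```python
# Sketch of [0,1] normalisation of an indicator series (volume, ATR,
# StdDev, ...) inside a sliding rectangular window, a la Stochastic.

def normalize_window(values, window):
    """Scale each value to [0,1] by the min/max of the last `window` samples."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        mn, mx = min(chunk), max(chunk)
        out.append(0.5 if mx == mn else (values[i] - mn) / (mx - mn))
    return out

atr_like = [3, 5, 4, 8, 6, 2, 7]
print(normalize_window(atr_like, window=4))
```

The adaptive-channel variant would replace the min/max of the rectangular window with the channel borders; the [0,1] mapping itself stays the same.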

Here is, for example, a picture where Adaptive Renko is in the main (0th) window, and StdDev, ATR and Volume normalised in the adaptive channel are in the sub-windows. The FALSE breakouts of the Renko channel are clearly visible - in fact, that is why I brought it up: to show the usefulness of volatility estimation - you can assess where the impulse is and where the correction is.


 
Candid:

With Hurst, more reasoning would still be helpful. Classically, Hurst is the slope of a linear regression on a log-log plot. That method is insensitive to the presence of a constant multiplier of N^h. You replace it with the slope of a ray drawn from the origin to a single point. This is only correct if the points lie on a line passing through the origin. Do you have a chart of the points for different TFs in Log(N) - Log(High-Low) coordinates?
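The difference between the two estimates can be shown numerically. Below is a sketch (synthetic numbers, simple least-squares slope) of the classical log-log regression versus the "ray from the origin" estimate when the coefficient c in R = c * N^h is not 1:

```python
# Classical Hurst estimate as the slope of a linear regression in
# Log(N) - Log(R) coordinates, versus the "ray from the origin"
# estimate log(R)/log(N) from a single point. With c != 1 in
# R = c * N^h the two disagree; the regression is insensitive to c.
import math

def regression_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

h_true, c = 0.5, 2.0                     # R = c * N^h with c != 1
Ns = [10, 100, 1000, 10000]
logN = [math.log(n) for n in Ns]
logR = [math.log(c * n ** h_true) for n in Ns]

print(regression_slope(logN, logR))      # recovers h = 0.5 despite c
print(logR[-1] / logN[-1])               # ray estimate, biased upward by c
```

With c = 1 (a line through the origin) both estimates coincide, which is exactly the condition in question.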

You are absolutely right - exactly through the origin. I haven't built the chart in Log(N) - Log(High-Low) coordinates; I don't see the need for it yet. Instead, I can offer you something more interesting. Remember the geometry problems that were solved by construction? The construction procedure seems completely arbitrary, yet if it is correct, it leads to the right result. There is something similar here.
Do you remember Hurst's construction as described by Peters? There, before counting anything, he prepared the series. In his case that was necessary for a number of reasons. The values of the series could be of any kind, so some normalisation was needed: the RMS was calculated and the series was divided by it. The mean was also calculated and subtracted from the series to give it a zero sum. And there was no time there, by the way - instead of time he used sample numbers, i.e. the same ticks. And, of course, he could say nothing about the coefficient in the formula.

Our preparation is the construction of a new series whose characteristics remove both the RMS and the coefficient c from the formula. As a result of binding to the renko grid, the ugly curve of the 5-digit price chart acquires the following properties. Each tick changes the price by exactly 1 point (Peters could only dream of this), i.e. all returns become +/-1, hence RMS = 1 as well. Now imagine a situation where the price always goes in one direction. The price chart is a straight line, i.e. R = N (each tick increases the range by +1 point). Obviously this is the most trendy behaviour possible, and it should lead to h = 1. And so it does, because R = N matches the defining formula R = c*N^h with N to the 1st power. But it also shows that c = 1, and it cannot be otherwise. This is, of course, a limiting case, but c must be the same for all cases.
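The limiting case above can be checked mechanically. A sketch (illustrative names, assuming the renko-style series of +/-1 returns described in the post):

```python
# Limiting case: a renko-style series of +/-1 returns. For a pure
# one-way trend the range R equals N, so with R = c * N^h and c = 1
# the exponent h = log(R) / log(N) = 1.
import math

def path_range(returns):
    """Range High-Low of the cumulative path built from the returns."""
    path, s = [0], 0
    for r in returns:
        s += r
        path.append(s)
    return max(path) - min(path)

N = 500
trend = [1] * N                    # price goes one way, +1 point per tick
R = path_range(trend)
print(R == N)                      # True: R = N
print(math.log(R) / math.log(N))   # h = 1.0 with c = 1
```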

Candid:

By the way, the three-digit ones are even more stable, then :) . Interesting.

At this point I started wondering. :-) The "on my fingers" explanation I gave above is not tied to the point size anywhere. So have no doubt: for 3-digit points the result will be the same. You just need the series-construction methodology to hold. And, of course, the renko grid must be 3-digit.

However, the difference for the trader will be significant. If a 3-digit point contains 10 four-digit points, then in the Brownian random-walk limit a 3-digit tick should contain 100 four-digit ticks (since N scales as R^2). As they say, feel the difference. It's like moving to a completely different fractal level (horizon, in your terms) - like going from M15 to D1.

By the way, the word "stability" doesn't really fit here. It's not about stability, it's about how fast the range boundary expands. If the series is stationary, the expansion at any fractal level reaches the next level in a certain time, and so on. In that situation you are right - volatility will be the same at all levels. If the series is non-stationary, then fluctuations between trendiness and reversion at one fractal level, averaging out, may give a very different picture at the next level.

 
Svinozavr:

Regarding dimensionality, which changes over time: normalised figures can be used, most conveniently in the range 0 to 1 within a non-temporal window. Some time ago I posted several indicators that normalise volume, ATR and standard deviation.

Peter, normalisation to the interval [0,1] is my favourite form of data representation. However, this normalisation may be natural and universal, or it may be quite artificial, e.g. by the difference (max - min) of the window. In the second case it is equivalent to simple proportional compression, which is not very informative.

Unfortunately, I don't know the content of your normalisation method, so I can't say anything. Especially about volume normalisation, which I don't think has anything to do with the renko channel.

 
The first thing that comes to mind on the thread topic, after burying tick volume, is a tick volume of one's own: a volatility meter. Simply an indicator that shows the number of small ZigZags inside a big bar. For example, such an indicator with a ZigZag with a minimum knee of 1 point would fully match the current tick volume in MT4. But such a ZigZag cannot be calculated precisely, as there is no tick history - and that is exactly what we wanted to see. A ZigZag with a bigger knee is a different matter: it will show what cycles are there and how they change over time. It's easy to implement.
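The counting logic can be sketched as follows. This is a hypothetical illustration of the idea (names, data and the exact reversal rule are assumptions, not the attached myvolume.mq4):

```python
# Hypothetical sketch of the described "own tick volume": count the
# knees of a ZigZag with a minimum knee of `min_knee` points over the
# lower-timeframe prices that fall inside one big bar.

def zigzag_knees(prices, min_knee):
    """Number of ZigZag reversals of at least `min_knee` in `prices`."""
    knees = 0
    direction = 0            # 0 unknown, +1 up leg, -1 down leg
    extreme = prices[0]      # running extreme of the current leg
    for p in prices[1:]:
        if direction >= 0 and p >= extreme:
            extreme, direction = p, 1       # up leg extends
        elif direction <= 0 and p <= extreme:
            extreme, direction = p, -1      # down leg extends
        elif abs(extreme - p) >= min_knee:
            knees += 1                      # leg confirmed; reversal starts a new one
            extreme, direction = p, -direction
    return knees

bar_prices = [10, 13, 11, 15, 12, 16, 14]   # lower-TF closes inside one bar
print(zigzag_knees(bar_prices, min_knee=2))
```

With min_knee equal to 1 point this would approximate the MT4 tick volume, subject to the tick-history limitation mentioned above.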
 
Yurixx:

Peter, normalisation to the interval [0,1] is my favourite form of data representation. However, this normalisation may be natural and universal, or it may be quite artificial, e.g. by the difference (max - min) of the window. In the second case it is equivalent to simple proportional compression, which is not very informative.

Purely arithmetically, it is not equivalent to proportional compression, if by that we mean a fixed coefficient by which the normalised parameter is multiplied. And if we mean something like logarithms, it makes no difference for the purpose of identifying impulses/corrections.

I unfortunately don't know the content of your normalisation method, so I can't say anything.

Let me explain the method. The figure in the lower sub-window shows the standard deviation and the adaptive channel. This is what the normalisation is done by (the result is in the 1st sub-window).


Especially about volume normalisation, which I don't think has anything to do with the renko channel.

This has as much relation to Adaptive Renko as to any other channel-breakout method: namely, confirmation of an impulse (in conjunction with price-volatility indicators such as StdDev). It seems I didn't manage to explain the purpose of this in the previous post. Or maybe you missed it? ))) I haven't said anything new now...
 
Yurixx:

Now imagine a situation where the price always goes in one direction. The price chart is a straight line, i.e. R = N (each tick increases the range by +1 point). Obviously this is the most trendy behaviour possible, and it should lead to h = 1. And so it does, because R = N matches the defining formula R = c*N^h with N to the 1st power. But it also shows that c = 1, and it cannot be otherwise. This is, of course, a limiting case, but c must be the same for all cases.

And can you get a general formula for the random walk? But don't refer to Einstein: his random-walk formula is for Close-Open, while you need it for High-Low. It is crucial for you that the proportionality coefficient in the random-walk formula equals 1. But if it equals 1 for Close-Open (of course, I don't remember the derivation, but I'll take your word that it is 1 for Close-Open), then for High-Low it must differ from 1, because High-Low is always larger than Close-Open (in expectation, of course).

My point is this: once you get rid of the influence of primary filtering, your proposed value becomes quite an objective characteristic. (And for 4-digit points on 5-digit quotes, and even more so for 3-digit points, the influence of primary filtering should be significantly suppressed.)

But there are still insufficient grounds to compare the absolute values of this characteristic with the Hurst "calibration", i.e. to assume that at 0.5 the series is random, above it trendy, and below it mean-reverting.

For this characteristic we need to make our own calibration.


 
Candid:

Can you get a general formula for the random walk? Don't refer to Einstein: his random-walk formula is for Close-Open, while you need it for High-Low. It is crucial for you that the proportionality coefficient in the random-walk formula equals 1. But if it equals 1 for Close-Open (of course, I don't remember the derivation, but I'll take your word that it is 1 for Close-Open), then for High-Low it must differ from 1, because High-Low is always larger than Close-Open (in expectation, of course).


The random walk (SB) on Forex is one-dimensional: the price moves only up and down. Einstein derived a formula for Brownian motion, which is planar - there are two coordinates. Ideally, the principle of independence of motions allows us to consider the motion along each axis separately. But Einstein's formula determines the path of a Brownian particle, i.e. its displacement from the starting point after time T. As you understand, the motions cannot be separated here, because this displacement is determined from the coordinates by Pythagoras' theorem. So I won't refer to Einstein, especially since I haven't used his formula or referred to him anywhere.

As for Close-Open, I don't understand it at all - I've never had it. The range is defined by High-Low; Close and Open play no role in this process. This is the first time I hear from you that they appear in Einstein's formula. Though if you call the start point Open and the end point Close, then yes. :-)

I only used the Hurst formula, which is in fact the definition of the Hurst exponent. The only thing critical for me was that the coefficient in this formula is constant and does not depend on the nature of the movement - trend or counter-trend. Then it can be determined from some particular case.

As for the general formula for the random walk - it is an interesting problem, and I can solve it, on one condition: you tell me (or give me a link on) how to calculate, in general terms, the range over time T when the distribution of the process and its dependence on T are known. I tried searching myself but could not find a source.
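While an analytical answer is being sought, the point under discussion can at least be checked numerically. A Monte Carlo sketch (parameters are illustrative) estimating the expected High-Low range of a +/-1 random walk and comparing it with the expected |Close-Open|:

```python
# Monte Carlo sketch: for a +/-1 random walk of N steps, estimate the
# expected range E[High-Low] and compare it with E[|Close-Open|].
# Both scale as sqrt(N), but with different coefficients - which is
# exactly why the coefficient c for High-Low need not equal 1.
import random, math

random.seed(1)

def walk_stats(n_steps, n_trials):
    sum_range, sum_co = 0.0, 0.0
    for _ in range(n_trials):
        s, hi, lo = 0, 0, 0
        for _ in range(n_steps):
            s += random.choice((1, -1))
            hi, lo = max(hi, s), min(lo, s)
        sum_range += hi - lo
        sum_co += abs(s)
    return sum_range / n_trials, sum_co / n_trials

N = 1000
mean_range, mean_co = walk_stats(N, 2000)
print(mean_range / math.sqrt(N))   # ~1.6 (theory for Brownian motion: sqrt(8/pi))
print(mean_co / math.sqrt(N))      # ~0.8 (theory: sqrt(2/pi))
```

So for the range the coefficient comes out roughly twice the Close-Open one, consistent with Candid's objection that the two cannot both equal 1.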

Candid:

But so far there are insufficient grounds for comparing the absolute values of this characteristic with the Hurst "calibration", i.e. to assume that at 0.5 the series is random, above it trendy and below it mean-reverting. You have to do your own calibration for this characteristic.



Yes, I agree with this formulation of the question. It still needs to be sorted out.

 
hrenfx:
The first thing that comes to mind on the thread topic, after burying tick volume, is a tick volume of one's own: a volatility meter. Simply an indicator that shows the number of small ZigZags inside a big bar. For example, such an indicator with a ZigZag with a minimum knee of 1 point would fully match the current tick volume in MT4. But such a ZigZag cannot be calculated precisely, as there is no tick history - and that is exactly what we wanted to see. A ZigZag with a bigger knee is a different matter: it will show what cycles are there and how they change over time. It's easy to implement.

Made a volume indicator as described:

No tick volumes are used here - only price data from a lower timeframe (the PeriodData parameter).

All the same cyclicities are visible.

In the indicator, the Pips parameter sets the minimum knee of the ZigZag in points. Of course, over a long time interval it would be better to set this parameter not in points but in relative price change (the change to the code would be minimal).

Files:
myvolume.mq4  2 kb
 
Yurixx:


Yes, I agree with this formulation of the question. It still needs to be sorted out.

We can calibrate on synthetics - e.g. a Bernoulli process with p the continuation probability and q the reversal probability: p > q gives a trend, p < q reversion, p = q a random walk. That is, the important thing is to work not with the probabilities of +1 and -1, but with the probabilities of keeping the sign and of changing it.
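The proposed calibration is straightforward to sketch (parameters and the range-based estimator h = log(R)/log(N) below are illustrative assumptions):

```python
# Calibration on synthetics: a Bernoulli series where each +/-1 step
# keeps the previous sign with probability p and flips it with
# probability q = 1 - p. p > 0.5 gives trendiness, p < 0.5 reversion,
# p = 0.5 a random walk. We then average the range-based estimate
# h = log(R) / log(N) over many trials.
import random, math

random.seed(7)

def bernoulli_h(p, n_steps, n_trials):
    total = 0.0
    for _ in range(n_trials):
        step, s, hi, lo = 1, 0, 0, 0
        for _ in range(n_steps):
            if random.random() >= p:      # sign flips with probability q = 1 - p
                step = -step
            s += step
            hi, lo = max(hi, s), min(lo, s)
        total += math.log(hi - lo) / math.log(n_steps)
    return total / n_trials

for p in (0.4, 0.5, 0.6):
    print(p, round(bernoulli_h(p, 2000, 300), 3))
```

Notably, at p = 0.5 this estimator comes out somewhat above 0.5 for finite N (the range's coefficient is not 1), which may bear on the ~0.54 observation discussed below.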
 
Candid:
We can calibrate on synthetics,

I did that yesterday. Only I didn't calibrate - I just looked at what the indicator showed on a pure random walk. The result surprised me: the average value on M10, H1 and H4 is around 0.54. Now I'm wondering why.

Of course, it would be best to obtain this formula for the random walk in analytical form. But here we run into the same problem with the range: what exactly it means - the mean absolute value, the RMS of the walk, or something else - nobody writes about it.