Volumes, volatility and Hurst exponent - page 24

 

to Yurixx.

Sergei, is this addressed to me? Which question are you talking about?

Exactly to you. Right here: https://www.mql5.com/ru/forum/128060/page22, post 15.09.2010 13:30

to Yurixx

I have a "physical" question. Hurst, after all, published in 1951.

Truth be told, I don't really understand its physical interpretation. But in terms of fractal analysis it makes perfect sense. The characteristic FreeLance mentioned, self-similarity, is essentially the very object of study of fractal analysis.

The value of 0.7 for the Nile means that the future will be very similar to the present (and practically along the entire length of the process). Indeed, the shape of the future flow closely resembles the past, and that alone is valuable information.

The market, by contrast, is a very weakly self-similar process, with a value almost equal to 0.5 or only slightly higher. Take any price segment, for example: you will not find an identical one in the history, and even similar ones are a problem to find. This is easy to check with correlation. A segment with correlation around 0.8-0.9 is a rarity, literally 10-15 of them over 10 years. For this (and other) reasons wave analysis never works: there is no similarity, especially over long stretches.
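The correlation check described here can be sketched roughly as follows. This is only a minimal illustration: the random-walk `prices` array is a stand-in for real quotes, and the `window` and `threshold` values are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for real price history: a plain random walk (assumption).
prices = np.cumsum(rng.normal(size=5000))

def count_similar_segments(prices, window=100, threshold=0.8):
    """Count past windows whose Pearson correlation with the most
    recent `window` bars exceeds `threshold`."""
    target = prices[-window:]
    hits = 0
    for start in range(len(prices) - 2 * window):
        segment = prices[start:start + window]
        r = np.corrcoef(target, segment)[0, 1]
        if r > threshold:
            hits += 1
    return hits

print(count_similar_segments(prices))
```

The count depends strongly on the window length and the data, so this only shows the mechanics of the check, not the quoted "10-15 over 10 years" figure.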

If we start from similarity, the strategy has to be built in a completely different way. I will think about it and present my thoughts.

What do you mean by that?

And where is the boundary of unambiguity?

Yes, I forgot to highlight it in your post.

So a value of 0.99 unambiguously indicates that the process tends to continue moving in the current direction. It is another matter if our Hurst value is local. Then it can itself change at any moment, and the forecasts will change accordingly.

So I asked you: how do we define this unambiguity? Where is the boundary between unambiguous and not-so-unambiguous?

 

About unambiguity. Only a Hurst value of 1.0 or 0.0 can provide it. The opposite condition is complete uncertainty (i.e. the same probability for all outcomes), which, as you know, is 0.5. In between there is only a probability measure of one outcome or the other. So there is no boundary. Unambiguity itself is the boundary, a limiting state, just as much a boundary as 0.5 is.

And the physical interpretation arises precisely from understanding these limit states. A value of 0.7 says the future is likely to be similar, 0.8 says it will be similar, 0.9 says it will be very similar. Something like that.

As for Hurst's results for the Nile, in light of the facts presented in this thread it must be understood that he had very little data. Consequently he was caught in a scissors: either short intervals or small statistics. Under those conditions his estimate for a random walk would be in the region of 0.55-0.60, which is not very far from 0.7. So I would read his result as saying that the Nile floods have some measure of persistence, but not a large one. That is, it is definitely not a random walk, but neither is it so far from one that you can count on long trends.

I won't say anything about Aswan. It is a living refutation of Hurst's law, and we don't like that.
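The small-sample point above can be illustrated numerically: the classic R/S estimate computed on short windows is biased upward even for a pure random walk. A minimal sketch, where the series length and window sizes are arbitrary assumptions:

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Classic R/S estimate of the Hurst exponent from increments x."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_vals = []
        # Non-overlapping chunks of length n.
        for start in range(0, len(x) - n + 1, n):
            chunk = x[start:start + n]
            dev = np.cumsum(chunk - chunk.mean())   # cumulative deviation
            r = dev.max() - dev.min()               # range R
            s = chunk.std()                         # scale S
            if s > 0:
                rs_vals.append(r / s)
        log_n.append(np.log(n))
        log_rs.append(np.log(np.mean(rs_vals)))
    # Slope of log(R/S) vs log(n) is the Hurst estimate.
    slope, intercept = np.polyfit(log_n, log_rs, 1)
    return slope

rng = np.random.default_rng(42)
steps = rng.normal(size=1000)           # increments of a pure random walk
h = hurst_rs(steps, [8, 16, 32, 64, 128])
print(round(h, 2))
```

For white-noise increments (true H = 0.5) this short-window estimate typically lands noticeably above 0.5, which is exactly the "0.55-0.60 for a random walk" effect mentioned above.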

 
Yurixx:

About unambiguity. Only a Hurst value of 1.0 or 0.0 can provide it. The opposite condition is complete uncertainty (i.e. the same probability for all outcomes), which, as you know, is 0.5. In between there is only a probability measure of one outcome or the other. So there is no boundary. Unambiguity itself is the boundary, a limiting state, just as much a boundary as 0.5 is.

And the physical interpretation arises precisely from understanding these limit states. A value of 0.7 says the future is likely to be similar, 0.8 says it will be similar, 0.9 says it will be very similar. Something like that.

As for Hurst's results for the Nile, in light of the facts presented in this thread it must be understood that he had very little data. Consequently he was caught in a scissors: either short intervals or small statistics. Under those conditions his estimate for a random walk would be in the region of 0.55-0.60, which is not very far from 0.7. So I would read his result as saying that the Nile floods have some measure of persistence, but not a large one. That is, it is definitely not a random walk, but neither is it so far from one that you can count on long trends.

I won't say anything about Aswan. It is a living refutation of Hurst's law, and we don't like that.

Only one thing bothers me: the power-law dependence obtained (the formula). And it does not matter here whether the exponent is 0.5 or 0.7; either way, a long-term "forecast" based on it leads to results that are clearly not credible. However, that is beside the point; let's not waste time on it.

OK, as I wrote above, I take a fractal characteristic, self-similarity, as the basis, and do not use R/S analysis as a tool. I'll keep thinking.

 
Farnsworth:

The market, by contrast, is a very weakly self-similar process, with a value almost equal to 0.5 or only slightly higher. Take any price segment, for example: you will not find an identical one in the history, and even similar ones are a problem to find. This is easy to check with correlation. A segment with correlation around 0.8-0.9 is a rarity, literally 10-15 of them over 10 years. For this (and other) reasons wave analysis never works: there is no similarity, especially over long stretches.

If we start from resemblance, the strategy has to be built in a completely different way. I'll think about it and present my thoughts.

Here you are talking about literal similarity, essentially about patterns.

In reality one can talk about similarity not of local characteristics but of their average values (for example, see the High-Low and |Open-Close| data in this thread on p. 14). However, my experience with statistics has made me somewhat skeptical about the possibility of deriving a trading system from it. The confidence intervals, you see, always turn out wrong, and I am beginning to suspect a fundamental law behind this.

 
Candid:

However, my experience with statistics has made me somewhat skeptical about the possibility of deriving a trading system from it. The confidence intervals, you see, always turn out wrong, and I am beginning to suspect a fundamental law behind this.


I would like to know which statistics, and for which quantities, we are talking about. There are no normal distributions in the market. All distributions of all quantities are non-normal. :-)

And they really are all non-normal. So the three-sigma rule will not work; what will? How do you determine a confidence interval when the distributions are unknown and long, heavy tails stick out everywhere?

And finally. Of course few people know statistics well, but in Forex there is no escaping it. Therefore it is unrealistic, imho, to find something for which statistics gives good confidence intervals and which is also stable. That is the fundamental law: professionals are left picking up ever more subtle manifestations of stationarity, even short-term ones.

 
Yurixx:


I would like to know which statistics, and for which quantities, we are talking about. There are no normal distributions in the market. All distributions of all quantities are non-normal. :-) And they really are all non-normal.

Well, you know perfectly well that I work almost exclusively with empirical distributions. So I couldn't care less whether they are normal or abnormal, whatever they may be. To paraphrase a famous character: we have no other distributions for you :)

So the three-sigma rule will not work; what will? How do you determine a confidence interval when the distributions are unknown and long, heavy tails stick out everywhere?

No problem at all. Take the interval into which, say, 90% of the events fall; that is the empirical 90% confidence interval :)

And finally. Of course few people know statistics well, but in Forex there is no escaping it. Therefore it is unrealistic, imho, to find something for which statistics gives good confidence intervals and which is also stable. That is the fundamental law: professionals are left picking up ever more subtle manifestations of stationarity, even short-term ones.

That's what I didn't get. Were you just trying to justify my fundamental hypothesis?

By the way, have you tried to ponder to what extent we can talk about short-term (local) manifestations of such a non-local characteristic as stationarity?

 
Candid:
You well know that I work almost exclusively with empirical distributions.
No problem at all. Take an interval in which, say, 90% of events fall, it will be an empirical 90% confidence interval :)

Well, that's a different matter then. That I understand. I support that. I'm like that myself. :-)

Candid:

That's what I don't get. Were you just trying to justify my fundamental hypothesis?

By the way, haven't you tried to ponder to what extent we can talk about short-term (local) manifestations of such non-local characteristic as stationarity?

Exactly. I, too, tried to find something of the kind. Then I realized that "everything was stolen before us". I had to console myself with something, so I took my cue from the professionals.

I have long wanted to look into this question of the lifetime of stationarity, but I still cannot formulate a correct statement of the problem. It is an interesting topic, though. It came up on the forum some time ago, but without any particular result.

 

While Sergey is thinking, I'll go off-topic :).

I've calculated the distribution of pullbacks


The most probable value is 0.23. Fibo, by the way :). But there are no other levels near it.

And what about the confidence interval? 90% of pullbacks fell into the interval from 0.11 to 0.6. So we can be 90% sure that the 23% pullback is complete only once it has passed the 0.6 level :)
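For what it's worth, one crude way to collect such pullback statistics can be sketched as follows. This is my own guess at the construction, not necessarily what was actually done: run a threshold zigzag over the series, take the ratio of each swing to the previous one, keep the ratios below 1, and read off the empirical percentiles. The random-walk data and the threshold are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
prices = np.cumsum(rng.normal(size=20_000))   # stand-in for real quotes

def zigzag_pivots(p, threshold):
    """Indices of swing extremes: a pivot is fixed once price retraces
    more than `threshold` from the running extreme."""
    pivots = []
    direction = 1                  # assume we start in an up-swing
    ext_val, ext_idx = p[0], 0
    for i, x in enumerate(p):
        if direction == 1:
            if x > ext_val:
                ext_val, ext_idx = x, i
            elif ext_val - x > threshold:
                pivots.append(ext_idx)
                direction, ext_val, ext_idx = -1, x, i
        else:
            if x < ext_val:
                ext_val, ext_idx = x, i
            elif x - ext_val > threshold:
                pivots.append(ext_idx)
                direction, ext_val, ext_idx = 1, x, i
    return np.array(pivots)

piv = zigzag_pivots(prices, threshold=3.0)
swings = np.abs(np.diff(prices[piv]))     # sizes of successive swings
ratios = swings[1:] / swings[:-1]
ratios = ratios[ratios < 1.0]             # keep only retracing swings
lo, med, hi = np.percentile(ratios, [5, 50, 95])
print(f"90% of pullbacks lie in [{lo:.2f}, {hi:.2f}], median {med:.2f}")
```

Even on a plain random walk the resulting interval is wide, which matches the point of the joke above.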

 

What do you call a pullback? I mean, in itself it's clear, but it's not clear how the distribution is formed. Any ZigZag segment is a pullback relative to the previous one. But you seem to take only those segments that are smaller than the previous one and compute their ratio. Or what?

The last sentence is not clear to me either. Or is it a joke?

You didn't, by any chance, build the distribution of the |Open-Close|/(High-Low) ratio? It doesn't spread out with time, so it can (or even should) be stationary. Besides, it lies entirely on the interval [0,1].

 
Yurixx:

1. What do you call a pullback? I mean, in itself it's clear, but it's not clear how the distribution is formed. Any ZigZag segment is a pullback relative to the previous one. But you seem to take only those segments that are smaller than the previous one and compute their ratio. Or what?

2. The last sentence is not clear to me either. Or is it a joke?

3. Did you, by any chance, build the distribution of the |Open-Close|/(High-Low) ratio? It doesn't spread out with time, so it can (or even should) be stationary. Besides, it lies entirely on the interval [0,1].

1. By a pullback I mean any reversal during the formation of a High-Low segment that does not lead to a switch of its direction. That is the restriction from above. I also restricted it from below, simply so as not to bother with noise. In general this distribution is approximate; strictly speaking, pullback statistics should probably be collected somewhat differently. But is that really necessary? :)

2. If we suppose that the true value of any pullback is 23%, then we can find the level beyond which, with 90% confidence, the pullback will not go. How seriously to take this assumption is up to you :)

3. No, I didn't. Can you suggest a reason to build it? It wouldn't take long, but what trading ideas could you test by looking at it?
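Since it is indeed quick to build, here is a sketch of that ratio. The bars here are an assumption: they are formed from a random-walk tick stream, and with real OHLC data the first few lines would be replaced by loading quotes.

```python
import numpy as np

rng = np.random.default_rng(3)
# 1000 synthetic bars of 100 "ticks" each, built from a random walk.
ticks = np.cumsum(rng.normal(size=100_000)).reshape(1000, 100)
o, c = ticks[:, 0], ticks[:, -1]
hi, lo = ticks.max(axis=1), ticks.min(axis=1)

ratio = np.abs(o - c) / (hi - lo)   # lies in [0, 1] by construction
hist, edges = np.histogram(ratio, bins=20, range=(0.0, 1.0))
print(f"mean {ratio.mean():.2f}, share above 0.8: {np.mean(ratio > 0.8):.2%}")
```

Whether the shape of this histogram suggests any trading idea is exactly the open question posed above; the sketch only shows that collecting it is cheap.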