Volumes, volatility and Hurst exponent - page 31

 
Vita:

Farnsworth 18.09.2010 22:08

A good definition of self-similarity has already been given:

He did give it, but it had no effect on the further discussion.

Examples with geometric similarity help illustrate the point of Hurst as a self-similarity coefficient. For example, one can give a geometric interpretation of R/S analysis: take a ruler of size 1 and measure R/S with it, then take a ruler of size 2 and repeat the measurement, and so on, for as long as it makes sense. In effect, this is how the equality of distributions is assessed, and the self-similarity coefficient is computed along the way.
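The ruler procedure described above can be sketched in code. This is a minimal illustration, not anyone's implementation from this thread; all names are mine, and for brevity the series is plain Gaussian noise rather than real quotes. R/S is measured at several window ("ruler") sizes, and the Hurst estimate is the slope of log(R/S) against log(n).

```python
import random
import math

def rescaled_range(x):
    """R/S statistic of one window: range of cumulative deviations
    from the mean, divided by the standard deviation."""
    n = len(x)
    mean = sum(x) / n
    dev = [xi - mean for xi in x]
    cum, s = [], 0.0
    for d in dev:
        s += d
        cum.append(s)
    r = max(cum) - min(cum)
    std = math.sqrt(sum(d * d for d in dev) / n)
    return r / std if std > 0 else 0.0

def hurst_rs(series, sizes):
    """Average R/S for each 'ruler' size n, then fit log(R/S) ~ H * log(n)."""
    pts = []
    for n in sizes:
        windows = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = [rescaled_range(w) for w in windows]
        pts.append((math.log(n), math.log(sum(rs) / len(rs))))
    # least-squares slope of the log-log points
    k = len(pts)
    mx = sum(x for x, _ in pts) / k
    my = sum(y for _, y in pts) / k
    return (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, _ in pts))

random.seed(1)
increments = [random.gauss(0, 1) for _ in range(4096)]
h = hurst_rs(increments, [8, 16, 32, 64, 128, 256])
print(round(h, 2))  # near 0.5 for independent increments
                    # (small-window bias pushes it somewhat higher)
```

Note that on short windows the raw R/S estimate is known to be biased upward, which is one reason a single measurement with one ruler says little by itself.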

I have a slightly different "geometric" impression, namely that for a series of size 1 you take a ruler of size 1, for a series of size 2 a ruler of size 2, and so on.

In any case, I would very much like you, Candid, to give your geometric interpretation - to show me in pictures, so to speak - of what the geometric meaning of such a definition is:

Personally, I see that the Hurst exponent as a self-similarity coefficient has, in the above definition, been reduced to a single measurement of an R/S-like characteristic with a ruler of infinite length. Obviously, series that do not have an infinite normalized range would, by such a definition, have a Hurst exponent of zero. What is your opinion?

It is not quite logical, of course, to ask for help with a geometric interpretation from a person who has just tried to take the discussion beyond it :).

I'm afraid I'll disappoint you, but I can't offer any geometric interpretation beyond the abundant R/S diagrams found in the literature. I think it is clear from them that the Hurst exponent can only be a limiting characteristic.

In general, I have never positioned myself as a specialist in R/S analysis; on the contrary, I have long and repeatedly said that I have always neglected it because of its computational "heaviness" and hence the impracticality (at least for me personally) of any representative testing. So I advise you not to look for self-evident truths in my interpretations.
 
FreeLance:

It's not as if I was addressing you personally. But since you replied - the contrived self-identification did not work :)

As for the question, it is not about errors in interpreting the results of analyzing the same process (faa1947 kindly demonstrates such hasty conclusions: having deleted every second observation, he expects the period, measured in samples, to stay the same), but about the very fact of cyclicality in the moving average of a cumulative random series.

This is what prevents me from understanding the quotation process itself and the resulting price trajectory.

And if the alleged geometric random walk of a quote is the result of a sum of random processes (smoothed by dealing-center filters and coarsened by timeframe discretization), then how is this consistent with the uniform (and ultimately Gaussian) distributions of some popular models?

By the way, the "trend-wave-noise" model does not hold for forex over a "very long period" - there cannot be a trend here by definition.

Gold, oil, sugar - a trend is needed there. To estimate inflation...

;)


As I "think", the fact of cyclicality is exactly what I wrote about: a slight difference in the integral characteristics under a shift. In fact, the same sample is being evaluated, and it is clear that it correlates well with itself, so pseudo-cycles will appear.


As for the quotation process - I don't know what it is either. The only thing is that I have found a good approximation for modelling it.

 
Candid:

The tenacity with which many people try to interpret similarity solely as geometric similarity is truly amazing, despite the perfectly concrete example of similarity already given - I mean the statistical ratio of High-Low to |Close-Open|. That is the real similarity. By the way, Yuri, your ZZ example may be even better, but it seems to come from private correspondence, so I won't bring it here.
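The statistical High-Low to |Close-Open| ratio mentioned here is scale-invariant for a pure random walk, which is easy to check numerically. A minimal sketch under that assumption (all names are illustrative; the bars are built from a simulated tick series, not real quotes):

```python
import random

def make_bars(ticks, bar_size):
    """Group a tick series into (Open, High, Low, Close) bars."""
    bars = []
    for i in range(0, len(ticks) - bar_size + 1, bar_size):
        chunk = ticks[i:i + bar_size]
        bars.append((chunk[0], max(chunk), min(chunk), chunk[-1]))
    return bars

def range_to_body(bars):
    """Mean High-Low range divided by mean |Close-Open| body."""
    mean_range = sum(h - l for o, h, l, c in bars) / len(bars)
    mean_body = sum(abs(c - o) for o, h, l, c in bars) / len(bars)
    return mean_range / mean_body

# simulate a random-walk "price"
random.seed(3)
price, ticks = 0.0, []
for _ in range(200000):
    price += random.gauss(0, 1)
    ticks.append(price)

r_small = range_to_body(make_bars(ticks, 50))   # fine bars
r_large = range_to_body(make_bars(ticks, 500))  # 10x coarser bars
print(round(r_small, 2), round(r_large, 2))
```

For Brownian motion the expected range is about twice the expected |net move| at any scale, so both printed ratios come out close to the same constant. This is the sense in which the bar shape is statistically similar across timeframes.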

Another remarkable example of incomprehensible stubbornness is the demand that ideal fractals be present in real series.

By the way, perhaps the patterns are just segments of "almost undisturbed" fractal development. Which, of course, cannot last long.

I also do not think it is correct to compare minutes to days. For example, I have almost 4 million minute bars on the euro, and only 3316 daily bars. I'm quite sure I can find many very similar areas in the minute history.

Even the recent offtopic about the pullback distribution is actually not offtopic at all, but an example of real similarity. The price went 100 pips, pulled back 23%, then went another 50 pips (150 in total) and pulled back 23% again - isn't that similarity?

I suggest that arguments like "real trees are different from fractal trees, therefore we don't need the science of fractals" no longer be considered.

In other words, that classical definition the "classics" keep telling us about, drawing their snowflakes and such - we see none of that at the level of numbers. Instead we have the "statistical High-Low to |Close-Open| ratio", which can be explained by ordinary Brownian motion. And the 23% pullback is incomprehensible to me personally. OK, I'll set it aside.
 
Farnsworth:

As it seems to me, the fact of cyclicality is exactly what I've written about: a slight difference in the integral characteristics under a shift. In fact, the same sample is being evaluated, and it is clear that it correlates well with itself, so pseudo-cycles will appear.

So the series in Slutsky's case are independent, right? Or am I confusing something?

As for the quotation process - I don't know what it is either. The only thing is that I have found a good approximation for modelling it.

Maybe that's yet another fascination... Indeed, without a model of the process (including the distributions used), so far I have not been able to prove or disprove anything.

And so it turns out - admiring the stats. And not even on a demo or in a tester. In Matlab... :о)

I would like to be wrong.

;)

I sincerely wish you good luck.

 
HideYourRichess:
In other words, that classical definition the "classics" keep telling us about, drawing their snowflakes and such - we see none of that at the level of numbers. Instead we have the "statistical High-Low to |Close-Open| ratio", which can be explained by ordinary Brownian motion. And the 23% pullback is incomprehensible to me personally. OK, I'll set it aside.
Well again, just compare a real tree to a fractal tree. Very specific conditions are required to grow near-perfect objects, and the probability of such conditions persisting for any length of time in real life is negligible.
 
FreeLance:

So the series in Slutsky's case are independent, right? Or am I confusing something?


You wrote about the Slutsky effect, if I'm not mistaken. At least that is how it read - as a question. The effect is that strong correlations and pseudo-cycles appear in aggregated data, particularly in a moving average. These "dependencies" appear even in aggregated data from random series, where in principle they should not exist. I was, in a way, asked about it, and I gave my own explanation.
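The effect described above is easy to reproduce: take pure white noise, smooth it with a moving average, and the smoothed series shows strong autocorrelation (and hence apparent cycles) even though the raw data has none. A minimal sketch, with illustrative names only:

```python
import random

def moving_average(x, w):
    """Simple moving average with window w."""
    return [sum(x[i:i + w]) / w for i in range(len(x) - w + 1)]

def autocorr(x, lag):
    """Sample autocorrelation of x at the given lag."""
    n = len(x)
    m = sum(x) / n
    var = sum((xi - m) ** 2 for xi in x)
    cov = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag))
    return cov / var

random.seed(7)
noise = [random.gauss(0, 1) for _ in range(5000)]  # independent samples
smoothed = moving_average(noise, 20)

a_raw = autocorr(noise, 5)    # near zero: the raw noise has no memory
a_sm = autocorr(smoothed, 5)  # strongly positive: averaging creates it
print(round(a_raw, 2), round(a_sm, 2))
```

Overlapping windows share most of their samples, so adjacent moving-average values are almost forced to agree; that correlation, not any real cycle in the data, is what produces the wave-like look of the smoothed series.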

Maybe it's yet another fascination... Indeed, without a model of the process (including the distributions used), so far it has not been possible to prove or disprove anything.

I have written which model of the process I use. It is quite adequate to reality. As for the "bulls"/"bears" nonsense and the like - I don't believe in it. It's not even a fascination - it's nonsense.

And so it turns out - admiring the stats. And not even on a demo or in a tester.

I am writing up a list of problems. But why would you need to read it? Don't bother! You would rather write all that stuff about admiring the stats, playing the armchair psychologist :o)

In Matlab... :о)

All things considered, the stats in MT will be the same, don't worry. Besides, I'm tirelessly "practising" :o)

I would like to be wrong.

If you really want to, you're welcome to be wrong, I'm not against it :o)

I sincerely wish you good luck.

Likewise :o)
 
Candid:

Truly amazing is the persistence with which many try to interpret similarity solely as geometric similarity.

I interpret similarity as the similarity of the models that form the object and the starting conditions.

 
Farnsworth:

That is not how it is written at all, and it has been understood somewhat wrongly. ARPSS (the Russian acronym for ARIMA) is essentially an AR model with a covariance-matrix correction. There are components that extend ARPSS - you can include a trend model(!), a structural-break model(!), many things. Why are you telling me about it? Do you think I know nothing about it? I am writing about something else - I do not apply these models directly to quotes. It makes no sense. I was writing about using stochastic systems with random structure. That's it - what are you arguing with? That you can apply them to quotes? ARPSS on quotes? Congratulations!

It's the maths that doesn't work in this case - none of the necessary conditions are met. Well yes, QUALIFICATION - who's arguing with that.

Who was just reasoning? What results are there to share? Right here: https://forum.mql4.com/ru/34527/page27 I gave the result of testing in pips, so far in MathCAD: 25 trades in 150 days. And in the online systems testing thread I did some forecasting.

PS: If you can apply ARPSS to quotes and correctly identify the process - show your skills.


You're being very aggressive. I never argue. Thank you for your posts about me.
 
faa1947:

You're being very aggressive. I never argue. Thank you for your posts about me.
No, I'm not. I'm nice. Honest! It's an axiom. :о) And thank you very much!
 

Candid:
He did give it, but it had no effect on the further discussion - and it is a great pity, in my view, when the correct definition, one might say the very essence of what is studied in the question of self-similarity, has no effect, at least on the calculation of the coefficient itself. "I have a slightly different 'geometric' impression, namely: for a series of size 1 a ruler of size 1 is taken, for a series of size 2 a ruler of size 2, etc." - most likely that is not so, if by "a different series size" you mean "a different series". The point is that the series remains the same.

There is a geometric interpretation - coastline length. We always measure the same series, the same coastline. The fun part is that as we increase the precision of the ruler, we get ever greater coastline lengths. Do you understand how crude an estimate of the coastline's self-similarity we would get by measuring with a single ruler of any one length, let alone an infinite one? All these measurements of the same coastline (series) with rulers of different lengths are needed to increase the accuracy of the estimate. If there is similarity at every scale level, then all the points will lie on one straight line.
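The "many rulers, one straight line" idea can be sketched numerically: sample the same series with coarser and coarser rulers, measure its "length" at each scale, and check that the points fall on one line in log-log coordinates, whose slope gives the self-similarity exponent. All names below are mine; for a random walk (H = 0.5) the slope of log(length) versus log(ruler) should come out near H - 1 = -0.5.

```python
import random
import math

# one fixed "coastline": a random-walk path
random.seed(5)
walk, p = [], 0.0
for _ in range(100000):
    p += random.gauss(0, 1)
    walk.append(p)

def measured_length(series, ruler):
    """'Coastline length': total variation of the series
    sampled every `ruler` points."""
    pts = series[::ruler]
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))

scales = [1, 2, 4, 8, 16, 32, 64]
logs = [(math.log(s), math.log(measured_length(walk, s))) for s in scales]

# least-squares slope of log(length) vs log(ruler)
k = len(logs)
mx = sum(x for x, _ in logs) / k
my = sum(y for _, y in logs) / k
slope = (sum((x - mx) * (y - my) for x, y in logs)
         / sum((x - mx) ** 2 for x, _ in logs))
print(round(slope, 2))  # for a random walk, close to H - 1 = -0.5
```

A single ruler gives one point, from which no line (and hence no exponent) can be read; it is the agreement of many rulers on one line that demonstrates self-similarity.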