Volumes, volatility and Hurst index - page 30

 
Farnsworth:

You can't isolate the noise in a quote series - you probably don't understand this because you haven't tried it. And no ARPSS will help you with quotes, and you will never find those plots. If only there were more of us millionaire smart guys walking around, there wouldn't be enough castles for everybody. :о) To isolate the noise means to find an adequate model.

I think Prival is in this thread. The passages about the Kalman filter, for example, also referred to this. As far as I understand it, ideally the noise should be normal. Then it would be possible to predict not only the trajectories of enemy aircraft but also the quotes :)
 
Farnsworth:

I am not a scientist

It's not like I was addressing you personally. But since you replied - the contrived self-identification did not work :)

As for the question, it is not about errors in interpreting the results of analysing the same process (the sort of hasty conclusion faa1947 kindly demonstrates - after deleting every second observation he still expects the period, counted in samples, to be preserved), but about the very fact that a moving average of the sum of random series turns out to be cyclical.

This is what prevents me from understanding the quote-generation process itself and the resulting price trajectory.
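A minimal sketch of that cyclicality in Python (my own illustration, not code from the thread): a moving average applied to the running sum of i.i.d. random numbers develops smooth, wave-like swings even though nothing periodic was put in - often called the Slutsky-Yule effect.

```python
# Moving average of the cumulative sum of pure noise: the smoothed curve
# turns far less often than the raw walk, so it looks "cyclical".
import numpy as np

rng = np.random.default_rng(0)

increments = rng.normal(size=5000)          # random series (pure noise)
walk = np.cumsum(increments)                # its running sum (a random walk)

window = 50                                 # moving-average period
kernel = np.ones(window) / window
smoothed = np.convolve(walk, kernel, mode="valid")

# Count sign changes of the slope: far fewer after smoothing, although
# nothing periodic was put into the series.
slope = np.diff(smoothed)
turns_smoothed = np.sum(np.sign(slope[1:]) != np.sign(slope[:-1]))
raw_slope = np.diff(walk)
turns_raw = np.sum(np.sign(raw_slope[1:]) != np.sign(raw_slope[:-1]))
print(f"turning points: raw walk {turns_raw}, smoothed {turns_smoothed}")
```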

And if the alleged geometric random walk of a quote is the result of a series of random processes (smoothed by dealing-centre filters and coarsened by the discretisation into timeframes), then how is this consistent with the uniform distribution (and ultimately the Gaussian) of some popular models?

By the way, the "trend + wave + noise" model over a "very long period" does not hold for forex - there cannot be a trend there by definition.

Gold, oil, sugar - a trend is needed there. To estimate inflation...

;)


 
Mathemat:
I think Prival is in this thread. The passages about the Kalman filter, for example, also referred to this. As far as I understand it, ideally the noise should be normal. Then it would be possible to predict not only the trajectories of enemy aircraft but also the quotes :)

The ARPSS model is written as ARPSS(p, d, q), where d is the order of differencing. Differences have to be taken until the resulting series is normal. It is argued that d = 2 is sufficient.
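ARPSS is the Russian acronym for ARIMA, so the d above is ARIMA's order of differencing. A minimal sketch of what taking those differences looks like (my own Python illustration; note that in the standard ARIMA literature the usual criterion is stationarity of the differenced series rather than normality, so both checks are shown):

```python
# Difference a series d times and test the result at each step.
import numpy as np
from scipy import stats
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(1)
series = np.cumsum(np.cumsum(rng.normal(size=2000)))   # an I(2) series: integrated twice

x = series
for d in range(3):                                      # try d = 0, 1, 2
    adf_p = adfuller(x)[1]                              # ADF unit-root test p-value
    norm_p = stats.normaltest(x)[1]                     # D'Agostino normality test p-value
    print(f"d = {d}: ADF p = {adf_p:.3f}, normality p = {norm_p:.3f}")
    x = np.diff(x)                                      # take the next difference
```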
 
Candid:

The persistence with which many people try to interpret similarity solely as geometric similarity is truly amazing, despite the perfectly specific example of similarity already given - I mean the statistical relation between High-Low and |Close-Open|. That is the real similarity. By the way, Yuri, your ZZ example might be even better, but it seems to be from a personal account, so I don't cite it here.


Farnsworth 18.09.2010 22:08

already cited a good definition of self-similarity:

== equality of finite-dimensional distributions

Examples with geometric similarity help to clearly understand the point of Hurst as a self-similarity coefficient. For example, we can give a geometric interpretation of R/S analysis - take a ruler of size 1, measure R/S with it, take a ruler of size 2, and repeat the measurement. And so on, as long as it is relevant. Actually, in this way, the equality of distributions is evaluated and the self-similarity coefficient is calculated in the process.
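A rough sketch of that "ruler" procedure in Python (a plain textbook R/S estimator, my own illustration rather than anyone's code from the thread): the rescaled range R/S is measured on non-overlapping windows of growing length n, and the slope of log(R/S) against log(n) is taken as the Hurst estimate.

```python
import numpy as np

def rescaled_range(x):
    """R/S of one window: range of the cumulative deviations over the std."""
    dev = x - x.mean()
    z = np.cumsum(dev)
    r = z.max() - z.min()
    s = x.std(ddof=1)
    return r / s if s > 0 else np.nan

def hurst_rs(series, window_sizes):
    logs_n, logs_rs = [], []
    for n in window_sizes:                       # the "ruler" of size n
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        rs = np.nanmean([rescaled_range(np.asarray(c)) for c in chunks])
        logs_n.append(np.log(n))
        logs_rs.append(np.log(rs))
    slope, _ = np.polyfit(logs_n, logs_rs, 1)    # slope ~ Hurst exponent
    return slope

rng = np.random.default_rng(2)
increments = rng.normal(size=4096)               # i.i.d. noise: expect H near 0.5
print("H estimate:", hurst_rs(increments, [8, 16, 32, 64, 128, 256]))
```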

In any case, I would very much like you, Candid, to give your geometric interpretation - to show in pictures, so to speak - of what the geometric meaning of the following definition is:

The Hurst index is a limiting measure. It is defined as the limit, the asymptote to which h in the well-known formula for the normalized range tends as the number of samples in the interval increases to infinity.

Personally, I see that Hurst, the self-similarity coefficient, has in the above definition been reduced to a single measurement of an R/S-like characteristic with a ruler of infinite length. Obviously, series whose normalized range does not grow to infinity would, by such a definition, have a Hurst coefficient equal to zero. What is your opinion?
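For reference, the limit definition being debated here is usually written as follows, where R(n) is the range of the cumulative deviations over n samples and S(n) their standard deviation (standard notation, not a quote from the thread):

```latex
% R/S grows as a power of the window length n; the Hurst exponent is the
% limiting value of that power as n goes to infinity.
\[
  \frac{R(n)}{S(n)} \sim c\, n^{H} \quad (n \to \infty),
  \qquad
  H = \lim_{n \to \infty} \frac{\ln\bigl(R(n)/S(n)\bigr)}{\ln n}.
\]
```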

 
faa1947:

If you use ARPSS, I don't get it. The premise of ARPSS is: trend + wave + noise.

That is not how it is written at all, and it is understood a little differently. ARPSS is essentially an AR model with a correction via the covariance matrix. There are components that extend ARPSS - you can include a trend model(!), a structural-break model(!), many things. So what are you telling me? Do you think I know nothing about it? I am writing about something else - I do not apply these models directly to quotes. It makes no sense. I was writing about using stochastic systems with a random structure. That's it - what are you arguing with? That you can apply them to quotes? ARPSS on quotes? Congratulations!
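A minimal sketch of "ARPSS plus extensions" in that spirit (statsmodels is my own assumed tool choice, not something anyone in the thread used): an ARIMA(p, d, q) fitted together with a deterministic trend component.

```python
# ARIMA(1, 1, 1) with a deterministic linear trend on a synthetic series.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
n = 500
trend = 0.05 * np.arange(n)                       # deterministic trend component
noise = np.cumsum(rng.normal(scale=0.5, size=n))  # integrated noise
y = trend + noise

model = ARIMA(y, order=(1, 1, 1), trend="t")      # AR(1), one difference, MA(1), linear trend
result = model.fit()
print(result.summary())
```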

Or qualifications, qualifications first.

It's maths that doesn't work in this case - none of the necessary conditions are met. Well, yes, qualification - who can argue with that.

Much speculation on the subject, but nothing concrete. Perhaps you could share the results?

Who was arguing about that? What results are there to share? Right here: https://forum.mql4.com/ru/34527/page27 I gave the result of testing in pips, so far in MathCAD - 25 trades in 150 days. And in the online systems-testing thread I did some forecasting.

PS: If you can apply ARPSS to quotes and correctly identify the process - show your skills.

 
Mathemat:
I think Prival is in this thread. This included passages about the Kalman filter, for example. As far as I understand it, ideally the noise should be normal. Then it would be possible to predict not only the trajectories of enemy aircraft but also the quotes :)

Yes, I remember. But you can't apply the Kalman filter to quotes, unfortunately. I mean, you can apply it, but what's the point? :о) Otherwise they would have been shooting bars in the left eye long ago :o)
 

No, no, it's not that simple. Privalych himself said that Kalman does not depend on the distribution of errors. Whatever you put in there, that's how the filter comes out.

Honestly, I don't know what Kalman is. I've never been interested in filters in business.
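To make the Kalman point concrete, here is a minimal scalar (local-level, random-walk-plus-noise) filter in Python - my own sketch, not Prival's code. The recursion uses only the variances q and r, not the shape of the error distribution; it is the best linear estimator for any error distribution and fully optimal only when the errors are Gaussian.

```python
import numpy as np

def kalman_local_level(observations, q, r, x0=0.0, p0=1.0):
    """Filter y_t = x_t + v_t, x_t = x_{t-1} + w_t with Var(w)=q, Var(v)=r."""
    x, p = x0, p0
    estimates = []
    for y in observations:
        # predict: the hidden level is a random walk
        p = p + q
        # update: blend prediction and observation via the Kalman gain
        k = p / (p + r)
        x = x + k * (y - x)
        p = (1.0 - k) * p
        estimates.append(x)
    return np.array(estimates)

rng = np.random.default_rng(4)
true_level = np.cumsum(rng.normal(scale=0.1, size=300))
observed = true_level + rng.normal(scale=1.0, size=300)
filtered = kalman_local_level(observed, q=0.01, r=1.0)
print("RMSE raw:     ", np.sqrt(np.mean((observed - true_level) ** 2)))
print("RMSE filtered:", np.sqrt(np.mean((filtered - true_level) ** 2)))
```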

 
Vita:

Farnsworth 18.09.2010 22:08

Has already given a good definition of self-similarity:

Examples with geometric similarity help to illustrate the point of Hurst as a self-similarity coefficient. For example, you can give a geometric interpretation of R/S analysis - take a ruler of size 1, measure R/S with that ruler, take a ruler of size 2 and repeat the measurements. And so on, as long as it is relevant. In fact, in this way, the equality of distributions is assessed and the self-similarity coefficient is calculated in the process.


Do you only associate geometry with the presence of a ruler? :о) That was sort of a joke. The way I see it, it's a bit different, but I won't argue. I've had enough of ARPSS since 1976.
 
faa1947:

The ARPSS model is written as ARPSS(p, d, q), where d is the order of differencing. Differences must be taken until the resulting series is normal. It is stated that d = 2 is sufficient.
Have a nice time. :о)
 
Mathemat:

No, no, it's not that simple. Privalych himself said that Kalman does not depend on the distribution of errors. Whatever you put in there, that's how the filter comes out.

Honestly, I don't know what Kalman is. I've never been interested in filters in business.

Alexey - ask Kalman, I assure you he knows better.