[Archive!] Pure mathematics, physics, chemistry, etc.: brain-training problems not related to trade in any way - page 388

 

FreeLance:

<0.5 is even worse - noise wanders randomly :)

If it's less than 0.5 and that is known to hold persistently, it's a miracle. It isn't "noise" at all, and it doesn't mean the process will tend (statistically) to return to its mean. Take a sine wave and mix in a little noise: you get values like 0.1 or 0.2.

At 0.5 it is a random walk, and it too can wander around its mean or away from it; it all depends on the stationarity of the process. (Hurst says nothing explicit about stationarity, apart from a few "hints".)

>0.5 there is hope for memory...

It has been proven that increment processes with heavy tails have long-term memory. You don't need to look at the Hurst curve for that. (And that hope appears at H > 0.5.)

 
Farnsworth:
To be honest, I don't understand what you're doing or what the point of it all is?
Neither do they know what they are doing or why they need it. From time to time the nerds simply remember Hurst and start discussing him amicably. Then they forget about Hurst and bring up fat tails, and so on and so forth.
 
Candid:
Er ... is that in the broad sense or the narrow sense? :)
Both. Well, I'll leave it at that; I know it takes a long time to write all this up.
 
Reshetov:
Neither do they know what they are doing or why they need it. From time to time the nerds simply remember Hurst and start discussing him amicably. Then they forget about Hurst and bring up fat tails, and so on and so forth.
Darn it all. I've been looking for a long time for a connection between the Hurst indicator and the future, but haven't found one. Of course, that doesn't mean there isn't one; maybe something extra needs to be taken into account.
 
FreeLance:

=0.5 is a random walk in noise.

<0.5 is even worse - the noise wanders randomly :)

>0.5 there is hope for memory...

>0.79 is typical of natural processes

It's understandable, we know. Persistence, antipersistence and so on. But what does this have to do with actually predicting the future? That's the question...

When I first learned about Hurst (from Peters) and got acquainted with his method of calculation, I concluded that a lot of data is needed for the estimate to be statistically representative. The result, if obtained at all, only makes sense for investing with a long horizon, not for speculation. That is just my imho.

There seem to be techniques for calculating it that don't require much data. That's probably essentially the locality that Farnsworth mentioned here?

I remember seeing the only practical application of Hurst a couple of years ago: simulating synthetic series with a given Hurst exponent, which are then fed to the tester's input. Something didn't work out for me and I gave it up. And then came the intuitive realization that the modelling must be "smart": the simulated synthetic series must reproduce exactly those properties of the real financial series that are exploited by the trading system itself. Arbitrary modelling without reference to the TS itself is completely meaningless. However, I have not managed to arrive at a quantitative expression of this "paradigm".

 

After all, it is strictly proven that at 0.5 the process is a random walk. In the vicinity of 0.5, obviously...

But, isn't that enough?

;)

Look for dimensional discrepancies, whatever.

And exploit it in good health.

 

Mathemat:

...

And then came the intuitive realisation that the modelling must be "smart": the simulated synthetic series must reproduce exactly those properties of the real financial series that are exploited by the trading system itself. Arbitrary modelling without reference to the TS itself is completely meaningless. However, I have not managed to arrive at a quantitative expression of this "paradigm".

sort of ==

joo:

Writers: Blok, Pushkin, Tolstoy, Lem, Sheckley. Each is unique in his own way, and from the text a reader can easily identify not only the genre of a work but also its author (this is a kind of indicator, a parameter unique to each author). Statistically, however, any sufficiently large text contains each letter of the alphabet with a roughly constant frequency: that is a statistical characteristic of the language in which the work is written. If one generates letters at random, but with those statistical characteristics preset, one can obtain a text with the right amount of information. But such a text will carry no meaning, and moreover it will be impossible to identify the author of the "work", because there isn't one.
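The argument is easy to reproduce: measure the letter frequencies of a corpus, then generate random text with those same frequencies. A minimal sketch (the toy corpus below is purely illustrative):

```python
import numpy as np
from collections import Counter

# Toy "corpus": any sufficiently long text would do.
corpus = ("it is a truth universally acknowledged that a single man in "
          "possession of a good fortune must be in want of a wife") * 50

# Empirical character frequencies of the corpus.
letters, counts = zip(*Counter(corpus).items())
probs = np.array(counts) / len(corpus)

# Generate "text" with the same statistical characteristics.
rng = np.random.default_rng(3)
fake = "".join(rng.choice(letters, size=len(corpus), p=probs))

# The frequencies agree closely, but the generated text is gibberish:
f_real = corpus.count("e") / len(corpus)
f_fake = fake.count("e") / len(fake)
print(round(f_real, 3), round(f_fake, 3))
print(fake[:60])
```

The generated string matches the language statistically, yet carries no meaning and no authorial signature, which is exactly the point being made about series that match only a statistic such as the Hurst exponent.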

 
Candid:

OK, that's my last rejoinder on this point. If you don't agree, do what you want, I'll keep quiet :)

Yes, you were right about Close-Open. I had to get into Peters' book and re-read some places. Indeed, the mean square of the distance the process travels in N steps, i.e. the variance of the random-walk path, is not the range. Einstein, Feynman and Feller worked with this variance, a perfectly well-defined concept. The range was introduced by Hurst, and the way Peters defines it makes it absolutely impossible to use in any analytical calculations.
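The well-defined quantity in question is easy to verify numerically: for a random walk with unit-variance increments, the variance of the displacement after N steps is exactly N. A quick sketch:

```python
import numpy as np

# Simulate many independent random walks of N steps each and check that
# the variance of the endpoint displacement equals N (unit-variance steps).
rng = np.random.default_rng(5)
N, paths = 1000, 10000
endpoints = rng.standard_normal((paths, N)).sum(axis=1)

print(round(endpoints.var() / N, 2))  # close to 1.0
```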

Incidentally, I have discovered (I had already forgotten this book) that Peters put a lot of effort into fitting his numerical experiments to the theoretical results. Especially since there are authors who have obtained much more complicated functions than the simple power dependence in Hurst's formula. This confirms my assumption that Hurst's formula is at best only a first approximation to the actual dependence.

PS

The number of errors (mostly in the formulas) in Peters' book makes it completely unusable. There are even more than I noted on first reading.

 
Mathemat:

It's understandable, we know. Persistence, antipersistence and so on. But what does this have to do with actually predicting the future? That's the question...

When I first learned about Hurst (from Peters) and got acquainted with his method of calculation, I concluded that a lot of data is needed for the estimate to be statistically representative. The result, if obtained at all, only makes sense for investing with a long horizon, not for speculation. That is just my imho.
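The calculation being referred to is the rescaled-range (R/S) statistic. A minimal sketch, assuming the input is the increment series and using the plain power-law fit, with none of the small-sample corrections (their absence is part of why so much data is needed):

```python
import numpy as np

def hurst_rs(x, min_n=8):
    """Rough R/S estimate of the Hurst exponent of an increment series x.
    A sketch of the classic rescaled-range method; not production code."""
    x = np.asarray(x, dtype=float)
    ns, rs = [], []
    n = min_n
    while n <= len(x) // 2:
        m = len(x) // n                      # number of full blocks of size n
        blocks = x[:m * n].reshape(m, n)
        dev = blocks - blocks.mean(axis=1, keepdims=True)
        z = dev.cumsum(axis=1)               # cumulative deviation in block
        r = z.max(axis=1) - z.min(axis=1)    # range R
        s = blocks.std(axis=1, ddof=0)       # standard deviation S
        ok = s > 0
        if ok.any():
            ns.append(n)
            rs.append((r[ok] / s[ok]).mean())
        n *= 2
    # Fit log(R/S) = log(c) + H * log(n); the slope is the Hurst estimate.
    h, _ = np.polyfit(np.log(ns), np.log(rs), 1)
    return h

rng = np.random.default_rng(1)
noise = rng.standard_normal(8192)  # i.i.d. increments: true H = 0.5
print(round(hurst_rs(noise), 2))
```

Even on 8192 points of pure noise the estimate scatters noticeably around 0.5 (classic R/S is biased upward at small block sizes), which illustrates the data-hunger complained about above.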

There seem to be methodologies for calculating it that don't require much data. That's probably essentially the locality that Farnsworth mentioned here, isn't it?

I remember seeing the only practical application of Hurst a couple of years ago: simulating synthetic series with a given Hurst exponent, which are then fed to the tester's input. Something didn't work out for me and I gave it up. And then came the intuitive realization that the modelling must be "smart": the simulated synthetic series must reproduce exactly those properties of the real financial series that are exploited by the trading system itself. Arbitrary modelling without reference to the TS itself is completely meaningless. However, I have not managed to arrive at a quantitative expression of this "paradigm".
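Simulating a series with a prescribed Hurst exponent can be sketched as fractional Gaussian noise, here generated by Cholesky-factoring its exact autocovariance matrix (the function name and parameters are illustrative, not any particular tester's API; O(n^3), so for a demo only):

```python
import numpy as np

def fgn_cholesky(n, h, rng):
    """Generate n samples of fractional Gaussian noise with Hurst exponent h
    by Cholesky factorisation of the exact autocovariance matrix.
    A sketch; for long series one would use a spectral/circulant method."""
    k = np.arange(n)
    # Autocovariance of fGn: gamma(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * h) - 2 * np.abs(k) ** (2 * h)
                   + np.abs(k - 1) ** (2 * h))
    cov = gamma[np.abs(k[:, None] - k[None, :])]
    L = np.linalg.cholesky(cov)
    return L @ rng.standard_normal(n)

rng = np.random.default_rng(2)
increments = fgn_cholesky(1024, 0.7, rng)  # persistent increments, H = 0.7
path = increments.cumsum()                 # synthetic "price" path
print(path.shape)
```

The cumulative sum gives a fractional-Brownian-motion-like path one could feed to a tester; the post's point stands, though: unless such a series also reproduces the properties the trading system actually exploits, the exercise is meaningless.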


There are models that assume a dependence of the Hurst exponent on time - a genuine functional dependence, not something obtained by sliding a window over the series. But identifying such processes is not an easy task.

Generally speaking, before calculating the indicator one should look at the log-log chart. Forex is, to put it mildly, a weakly self-similar process and does not follow a power dependence. Such a dependence exists only over a narrow range of scales, which practically nullifies the whole power of fractal analysis (as a mathematical discipline).

There seem to be techniques for calculating it that don't require much data. That must be essentially the locality that Farnsworth mentioned here?

Where does the process start? Does it keep starting, or does it keep ending? Or does it never stop? That's the crux of the answer. :о)

 
Farnsworth:

There are models which assume a dependence of the Hurst exponent on time - a genuine functional dependence, not something obtained by sliding a window over the series. But identifying such processes is not an easy task.

Generally speaking, before calculating the indicator one should look at the log-log chart. Forex is, to put it mildly, a weakly self-similar process and does not follow a power dependence. Such a dependence exists only over a narrow range of scales, which practically nullifies the power of fractal analysis (as a mathematical discipline).
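The log-log check itself is easy to sketch, here with the aggregated-variance statistic (one common diagnostic, not necessarily the one meant above): for a genuinely self-similar process the points fall on one straight line with slope 2H - 2, and curvature means the power law holds only over a narrow range of scales.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(2 ** 14)   # increments of a random walk (true H = 0.5)

scales = 2 ** np.arange(3, 11)     # block sizes 8 .. 1024
logv = []
for n in scales:
    m = len(x) // n
    means = x[:m * n].reshape(m, n).mean(axis=1)
    logv.append(np.log(means.var()))  # variance of block means at scale n
logn = np.log(scales)

# Straight-line fit: slope = 2H - 2 for a self-similar process.
slope, intercept = np.polyfit(logn, logv, 1)
fit = slope * logn + intercept
r2 = 1 - np.sum((logv - fit) ** 2) / np.sum((logv - np.mean(logv)) ** 2)
h = 1 + slope / 2
print(round(h, 2), round(r2, 3))
```

A high R-squared here says the power law holds across the tested scales; on real FX data one would typically see the fit degrade outside a narrow band, which is exactly the objection being made.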

Where does the process begin? Does it keep starting, or does it keep ending? Or does it never stop? That's the crux of the answer. :о)

Scientists don't bicker...

Galton's board, with its pegs, is closer to me.

;)