Stochastic resonance - page 19

 

to Yurixx

Then this dependence is easiest to obtain experimentally. The price series is not normally distributed at all, and building "models" on that assumption will lead to significant error.

 
Avals:
lna01:

P.S. My bad, I was inattentive; there is a mistake there: the RMS cannot tend to infinity. Take the sum over only M increments.

With N tending to infinity faster than M, we get that the RMS tends to infinity, i.e. the realization can wander arbitrarily far, which is confirmed by the arcsine law.
A normally distributed value can go to infinity, but only with infinitesimally small probability; that is, it does not require an infinitely large RMS. M is finite by the conditions of the problem. If we write out the formula for the running sum of increments with window M, we see that after the first M steps the number of terms in the sum stabilizes and then remains equal to 2M: at step M+1 the first value of X leaves the sum, at step M+2 the second one, and so on.
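To make the windowing explicit, here is a minimal sketch (Python, synthetic increments; the window length W is an arbitrary illustration, not a value from the thread) of a sum over a fixed number of recent increments: once the window is full, each new term pushes the oldest one out, so the RMS of the sum stays bounded instead of growing with the length of the realization.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
increments = rng.normal(size=200)    # synthetic increments, a stand-in for price returns

W = 20                               # window length, chosen only for illustration
window = deque(maxlen=W)             # the oldest term is dropped automatically once full
rolling_sum = []

for x in increments:
    window.append(x)                 # at step W+1 the first increment leaves the window
    rolling_sum.append(sum(window))

# The variance (and hence RMS) of the windowed sum stays bounded, roughly W times the
# variance of one increment, while the variance of the full cumulative sum keeps growing.
print(np.var(rolling_sum[W:]), np.var(np.cumsum(increments)))
```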
 

Yuri, here's a first glimpse of this very dependence. The first thing that came to hand was EURUSD hourly data. The range under study is 10000 (no, I misspoke: 5000) samples; the window size ran from 50 to 3000 in steps of 50. Here's what came out (as expected):


  • X axis - window size
  • Y axis - spread (max(y)-min(y))

PS: the easiest thing to do is to approximate it and get a very accurate analytical function.
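For anyone wanting to reproduce the procedure, here is a rough sketch on synthetic data (the thread used EURUSD hourly closes; the random-walk stand-in, the averaging of the spread over window positions, and the power-law form of the fit are my assumptions, not the poster's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=5000))        # stand-in for the 5000 price samples

window_sizes = list(range(50, 3001, 50))    # window size from 50 to 3000 in steps of 50
spread = []
for M in window_sizes:
    # spread = max(y) - min(y) inside each window of length M; here it is averaged
    # over non-overlapping window positions (the thread does not say how it was taken)
    chunks = [y[i:i + M] for i in range(0, len(y) - M + 1, M)]
    spread.append(np.mean([c.max() - c.min() for c in chunks]))

# Rough analytical approximation, assuming spread ~ a * M^b; for a pure random walk
# the exponent b comes out near 0.5.
b, log_a = np.polyfit(np.log(window_sizes), np.log(spread), 1)
print("a =", np.exp(log_a), "b =", b)
```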

 
lna01:
Avals:
lna01:

P.S. My bad, I was inattentive; there is a mistake there: the RMS cannot tend to infinity. Take the sum over only M increments.

With N tending to infinity faster than M, we get that the RMS tends to infinity, i.e. the realization can wander arbitrarily far, which is confirmed by the arcsine law.
A normally distributed value can go to infinity, but only with infinitesimally small probability; that is, it does not require an infinitely large RMS. M is finite by the conditions of the problem. If we write out the formula for the running sum of increments with window M, we see that after the first M steps the number of terms in the sum stabilizes and then remains equal to 2M: at step M+1 the first value of X leaves the sum, at step M+2 the second one, and so on.

Agreed :)
 

And here is the dependence itself, a little rough:

 
Thank you, Sergei. 10000 is too small a number for an M interval of 50 - 3000. That's why there is such roughness as in the upper part of your curve. Also, the region of small values, which is what interests me, has too large a scatter. I will try the idea of calculating it this way. The only thing I fear is that I will have to recalculate every time I switch to a new instrument, or timeframe, or whatever.
 
Yurixx:
Thanks, Sergey. 10000 is too small a number for an M interval of 50 - 3000. That's why there is such roughness as at the top of your curve. Also, the region of small values, which is what interests me, has too large a scatter. I will try the idea of calculating it this way. The only thing I fear is that I will have to recalculate every time I switch to a new instrument, or timeframe, or whatever.

You're welcome, it wasn't a finished result. :о) It seems to me that this is the only straightforward, but at the same time perfectly valid, way to get the result. Theoretical derivations can give a rougher estimate, but here we have the statistics themselves. You can take the whole sample and run the algorithm with an optimal step for the window size.

And for some reason it seems to me that the coefficient in the exponent will be approximately the same in the other cases, while the first coefficient will certainly change and reflect the spread of the initial sample. By the way, this can be checked: same conditions, but take a completely different series from another place:

Dependence


The analytical function


The coefficients do not differ much:

Option 1: -0.0005

Option 2: -0.0004

So, by taking more raw data you can get a more or less exact dependence without being tied to the first coefficient :o) I'm sure of it!
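One way to check this claim on data is to fit the same form to two different samples and compare the coefficients. A sketch (the segment choice and the assumed form spread ~ a * M^b are mine, not what was actually fitted in the thread):

```python
import numpy as np

def spread_curve(y, windows):
    """Mean (max - min) spread inside non-overlapping windows of each size."""
    out = []
    for M in windows:
        chunks = [y[i:i + M] for i in range(0, len(y) - M + 1, M)]
        out.append(np.mean([c.max() - c.min() for c in chunks]))
    return out

def fit_power(windows, spread):
    """Fit spread ~ a * M^b (assumed form) and return a, b."""
    b, log_a = np.polyfit(np.log(windows), np.log(spread), 1)
    return np.exp(log_a), b

rng = np.random.default_rng(2)
windows = np.arange(50, 3001, 50)
series_1 = np.cumsum(rng.normal(size=5000))     # first sample
series_2 = np.cumsum(rng.normal(size=5000))     # "another series in another place"

for name, y in (("option 1", series_1), ("option 2", series_2)):
    a, b = fit_power(windows, spread_curve(y, windows))
    print(name, " a =", round(a, 3), " b =", round(b, 3))
# The exponent b should come out similar for both, while a reflects the spread
# of the particular sample.
```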

 

I'm not arguing, but...

That's basically where I started. But then I discovered that the picture changes for different TFs. It is understandable: fewer bars (or more) means a different N. I obtained exactly the kind of dependence on M shown in the charts above from the very beginning, but when I move to another TF, the total number of bars changes and this curve shifts vertically. It turns out that we should look for a dependence not on M, but on the ratio of N to M.
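A quick way to test the N/M idea would be to tabulate the spread against the ratio rather than against M itself. A sketch on synthetic data (the values of N and the ratios are arbitrary; my guess at what "dependence on the ratio of N to M" would look like in practice):

```python
import numpy as np

def mean_spread(y, M):
    chunks = [y[i:i + M] for i in range(0, len(y) - M + 1, M)]
    return np.mean([c.max() - c.min() for c in chunks])

rng = np.random.default_rng(3)
for N in (5000, 20000):                   # e.g. two timeframes give different bar counts
    y = np.cumsum(rng.normal(size=N))
    for ratio in (10, 50, 200):           # tabulate the spread against N/M instead of M
        M = N // ratio
        print(f"N={N:6d}  N/M={ratio:4d}  M={M:5d}  spread={mean_spread(y, M):8.2f}")
# If the hypothesis holds, rows with equal N/M should line up more closely than
# rows with equal M taken from series of different length N.
```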

 
Yurixx:

I'm not arguing, but...

That's basically where I started. But then I discovered that the picture changes for different TFs. It is understandable: fewer bars (or more) means a different N. I obtained exactly the kind of dependence on M shown in the charts above from the very beginning, but when I move to another TF, the total number of bars changes and this curve shifts vertically. It turns out that we should look for a dependence not on M, but on the ratio of N to M.

Yes, different timeframes will require correcting the result, and it's probably easier to get the dependence for each of them than to try to find a universal formula (it all depends on the price/quality criterion). Perhaps choosing (H+L)/2 would smooth out the differences?

 
Do I understand correctly that the spread is taken over the entire window N? If so, then, imho, it is difficult to count on any constancy. Rather, it could show up in the differences of moving averages, e.g. relative to the longest-period moving average (the one with maximum M).
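If "differences of moving averages" means looking at the series relative to a slow moving average, a minimal sketch could look like this (a simple moving average with the maximum-M period; the period values and the SMA choice are my assumptions):

```python
import numpy as np

def sma(y, period):
    """Simple moving average (valid part only)."""
    return np.convolve(y, np.ones(period) / period, mode="valid")

rng = np.random.default_rng(4)
y = np.cumsum(rng.normal(size=5000))

period = 3000                              # the "moving average with maximum M"
slow = sma(y, period)
detrended = y[period - 1:] - slow          # price relative to the slow moving average
print("spread of the raw series             :", y.max() - y.min())
print("spread around the slow moving average:", detrended.max() - detrended.min())
```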