Stochastic resonance - page 18

 
Avals:

It appears that this RV has expectation = 0, D = 2*D1/M, RMS = sqrt(2*D1/M)

For the increments I agree.
 
Mathemat:
Avals, if we are talking specifically about returns (closing-price increments), then, alas, there is no independence here either: returns do not follow the normal law. This is well described in Peters's books; I gave a link to them somewhere on the first pages of this thread.


I agree with this, but here the original problem stated that X is normally distributed.

"Suppose there is a normally distributed sequence of quantities X..."

 
lna01:
Avals:

It appears that this RV has expectation = 0, D = 2*D1/M, RMS = sqrt(2*D1/M)

For the increments, I agree.

So the sum of the increments is also normal. And the problem, as I understand it, is to find the limits within which this sum stays with a given probability (a confidence interval).
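Once the sum is accepted as normal with zero mean and known RMS, "within certain limits with a given probability" is just a two-sided quantile of the normal distribution. A minimal sketch (Python; the RMS value and the 95% level below are arbitrary examples, not taken from the thread):

from scipy.stats import norm

rms = 1.0                    # RMS of the (assumed zero-mean, normal) sum
p = 0.95                     # desired confidence level
z = norm.ppf(0.5 + p / 2)    # two-sided quantile, about 1.96 for p = 0.95
print(f"the sum stays within +/- {z * rms:.3f} with probability {p}")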
 
Avals:
lna01:
Avals:

It appears that this RV has expectation = 0, D = 2*D1/M, RMS = sqrt(2*D1/M)

For the increments, I agree.

So the sum of the increments is also normal. And the problem, as I understand it, is to find the limits within which this sum stays with a given probability (a confidence interval).
So the resulting RMS is S*sqrt(2)? Hmm ...
 
lna01:
Avals:
lna01:
Avals:

It appears that this RV has expectation = 0, D = 2*D1/M, RMS = sqrt(2*D1/M)

For the increments, I agree.

So the sum of the increments is also normal. And the problem, as I understand it, is to find the limits within which this sum stays with a given probability (a confidence interval).
So the resulting RMS is S*sqrt(2)? Hmm ...

This is only for the increments of this average. For the value itself to stay within certain limits, you have to look at the sum of those increments. Its variance equals the sum of the variances: Ds = (N-M+1)*D = 2*(N-M+1)*D1/M, RMS = sqrt(2*(N-M+1)*D1/M), where D1 is the variance of the original series, N is the length of the original series, and M is the sliding-window length. It is easier and more reliable to Monte Carlo it :)
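Since lna01 recommends Monte Carlo as the more reliable route, here is a minimal sketch of such a check (Python/numpy). The i.i.d. N(0, D1) input series and the reading of "increment" as the bar-to-bar change of the simple moving average are assumptions of the sketch, not something fixed by the post; the printed empirical values are meant to be set against the analytic expressions quoted above.

import numpy as np

def sma(x, M):
    # simple moving average with window M (valid values only)
    return np.convolve(x, np.ones(M) / M, mode="valid")

rng = np.random.default_rng(0)
D1, N, M, trials = 1.0, 2_000, 20, 5_000

inc_all, sum_all, sum_M = [], [], []
for _ in range(trials):
    x = rng.normal(0.0, np.sqrt(D1), N)   # i.i.d. series with variance D1
    ma = sma(x, M)
    inc = np.diff(ma)                     # bar-to-bar increments of the average
    inc_all.append(inc)
    sum_all.append(inc.sum())             # total change of the average over the series
    sum_M.append(inc[-M:].sum())          # change over the last M increments only

print("RMS of a single increment       :", np.concatenate(inc_all).std())
print("RMS of the sum of all increments:", np.std(sum_all))
print("RMS of the sum of M increments  :", np.std(sum_M))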
 
Avals:
lna01:
So the resulting RMS is S*sqrt(2)? Hmm ...

This is only for the increments of this average. For the value itself to stay within certain limits, you have to look at the sum of those increments. Its variance equals the sum of the variances: Ds = (N-M+1)*D = 2*(N-M+1)*D1/M, RMS = sqrt(2*(N-M+1)*D1/M), where D1 is the variance of the original series, N is the length of the original series, and M is the sliding-window length. It is easier and more reliable to Monte Carlo it :)
For N >> M it is about the same. Well, and since we are actually talking about the expected RMS, N should be taken equal to infinity :)

P.S. Sorry, I was inattentive, there is a mistake there: the RMS cannot tend to infinity. The sum should be taken over M increments only.

P.P.S. By S I mean sqrt(D1).
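If one simply chains the numbers quoted in this exchange, the corrected version closes the loop with the earlier S*sqrt(2) guess: summing M increments, each with variance D = 2*D1/M and treated as independent, gives Ds = M*D = 2*D1, hence RMS = sqrt(2*D1) = S*sqrt(2). This is only the thread's own bookkeeping restated; whether the increments may really be treated as independent is not examined here.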
 
lna01:
Avals:
lna01:
So the resulting RMS is S*sqrt(2)? Hmm ...

This is only for the increments of this average. For the value itself to stay within certain limits, you have to look at the sum of those increments. Its variance equals the sum of the variances: Ds = (N-M+1)*D = 2*(N-M+1)*D1/M, RMS = sqrt(2*(N-M+1)*D1/M), where D1 is the variance of the original series, N is the length of the original series, and M is the sliding-window length. It is easier and more reliable to Monte Carlo it :)
For N >> M it is about the same.
Agreed. But in some practical problems the difference can be significant.
 
I have since finished the postscripts in the previous post; there are corrections there.
 

Guys, thanks to everyone who responded. Your discussion has cleared my head too. Slightly. :-)

The starting point is the price series. It exists, of course, and its distribution is probably not normal. I wrote about the normal distribution because many things can be calculated analytically for it, and because the real distribution can be approximated by a normal one with a certain accuracy.

The task has nothing to do with prediction or with trying to determine the probabilities of events in the tails. I must have disappointed you there, alas. The problem arose because the moving average has a range (that's right, Sergey, that is the question) that depends significantly on the window size M. And I, out of ingrained habit, want to compare moving averages for different M. But I cannot, because they have different ranges of values. To normalize these moving averages to a single interval, one needs to calculate the normalization factor, or rather its dependence on M.

Further, having the statistics from history and having constructed the distribution function numerically, we can either calculate this coefficient directly or approximate the distribution function by a Gaussian and calculate it analytically. Naturally, absolute precision is of no importance here. What matters is that the character of the dependence is real, not model-based. I can think of plenty of model-based ones ...
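The "straightforward way" can be sketched in a few lines. The sketch below (Python/numpy) is only an illustration under assumptions of its own: the quantity being normalized is taken to be the deviation of price from its M-period moving average, and the normalization factor is taken to be its empirical RMS (an empirical quantile would be used the same way); neither choice is prescribed above.

import numpy as np

def norm_factor(price, M, q=None):
    # Empirical normalization factor for window M: RMS of (price - SMA_M)
    # by default, or the q-quantile of |price - SMA_M| if q is given.
    ma = np.convolve(price, np.ones(M) / M, mode="valid")
    dev = price[M - 1:] - ma
    return np.std(dev) if q is None else np.quantile(np.abs(dev), q)

# dependence of the factor on M for a synthetic random-walk "price"
rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(0.0, 1.0, 50_000))
for M in (5, 10, 20, 50, 100):
    print(M, norm_factor(price, M))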

2 Mathemat

I hope it is now clear that we are not talking about hard boundaries, but about compensating for the differences in values that result from differences in sample size. And I agree with everything you said, completely. :-)

 
lna01:
Avals:
lna01:
So the resulting RMS is S*sqrt(2)? Hmm ...

This is only for the increments of this average. For the value itself to stay within certain limits, you have to look at the sum of those increments. Its variance equals the sum of the variances: Ds = (N-M+1)*D = 2*(N-M+1)*D1/M, RMS = sqrt(2*(N-M+1)*D1/M), where D1 is the variance of the original series, N is the length of the original series, and M is the sliding-window length. It is easier and more reliable to Monte Carlo it :)

P.S. Sorry, I was inattentive, there is a mistake there: the RMS cannot tend to infinity. The sum should be taken over M increments only.

If N tends to infinity faster than M, we get that the RMS tends to infinity, i.e. the realization can wander arbitrarily far from the line expectation*N, which is confirmed by the arcsine laws.
That is, the sum of an infinitely long series of increments, taken as a single RV, will have an infinite RMS.
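To make the "infinite RMS" point concrete: for a sum of independent zero-mean increments the variance grows linearly with the number of terms, so the RMS grows like sqrt(N) and is unbounded as N goes to infinity. A tiny numerical illustration (Python/numpy; all parameter values are arbitrary):

import numpy as np

rng = np.random.default_rng(2)
D, trials = 1.0, 2_000
for N in (10, 100, 1_000, 10_000):
    # sum of N independent increments, repeated over many trials
    sums = np.array([rng.normal(0.0, np.sqrt(D), N).sum() for _ in range(trials)])
    print(N, np.std(sums), np.sqrt(N * D))   # empirical RMS vs sqrt(N*D)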