to Yurixx
I see, so it was a sliding window we were talking about. After yesterday's ale I'm not thinking very well, but to a first approximation the analytical dependence on the window length should be not so much "almost" linear as "almost" exponential, roughly speaking, decreasing from the value for the initial sample size, whether we know it or not.
If I manage to drag myself to the workplace I'll try to think it through, though right now only my spinal cord is in working order. :о)
PS: If it's no secret, why do you need it at all?
to Candid
Yuri explained in the next post that it was a sliding window we were talking about.
It won't work then:
Yurixx wrote (a):
No, it's just a sliding window of length M samples. Therefore the number of elements in sequence Y is N-M+1.
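To make the indexing concrete: a window of length M slid along N samples produces exactly N - M + 1 positions. A minimal Python sketch (the names and numbers here are mine, purely illustrative):

import numpy as np

def sliding_window_means(x, M):
    # Mean of each length-M window sliding over x.
    # For N input samples this yields N - M + 1 values, one per window position.
    N = len(x)
    return np.array([x[i:i + M].mean() for i in range(N - M + 1)])

x = np.random.default_rng(0).normal(size=1000)  # N = 1000 hypothetical samples
Y = sliding_window_means(x, M=50)
print(len(Y))  # 1000 - 50 + 1 = 951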
Why make any allowance for the dependence of the samples? I would do something simpler: any averaging "chews away" some percentage of the sample spread, and the value of that percentage as a function of the window length M can probably be estimated, analytically or experimentally, for samples with the characteristics Yuri listed. I'm not thinking straight at the moment, though...
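For the baseline case of independent samples the size of that percentage is easy to pin down: the standard deviation of an M-sample average is sigma/sqrt(M) of the raw spread, so averaging "chews away" roughly (1 - 1/sqrt(M))*100% of it. A rough Monte Carlo sketch in Python (hypothetical parameters, independent normal samples assumed):

import numpy as np

rng = np.random.default_rng(0)
M, trials = 50, 100_000  # hypothetical window length and number of trials

raw = rng.normal(size=trials)                    # spread of the raw samples
avg = rng.normal(size=(trials, M)).mean(axis=1)  # spread of M-sample averages

ratio = avg.std() / raw.std()
print(f"empirical ratio {ratio:.4f} vs theoretical 1/sqrt(M) = {1/np.sqrt(M):.4f}")
# with M = 50 the averaging removes about 86% of the spread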
Well, yes, it does, but clear boundaries are out of the question. If in a million samples there is a quite real chance of getting a result that differs from the expectation by 4 sigma or more (the normal hypothesis gives probability 0.0000634, i.e. the expected number of such samples is 63.4), then in a hundred samples such chances are illusory (the expected number is 0.00634). But that does not mean that among a hundred samples we cannot encounter a deviation of more than 4 sigma. It is just extremely unlikely.
Yurixx, this boundary problem can only be posed in probabilistic terms.
P.S. Well, for example: find values Ymin and Ymax such that Y falls between them with probability 0.99. It is reasonable to assume that both bounds are equidistant from the expectation of the general population.
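Both figures are straightforward to reproduce under the normal hypothesis. A Python sketch (mu, sigma and M are hypothetical placeholders; only the tail arithmetic matters):

from scipy.stats import norm

# Two-sided probability of deviating from the mean by 4 sigma or more
p4 = 2 * norm.sf(4)
print(p4)        # ~6.3e-05, i.e. ~63 expected cases per million
print(p4 * 1e6)  # expected count in a million samples
print(p4 * 100)  # expected count in a hundred samples: ~0.0063

# Symmetric bounds [Ymin, Ymax] containing Y with probability 0.99,
# assuming Y is the mean of M samples, Y ~ Normal(mu, sigma / sqrt(M))
mu, sigma, M = 0.0, 1.0, 50  # hypothetical values
half = norm.ppf(0.995) * sigma / M ** 0.5
print(mu - half, mu + half)  # Ymin, Ymax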
Yes, I think that's how he poses it: approximately, since you can't get exact values anyway. But I'm curious why this is needed at all :o)))
If we consider the increment of this quantity, then independence is observed.
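One rough empirical check (a Python sketch on synthetic data, not the actual series discussed here): compare the lag-1 autocorrelation of the quantity itself with that of its increments.

import numpy as np

def lag1_autocorr(z):
    # Lag-1 autocorrelation of a series
    z = z - z.mean()
    return (z[:-1] * z[1:]).sum() / (z * z).sum()

rng = np.random.default_rng(1)
x = rng.normal(size=10_000).cumsum()  # a random-walk-like quantity (synthetic)
dx = np.diff(x)                       # its increments

print(lag1_autocorr(x))   # close to 1: consecutive values are strongly dependent
print(lag1_autocorr(dx))  # close to 0: the increments are ~independent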