Bernoulli, de Moivre-Laplace theorem; Kolmogorov criterion; Bernoulli scheme; Bayes' formula; Chebyshev inequalities; Poisson distribution law; theorems and models of Fisher, Pearson, Student, Smirnov, etc. — in simple language, without formulas.
If A and B are independent random variables, then the variance of the sum of these variables is equal to the sum of their variances.
Imho, just a matter of arithmetic. Convenient :)
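The "matter of arithmetic" can be checked directly. Below is a minimal sketch (my own illustration, not from the thread) that enumerates two independent fair dice and confirms that the variance of their sum equals the sum of their variances:

```python
from itertools import product

def variance(values_probs):
    # values_probs: list of (value, probability) pairs
    mean = sum(v * p for v, p in values_probs)
    return sum(p * (v - mean) ** 2 for v, p in values_probs)

# X and Y: two independent fair dice
die = [(v, 1 / 6) for v in range(1, 7)]

var_x = variance(die)
var_y = variance(die)

# Distribution of the sum X + Y, built from the joint distribution;
# independence means each joint probability is the product px * py
sum_dist = [(x + y, px * py) for (x, px), (y, py) in product(die, die)]
var_sum = variance(sum_dist)

print(var_x + var_y, var_sum)  # both 35/6 ≈ 5.8333
```

Note that for standard deviations this additivity fails (they add "in quadrature"), which is exactly why variance is the convenient quantity here.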
I think I've finally sorted out variance for myself.
Let's introduce a pseudo-definition:
Pseudo-measure of dispersion of a random variable (relative estimate) — the distance between two commensurable sets (i.e. sets of the same size): the original set and an "ideal" set consisting only of "averages", normalized for the space to which the original set belongs.
If we substitute a set from a linear space into this definition, we get the RMS. But if the set is from a non-linear space, then...
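To see the linear-space case concretely, here is a small sketch (my own illustration, under the assumption that "distance" means Euclidean distance and "normalized" means dividing by the square root of the set's size): the distance between a sample and the "ideal" set of averages, so normalized, coincides with the classic RMS deviation (standard deviation).

```python
import math

def std_as_distance(xs):
    # Euclidean distance between the original set and the "ideal" set
    # consisting only of averages, normalized by sqrt(n)
    m = sum(xs) / len(xs)
    ideal = [m] * len(xs)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(xs, ideal)))
    return dist / math.sqrt(len(xs))

def std_classic(xs):
    # Population standard deviation: square root of the variance
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(std_as_distance(data), std_classic(data))  # both 2.0
```

The two functions agree exactly, which is the point of the pseudo-definition: in a linear (Euclidean) space, "normalized distance to the set of averages" is just the RMS deviation.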
Here, evidently, lay the subconscious question that bothered me about variance: why did the square of the RMS come to be called the variance, when variance is the more general definition of the measure of dispersion of a random variable?