Bernoulli, de Moivre-Laplace theorem; Kolmogorov criterion; Bernoulli scheme; Bayes formula; Chebyshev inequalities; Poisson distribution law; Fisher, Pearson, Student, Smirnov etc. - theorems and models in simple language, without formulas.
Have a look at the Wiki. This thread is only a primer on probability theory and mathematical statistics. And that's when you have the time.
GaryKa: I am trying to understand the scope of the following distributions:
the Generalized Pareto Distribution (GPD) and the Generalized Extreme Value distribution (GEV)
I myself know both only very roughly. Both distributions are well above the level of this thread.
OK, here's a question on the basics - variance and its sample estimate via the RMS (standard deviation).
Here's a superficial definition from the wiki: The variance of a random variable is a measure of the spread of a given random variable, that is, its deviation from the mathematical expectation.
It would be logical to suppose it is something like the mean absolute deviation. So where does the square of the modulus of the difference come from? Why not the cube, or, say, the power -1.8? Why a power function of the modulus at all?
Clearly it is one of the characteristics, and one can enter or use another definition of a measure of the spread of a random variable around its mean if one wishes. But it is the measure that appears most often in textbooks.
Where does the square of the modulus of the difference come from?
No, not at all.
It's just the way it is. Variance is meant as a measure of the spread of a random variable around its mean - and the two concepts are often confused. Historically, it has been computed as the sum of the squared deviations from the mean.
But in fact the variance is a reasonable measure of dispersion only for normally distributed quantities. It is for them that it is very convenient: the "three sigmas law" confirms this. Anything that differs from the mean for a Gaussian value by more than three sigmas is very rare - a few tenths of a percent of the entire sample.
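The three-sigma claim is easy to verify numerically. A minimal Python sketch (the sample size and seed are arbitrary choices made for illustration):

```python
import random

random.seed(1)

# Draw a large Gaussian sample with mean 0 and sigma 1, then count
# how often a value lands more than 3 sigmas away from the mean.
n = 200_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]
outliers = sum(1 for x in sample if abs(x) > 3.0)

share = outliers / n
print(f"beyond 3 sigma: {share:.4%}")  # theory predicts about 0.27%
```

With a sample this large the observed share lands close to the theoretical few tenths of a percent.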
For quantities distributed differently (say, Laplace-distributed ones), it is more reasonable to take as such a measure not the second moment of the distribution but the mean of the absolute deviations.
But the variance is, and will remain, the second central moment, i.e. the mean of the squared deviations.
OK, so the second central moment has a name of its own - "variance" (dispersion).
But why take the moment of inertia from physics? Where is the analogy of rotational motion for a random variable? Where is the direction of the axis of rotation passing through the centre of mass?
What is it?
How do you explain variance to a schoolboy in simple terms?
For example, the mathematical expectation is the average. In general, if we replace all special cases with such an average, the cumulative effect of such a set will remain the same.
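That "replace everything by the average" picture fits in a few lines of Python (the trade figures are made up for illustration):

```python
# Hypothetical example: five trade results, in currency units.
trades = [10.0, -4.0, 7.0, 1.0, 6.0]

mean = sum(trades) / len(trades)     # estimate of the mathematical expectation
replaced = [mean] * len(trades)      # every special case -> the average

# The cumulative effect (the total) of the set is preserved.
print(sum(trades), sum(replaced))  # both equal 20.0
```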
Mathemat:
But in fact the variance is a reasonable measure of dispersion only for normally distributed quantities.
I am of the same opinion.
Perhaps variance was taken as a special case of covariance - a measure of the linear dependence of a random variable on itself. A kind of self-resonance )). You should ask Fisher.
Covariance did not exist when dispersion was invented.
And what does the moment of inertia have to do with it? Many physical/mathematical phenomena are described by similar equations.
If you need the variance as a second moment, use what you have.
But if you need it as a measure of spread, you'll have to think.
I can give you another example: the covariance of two different discrete quantities is calculated as the scalar product of two vectors. So look for analogies, right down to the angle between random variables...
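The scalar-product analogy can be pushed all the way: for centered samples, the Pearson correlation coefficient is exactly the cosine of the angle between them viewed as vectors. A sketch with made-up data:

```python
import math

# Hypothetical paired observations of two discrete random variables.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 1.0, 4.0, 3.0, 6.0]

def centered(v):
    """Subtract the sample mean from every element."""
    m = sum(v) / len(v)
    return [vi - m for vi in v]

cx, cy = centered(x), centered(y)

dot = sum(a * b for a, b in zip(cx, cy))   # proportional to the covariance
norm = math.sqrt(sum(a * a for a in cx)) * math.sqrt(sum(b * b for b in cy))

cos_angle = dot / norm   # this IS the Pearson correlation coefficient
angle = math.degrees(math.acos(cos_angle))
print(f"correlation = {cos_angle:.3f}, angle = {angle:.1f} degrees")
```

An angle near 0 means strong positive correlation, 90 degrees means uncorrelated, 180 means strong negative correlation.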
How do you explain variance to a schoolboy in simple terms?
There is one more point here. In calculating the second moment, deviations from the mean are squared, so large deviations contribute to the variance disproportionately more. In other words, the variance "pays more attention" to values that deviate strongly from the mean, and it accounts for them first of all when characterizing the spread. Compared with the mean absolute deviation, for example, the variance is said to have a "greater sensitivity to outliers", meaning exactly this.
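That disproportionate weighting is easy to see numerically: take a sample with one outlier and compare the outlier's share of the squared deviations with its share of the absolute deviations (a sketch with made-up numbers):

```python
import statistics

# Hypothetical sample with one strong outlier at the end.
sample = [9.0, 10.0, 11.0, 10.0, 9.5, 10.5, 30.0]
m = statistics.fmean(sample)

sq = [(x - m) ** 2 for x in sample]   # what the variance sums up
ab = [abs(x - m) for x in sample]     # what the mean absolute deviation sums up

# Share of the total contributed by the single outlier (the last element):
sq_share = sq[-1] / sum(sq)
ab_share = ab[-1] / sum(ab)
print(f"outlier share: {sq_share:.0%} of squared deviations, "
      f"{ab_share:.0%} of absolute deviations")
```

Here one point out of seven supplies about 85% of the squared deviations but only 50% of the absolute ones: squaring makes the variance listen to the outlier first.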
Well, to bring the variance back to the same units as the quantity itself, you usually take its square root. The resulting value has the dimension of the random variable and is called the standard deviation (RMS, denoted by the lowercase letter sigma). Not to be confused with the sample standard deviation.
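To make this concrete - a Python sketch with made-up return figures, showing the square-root step and the population-vs-sample distinction (the standard `statistics` module implements both estimators):

```python
import math
import statistics

# Hypothetical sample of daily returns, in percent.
returns = [0.5, -1.2, 0.3, 0.8, -0.4]

var = statistics.pvariance(returns)   # population variance (divides by n)
sigma = math.sqrt(var)                # RMS / sigma, same units as the data

# The sample ("corrected") estimate divides by n - 1 instead, and is larger:
s = statistics.stdev(returns)

print(f"sigma = {sigma:.4f}, sample stdev = {s:.4f}")
```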