Bernoulli, de Moivre-Laplace theorem; Kolmogorov criterion; Bernoulli scheme; Bayes formula; Chebyshev inequalities; Poisson distribution law; theorems and models of Fisher, Pearson, Student, Smirnov, etc.; in simple language, without formulas. - page 9

 
 
Dimka-novitsek: Good afternoon! What is a system of differential equations?

Have a look at the wiki. This thread is only a primer on probability theory and mathematical statistics. And only when you have the time.

GaryKa: I am trying to understand the scope of the following distributions:

the Generalized Pareto Distribution (GPD) and the Generalized Extreme Value Distribution (GEV)

I only know both of them very roughly myself. Both distributions are well above the level of this thread.

 
One obscure term after another, but all right, I'll try to work it out myself for now. I think I understand the principle, though!
 
Mathemat:

... well above the level of this thread.

OK, here's a question on the basics: variance and its sample estimate via the standard deviation (RMS).

Here's a superficial definition from the wiki: The variance of a random variable is a measure of the spread of a given random variable, that is, its deviation from the mathematical expectation.

It is logical to suppose that it is something like the mean absolute deviation. So where does the square of the modulus of the difference come from? Why not the cube or, say, the power -1.8? Why a power function of the modulus at all?

Clearly it is just one of the characteristics, and one could introduce or use another definition of a measure of the spread of a random variable around its mean if one wished. But this is the measure that appears most often in textbooks.
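For what it's worth, both candidates are easy to compute side by side. A minimal Python sketch (the sample values are made up for illustration): any power p of the absolute deviation gives *some* measure of spread, and p = 2 is the one that gets the special name.

```python
import statistics

# Hypothetical sample, invented for illustration.
xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

def spread(data, p):
    """Mean of |x - mean|**p: a whole family of spread measures."""
    mu = statistics.fmean(data)
    return statistics.fmean(abs(x - mu) ** p for x in data)

mad = spread(xs, 1)        # mean absolute deviation -> 1.5
variance = spread(xs, 2)   # second central moment -> 4.0, i.e. the variance
```

With p = 2 this coincides with `statistics.pvariance(xs)`; nothing stops you from using p = 1 or p = 1.8 as a spread measure, they just don't carry the name "variance".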

 
GaryKa:

OK, here's a question on the basics: variance and its sample estimate via the standard deviation (RMS).

Here is a superficial definition from the wiki: The variance of a random variable is a measure of the spread of a given random variable, that is, its deviation from the mathematical expectation.

It is logical to suppose that it is something like the mean absolute deviation. So where does the square of the modulus of the difference come from? Why not the cube or, say, the power -1.8? Why a power function of the modulus at all?

There is such a thing as the moments of a random variable. "Variance" is, so to speak, a proper name for the second central moment. That is, the logically correct statement is not "Variance is a measure of the deviation of a random variable from its expectation", but "The second central moment of a random variable is called the variance. It is a parameter that characterises the deviation of a random variable from its expectation." Catch the difference? In that sense you are right: the definition given on Wikipedia is imprecise.
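That naming can be checked numerically. A small sketch (the sample data is invented) computing the k-th central moment straight from its definition; for k = 2 it coincides with the population variance.

```python
import statistics

def central_moment(data, k):
    """k-th central moment of an empirical sample: mean of (x - mean)**k."""
    mu = statistics.fmean(data)
    return statistics.fmean((x - mu) ** k for x in data)

xs = [1.0, 2.0, 2.0, 3.0, 7.0]
m1 = central_moment(xs, 1)   # first central moment is always 0
m2 = central_moment(xs, 2)   # second central moment = the variance
```

`m2` equals `statistics.pvariance(xs)`: "variance" is simply the proper name of this particular moment.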
 
GaryKa:

Where does the square of the modulus of difference come from?

Taking the modulus of the difference is an unnecessary operation, because the square of either a positive or a negative number is already positive. There is no modulus in the generally accepted formulas. As far as I understand (imho), the square of the difference is used rather than other powers largely for this reason, and for the simplicity of working with squares and square roots.
 
C-4: Taking the modulus of the difference is an unnecessary operation, because the square of either a positive or a negative number is already positive. There is no modulus in the generally accepted formulas. As far as I understand (imho), the square of the difference is used rather than other powers largely for this reason, and for the simplicity of working with squares and square roots.

No, not at all.

It's just the way it is. Variance is thought of as a measure of the spread of a random variable relative to its mean - and the two concepts are often confused. Historically it has been calculated through the mean of the squared deviations.

But in fact the variance is a reasonable measure of dispersion only for normally distributed quantities. It is for them that it is very convenient: the "three sigmas law" confirms this. Anything that deviates from the mean of a Gaussian quantity by more than three sigmas is very rare - a few tenths of a percent of the entire sample.

For quantities distributed differently (say, for Laplace-distributed quantities), it is more reasonable to take as such a measure not the second moment of the distribution, but the mean of the moduli of the deviations.

But the variance is, and will remain, the second moment, i.e. the mean of the squared deviations.
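The "three sigmas" claim above is easy to verify by simulation. A sketch in Python (the sample size and seed are chosen arbitrarily):

```python
import random

random.seed(1)  # arbitrary seed, for reproducibility
n = 200_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]

# Fraction of draws further than three sigmas from the mean.
outside = sum(1 for x in sample if abs(x) > 3.0) / n
# Theory for a Gaussian: about 0.27%, i.e. a few tenths of a percent.
```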

 

OK, so the second central moment has a name of its own: "variance".

But why take the moment of inertia from physics? Where is the analogy of rotational motion for a random variable? Where is the direction of the axis of rotation passing through the centre of mass?

What is it?

  • mean deviation - no
  • the rate of change of the density of values near the mathematical expectation - no
  • more variations ...

How do you explain variance to a schoolboy in simple terms?

For example, the mathematical expectation is the average: if we replace every individual outcome with this average, the cumulative effect of the whole set stays the same.
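That schoolboy explanation of the expectation fits in a few lines (the payouts are made-up numbers): replacing every outcome with the mean leaves the total unchanged.

```python
import statistics

payouts = [10.0, 0.0, 35.0, 5.0]   # hypothetical outcomes
m = statistics.fmean(payouts)       # the "fair" average: 12.5

total_original = sum(payouts)       # 50.0
total_averaged = m * len(payouts)   # also 50.0: the cumulative effect is the same
```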


Mathemat:

But in fact the variance is a reasonable measure of dispersion only for normally distributed quantities.

I am of the same opinion.

Perhaps variance was taken as a special case of covariance: a measure of the linear dependence of a random variable on itself. A kind of self-resonance )). You should ask Fisher.

 

Covariance did not exist yet when variance was invented.

And what does the moment of inertia have to do with it? Many physical/mathematical phenomena are described by similar equations.

If you need variance as a second moment, use it as it is.

But if you need it as a measure of dispersion, you'll have to think.

I can give you another example: the covariance of two different discrete quantities is calculated as the scalar product of two vectors. So look for analogies, right down to the angle between random variables...
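The scalar-product analogy can be made concrete. A sketch (the two samples are invented, equally likely paired observations): after centering, covariance is a scaled dot product, and the cosine of the angle between the centered vectors is exactly the Pearson correlation.

```python
import math
import statistics

x = [1.0, 2.0, 3.0, 4.0]   # made-up paired observations
y = [2.0, 1.0, 4.0, 3.0]

mx, my = statistics.fmean(x), statistics.fmean(y)
dx = [a - mx for a in x]   # centered vector for x
dy = [b - my for b in y]   # centered vector for y

dot = sum(a * b for a, b in zip(dx, dy))

cov = dot / len(x)                                     # covariance as a scaled dot product
cos_angle = dot / (math.hypot(*dx) * math.hypot(*dy))  # cosine of the angle = correlation
```

With y = x this collapses to the variance: the covariance of a quantity with itself.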

 
GaryKa:

OK, so the second central moment has a name of its own: "variance".

But why take the moment of inertia from physics? Where is the analogy of rotational motion for a random variable? Where is the direction of the axis of rotation passing through the centre of mass?

What is it?

  • mean deviation - no
  • the rate of change of the density of values near the mathematical expectation - no
  • more variations ...

How do you explain variance to a high school student in simple terms?

For example, the mathematical expectation is the average: if we replace every individual outcome with this average, the cumulative effect of the whole set stays the same.


I am of the same opinion.

Perhaps variance was taken as a special case of covariance: a measure of the linear dependence of a random variable on itself. A kind of self-resonance )). You should ask Fisher.

There is also this point. In calculating the second moment, the deviations from the mean are squared. Therefore the contribution of strong deviations from the mean to the variance is weighted more heavily, and disproportionately so. In other words, the variance "pays more attention" to values that deviate strongly from the mean, and it accounts for them first of all when characterising the spread. Compared with the mean absolute deviation, for example, the variance is said to have "greater sensitivity to outliers", which means precisely the above.
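That disproportionate weighting is easy to see on a toy sample with one outlier (the numbers are invented):

```python
import statistics

data = [5.0] * 9 + [15.0]    # nine identical values and one strong outlier
mu = statistics.fmean(data)   # 6.0

mad = statistics.fmean(abs(x - mu) for x in data)    # mean absolute deviation: 1.8
var = statistics.fmean((x - mu) ** 2 for x in data)  # variance: 9.0

# Share of each measure contributed by the single outlier:
mad_share = abs(15.0 - mu) / sum(abs(x - mu) for x in data)      # 0.5  (50%)
var_share = (15.0 - mu) ** 2 / sum((x - mu) ** 2 for x in data)  # 0.9  (90%)
```

One point out of ten supplies half of the mean absolute deviation but nine tenths of the variance; that is the "greater sensitivity to outliers" in action.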

Well, to bring the variance back to the units of the quantity itself, you usually take its square root. The resulting value has the dimension of the random variable itself and is called the standard deviation (RMS, denoted by the lowercase letter sigma). Not to be confused with the sample standard deviation.
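In Python's standard library the two flavours are separate functions, which helps keep that distinction straight (the sample values are invented):

```python
import statistics

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

sigma = statistics.pstdev(xs)  # population sigma: sqrt of the second central moment -> 2.0
s = statistics.stdev(xs)       # sample standard deviation (n - 1 denominator), slightly larger
```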
