Bernoulli, de Moivre-Laplace theorem; Kolmogorov criterion; Bernoulli scheme; Bayes formula; Chebyshev inequalities; Poisson distribution law; theorems and models of Fisher, Pearson, Student, Smirnov etc., in simple language, without formulas.

 

If A and B are independent random variables, then the variance of the sum of these variables is equal to the sum of their variances.

Imho, just a matter of arithmetic. Convenient :)
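
Written out, the "arithmetic" here is just the standard textbook expansion (the identity itself, not something from the original post):

$$
\operatorname{Var}(A+B)
= \mathbb{E}\big[(A+B-\mathbb{E}[A+B])^2\big]
= \operatorname{Var}(A)+\operatorname{Var}(B)+2\operatorname{Cov}(A,B)
$$

Independence gives $\operatorname{Cov}(A,B)=0$, which kills the cross term.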

 
No, the condition is less strict: the random variables need only be uncorrelated; independence is not required.
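
A quick numerical check of that point (a sketch in Python/numpy; the example is mine, not from the thread): take X standard normal and Y = X² − 1. Y is a deterministic function of X, so the pair is certainly not independent, yet Cov(X, Y) = E[X³] = 0 and the variances still add.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# X is standard normal; Y = X**2 - 1 is fully determined by X
# (so not independent of it), but Cov(X, Y) = E[X**3] = 0.
x = rng.standard_normal(n)
y = x**2 - 1

print("Cov(X, Y)       :", np.cov(x, y)[0, 1])  # ~ 0: uncorrelated
print("Var(X) + Var(Y) :", x.var() + y.var())   # ~ 1 + 2 = 3
print("Var(X + Y)      :", (x + y).var())       # ~ 3: additivity holds
```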
 
Alexei, I gave a definition, but I forgot to put the inverted commas in it :)
 
The man was developing a whole direction; he started with arithmetic, or more precisely, with conditions. I would start with the same...
 

I think I've sorted out the variance for myself at last.

Let's introduce a pseudo-definition:

Pseudo-measure of dispersion of a random variable (relative estimate): the distance between two commensurable sets (i.e. sets of the same size), the original set and an "ideal" set consisting only of "averages", normalized to the space to which the original set belongs.

If we substitute a set from a linear space into this definition, we get the RMS (root-mean-square deviation). But if the set is from a non-linear space, then...
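
To make the linear-space case concrete, here is a minimal sketch (my own illustration with an arbitrary sample, not from the post): treat a sample of n numbers as a point in n-dimensional Euclidean space, build the "ideal" set of the same size filled with the mean, and take the Euclidean distance between the two, normalized by sqrt(n). The result is exactly the RMS deviation.

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])

# The "ideal" set of the same size, consisting only of "averages".
ideal = np.full_like(x, x.mean())

# Euclidean distance between the two sets, normalized by sqrt(n)
# (the natural scale of the n-dimensional space both sets live in).
distance = np.linalg.norm(x - ideal) / np.sqrt(len(x))

print(distance)   # 2.0
print(np.std(x))  # 2.0 -- the same number: the RMS deviation
```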

Here, obviously, was the subconscious question about variance that had been bothering me: why did the square of the RMS become the variance, which is a more general definition of the measure of dispersion of a random variable?

Reason: