Bernoulli's theorem, the de Moivre-Laplace theorem; the Kolmogorov criterion; the Bernoulli scheme; Bayes' formula; Chebyshev's inequalities; the Poisson distribution law; the theorems and models of Fisher, Pearson, Student, Smirnov, etc., in plain language, without formulas.
Let's move on. The local de Moivre-Laplace theorem. Picture from the same source:
The picture shows how, as the number of trials grows, the binomial frequency distribution tends to the normal one, i.e. the curve looks more and more like a Gaussian bell curve. There is even a quantitative estimate of the approximation error. So if, for example, we want to calculate the probability that in n=200 rolls of a die a five comes up between m0=20 and m1=30 times (recall that the probability of rolling a five is 1/6), we do not need to sum 11 terms with factorials; it is enough to compute the corresponding area under the curve, whose equation we already know. The formulas are cumbersome, so I will not give them here.
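As a sketch of this claim (mine, not from the original post), here is the dice example worked both ways: the exact 11-term binomial sum against the de Moivre-Laplace area under the Gaussian, with the usual continuity correction of 0.5:

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability C(n,k) * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_cdf(x, mu, sigma):
    """CDF of the normal distribution via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

n, p = 200, 1 / 6        # 200 rolls, probability of a five
m0, m1 = 20, 30          # we want P(20 <= number of fives <= 30)

# Exact answer: the sum of 11 binomial terms.
exact = sum(binom_pmf(n, m, p) for m in range(m0, m1 + 1))

# De Moivre-Laplace approximation: area under the Gaussian with the
# same mean and standard deviation, with a continuity correction.
mu = n * p
sigma = math.sqrt(n * p * (1 - p))
approx = normal_cdf(m1 + 0.5, mu, sigma) - normal_cdf(m0 - 0.5, mu, sigma)

print(f"exact = {exact:.4f}, normal approximation = {approx:.4f}")
```

The two numbers agree to about two decimal places, which is exactly the point: the cumbersome factorial sum is replaced by two evaluations of the normal CDF.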
Actually, in our age of personal computers this theorem is no longer very relevant for practical computation, but 200 years ago it certainly was. Besides, it plays an important role in theoretical research, because the normal distribution has been studied inside and out and is easy to work with.
Next we will talk about it, the normal distribution, even though it was not on the topic starter's list.
Of course, I'm not really up to it on my own; I'd at least like to get some sort of chowder going... But it doesn't look like anyone is going to help me yet. What good is a five-star cook if there's only one of him?
On the horizontal (abscissa) is the number of successes in the overall test series. On the vertical (ordinate) is the relative frequency, i.e. the proportion of successes in the total number of trials.
I forgot to add: the binomial distribution becomes similar to the normal one not only when n*p >= 5, but also under an additional condition: p should not be too close to 1. Say, at p ~ 0.5 and n ~ 10 it is already quite similar.
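The p ~ 0.5, n ~ 10 remark is easy to check numerically (my sketch, not part of the original post): compare each binomial probability with the Gaussian density at the same point and see how small the worst discrepancy already is.

```python
import math

def binom_pmf(n, k, p):
    """Exact binomial probability C(n,k) * p^k * (1-p)^(n-k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_pdf(x, mu, sigma):
    """Density of the normal distribution with mean mu and std sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 10, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))

# Largest pointwise gap between the binomial bars and the Gaussian curve.
worst = max(abs(binom_pmf(n, k, p) - normal_pdf(k, mu, sigma))
            for k in range(n + 1))
print(f"largest pointwise discrepancy at n=10, p=0.5: {worst:.4f}")
```

Even at n = 10 the worst gap is below 0.01, so the bell shape is already clearly visible.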
Start with it yourself, and at the same time try to explain to home-grown humanities types why they need Pearson distributions. I didn't even know they existed before you brought them up...
And explain why one should express the Poisson and normal distributions (both quite practical) through the spherical-horse abstraction of the "Pearson distribution".
But about the Gamma distribution, I'll think about it.
It's not that simple. But the Kolmogorov criterion should definitely be somewhere near the end. Chebyshev inequalities are only needed for fairly rough estimates.
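On the remark that Chebyshev's inequalities give only rough estimates, a quick illustration (my sketch, not from the discussion): the universal bound P(|X - mu| >= k*sigma) <= 1/k^2 holds for any distribution, but for a normal variable the actual tail probability is far smaller.

```python
import math

def normal_two_sided_tail(k):
    """P(|X - mu| >= k*sigma) for a normally distributed X."""
    return 2 * (1 - 0.5 * (1 + math.erf(k / math.sqrt(2))))

for k in (2, 3):
    chebyshev = 1 / k**2                 # universal Chebyshev bound
    actual = normal_two_sided_tail(k)    # exact value for the Gaussian
    print(f"k={k}: Chebyshev bound {chebyshev:.4f}, normal tail {actual:.4f}")
```

At k = 2 the bound says at most 25%, while the true Gaussian tail is about 4.6%; at k = 3 it is 11.1% versus roughly 0.3%. Rough indeed, but the bound needs no assumptions about the distribution at all.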
Let everything remain as it is, and we will choose what we can explain on the basis of what we have learnt.
I searched and found this. I see that chi-squared and gamma are special cases of Pearson distributions.
I do not see any reason to talk about Pearson distributions here, because I cannot explain the practical usefulness of such a deep vacuum-spherical horse to the readers of the branch.
I will definitely talk about chi-squared here.
Yes, perhaps we can talk about the gamma distribution as well:
The sum of n independent exponentially distributed random variables with parameter b obeys an Erlang distribution with parameters b, n.
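This Erlang property is easy to verify by simulation (a sketch of mine, not from the thread): sum n independent exponentials with rate b and compare the empirical distribution of the sums with the Erlang(n, b) CDF at some point.

```python
import math
import random

def erlang_cdf(x, n, b):
    """CDF of the Erlang(n, b) distribution: 1 - sum_{k<n} e^{-bx}(bx)^k / k!."""
    return 1 - sum(math.exp(-b * x) * (b * x) ** k / math.factorial(k)
                   for k in range(n))

random.seed(1)
n, b = 5, 2.0                 # sum of 5 exponentials with rate b = 2
x0 = 2.5                      # point at which to compare the CDFs
samples = 100_000

# Empirical P(sum <= x0) from simulated sums of exponentials.
hits = sum(
    sum(random.expovariate(b) for _ in range(n)) <= x0
    for _ in range(samples)
)
empirical = hits / samples
theoretical = erlang_cdf(x0, n, b)
print(f"empirical {empirical:.4f} vs Erlang CDF {theoretical:.4f}")
```

The two values agree to a few decimal places, confirming that the sum of exponentials really does follow the Erlang law.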
Now you can see in the article https://www.mql5.com/ru/articles/250 how and why this two-parameter Erlang distribution was introduced, and how another two-parameter distribution I introduced appears in the body of formula (18).
Yusuf, who were you talking to just now?
I'll have another look. I still don't understand how you got these probability distributions, when the article doesn't mention probability theory at all...
You said. There are several methods of generating a normal distribution - here, for example. But they, too, rely on a uniform distribution as a basis.
You can, of course, also do it "directly": first generate a uniform distribution and then apply the inverse of the normal distribution's cumulative distribution function to the results. But the problem is the same: you still have to generate the uniform one first.
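One of the standard uniform-to-normal methods mentioned above is the Box-Muller transform; here is a minimal sketch of it (mine, not the poster's code), using only the standard uniform generator as its source of randomness:

```python
import math
import random

def box_muller():
    """Turn two independent uniforms on (0, 1) into two independent
    standard normal variates (the Box-Muller transform)."""
    u1 = random.random() or 1e-12   # guard against log(0)
    u2 = random.random()
    r = math.sqrt(-2 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

random.seed(7)
data = [z for _ in range(50_000) for z in box_muller()]

# Sanity check: the sample should look like N(0, 1).
mean = sum(data) / len(data)
var = sum((z - mean) ** 2 for z in data) / len(data)
print(f"mean {mean:+.3f}, variance {var:.3f}")
```

The quality of the resulting normals is, as noted, only as good as the uniform generator feeding them.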
Good uniform generators are described in the literature. And the latest 64-bit one for Windows is not bad either, much better than the standard C one.
But the standard one is not so bad either. Anyway, the effects of its "un-naturalness" are not so easy to detect.
Natural normal - what do you need it for, S?
This shows that the solutions of the material-balance equations and the laws of probability theory coincide, and they complement each other in interpreting the results of the analysis of the phenomena.
Yusuf, I'm sorry, but this kind of "science" always makes me personally uneasy. What does the Erlang distribution have to do with it?
Let's try one more "test of perception". Since you are so fluent with the terminology, answer this: why are there different distributions? Who registers a NEW distribution discovered by someone? I can make up a shitload of these distributions, but no one will accept them as anything new. So what counts as a new, not-yet-known distribution?
Let's listen to Alexei's presentation first, since he was the first to do so.
Yusuf and everyone else, please don't take it as a diminution of your knowledge on the subject.
This way the sequence starts to get cluttered with extra terminology, and we are getting ahead of ourselves.