What is it? - page 15

 
Candid >>:

You calculated the RMS incorrectly; for this process it is proportional to n. After the second series of trials the relative deviation from the expectation decreased.

Well, well, for some reason I was sure that the distribution of the number of hits on Red (if there is no zero, i.e. p = q = 0.5) is binomial, which in turn is well approximated by the normal distribution, for which the Laplace theorem holds... Maybe you are confusing it with the variance, which equals npq?

 
Mathemat wrote >>

Well, well, for some reason I was sure that the distribution of the number of hits on Red (if there is no zero, i.e. p = q = 0.5) is binomial, which in turn is well approximated by the normal distribution, for which the Laplace theorem holds... Maybe you have confused it with the variance, which equals npq?

Maybe I did. But isn't RMS = sqrt(Variance)?

How would it be according to Comrade Laplace?
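The point being argued can be checked numerically: for a binomial with n trials and p = q = 0.5, the variance is npq and the RMS is its square root, so it grows like sqrt(n), not n. A minimal sketch (the function name is mine):

```python
import math

def binomial_sd(n, p=0.5):
    """Standard deviation (RMS) of a binomial: sqrt(n * p * q)."""
    q = 1.0 - p
    return math.sqrt(n * p * q)

for n in (1000, 2000, 10000):
    print(f"n={n}: variance={n * 0.25:.0f}, RMS={binomial_sd(n):.2f}")
# Doubling n multiplies the RMS by ~1.41 (sqrt of 2), not by 2.
```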

 

I think I'm beginning to get what Candid is talking about: the (Bernoulli) process. In this case, the cumulative sum of elementary trial outcomes, i.e., say, 1 for Red and 0 for Black.
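That cumulative sum can be simulated directly, which makes the process-vs-distribution distinction concrete. A sketch (names and the seed are mine, for reproducibility only):

```python
import random

def cumulative_reds(n_trials, seed=1):
    """Running cumulative count of Red (1) vs Black (0) in a fair Bernoulli process."""
    rng = random.Random(seed)
    total = 0
    path = []
    for _ in range(n_trials):
        total += rng.randint(0, 1)  # 1 = Red, 0 = Black, each with p = 0.5
        path.append(total)
    return path

path = cumulative_reds(1000)
print("Reds after 1000 trials:", path[-1])
```

The path itself is the process; the histogram of `path[-1]` over many runs is the (binomial) distribution the other posters are discussing.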

And you and I, lasso, are talking about a probability distribution.

The Laplace theorem is a special case of the Central Limit Theorem; it is about the convergence of the probability distribution, whose variance is npq.
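The de Moivre–Laplace approximation mentioned here is easy to verify numerically: the binomial pmf with mean np and variance npq is close to the normal density with the same parameters. A quick check (helper names are mine):

```python
import math

def binom_pmf(n, k, p=0.5):
    """Exact binomial probability of k successes in n trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

def normal_pdf(x, mu, sigma):
    """Normal density with mean mu and standard deviation sigma."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

n, p = 1000, 0.5
mu, sigma = n * p, math.sqrt(n * p * (1 - p))  # variance = npq
for k in (500, 520, 550):
    print(k, binom_pmf(n, k), normal_pdf(k, mu, sigma))
```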

 

Yes, that's right, I mixed up n; the correct factor is the square root of n. I don't know what you're talking about, but the lasso example is about the process :).

He has made a mistake: the expectation after the second series is not 1000 to 1000 but 1100 to 900. He also seems to confuse the probability of getting 1000 after 2000 trials with the full probability of two unlikely series of 1000 trials in a row (A1 && A2).


P.S.

After the 2nd series, n = 2000.
A3 = A1 && A2 = {(600 Red, 400 Black in series 1) AND (600 Red, 400 Black in series 2)}
MO = 1100, Disp = 2000*0.5*0.5 = 500, RMS = 22.36, 3*RMS = 67.08, Deviation(A3) = (1200 - 1100)/22.36 = 4.47
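The figures in that P.S. can be reproduced directly. A quick check (variable names are mine; the conditional expectation is 600 Reds already observed plus 500 expected in the second series):

```python
import math

n, p = 2000, 0.5
mo = 600 + 1000 * p           # 1100: expectation given the first series
disp = n * p * (1 - p)        # 500, as written in the post
rms = math.sqrt(disp)         # ~22.36
deviation = (1200 - mo) / rms # how many RMS units 1200 Reds sits from MO

print(mo, disp, round(rms, 2), round(3 * rms, 2), round(deviation, 2))
# → 1100.0 500.0 22.36 67.08 4.47
```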

 
Mathemat >>:

Well, they found me here too. But I'm not ready yet :)

Can't attach...

G. Székely, "Paradoxes in Probability Theory and Mathematical Statistics".

4.5 MB in djvu...

 

And if you compress it, how big would it be? Could you send it to my email address (see profile)?

 
Mathemat >>:

Well, they found me here too.

Did they find you, or did they get to you? :)

 

Well, I haven't fully figured it out myself yet. I should probably try something on my own to get a feel for your idea. And once I get a feel for it, maybe new ideas will come.

 
avatara wrote >>

Did you get it?

Me, too. Please.

big[mylogin]@mail.ru

 
lasso wrote >>

6,000 versus 4,000 out of 10,000 is understandable. We are not going beyond normality.

Once again the same question, but I will put it in a different way.

We create a new object, a system of events (e.g. roulette). There is no zero; Red/Black is 50/50. We have run 1000 trials. Event A1 occurred (a single event): Red came up 600 times and Black 400 times. Accordingly, P(A1) is extremely small but admissible, e.g. P(A1) = 0.0001.

That's it, we have forgotten about this thousand tests. We start with a clean slate.

Question: in the next 1000 trials (in the same system), which event is more probable: A3 = {Red comes up 600 times, Black 400 times} or A4 = {Red comes up 400 times, Black 600 times}?

Or is P(A4) = P(A3)? How would you calculate it under Mr Bernoulli's scheme?
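Under the Bernoulli scheme this is a direct computation: P(k Reds in n trials) = C(n, k) p^k q^(n-k), and for p = 0.5 the distribution is symmetric, so A3 and A4 are equally probable. A sketch (the function name is mine):

```python
import math

def p_reds(n, k, p=0.5):
    """Bernoulli scheme: probability of exactly k Reds in n independent trials."""
    return math.comb(n, k) * p**k * (1 - p) ** (n - k)

p_a3 = p_reds(1000, 600)  # A3: 600 Red / 400 Black
p_a4 = p_reds(1000, 400)  # A4: 400 Red / 600 Black
print(p_a3 == p_a4)  # True: C(1000, 600) = C(1000, 400) and p = q = 0.5
```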

If the first series is forgotten, already in the past, then the probability is the same as before the first trial. And before the first trial, the probability of getting 600/400 twice is a different thing: it equals the square of the probability of getting 600/400 once. These are simply different events.
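The distinction in that reply can be shown numerically: the conditional probability of one more 600/400 series is just P(600 of 1000), while the unconditional probability of two such series in a row is that value squared, since the series are independent. A sketch (names are mine):

```python
import math

def p_600_of_1000(p=0.5):
    """Probability of exactly 600 Reds in 1000 fair trials (Bernoulli scheme)."""
    return math.comb(1000, 600) * p**600 * (1 - p) ** 400

p_once = p_600_of_1000()
p_twice_in_a_row = p_once**2  # independent series: probabilities multiply
print(f"one series: {p_once:.3e}, two in a row: {p_twice_in_a_row:.3e}")
```

The second number is vastly smaller, which is exactly why conditioning on the (forgotten or not) first series matters.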
