Stochastic resonance - page 17

 

to Yurixx

I see, I was indeed talking about a sliding window. After yesterday's ale I'm not thinking very clearly, but to a first approximation the analytical dependence on the window length should be not so much "almost" linear as "almost" exponential, roughly speaking decaying from the initial sample size, which, by the way, we may or may not know.

If I manage to drag myself to the workplace I'll try to think it over, though right now only my spinal cord is in working order. :о)

PS: If it's no secret, why do you need it at all?

to Candid

Yuri explained in the next post that he was talking about a sliding window.

 
Avals:


It won't work then:

Yurixx wrote (a):
No, it's just a sliding window of length M samples. Therefore the number of elements in sequence Y is N-M+1.

Yes, then I don't understand it all either.
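For concreteness, the sliding-window setup being discussed can be sketched in a few lines of Python: averaging a series X of N samples over a window of length M yields a series Y with exactly N - M + 1 elements (the names X, Y, N, M follow the thread; the data here is made up for illustration).

```python
# Sliding-window (moving) average: a window of length M slid over
# N samples has exactly N - M + 1 positions, hence len(Y) == N - M + 1.
def sliding_average(x, m):
    return [sum(x[i:i + m]) / m for i in range(len(x) - m + 1)]

X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # N = 6 samples
Y = sliding_average(X, 4)            # M = 4  ->  N - M + 1 = 3 values

print(len(Y))  # 3
print(Y)       # [2.5, 3.5, 4.5]
```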
 
grasn:


to Candid

Yuri explained in the next post that he was talking about a sliding window.

Looks like I just missed that post :(. Still can't figure out how to correct for the dependence of the samples.
 
lna01:
grasn:


to Candid

Yuri explained in the next post that he was talking about a sliding window.

Looks like I just missed that post :(. Still can't figure out how to correct for the dependence of the samples.

Why do we need to correct for the dependence of the samples? I would do something simpler: any averaging "chews away" some percentage of the sample spread, and the value of that percentage as a function of the window length M could probably be estimated, analytically or experimentally, for samples with the characteristics Yury listed. I'm not thinking straight at the moment though...
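The "experimental" route suggested above is easy to sketch. For independent normal samples, averaging over a window of M shrinks the standard deviation by a factor of sqrt(M); the toy Monte Carlo below (my own setup, not anything from the thread) checks that, and the same harness could be rerun on dependent or non-normal data to measure how much averaging "chews away" in those cases.

```python
import random
import statistics

random.seed(1)

N, M = 100_000, 16
x = [random.gauss(0.0, 1.0) for _ in range(N)]

# Averages over NON-overlapping windows of length M, kept independent
# on purpose; overlapping windows would introduce exactly the sample
# dependence the thread is worrying about.
y = [sum(x[i:i + M]) / M for i in range(0, N - M + 1, M)]

ratio = statistics.stdev(y) / statistics.stdev(x)
print(ratio)  # close to 1/sqrt(M) = 0.25 for independent normal samples
```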

 

Well, yes, but clear boundaries are out of the question. If in a million samples there is a quite real chance of getting a result that differs from the expectation by 4 sigma or more (the normal hypothesis gives probability 0.0000634, i.e. the expected number of such samples is 63.4), then in a hundred samples such chances are illusory (their expected number is 0.00634). But this does not mean that in a hundred samples we cannot encounter a deviation of more than 4 sigma. It is just extremely unlikely.

Yurixx, this boundary problem can only be posed in probabilistic terms.

P.S. For example: find values Ymin and Ymax within which Y falls with probability 0.99. It is reasonable to assume that both bounds are equidistant from the expectation of the general population.
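Mathemat's numbers are easy to reproduce with the Python standard library. The 0.99 interval below is the symmetric one he proposes, computed under the normal hypothesis; mu and sigma are illustrative placeholders, not values from the thread.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal

# Two-sided probability of deviating from the mean by 4 sigma or more
p4 = 2 * (1 - nd.cdf(4.0))
print(p4)               # ~6.33e-05
print(1_000_000 * p4)   # ~63.3 expected cases in a million samples
print(100 * p4)         # ~0.0063 expected cases in a hundred samples

# Symmetric bounds [Ymin, Ymax] containing a normal Y with probability 0.99
mu, sigma = 0.0, 1.0            # illustrative values
z = nd.inv_cdf(0.995)           # ~2.576: each tail carries 0.005
ymin, ymax = mu - z * sigma, mu + z * sigma
print(ymin, ymax)
```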

 
Mathemat:

Well, yes, but clear boundaries are out of the question. If in a million samples there is a quite real chance of getting a result that differs from the expectation by 4 sigma or more (the normal hypothesis gives probability 0.0000634, i.e. the expected number of such samples is 63.4), then in a hundred samples such chances are illusory (their expected number is 0.00634). But this does not mean that in a hundred samples we cannot encounter a deviation of more than 4 sigma. It is just extremely unlikely.

Yurixx, this boundary problem can only be posed in probabilistic terms.

Yes, I think that is how it has to be posed - approximately; you really cannot get exact boundaries. But I am curious why it is needed at all :o)))

 
grasn:

Why do we need to correct for the dependence of the samples? I would do something simpler: any averaging "chews away" some percentage of the sample spread, and the value of that percentage as a function of the window length M could probably be estimated, analytically or experimentally, for samples with the characteristics Yury listed. I'm not thinking straight at the moment though...

Experimentally it is easy, and that is what I would do. But I suspect we are actually dealing with not-quite-normally distributed variables :) - for those, even under independence, the sum of the distributions can have a much less nice and compact answer. Dependence adds extra terms when summing random variables, and I cannot work out what those terms are in this case. In short, I join your question: if it's no secret, why do you need it at all? :)
 
Yurixx, aren't you assuming that the price series (or the series of differences) has normally distributed samples, i.e. that the price series is a classical Brownian process?
 
lna01:
grasn:

Why do we need to correct for the dependence of the samples? I would do something simpler: any averaging "chews away" some percentage of the sample spread, and the value of that percentage as a function of the window length M could probably be estimated, analytically or experimentally, for samples with the characteristics Yury listed. I'm not thinking straight at the moment though...

Experimentally it is easy, and that is what I would do. But I suspect we are actually dealing with not-quite-normally distributed variables :) - for those, even under independence, the sum of the distributions can have a much less nice and compact answer. Dependence adds extra terms when summing random variables, and I cannot work out what those terms are in this case. In short, I join your question: if it's no secret, why do you need it at all? :)

If we consider the increments of this quantity, then independence holds.
 
Avals, if we are talking specifically about returns (closing price increments), then, alas, there is no independence here either: returns are not distributed according to the normal law. This is well described in Peters' books; I gave the link somewhere in the first pages of this thread.
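One cheap way to see the kind of non-normality Peters describes is excess kurtosis: it is about 0 for normal data and positive for fat-tailed data. The sketch below uses synthetic Laplace samples as a stand-in for fat-tailed returns (real returns are not Laplace; this only illustrates the diagnostic, not the thread's actual data).

```python
import random
import statistics

random.seed(7)

def excess_kurtosis(x):
    # Sample excess kurtosis: m4 / m2**2 - 3 (about 0 for normal data)
    mu = statistics.fmean(x)
    m2 = statistics.fmean([(v - mu) ** 2 for v in x])
    m4 = statistics.fmean([(v - mu) ** 4 for v in x])
    return m4 / (m2 * m2) - 3.0

n = 100_000
normal = [random.gauss(0.0, 1.0) for _ in range(n)]
# Laplace via sign * exponential: a convenient fat-tailed stand-in
laplace = [random.choice((-1.0, 1.0)) * random.expovariate(1.0) for _ in range(n)]

k_normal = excess_kurtosis(normal)
k_laplace = excess_kurtosis(laplace)
print(k_normal)   # close to 0
print(k_laplace)  # close to 3 (theoretical value for Laplace): fat tails
```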