Zero sample correlation does not necessarily mean there is no linear relationship

 

It's my turn to give an example with a picture.

Suppose we have a sample of two processes (not random ones, but a non-random process is, after all, a degenerate case of a random one, so it will do for the example) on the interval t = -10 ... 10:

x1(t) = cos(2*pi*t)

x2(t) = sin(2*pi*t) + h(t), where h(t) is the Heaviside step,

and the sampling rate is quite high (much greater than the frequency of the sine and cosine themselves), fd >> 1.

Here are the graphs of these processes:


Obviously, due to the orthogonality of sine and cosine, the instantaneous correlation coefficient is zero throughout the sample, except at the point 0, where the CC is hard to define at all because of the discontinuity of the process.

Nevertheless, when we blindly substitute the given processes into the formula for the linear correlation coefficient, we get nonsense: the time average of the second process turns out to be not 0 but 1/2, we are forced to plug that into the formula, and the output is a value different from 0, which moreover depends on how long a sample we take (for the interval [-10;10] the coefficient computed this way is one value, and for the interval, say, [-3;3], another). This is easy to check with the built-in correlation procedure of any package, even Excel.

At this point there should already be an intuitive sense of contradiction: if we split the sample in two at the point t=0 and compute the CC for each part in the same way, we get 0 in both cases, yet it turns out that by joining the two "zero" parts together we get something other than zero? How can that be?

The reason is that the non-stationarity of the process x2(t) is not taken into account, and hence neither is the fact that in this case we cannot take the time average as an estimate of the mean. Moreover, by construction we know exactly how this mean actually changes over time. Therefore the calculation procedure must first, using a priori knowledge of the processes, reduce both parts to a form for which stationarity can be asserted.

In other words, into the formula for the linear CC we should substitute not x1(t) and x2(t), but x1(t) and x2'(t) = x2(t) - h(t), i.e. isolate the stationary component of the second process. Then the result of the calculation will agree with the expectation.
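A minimal R sketch of this calculation, under two illustrative assumptions that the post does not fix numerically: fd = 100 (the post only requires fd >> 1) and h(t) = 1 for t >= 0, 0 otherwise:

fd <- 100                          # sampling rate, fd >> 1
t  <- seq(-10, 10, by = 1 / fd)    # uniform grid on [-10; 10]
h  <- as.numeric(t >= 0)           # Heaviside step

x1 <- cos(2 * pi * t)
x2 <- sin(2 * pi * t) + h

# naive substitution of the raw samples into the sample correlation formula
cor(x1, x2)

# after removing the known non-stationary term: x2'(t) = x2(t) - h(t)
cor(x1, x2 - h)

# the same comparison on a shorter window, e.g. [-3; 3]
w <- abs(t) <= 3
cor(x1[w], x2[w])
cor(x1[w], x2[w] - h[w])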

 
Integer:

Don't talk about "any" textbook, be specific: name the textbook and quote its definition. Even so, are you sure you have the definition right? How can you be so sure? Have you never tried to touch the correlation coefficient with your own hands (to experiment, to play with it) in order to understand, realize, feel what it is?

How can you get so stuck-up that you stoop so low?

I don't know what TViST is (unless it's some kind of dance, like the twist); I looked up the definition of correlation on Wikipedia:

Are you trying to critically assess something written on a fence somewhere? What does that have to do with random variables? Only an asshole could have written this definition. If it's the same in all the textbooks on hip-hop or whatever it's called, then all those textbooks were written by assholes who don't understand what correlation is and have fucked up their students' brains.


TViST (probability theory and statistics, for short) is my major: I studied it at the institute for 5 semesters and passed the exams with honours. Well, honestly, I'm not going to notarize screenshots here. Anyone who wishes can open any, I repeat, any textbook that comes to hand, whether ours or a foreign one, and see for themselves what the definition of correlation is about and what it is not about. And if one considers that they were all written by assholes, then should one not read them at all? No, I'd rather put this forum in the category of fences and critically assess first what is written here, and only then what is written there.

 
alsu:


At this point there should already be an intuitive sense of contradiction: if we split the sample in two at the point t=0 and compute the CC for each part in the same way, we get 0 in both cases, yet it turns out that by joining the two "zero" parts together we get something other than zero??? How can that be?

Nope. It doesn't look that way. Zero for one half, non-zero for the other half.
 
alsu:

TViST (probability theory and statistics, for short) is my major: I studied it at the institute for 5 semesters and passed the exams with honours. Well, honestly, I'm not going to notarize screenshots here. Anyone who wishes can open any, I repeat, any textbook that comes to hand, whether ours or a foreign one, and see for themselves what the definition of correlation is about and what it is not about. And if one considers that they were all written by assholes, then should one not read them at all? No, I'd rather put this forum in the category of fences and critically assess first what is written here, and only then what is written there.

Strangely enough, it seems that the teacher who taught me correlation at the institute didn't read those textbooks... lucky for his students :)

 
alsu: ... Obviously, due to the orthogonality of sine and cosine, the instantaneous correlation coefficient is zero throughout the sample, except at the point 0, where the CC is hard to define at all because of the discontinuity of the process.
Integer: Nope. It doesn't look that way. Zero for one half, non-zero for the other half.

Yes, for the other half it is not zero. An optical illusion.


A follow-up question:

Dear colleagues, what data on the price time series (FX) do you use when drawing conclusions about stationarity, distributions, ergodicity, correlation and other statistical matters? The question is asked without any catch. People often just take one of the best bid/ask quotes sampled at astronomical time. But that is ... how shall I put it ... unacceptable. It makes more sense to analyse the sequence of prices from "real" trades, taking real volumes into account. Maybe that's the whole point: in how the data are prepared for analysis.

 

Interesting discussion. Maybe they'll get to the bottom of it here at least.

I have repeatedly tried to get to the bottom of this question and talked to (seemingly) smart people, but it seems nobody understands it; they just puff out their cheeks )))

The physical meaning of correlation is the cosine of the angle between two vectors (whose coordinates are the two original samples).

So the CC really only "compares" the shapes of the curves; it is not affected by scaling (changing a vector's length) or by a shift (moving a vector's origin).
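A small R check of that geometric reading (the series below are arbitrary illustrative numbers): after centering, the sample correlation coefficient is exactly the cosine of the angle between the two vectors, and a positive rescaling or a shift of either series leaves it unchanged:

set.seed(1)
x <- rnorm(1000)
y <- 0.3 * x + rnorm(1000)

# cosine of the angle between the centered vectors ...
xc <- x - mean(x); yc <- y - mean(y)
sum(xc * yc) / (sqrt(sum(xc ^ 2)) * sqrt(sum(yc ^ 2)))

# ... coincides with the sample correlation coefficient
cor(x, y)

# positive scaling and shifting of either series do not change it
cor(10 * x + 5, 0.5 * y - 2)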

I don't know about quotes, but in signal processing the CC is valid only for I(1). In particular, it is quite good at detecting signal periodicity.

I would like to understand what the point is of applying the CC to I(0): it is a comparison of the "shapes" of two almost completely random series, and there cannot, by definition, be any similarity of shapes.

And this is all for local application.


Separately, I would like to understand the point of computing the CC, distributions and other statistics over the entire series at once. That is the "average temperature across the hospital" over N years; what use is it?

There is no stationarity in either I(1) or I(0) in the market.

 
airbas: In the market, there is no stationarity in either I(1) or I(0).

What I(1) and I(0) are you talking about for the market?

I(0) is by definition a stationary process. Where is it in the quotes?
 
Demi:
Yes? And I was once taught that the correlation coefficient of cosine and sine varies smoothly from -1 to +1. Turns out it's 0........

It is the cross-correlation _function_ that varies from -1 to +1. The sample correlation coefficient is a _number_, and that number is a constant for two samples fixed in advance. If we take the values of a pair of orthogonal functions on a uniform grid as the samples, the coefficient will be zero. This follows from the definition of orthogonal functions: the defining integral, written out as a sum, looks remarkably like the definition of the sample covariance.
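A quick R illustration of the distinction (the period and grid size are my own choice of numbers): the sample cross-correlation function of a sampled sine and cosine swings between roughly -1 and +1 as the lag changes, while the correlation coefficient of the two aligned samples is a single number close to zero:

n <- 10000                      # grid covering 100 whole periods
t <- (0 : (n - 1)) / 100        # 100 samples per period
x <- sin(2 * pi * t)
y <- cos(2 * pi * t)

cor(x, y)                       # one number, ~0: orthogonal functions on a uniform grid

# cross-correlation function: values near +/-1 appear at lags of about a quarter period
r <- ccf(x, y, lag.max = 200, plot = FALSE)
range(r$acf)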

Integer:

The correlation coefficient shows nothing else, and the calculation of correlation has nothing to do with normality, ergodicity or stationarity. What kind of textbooks are you reading?

If the main thing for you is to substitute numbers into the formula and get a number out, then indeed stationarity and ergodicity are not important.

The property of ergodicity is what allows you to estimate the correlation function of the population from a sample drawn from it. If this property does not hold, the number obtained from the formula can be thrown away.

With stationarity it is easier to give an example. Take a pair of random processes whose stochastic differentials have the form:

dX(t) = mu_1 * dt + sigma_1 * dW_1;

dY(t) = mu_2 * dt + sigma_2 * dW_2;

dW_1, dW_2 are correlated Wiener processes (with rho correlation);

mu_1, mu_2, sigma_1, sigma_2 are positive constants.

The point of the example is that the correlation coefficient computed on the pair of undifferenced series will tend to unity (for arbitrary mu_1 and mu_2, to sign(mu_1 * mu_2)) as the sample size grows, regardless of the correlation between the increments. The whole reason is that for an I(1) process the sample mean does not converge to a constant.

mu_1=0.01; mu_2=0.05; sigma_1=1; sigma_2=1; rho=0.5:

# drift and increment-correlation parameters: mu_1 = 0.01, mu_2 = 0.05,
# sigma_1 = sigma_2 = 1, rho = 0.5 (with unit sigmas this matrix is both the
# correlation and the covariance matrix of the increments)
mu <- c(0.01, 0.05)
sigma <- matrix(c(1, 0.5, 0.5, 1), 2, 2)

# simulate two correlated Gaussian random walks with drift;
# integrated = TRUE returns the cumulated I(1) paths, FALSE the increments
simulate.random.walks <- function (num.points, integrated = TRUE) {
  # correlate the increments via the Cholesky factor of sigma
  ret.val <- matrix(rnorm(num.points * 2), num.points, 2) %*% chol(sigma)
  # add the drift to each series of increments
  ret.val <- do.call(cbind, lapply(1 : 2, function (i) { ret.val[, i] + mu[i] } ))
  if (integrated) ret.val <- apply(ret.val, 2, cumsum)
  ret.val
}

# sample sizes from 10^2 to 10^6, equally spaced on a log scale
num.points.grid <- trunc(exp(seq(log(10 ^ 2), log(10 ^ 6), length.out = 25)))
cor.integrated <- sapply(
  num.points.grid,
  function (num.points) { cor(simulate.random.walks(num.points, TRUE))[1, 2] }
  )
cor.non.integrated <- sapply(
  num.points.grid,
  function (num.points) { cor(simulate.random.walks(num.points, FALSE))[1, 2] }
  )

# top panel: correlation of the integrated (I(1)) series creeps towards 1;
# bottom panel: correlation of the increments stays around rho = 0.5
png(filename = 'bgg.png', 800, 600)
par(mfrow = c(2, 1))
plot(num.points.grid, cor.integrated, log = 'x', t = 'o')
abline(h = 1, col = 'red', lty = 'dashed')
plot(num.points.grid, cor.non.integrated, log = 'x', t = 'o')
abline(h = 0.5, col = 'red', lty = 'dashed')
dev.off()

airbas:

I don't know about quotes, but in signal processing the CC is valid only for I(1). In particular, it is quite good at detecting signal periodicity.

Would you mind telling me which university you graduated from? Then I'll know whose graduates need to be checked more thoroughly for adequacy of perception at interviews.

Integer, I have the same question for you, if it is not too difficult.

GaryKa:

Dear colleagues, what data on the price time series (FX) do you use when drawing conclusions about stationarity, distributions, ergodicity, correlation and other statistical matters? The question is asked without any catch. People often just take one of the best bid/ask quotes sampled at astronomical time. But that is ... how shall I put it ... unacceptable. It makes more sense to analyse the sequence of prices from "real" trades, taking real volumes into account. Maybe that's the whole point: in how the data are prepared for analysis.


Read the definitions in any textbook and get the gist. It makes no difference at all whether you use bid/ask/midprice. The numerical characteristics may be slightly different, but the conclusions about stationarity will be the same.

 

Check yourself for adequacy afterwards:

The property of ergodicity is what allows you to estimate the correlation function of the population from a sample drawn from it. If this property does not hold, the number obtained from the formula can be thrown away.

 
Anonymous, you know, I read this forum regularly, almost all of it, and I have not seen a single adequate post from you.