Dependency statistics in price quotes (information theory, correlation and other feature selection methods)

 

faa1947: At every academic council I attended in my day, a presentation like yours would have been your last, forever.

Well, I'm not speaking at an academic council here. But, again, I will try to find arguments and present them here. Although, on the other hand, it's not so easy to compare: it's a completely different method. So you have to look for something similar in publications.

Practically valuable. And it manages to handle non-stationary random processes with unknown distributions.

Co-integration? Or repeated differencing of the original process until it passes a Dickey-Fuller test?

 
HideYourRichess: In my opinion, even if I am mistaken, neither the essence of a formula nor the conditions of its applicability can change just because it is written in different symbols.

There is the Shannon definition of entropy, in which independence is mandatory.

And there is a definition of mutual information, in which the Shannon definition is applied purely formally, since it is still assumed that dependencies exist.

If you want to dig into the philosophical depths and contradictions of the definition of mutual information - please, go ahead. I would prefer not to worry about it and just use the "American" formula with probabilities, without bothering about independence.
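
To make that "American" formula concrete, here is a minimal sketch (plain NumPy; the function name `mutual_information` is mine, not from the thread) that computes I(X;Y) = Σ p(x,y) · log₂[p(x,y) / (p(x)·p(y))] directly from a joint probability table. No independence assumption is needed; independence is exactly the case where the sum comes out zero.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(X;Y) in bits from a joint probability table.

    I(X;Y) = sum_xy p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
    Independence is not assumed -- it is exactly the case I(X;Y) = 0.
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of Y
    mask = joint > 0                        # 0 * log 0 = 0 by convention
    return float((joint[mask] * np.log2(joint[mask] / (px * py)[mask])).sum())

# Independent variables: the joint factorizes, so MI is zero.
indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(mutual_information(indep))   # 0.0

# Perfectly dependent binary variables: MI equals the entropy, 1 bit.
dep = np.array([[0.5, 0.0], [0.0, 0.5]])
print(mutual_information(dep))     # 1.0
```

The same function applies unchanged whether the symbols are dependent or not, which is the point being made above.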

A more complete system looks like this: market alphabet <-> quote alphabet -> problem alphabet. The thread starter considered only the last pair; the quote is the problem.

I don't know what your problem alphabet is. My system is a pair of bars separated by a distance of Lag. One bar, the one in the past, is the source and the other is the receiver. The alphabets of both are identical (as far as bar returns are concerned, of course).
 
Mathemat:

Co-integration? Or repeated differencing of the source process until it passes a Dickey-Fuller test?

DF has nothing to do with it: the goal is prediction. We are looking for a regression equation whose residual has almost constant mean and variance while simultaneously minimizing the prediction error.
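
A minimal illustration of that criterion (toy data, plain NumPy; not faa1947's actual model): fit a regression, then check that the residual's mean and variance stay "almost" constant across windows of the sample.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy series: linear trend plus noise, standing in for a price series.
t = np.arange(500, dtype=float)
y = 0.05 * t + rng.normal(scale=1.0, size=t.size)

# Fit the regression y ~ a + b*t by least squares.
X = np.column_stack([np.ones_like(t), t])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

# Residual mean and variance per window should be "almost" constant.
windows = np.array_split(resid, 5)
means = [w.mean() for w in windows]
variances = [w.var() for w in windows]
print(means)       # all near 0
print(variances)   # all near 1
```

On real quotes the hard part is exactly the "almost": the windows disagree, and how much disagreement one tolerates is a judgment call, not a test statistic.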
 
faa1947: One is looking for a regression equation whose residual has almost constant mean and variance while simultaneously minimizing the prediction error.

The whole problem is the term "almost", as usual.

And a prediction error is an error in predicting the past. So much for econometrics... Of course, I'm exaggerating a little.

P.S. Don't mind me. Just a thought (turn off the recorder, please, it's not for the press): As soon as the calculations of some econometric "science" become perfect and automated, they become useless.

 
Mathemat:

The whole problem is the term "almost", as usual.

And a prediction error is an error in predicting the past. So much for econometrics... Of course, I'm exaggerating a little.

P.S. Don't mind me. There's just this thought (turn off the recorder, please, it's not for the press): Once the calculations of some econometric "science" become perfect and automatable, they become useless.


The office has already written it down - purely privately, on their tablets...
 
Mathemat:

And a prediction error is an error in predicting the past.


Even fiction relies on the past.

It depends on WHAT we take from the past. If we take analysis of a non-stationary time series, it is hopeless, and no amount of tricks in testing will save us. If we manage to isolate and analytically formulate components leaving a residual of white noise, that is a different story. The important thing is that millions of educated people have followed this path for decades, leaving the Chukchi song called TECHNICAL ANALYSIS to the suckers.

 

That's right, for suckers. The suckers always stick to the established procedures and never try anything sideways.

And stationarity is there for a reason. For example, if we investigate a stationary series of information (even though the initial series of quotes or returns is non-stationary), we can hope for good results that work in the future.

 
Mathemat:

That's right, for suckers. The suckers always stick to the established procedures and never try anything sideways.

And stationarity is there for a reason. For example, if we investigate a stationary series of information (even though the initial series of quotes or returns is non-stationary), we can hope for good results that work in the future.

Of course, if by stationarity we mean constant mean and variance. But there is another hitch: we have to be sure that the regression coefficients are also "almost" constant.
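
One way to look at that hitch (a sketch under my own assumptions, not an econometrics-package recipe): refit the regression on rolling windows and see how much the estimated coefficient drifts. On the stable toy series below the slope barely moves; on real quotes it typically does.

```python
import numpy as np

rng = np.random.default_rng(2)

def rolling_slope(y, window):
    """Slope of y ~ a + b*t, refit on each rolling window of the series."""
    t = np.arange(window, dtype=float)
    X = np.column_stack([np.ones(window), t])
    slopes = []
    for i in range(len(y) - window + 1):
        b = np.linalg.lstsq(X, y[i:i + window], rcond=None)[0][1]
        slopes.append(b)
    return np.array(slopes)

# Stable relationship: the slope stays near its true value 0.05.
y = 0.05 * np.arange(1000) + rng.normal(scale=0.5, size=1000)
s = rolling_slope(y, window=200)
print(s.mean(), s.std())   # mean near 0.05, small spread
```

A small spread of the rolling estimates is evidence the coefficients are "almost" constant; a drifting one is exactly the failure mode being discussed.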
 
faa1947: But there is another hitch: you have to be sure that the regression coefficients are also "almost" constant.
So does econometrics give such guarantees?
 
Mathemat:
So does econometrics give such guarantees?

People who have EViews don't ask such questions, hee-hee