Dependency statistics in quotes (information theory, correlation and other feature selection methods) - page 13

 

No, the problem has not changed. It's just an atomic problem, indivisible. And to get the overall picture, you have to scan the Lag variable as well.

I can post excerpts of my results from a few months ago (though I only have them in text form). It's not mutual information, as the topicstarter has, but frequency matrices. There are also results of calculating the "chi-square test for independence of variables" statistic (at the time I didn't know what mutual information was, but I was already looking for a general measure of dependence between variables and experimented with different criteria). Those figures are not boring either.

I will post them tomorrow (well, I mean today, but later), because I don't have access to the computer I was calculating on.

P.S. This has nothing to do with "universal regression etc": (18) is a crudely mechanistic approach to price, while here it is fundamentally statistical.
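For readers who want to reproduce that kind of check, here is a minimal sketch of a frequency matrix plus a chi-square independence test between a bar's return and the return of the bar Lag steps away. It is not the poster's original code; the bin count, the quantile binning and the use of SciPy's chi2_contingency are illustrative assumptions only.

```python
# Hedged sketch: frequency ("contingency") matrix of discretized returns at
# distance `lag`, followed by a chi-square test of independence.
import numpy as np
from scipy.stats import chi2_contingency

def chi2_lag_test(returns, lag, n_bins=5):
    """Chi-square independence test between returns[t] and returns[t + lag]."""
    x = np.asarray(returns, dtype=float)
    master, slave = x[:-lag], x[lag:]                 # all bar pairs at distance `lag`
    # Discretize both series into quantile bins (a crude "alphabet")
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1)[1:-1])
    i, j = np.digitize(master, edges), np.digitize(slave, edges)
    # Frequency matrix of (master bin, slave bin) pairs
    freq = np.zeros((n_bins, n_bins), dtype=int)
    np.add.at(freq, (i, j), 1)
    chi2, p_value, dof, _ = chi2_contingency(freq)
    return chi2, p_value, freq

# Example (assumed data): chi2, p, table = chi2_lag_test(hourly_returns, lag=1)
```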

 
Mathemat:

No, the problem has not changed. It's just an atomic problem, indivisible. And to get the overall picture, you have to scan the Lag variable as well.

I can post excerpts of my results from a few months ago (though I only have them in text form). It's not mutual information, as the topicstarter has, but frequency matrices. There are also results of calculating the "chi-square test for independence of variables" statistic (at the time I didn't know what mutual information was, but I was already looking for a general measure of dependence between variables and experimented with different criteria). Those figures are not boring either.

I will post them tomorrow (I mean today, but later), as at the moment I have no access to the computer on which the calculations were made.

P.S. This has nothing to do with "universal regression etc": (18) is a crudely mechanistic approach to price, while here it is fundamentally statistical.

(18) in ATS mode produces a result, even if a poor one, without using stops or TP. Bring your fine statistical approach up to that level, and then we will compare.

Gold from 25.11.2009 to 02.09.2011, H4, 0.1 lot, max drawdown 10.32%, expected payoff (MO) 27.6


 
Mathemat:

No argument, it all makes sense. Let's start with point 1.

1. "Define exactly what we take": First - the task-cell, then the indivisible one.

Fix an integer Lag. It will be the "distance between bars", i.e. the absolute value of the difference of their indices on a given timeframe in MT4.

Objective: to determine whether there is a statistical relationship between the following two random variables: 1) the return of the "master" bar with index sh, and 2) the return of the "slave" bar with index sh+Lag.

This is what we take: all pairs of bars with a distance between them equal to Lag. It's extremely precise.

What is there to doubt, and where? Let's deal with the first point first. If it works out, we'll move on to the second.
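As a rough illustration of the procedure just described - take all pairs of bars at distance Lag, and then "scan the Lag variable" as mentioned earlier - here is a hedged sketch. The function names and the choice of dependence statistic are illustrative assumptions, not anything prescribed in the thread.

```python
# Hedged sketch: for each lag, take all pairs of bars at that distance and record
# some dependence statistic between their returns. `dependence_stat` is a
# placeholder for whatever measure one prefers (chi-square, mutual information,
# Pearson correlation, ...).
import numpy as np

def scan_lags(returns, max_lag, dependence_stat):
    """Return {lag: statistic} for all lags from 1 to max_lag."""
    x = np.asarray(returns, dtype=float)
    profile = {}
    for lag in range(1, max_lag + 1):
        master, slave = x[:-lag], x[lag:]     # all bar pairs at distance `lag`
        profile[lag] = dependence_stat(master, slave)
    return profile

# Example with a placeholder statistic (Pearson correlation; any measure would do):
# profile = scan_lags(returns, max_lag=50,
#                     dependence_stat=lambda a, b: np.corrcoef(a, b)[0, 1])
```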

It is almost an ACF, but the formula is different. The ACF is an integral part of statistics. It is excellent at looking for dependencies of all sorts, and it has been used very extensively in both theory and practice since the advent of ARIMA. Any new thing must begin by pointing out its similarities to and differences from well-known, well-established analogues. If this is not done, the idea won't be discussed even in the dingiest houses of London. That's what I've been saying all through this thread. You should always start with a review of the literature. No roundabout talk - no quibbling over the quote from your post.

Next. I see sh; I understand that the ACF is computed from sh = 1, not from an arbitrary place. But the ACF already exists. How does your suggestion resemble it, and how does it differ? Just don't obscure the point (dependencies in the time series) with words from information theory.

 
faa1947: It is almost an ACF, but the formula is different. ACF is an integral part of statistics. It is excellent at looking for dependencies of all sorts.

It's not really an ACF - or even an ACF at all. And you are dead wrong about the ACF looking for dependencies of all sorts. Take a look at the Correlation article: the limitations of correlation analysis are described near the end, where the picture is. That's why I gave up on the ACF. The linear correlations between bars detected by Pearson correlation do not interest me: they are too weak and short-lived.

You should always start with a review of the literature. No roundabout talk - no quibbling over the quote from your post.

That way we won't get moving for a long time. But in general I agree with you: some kind of argumentation is still needed. I will think about it - if the last sentence of the previous paragraph, about linear dependencies, does not satisfy you.

Just don't obscure the point (dependencies in the time series) with words from information theory.

So you have decided to forbid me from using information theory to look for dependencies?

2 yosuf: I'm not going to compete with you. Keep improving your tool, but please stay out of this thread. It is off-topic here.

 
Mathemat:

Found an article on information entropy (Wiki). Quote 1 from there:

This is entropy, conventional entropy. Is that the definition you're interpreting?

Yes, I'm willing to agree that the letters of the alphabet should be statistically independent so that there is no redundancy or dependencies. This is roughly what the archiver is doing, creating an alphabet that is clearly different from the alphabet used to create the text.

But that's not what we are computing! What we are computing comes next.


The talk of the topicstarter (and mine too) was not about information entropy, but, damn it, about mutual information (Wiki again)!!!

Mutual information is a statistical function of two random variables describing the amount of information contained in one random variable relative to the other.

Mutual information is defined through the entropy and conditional entropy of two random variables as I(X;Y) = H(X) - H(X|Y).

Just to point out: it follows from the same Wikipedia article that the formula for calculating mutual information can be written as follows:

Mutual information (between X and Y) = Entropy (of X) - Conditional entropy (of X given Y)

That is, if we do not write scary-looking formulas from American sources, but go by the definitions.

Here X and Y are two different systems, and there is a dependence between them.

If we want Total Mutual Information, then it's like the topicstarter's:

Total mutual information (between X and Y) = Entropy (X) + Entropy (Y) - Entropy of the combined system (X and Y)

Why is it written "entropy of the combined system" and not "conditional entropy", because in fact the total entropy of the system of two systems can be either independent or conditional. It is clear that if X and Y are unrelated, and independent, then we should count as joint probabilities (entropy addition theorem), and if there is a relationship, then as conditional.
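For concreteness, here is a minimal sketch of the "total mutual information" written out above, I(X;Y) = H(X) + H(Y) - H(X,Y), estimated from a joint frequency table of two already-discretized series. It is an illustration only, not the topicstarter's or the posters' actual code, and the base-2 logarithm (bits) is an arbitrary choice.

```python
# Hedged sketch: total mutual information from a joint frequency table of two
# integer symbol sequences (e.g. quantized returns).
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a probability vector, ignoring zero cells."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def mutual_information(x_sym, y_sym, n_symbols):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for two equal-length integer symbol sequences."""
    joint = np.zeros((n_symbols, n_symbols))
    np.add.at(joint, (x_sym, y_sym), 1)      # joint frequency table
    joint /= joint.sum()                     # -> joint probabilities p(x, y)
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    return entropy(px) + entropy(py) - entropy(joint.ravel())
```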


Now to our interests: how can all of this machinery be applied to the market? Suppose the model is as follows. There is a system X - the market (an alphabet); it has a finite and definite number of states (symbols) that appear with certain frequencies (symbol probabilities). There is a second system Y - the record of quotes. The quotes (an alphabet) also have a limited set of symbols with certain frequencies. What can be deduced from all this?

1. You need to know the alphabet of the market. Something is always happening there: buying and selling, someone going bankrupt, someone showing up with new money, mass hysteria breaking out, and so on. That is, the alphabet is very vast and can hardly be described easily.

2. Even if it is possible to describe the alphabet of the market, there remains the question of the stationarity of the processes taking place in it. It must be understood that information theory relies entirely on the constancy of these properties.

3. The alphabet of the second system, the quotes. Is it different from the alphabet of the market? Probably it already is, and we need to know how. If we simply divide the range of quote changes on a timeframe into quantiles and turn them into an alphabet, what do we get? More precisely, do we get a full or only a partial mapping of information from the market alphabet into the quote alphabet? What part of the information is lost? Or perhaps nothing is lost and the market alphabet is simply redundant. And so on.
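One possible reading of the "quote alphabet" idea - splitting the range of quote changes into quantiles and treating the bins as symbols - could look like the sketch below. The number of symbols is an assumption, and nothing here claims that such an alphabet captures the full "market alphabet".

```python
# Hedged sketch: map each bar's return to a symbol by quantile binning.
import numpy as np

def quantize_returns(returns, n_symbols=8):
    """Quantile-bin a return series into integer symbols 0 .. n_symbols-1."""
    x = np.asarray(returns, dtype=float)
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)

# Example (assumed data): symbols = quantize_returns(np.diff(np.log(closes)), n_symbols=8)
```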

 
Mathemat:

It's not really an ACF - or even an ACF at all. And you are wrong about the ACF looking for dependencies of all sorts. Take a look at the Correlation article: the limitations of correlation analysis are described near the end, where the picture is. That's why I gave up on the ACF. The linear correlations between bars detected by Pearson correlation do not interest me: they are too weak and short-lived.


How well correlation has been worked out is its strong side, and yet you have attributed its known limitations to its weak side. But it is precisely these limitations that allow us to reason meaningfully about the quantity called the "ACF": about the confidence that can be placed in it, about the conditions under which that confidence can be computed, and, more generally, to judge whether any reasoning about these quantities is admissible at all, depending on whether the limitations of correlation are satisfied. Even having mastered all of that and armed oneself with the tool, in practice one runs into serious difficulties and constantly goes astray.

Try writing the same about the subject of this thread.

The ACF quite concretely shows trends, and together with the PACF it is used to look for cycles. And what does "information dependence" look for - what kind of beast is it, and how does it show up in quotes or in increments? There are plenty of publications on the psychology of the market where one can find an explanation of how trends and cycles form, but what is the psychological basis of "informational dependence", in which publications is it described, and does it affect the quotes at all? On what grounds can the resulting pictures be trusted? Where are the confidence probabilities of the result? Where are the conditions of applicability of all this? Just questions. This thread reminds me more and more of the thread by hrenfx (if I remember the name correctly), who also held forth cluelessly on the subject of dependencies.

From a dissertation standpoint - purely preliminarily - there are signs of scientific novelty, but without a comparison with correlation it is all idle rubbish (sorry).

 
HideYourRichess:

Just to point out that it follows from the same Wikipedia article that the formula for calculating mutual information can be written as follows: [...]

Why is written "entropy of the merged system" and not "conditional entropy", because in fact the total entropy of the system of two systems may be both independent and conditional. It is clear that if X and Y are uncorrelated and independent, one should count as joint probabilities (entropy addition theorem), and if there is a connection, then as conditional.

I suspected you would point this out. Fortunately, in any case the formulas written through probabilities (rather than entropies) remain the same, regardless of whether anything there depends on anything else or not. So this reasoning adds nothing new.
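The point being made here - that the probability form of the formula stays the same whether or not the variables are dependent - is the expression I(X;Y) = Σ p(x,y) · log[ p(x,y) / (p(x)·p(y)) ], which collapses to zero exactly when p(x,y) = p(x)p(y). A small numerical check on made-up joint tables (purely illustrative, not data from the thread):

```python
# Hedged sketch: mutual information written through probabilities only.
import numpy as np

def mi_from_joint(joint):
    """I(X;Y) in bits from a joint probability (or frequency) table."""
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)     # marginal p(x)
    py = joint.sum(axis=0, keepdims=True)     # marginal p(y)
    mask = joint > 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px @ py)[mask])))

independent = np.outer([0.5, 0.5], [0.25, 0.75])    # p(x,y) = p(x) * p(y)
dependent = np.array([[0.45, 0.05], [0.05, 0.45]])
print(mi_from_joint(independent))   # 0.0
print(mi_from_joint(dependent))     # ~0.53 bits
```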

There is a system X - the market (an alphabet); it has a finite and definite number of states (symbols) which appear with certain frequencies (symbol probabilities). There is a second system Y - the record of quotes. The quotes (an alphabet) also have a limited set of symbols with certain frequencies. What can be deduced from all this?

I draw your attention to the fact that this is no longer the system the topicstarter was considering. I am not so naive as to seriously suggest that the alphabet of the market can be learned. And I try to set realistic goals for myself.
 
faa1947: How well correlation has been worked out is its strength, and yet you have attributed its known limitations to its weak side. But it is precisely these limitations that allow us to reason meaningfully about the quantity called the "ACF": about the confidence that can be placed in it, about the conditions under which that confidence can be computed, and, more generally, to judge whether any reasoning about these quantities is admissible at all, depending on whether the limitations of correlation are satisfied.

Absolutely right. Half of probability theory and mathematical statistics is about the central limit theorems and their implications, which relate specifically to the normal distribution. It is a perfectly 'worked out' distribution. Nevertheless, there are random variables that do not obey it even in the limit. Why should I deal specifically with Pearson correlation just because it is perfectly worked out?

The ACF quite concretely shows trends, and together with the PACF it is used to look for cycles.

Neither cycles nor trends are of interest yet at the data-mining stage. What is of interest are the dependencies that the ACF cannot detect in principle.
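To make the claim about dependencies the ACF cannot see more tangible, here is a toy example on synthetic data (not quotes): y is a nonlinear function of x plus noise, so the two are strongly dependent, yet the Pearson correlation is close to zero, while a frequency-based measure such as the mutual-information sketch above is clearly positive.

```python
# Toy illustration: a dependence invisible to Pearson correlation (and hence the ACF).
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
y = x ** 2 + 0.1 * rng.standard_normal(100_000)   # strongly dependent, but nonlinearly

print(np.corrcoef(x, y)[0, 1])   # close to 0: linear correlation sees "nothing"
# With the earlier hedged helpers (quantize_returns, mutual_information),
# a frequency-based measure does register the dependence, e.g.:
# mutual_information(quantize_returns(x, 8), quantize_returns(y, 8), 8)  # clearly > 0
```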

And what does "information dependence" look for, what is this beast and how does it show up in quotes? Or in increments? There are plenty of publications on the psychology of the market, where one can find an explanation of the formation of trends and cycles, but what is the psychological basis of "informational dependence", in which publications is it written? and does it affect the quotes? On what basis can the resulting pictures be trusted? Where are the probabilities of credibility of the result? Where are the conditions for the applicability of all this? This thread reminds me more and more of the thread with hfenks (if I remember correctly), who also spouted unknowingly about dependencies.

You ask too many questions. Let me ask you one: do you know of even one researcher who, before starting something very new and very strange, would first produce a complete, one-hundred-percent substantiation of the applicability of the new thing - and only then proceed to obtain the results, a hint of which had flashed through his head in a split second? Usually it is the other way round: first the new thing is applied without regard for substantiation and rigour, and then, if something interesting comes out, the substantiation begins. Do you see what I mean?

And by the way, about hrenfx: he also did analysis based on Pearson correlation.

From a dissertation standpoint - purely preliminarily - there are signs of scientific novelty, but without a comparison with correlation it is all idle rubbish (sorry).

No big deal. We are not discussing a dissertation here, just a curious idea from which something may come out in the future. I'm well aware that it may not. Why then waste time on strained justifications?

 
Mathemat:

Absolutely right. Half of probability theory and mathematical statistics is about the central limit theorems and their implications, which relate specifically to the normal distribution. It is a perfectly 'worked out' distribution. Nevertheless, there are random variables that do not obey it even in the limit. Why should I deal specifically with Pearson correlation just because it is perfectly worked out?

Neither cycles nor trends are of interest yet at the data-mining stage. What is of interest are the dependencies that the ACF cannot detect in principle.

You ask too many questions. Let me ask you one: do you know of even one researcher who, before starting something very new and very strange, would first produce a complete, one-hundred-percent substantiation of the applicability of the new thing - and only then proceed to obtain the results, a hint of which had flashed through his head in a split second? Usually it is the other way round: first the new thing is applied without regard for substantiation and rigour, and then, if something interesting comes out, the substantiation begins. Do you see what I mean?

And speaking of hrenfx: he also did an analysis based on Pearson correlation.

No big deal. We are not discussing a dissertation here, just a curious idea from which something may come out in the future. I'm well aware that it may not. Why then waste time on strained justifications?

Why should I deal specifically with Pearson correlation just because it is perfectly worked out?

Because it is of practical value. And with it one even manages to handle non-stationary random processes with unknown distributions.

Usually it is the other way round: first the new thing is applied without regard for substantiation and rigour, and then, if something interesting comes out, the substantiation begins. Do you see what I mean?

No. First the ford is sounded out, and only then everything else. At any scientific council I have attended in my time, a speech like yours would have been your last.

Why then waste time on strained justifications?

Strained ones aren't necessary. But one needs to understand what is being discussed, at the level of a comparison with what already exists.

 
Mathemat:

I suspected you would point that out. Fortunately, in any case the formulas written through probabilities (rather than entropies) remain the same, regardless of whether anything there depends on anything else or not. So this reasoning adds nothing new.

In my opinion - even if it is mistaken - the essence of a formula cannot change, nor can the conditions of its applicability, merely because it is written in different symbols.

Mathemat:
I draw your attention to the fact that this is no longer the system that the topicstarter was considering. I'm not so naive as to seriously talk about learning the alphabet of the market. And I try to set realistic goals for myself.
A more complete system looks like this: market alphabet <-> quote alphabet -> task alphabet. The topicstarter considered only the last pair, quote -> task.