I would like to share the link

 

My point is different. It is simply not worth spending a lot of time disproving the EMH - there is nothing to catch there anyway. Yes, there are fat tails; yes, the reason is that the market reacts to bundles of information rather than to individual news items. Yes, this is now scientifically established. But the market is as non-stationary as ever, and it has not become any easier to make money from it.

P.S. Hehe, a few more articles like this and you'll get into the ideas of fractal statistics; causality is one of the cornerstones there.

 
C-4: ... you'll get into the ideas of fractal statistics; causality is one of the cornerstones there.

I am familiar with it. I just find it underdeveloped compared to other methods.

It is simply not worth spending a lot of time disproving the EMH - there is nothing to catch there anyway.

I'm not interested in proving anything. The idea is completely different. The market is non-stationary. That is a given; it cannot be changed. But that does not mean we should close our eyes and hope for luck. The usual scientific approach is to bite off the piece that we understand and can actually chew.

 

faa1947: fat tails are the result of memory in the quote series.

This is a known fact.

And why do we need memory in the form of obscure tails, if we have unlimited access (memory) to the past data itself?

If only the tails showed the future behaviour of the quotes, that would be invaluable information, because we trade in the future, not in the past.

 
LeoV:

This is a known fact.

And why do we need memory in the form of obscure tails, if we have unlimited access (memory) to the past data itself?

If only the tails showed the future behaviour of the quotes, that would be invaluable information, because we trade in the future, not in the past.

Yeah, the hell you do. Just grabbing at everything.

I saw an article the other day that uses changes in the distribution law to make predictions. That is some unconventional thinking.

 

I'll share.

About the tails - there is one delightful result. Let me explain the methodology of the calculation.

We all know how the first differences of a currency series are roughly distributed (something like exp(-a|x|), or thereabouts). I set out to determine which parts of this distribution are, so to speak, the "true carriers of external information". The procedure is this: compute the RMS of the returns over some large time interval, and for each quote calculate the likelihood ratio of its belonging to a Laplace distribution versus a normal distribution with the same variance. I will not dwell on how to compute it; Wikipedia covers it.
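To make the procedure concrete, here is a minimal sketch of how such a per-bar log-likelihood ratio could be computed; the 20,000-bar rolling window and the pandas-based implementation are my own assumptions, not code from the original study.

```python
import numpy as np
import pandas as pd

def laplace_vs_normal_llr(returns: pd.Series, window: int = 20000) -> pd.Series:
    """Per-bar log-likelihood ratio ln[f_Laplace(x) / f_Normal(x)], with both
    densities matched to the same rolling variance of the returns."""
    x = returns.astype(float)
    # RMS of the returns over a large trailing window, as described in the post
    sigma = np.sqrt((x ** 2).rolling(window).mean())
    b = sigma / np.sqrt(2.0)  # Laplace scale: variance of Laplace(b) is 2*b^2 = sigma^2
    log_laplace = -np.log(2.0 * b) - x.abs() / b
    log_normal = -0.5 * np.log(2.0 * np.pi * sigma ** 2) - x ** 2 / (2.0 * sigma ** 2)
    return log_laplace - log_normal  # equals 0.5*ln(pi) at x = 0
```

The first window-1 values are undefined until the rolling RMS becomes available; a histogram of the resulting series is presumably what produces the picture discussed below.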

Interesting things emerge when we plot the distribution of the likelihood ratio itself (or rather, of its logarithm):


In the figure it is clipped on the right at 2, but the tail theoretically extends to infinity. The whole picture is essentially a sharp cliff at the value 1/2*ln(pi). It turns out that a small fraction of quotes has a sharply different likelihood of belonging to the Laplace distribution - a distribution with heavier tails than the Gaussian. And these quotes are computable.
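For what it's worth, 1/2*ln(pi) is exactly the value the log-ratio takes at x = 0 when the two densities share the same variance, which is presumably why the cliff sits there:

```latex
\[
f_L(x)=\frac{1}{2b}\,e^{-|x|/b},\qquad
\operatorname{Var}=2b^{2}=\sigma^{2}\ \Rightarrow\ b=\frac{\sigma}{\sqrt{2}},\qquad
f_N(x)=\frac{1}{\sigma\sqrt{2\pi}}\,e^{-x^{2}/(2\sigma^{2})},
\]
\[
\ln\frac{f_L(0)}{f_N(0)}
=\ln\frac{\sigma\sqrt{2\pi}}{2b}
=\ln\sqrt{\pi}
=\tfrac{1}{2}\ln\pi\approx 0.572 .
\]
```

Only returns further than 2*sqrt(2) sigma from zero push the log-ratio back above this level, which would explain why values to the right of the cliff are rare and mark the heavy-tail events.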

It seems possible to build an effective trend/flat analyser on this basis and to check the criterion already on the current bar. Or, at the very least, to identify disasters effectively and react to them quickly.

 
alsu:

I'll share.

About the tails - there is one fascinating result. Let me explain the methodology of the calculations.

We all know how the first differences of a currency series are roughly distributed (something like exp(-a|x|), or thereabouts). I set out to determine which parts of this distribution are, so to speak, the "true carriers of external information". The procedure is this: compute the RMS of the returns over some large time interval, and for each quote calculate the likelihood ratio of its belonging to a Laplace distribution versus a normal distribution with the same variance. I will not dwell on how to compute it; Wikipedia covers it.

Interesting things emerge when we plot the distribution of the likelihood ratio itself (or rather, of its logarithm):


In the figure it is clipped on the right at 2, but the tail theoretically extends to infinity. The whole picture is essentially a sharp cliff at the value 1/2*ln(pi). It turns out that a small fraction of quotes has a sharply different likelihood of belonging to the Laplace distribution - a distribution with heavier tails than the Gaussian. And these quotes are computable.

It seems possible to build an effective trend/flat analyser on this basis and to check the criterion already on the current bar. Or, at the very least, to identify disasters effectively and react to them quickly.

Very interesting.

When we talk about a distribution, we base it on a fairly large number of observations. On the graph I see the figure 20,000. I agree that with that many observations we can draw conclusions about the distribution law. But what interests us is the bar that follows the current one - and the larger the number of observations, the more "averaged" any conclusion about that last bar becomes.

There is the curious number 30: with fewer than 30 observations we are supposed to use the t-statistic, and with more than 30 the z-statistic, provided the sample comes from a normal population.

So the question is this: can a pattern identified on a large sample be applied to a small one, on the assumption that the small sample belongs to the large one?

 
By the way, I made a selection of the tails from the link above.
Files:
tail.zip  19 kb
 
faa1947:

Very interesting.

When we talk about a distribution, we base it on a fairly large number of observations. On the graph I see the figure 20,000. I agree that with that many observations we can draw conclusions about the distribution law. But what interests us is the bar that follows the current one - and the larger the number of observations, the more "averaged" any conclusion about that last bar becomes.

There is the curious number 30: with fewer than 30 observations we are supposed to use the t-statistic, and with more than 30 the z-statistic, provided the sample comes from a normal population.

So the question is this: can a pattern identified on a large sample be applied to a small one, on the assumption that the small sample belongs to the large one?

The nature of the distribution does not change. Incidentally, the study itself started from the fact that the strange behaviour of the likelihood ratio is noticeable, one might say, to the naked eye:


 
By the way, along the way I found a rather interesting application of this. If the task is to analyse some "sliding" (rolling) characteristics of a series, then dropping the bars with an anomalous LR from consideration makes the results of the analysis much smoother. This makes it possible to estimate model parameters more accurately, with less concern about external shocks.
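A minimal sketch of that filtering idea, reusing the laplace_vs_normal_llr() helper from the earlier snippet; the threshold and the rolling-mean example are placeholder choices of mine, not the original setup:

```python
import numpy as np
import pandas as pd

def rolling_mean_without_anomalies(returns: pd.Series,
                                   llr: pd.Series,
                                   threshold: float = 0.5 * np.log(np.pi),
                                   window: int = 200) -> pd.Series:
    """Rolling mean of the returns with anomalous-LR bars masked out, so that
    isolated heavy-tail events do not drag the sliding estimate around."""
    masked = returns.where(llr <= threshold)              # anomalous bars become NaN
    return masked.rolling(window, min_periods=1).mean()   # NaNs in the window are skipped
```

Comparing this with a plain returns.rolling(window).mean() should reproduce the smoothing effect described above, provided the threshold is placed just to the right of the cliff.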
 
alsu: It turns out that a small fraction of the quotes has a sharply different likelihood of belonging to a Laplace distribution - a distribution with heavier tails than the Gaussian.
This suggests that some sort of pattern exists - not always and not everywhere, which is understandable - and it can accordingly be used in trading.