How can I tell the difference between a FOREX chart and a PRNG?

 
Mathemat:

Strange to hear you say that. Do you really believe that ranking doesn't take absolute values into account in any way?

The main requirement for non-parametric methods is robustness to "noise" and to the shape of the distribution (especially fat tails). This is achieved at a slight sacrifice of accuracy, an accuracy that is often illusory and misleading anyway.

How that accounting is done depends on the chosen rank statistic (some functional), so on small samples the Spearman, Kendall and Hoeffding coefficients will give unequal values. So which one to use? Depending on the properties of the value-generating system, such as the type and order of the trend-carrier function, one measure or another will do better. Yes, a non-parametric method can estimate the correlation coefficient (CC) approximately, but is that necessary if the type of this correlation is unknown? A non-parametric CC is non-parametric in the sense that the measures chosen to compute it are insensitive only to monotonic transformations of the observations, and that is not always the case in the market either. A random walk with drift often produces abrupt, non-monotonic rank transformations.

In contrast, the linear CC gives a value whose applicability is clear.
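(A minimal sketch of the small-sample point, added for illustration: the same short series scored with different measures gives noticeably different values. Hoeffding's D isn't in SciPy, so only Spearman, Kendall and Pearson are compared; the sample size and noise level are arbitrary choices, not from the original post.)

```python
import numpy as np
from scipy import stats

# Sketch: on a small sample, different correlation measures disagree.
# Sample size (15) and noise level (0.8) are arbitrary illustrative choices.
rng = np.random.default_rng(1)
x = rng.standard_normal(15)
y = x + 0.8 * rng.standard_normal(15)   # noisy, roughly monotone dependence

rho, _ = stats.spearmanr(x, y)    # rank CC, insensitive to monotonic transforms
tau, _ = stats.kendalltau(x, y)   # another rank CC, a different functional
r, _ = stats.pearsonr(x, y)       # linear CC, uses the absolute values
print(f"Spearman: {rho:.3f}  Kendall: {tau:.3f}  Pearson: {r:.3f}")
```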

Alexey, let's define and distinguish between long tails and fat tails, because they are mutually inverse. According to my research, there are no distributions with long tails in the market.

 
-Aleksey-: Alexey, let's define and distinguish between long tails and fat tails, because they are mutually inverse. According to my research, there are no distributions with long tails on the market.
Googled:

A frequency distribution with a long tail has been studied by statisticians since at least 1946.[8] The term has also been used in the finance[9] and insurance business[4] for many years (also referred to as fat tail, heavy tail or right-tail[10]).

I can't tell the difference. Show me where I'm wrong.

Yes, a non-parametric method can estimate the CC approximately, but is that necessary if the type of this correlation is unknown?

No one is saying that non-parametric methods solve all problems. But their estimates are often more adequate than parametric ones, precisely when the type of correlation is unknown.

According to my research, there are no distributions with long tails in the market.
Take a look at the distribution of returns. It is approximated quite accurately by an exponential law, i.e. a law with fat tails.
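(A sketch of how one might run that comparison. The "returns" here are synthetic Laplace draws standing in for real market returns, so the outcome is by construction; substitute actual close-to-close returns to test the claim.)

```python
import numpy as np
from scipy import stats

# Compare how a Gaussian and a double-exponential (Laplace) law fit a return
# series. NOTE: 'returns' below is a synthetic placeholder, not market data.
rng = np.random.default_rng(2)
returns = rng.laplace(scale=0.01, size=2000)   # stand-in for daily returns

ll_norm = stats.norm.logpdf(returns, *stats.norm.fit(returns)).sum()
ll_lap = stats.laplace.logpdf(returns, *stats.laplace.fit(returns)).sum()
print(f"log-likelihood: normal {ll_norm:.1f}, Laplace {ll_lap:.1f}")
```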
 

If the tail is long, it is thin. The exceptions are the triangular distribution and similar ones (trapezoids). And vice versa. And if you call long thin tails fat, it gets confusing, since fat ones are more likely to be short. This is imho, not from googling.

The whole issue here is what a distribution actually is. Classical theory doesn't allow you to define this concept unambiguously (moreover, it doesn't even allow you to construct it), so I don't use it. My approach is the evolution of a quasi-stationary distribution in some space that defines the measure of error.

 
-Aleksey-: The whole issue here is what a distribution actually is. Classical theory doesn't allow you to define this concept unambiguously (moreover, it doesn't even allow you to construct it), so I don't use it. My approach is the evolution of a quasi-stationary distribution in some space that defines the measure of error.
I'm not that good at the subtleties. We were talking about something else: non-parametric methods, and the fact that they often turn out to be more adequate than parametric ones, especially when the distribution is unknown. Not more accurate, but more adequate.
 
Mathemat:
I'm not that good at the subtleties. We were talking about something else: non-parametric methods.
And what is there to say about them? All these coefficients have different sensitivity to non-monotonic rank transformations, so they show different things. We can come up with lots of them. But it is not known which one to choose when the type of correlation is unknown.
 
faa1947:
Isn't AlexEro right about Matlab? It's a holy thing, shining in the sky, paid for, mad dough...

It's not Mathcad's fault; I've already written above why the decay happens.

Once again, AlexEro: the decay comes from the fact that you actually compute lcorr not from cos(w*i) (a function extending infinitely in both directions along the number axis), but from cos(w*i)*[h(i) - h(i-100)], where h(t) is the Heaviside function (unit step). The simple way to check: the more samples of the sinusoid you take, the smaller the decay will be. The complicated way to check: substitute the expression above into the formula for lcorr explicitly and you get a triangle.
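(A sketch of the simple check, assuming a standard biased estimator in place of lcorr, whose exact definition isn't given here: the rectangular observation window leaves a triangular imprint on the biased estimate, while the overlap-normalized one shows no artificial decay.)

```python
import numpy as np

# A finite sinusoid: N samples of cos(w*i), i.e. cos(w*i) times a rectangular
# window h(i) - h(i-N). N and w are illustrative, not from the original post.
N = 100
w = 2 * np.pi * 5 / N          # five full periods inside the window
x = np.cos(w * np.arange(N))

def acorr_biased(x):
    """Divide every lag by N: the window's triangular imprint remains."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(n)])

def acorr_unbiased(x):
    """Divide each lag by the actual overlap (n - k): no artificial decay."""
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n)])

b, u = acorr_biased(x), acorr_unbiased(x)
print(b[0], b[N // 2])   # biased: amplitude shrinks roughly linearly with lag
print(u[0], u[N // 2])   # unbiased: amplitude stays roughly constant
```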

 
-Aleksey-:

If the tail is long, it is thin.


In probability theory it's just the opposite, not like in zoology: if the tail is long, then it's fat.) It's all about normalizing the area under the graph to 1, i.e. the "tail" pumps some of the probability out of the central area. In general, "fat" (or "long", if you like) means different things depending on the context: it may mean distributions decreasing more slowly than the Gaussian, or distributions with infinite variance, etc.
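(To make "decreasing more slowly than the Gaussian" concrete, a tiny sketch; the Student-t with 3 degrees of freedom is just one convenient fat-tailed example, not anything from the thread.)

```python
from scipy import stats

# "Fat tail" in the probability-theory sense: more mass far from the center
# than a Gaussian of the same scale. Student-t(3) has finite variance but
# fat tails; the gap versus the normal widens rapidly with the threshold k.
for k in (2, 4, 6):
    p_norm = 2 * stats.norm.sf(k)     # P(|X| > k), standard normal
    p_t3 = 2 * stats.t.sf(k, df=3)    # P(|X| > k), Student-t with df=3
    print(f"k={k}: normal {p_norm:.2e}, t(3) {p_t3:.2e}")
```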

 
alsu:

It's not Mathcad's fault; I've already written above why the decay happens.

Once again, AlexEro: the decay comes from the fact that you actually compute lcorr not from cos(w*i) (a function extending infinitely in both directions along the number axis), but from cos(w*i)*[h(i) - h(i-100)], where h(t) is the Heaviside function (unit step). The simple way to check: the more samples of the sinusoid you take, the smaller the decay will be. The complicated way to check: substitute the expression above into the formula for lcorr explicitly and you get a triangle.

(in the tired voice of Professor Preobrazhensky)

"Excuse me, who was standing on who?"


Excuse me, where exactly am I "counting with a Heaviside window"? Show me, please, point it out.

Fuck, I'm turning into Allochka here. This is some kind of conspiracy, sabotage.

I don't care how Matlab counts,

I don't care how physicists program in Fortran,

I don't care what's in the head of an outsourced Matlab programmer,

I don't care what's in the mind of a stoned Indian taskmaster at Matlab, or that he thinks this is the "correct" way to program autocorrelation, and that for the sake of his stoned "correctness" the missing samples at the end of the sampling interval must be "compensated" with a Heaviside window EVERY TIME, which damps the entire autocorrelation.

I don't care about that. I don't use Matlab, never have, and don't intend to. The Matlab plots I cited are Privalov's; I gave a link to them there as well.

I just don't understand how you can twist the discussion this way. It's not a discussion, it's Soviet demagoguery. I'm talking about the definition of autocorrelation and the meaning of this concept; I show the theoretical foundations and simple rules for checking the correctness of any autocorrelation algorithm; I show that in Matlab and in Privalov's plots the damping of the autocorrelation starts right from the first lag, and in response I'm told that it is MY fault, because I "count with a Heaviside window". I'm being blamed for their own doing!


Fucking hell, is there even one person here who understands what I'm talking about? Hello-o!

 
AlexEro:

Fucking hell, is there even one person here who understands what I'm talking about? Hello-o!

There is. There you go, you promised!

P.S. Why don't you go to the "What's an INDICATOR" thread? Maybe in a year you'll write something sensible...

 

While Alex is thinking about what an INDICATOR is, a question for everyone: there are two samples, SILVER and GOLD. Daily data, 420 observations.

Spearman's CC is 0.52: the rank correlation coefficient is statistically significant, and the rank relationship between the two series is real.

Pearson's CC is 0.64.

So? Direct correlation. What's the practical conclusion?
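(A sketch for anyone who wants to reproduce the numbers; the two series below are synthetic stand-ins sharing a common driver, since the 420 daily observations aren't attached. Swap in the actual SILVER and GOLD data to get the 0.52 / 0.64 figures.)

```python
import numpy as np
from scipy import stats

# Two correlated random-walk price series as placeholders for silver and gold.
rng = np.random.default_rng(0)
common = rng.standard_normal(420)
gold = np.cumsum(common + 0.5 * rng.standard_normal(420))
silver = np.cumsum(common + 0.5 * rng.standard_normal(420))

rho, _ = stats.spearmanr(gold, silver)   # rank (Spearman) CC
r, _ = stats.pearsonr(gold, silver)      # linear (Pearson) CC
print(f"Spearman CC = {rho:.2f}, Pearson CC = {r:.2f}")
```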
