From theory to practice - page 570

 
Alexander_K:

Well, it's a chi-square, obviously.

It's the variance. I'm wrong, of course - the sum of the moduli of the increments is directly proportional to the variance.

You just have to compute the sum of the increments - we have a normal distribution.
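A quick numeric check of the relation above (my own sketch, not from the thread): for Gaussian increments the mean modulus is σ·√(2/π), so the sum of moduli actually scales with the standard deviation σ rather than with the variance σ². The seed and σ below are arbitrary choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0
increments = rng.normal(0.0, sigma, size=1_000_000)

mean_abs = np.abs(increments).mean()          # empirical mean of |increment|
theory = sigma * np.sqrt(2.0 / np.pi)         # E|X| for X ~ N(0, sigma^2)
print(round(mean_abs, 4), round(theory, 4))   # both close to 1.596
```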

But again, what do we do with it? Work with the levels in a weekly window? I'd rather quit the market altogether - I'm bored, and I don't need the money.

Nothing else besides neural networks comes to mind...

Something is wrong here: the chi-squared falls off faster than the exponential (geometric, p=0.5). And we do this: take a Laplace distribution and add the left exponential onto the right (fold it about zero) - you get an exponential again. I have a formula there; in places it fits a little closer, in places further off, but there is a dip at zero. It's not clear.
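The folding claim above can be checked numerically. This is my own sketch: for X ~ Laplace(0, b), the folded variable |X| is Exponential with scale b, so its quantiles should match the exponential ones (the median of Exponential(b) is b·ln 2). Seed, sample size, and b are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
b = 1.0
x = rng.laplace(0.0, b, size=500_000)
folded = np.abs(x)                       # fold the left half onto the right

# Compare sample quantiles of |Laplace| with exact Exponential(b) quantiles.
median_hat = np.quantile(folded, 0.5)
q90_hat = np.quantile(folded, 0.9)
print(round(median_hat, 3), round(b * np.log(2), 3))    # ~0.693 vs 0.693
print(round(q90_hat, 3), round(b * np.log(10), 3))      # ~2.303 vs 2.303
```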

 
Novaja:

Something is wrong here: the chi-squared falls off faster than the exponential (geometric, p=0.5). And we do this: take a Laplace distribution and add the left exponential onto the right (fold it about zero) - you get an exponential again. I have a formula there; in places it fits a little closer, in places further off, only there is a dip at zero. It's not clear.

On Eugene's data, a practically normal distribution is clearly visible - for the sum of increments, for the sums of moduli of increments, and for the increments of sums of increments.

In general for everything.

There is no sense in investigating further. Everything has been found - both Laplace and Gauss. But all of this is in large time windows, with large time intervals between quotes. I'm pretty sure that in Erlang flows of high order (k >= 300) everything is perfect in general.

And... Nothing. Emptiness. For a trader, waiting for one trade a month is unthinkable - you could go crazy.

 

It is clear that one can get from an exponential to a normal distribution: in a Pearson (chi-squared) distribution as the degrees of freedom increase, in an Erlang (Gamma) distribution as k increases, by spreading the points out, or by transforming a uniform distribution - on one side of the uniform there is an exponential, on the other a normal distribution. All of these are random processes. The question is how to find deviations from randomness.

PS Alexander, once again Doc was right about your data provider: there, both the time intervals and the increments themselves deviate from the exponential. You and Doc confirmed the logarithmic distribution - it decreases more slowly than the exponential throughout the whole distribution, not only in the tails. Try feeding clean values - read evenly, every second, as you did there - to a neural network; let someone help you; see the result. You have an exception to the rule, which, by the way, confirms the rule: it will mostly be exponential and Cauchy, and the difference is only in the tails of the distributions, where the gaps usually are.
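The Erlang-to-normal route mentioned above can be illustrated numerically. A sketch of mine: an Erlang/Gamma(k) variable is a sum of k exponentials, so by the central limit theorem it approaches a normal distribution as k grows - its skewness is 2/√k and shrinks toward zero. The values of k, the seed, and the sample size are my own choices.

```python
import numpy as np

rng = np.random.default_rng(2)
skews = {}
for k in (1, 10, 300):
    g = rng.gamma(shape=k, scale=1.0, size=200_000)  # sum of k exponentials
    z = (g - k) / np.sqrt(k)            # standardize: Gamma(k) has mean k, var k
    skews[k] = float(np.mean(z**3))     # theoretical skewness is 2/sqrt(k)
print(skews)                            # roughly {1: ~2.0, 10: ~0.63, 300: ~0.12}
```

By k = 300 the skewness is already close to zero, which matches the "Erlang flows of high order" remark above.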

 
Novaja:

It is clear that one can get from an exponential to a normal distribution: in a Pearson (chi-squared) distribution as the degrees of freedom increase, in an Erlang (Gamma) distribution as k increases, by spreading the points out, or by transforming a uniform distribution - on one side of the uniform there is an exponential, on the other a normal distribution. All of these are random processes. The question is how to find deviations from randomness.

PS Alexander, once again Doc was right about your data provider: there, both the time intervals and the increments themselves deviate from the exponential. You and Doc confirmed the logarithmic distribution - it decreases more slowly than the exponential throughout the whole distribution, not only in the tails. Try feeding clean values - read evenly, every second, as you did there - to a neural network; let someone help you; see the result. You have an exception to the rule, which, by the way, confirms the rule: it will mostly be exponential and Cauchy, and the difference is only in the tails of the distributions, where there may be gaps.

I'll flounder with levels for a month and then move on to neural networks. I don't see any other options... Working with known distributions is nice, of course, but they are hidden very deep... Alas...

 
Alexander_K:

A histogram! If there is a normal distribution - grail; if not - well, nothing changes...

Uncles, until you forget all about your distributions, there will be nothing but dust in your pockets)) I'm willing to bet $100 on this )))

And another $100 that it'll take another 500 pages for this to get through to you)

You have a time series, not just statistics. Change the textbook.
 
Novaja:

This is a confirmation: I tried to fit the exponential to your data - you can see that the peak is smaller and the middle doesn't fit. Check it yourself, or send me more data - intervals and increments - and I will try to fit the exponential.

No, you don't have to. Everything is already clear and understandable to me.

Either just let my TS work without additional conditions - ACF, Hurst, asymmetry and other stuff (I'm afraid it will either be +0% or fail),

Or feed the sums of increments and of their moduli to the inputs of a NS (the process is practically Gaussian, with a non-delta-correlated ACF) and let it rake in the cash.

Amen.

P.S. I don't believe in ZigZags, sorry :)
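The exponential fit mentioned earlier in this exchange can be sketched in a few lines. This is my own minimal version with synthetic data standing in for the real intervals: the maximum-likelihood scale of an exponential is just the sample mean, and comparing empirical vs fitted survival probabilities shows where the middle does or doesn't fit.

```python
import numpy as np

rng = np.random.default_rng(3)
intervals = rng.exponential(scale=2.5, size=100_000)   # stand-in data

scale_hat = intervals.mean()            # MLE of the exponential scale
for t in (1.0, 2.5, 5.0):
    empirical = (intervals > t).mean()          # empirical survival P(T > t)
    fitted = np.exp(-t / scale_hat)             # fitted exponential survival
    print(t, round(empirical, 3), round(fitted, 3))
```

On real quote data the empirical and fitted columns would diverge in the middle of the distribution if the exponential is the wrong model, as described above.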

 
Alexander_K:

and then move on to neural networks. I don't see any other options...

Hmmm, I don't like the phrase I read on the internet, but I'll say it: POPCORN! POPCORN!

 
secret:

Uncles, until you forget all about your distributions, there will be nothing but dust in your pockets)) I'm willing to bet $100 on this )))

And another $100 that it won't get through to you for another 500 pages)

Why just $100? Why not 1k? 10k? Do you still have doubts? What about the secret? What about the screenshot in your profile?
 
Evgeniy Chumakov:
Why only $100? And not 1k? 10k? Do you still have doubts? What about the secret? What about the profile picture?

I don't want to ruin the losers too much)

 
Alexander! Build a histogram from this file, I think it would be interesting. Although who knows.
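Since the file itself isn't available here, a generic version of the requested histogram check (my own sketch): bin the increments with numpy and compare the bin densities against a normal curve fitted from the sample mean and standard deviation. The stand-in data and all parameters below are assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(4)
increments = rng.normal(0.0, 1.0, size=50_000)   # stand-in for the file data

counts, edges = np.histogram(increments, bins=50, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
mu, s = increments.mean(), increments.std()
pdf = np.exp(-0.5 * ((centers - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

# Worst absolute gap between the histogram and the fitted normal density;
# small values mean the data look practically Gaussian.
print(round(float(np.max(np.abs(counts - pdf))), 3))
```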