From theory to practice - page 577

 
Alexander_K:

1. Speed = Sum of increments (Absolute) / Time

2. Lambda = Sum of increments (Absolute) / Number of real quotes

3. Time = observation window

4. Standard deviation D = Sqrt(Speed * Lambda * Time)

5. My graph plots Expectation +- Standard deviation * Quantile.
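A minimal Python sketch of quantities 1-5 above, assuming `increments` is a list of signed tick-to-tick price increments over the observation window (function and variable names are my own, not Alexander_K's code):

```python
import math

def channel(increments, time_window, quantile=1.96):
    """Channel bounds from the five quantities above (illustrative sketch)."""
    s = sum(abs(x) for x in increments)        # sum of increments (absolute)
    n = len(increments)                        # number of real quotes
    speed = s / time_window                    # 1. Speed
    lam = s / n                                # 2. Lambda
    d = math.sqrt(speed * lam * time_window)   # 4. Standard deviation D
    mean = sum(increments) / n                 # "expectation" (my assumption)
    return mean - quantile * d, mean + quantile * d   # 5. channel bounds
```

For example, `channel([0.0001] * 100, 100)` gives bounds of roughly 0.0001 - 0.00196 and 0.0001 + 0.00196.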

About the cube root...

I tried computing the standard deviation as SUM(ABS(returns))/Power(N, 0.3333333) or even SUM(ABS(returns))/Power(N, 0.4) instead of SUM(ABS(returns))/Power(N, 0.5).

It seems to work better, but I'm not sure yet. I need to look and read more...
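The exponent experiment above can be expressed as a one-liner; exponent 0.5 is the classical sqrt(N) scaling, while 0.3333333 and 0.4 are the experimental values tried in the post (Python sketch, my own naming):

```python
def scaled_deviation(returns, exponent=0.5):
    """SUM(ABS(returns)) / Power(N, exponent).
    exponent=0.5 is the classical sqrt(N) scaling; 0.3333333 and 0.4
    are the experimental values from the post."""
    n = len(returns)
    return sum(abs(r) for r in returns) / n ** exponent
```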

So it turns out:

Standard deviation D = Sqrt( (sum of increments * sum of increments * time) / (time * number of real quotes) )

Total: sum of increments / Sqrt(number of real quotes); time drops out here altogether, right?
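A quick numeric check of the cancellation, using the definitions from the post above (a sketch, not the author's code):

```python
import math

def d_formula(s, n, t):
    """D = Sqrt(Speed * Lambda * Time) = Sqrt((s / t) * (s / n) * t)."""
    return math.sqrt((s / t) * (s / n) * t)

s, n = 0.25, 400
# Any window length t gives the same D, which equals s / sqrt(n):
assert abs(d_formula(s, n, 10) - s / math.sqrt(n)) < 1e-12
assert abs(d_formula(s, n, 3600) - s / math.sqrt(n)) < 1e-12
```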

 
Novaja:

Total results: Standard deviation D = Sqrt( (sum of increments * sum of increments * time) / (time * number of real quotes) )

Total: sum of increments / Sqrt(number of real quotes); time drops out here altogether, right?

If you mean the average speed over time = the observation window, then yes; if the instantaneous speed, say per tick, then no...

 
Vizard_:

Ask Zhenya or someone else to do it.

1. Indicator + envelope.
2. Equity (2x spread).

Plug in the formula, look at the slice, etc.
Then you can add to it, heh...


Well, yes - the sum of the increments in a sliding window, if I understand correctly. A nice indicator, but it doesn't help much with trends. Cool, but not great... I'm confused.

I'm reading a book right now. It's the best I've found in a while. There's a lot in it that we've talked about in this thread + neural networks.

I'm posting it here again.

 
Novaja:

Total results: Standard deviation D = Sqrt( (sum of increments * sum of increments * time) / (time * number of real quotes) )

Total: sum of increments / Sqrt(number of real quotes); time drops out here altogether, right?

It's funny. If you also take a 4-digit quote feed, then the "sum of increments (Absolute)" equals the product 0.0001 * N, where N is the "number of real quotes", and

Standard deviation D = 0.0001 * N / N^0.5 = 0.0001 * N^0.5 = Sqrt(N) / 10^4. So the whole trick is in what is taken as the real quotes, and how they are thinned.
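This arithmetic can be checked directly; the assumption (Novaja's, made explicit here) is that every tick on a 4-digit feed moves exactly one pip:

```python
import math

N = 10_000                    # number of real quotes
s = 0.0001 * N                # sum of increments (absolute): one pip per tick
D = s / math.sqrt(N)          # standard deviation from the formula above
assert abs(D - 0.0001 * math.sqrt(N)) < 1e-12   # i.e. D = Sqrt(N) / 10^4
```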

 
Vladimir:

It's funny. If you also take a 4-digit quote feed, then the "sum of increments (Absolute)" equals the product 0.0001 * N, where N is the "number of real quotes", and

Standard deviation D = 0.0001 * N / N^0.5 = 0.0001 * N^0.5 = Sqrt(N) / 10^4. So the whole trick is in what is taken as the real quotes, and how they are thinned.

The topic is interesting, but has nothing to do with the market.

Some strange pictures, names and terms, nothing to do with the market.

There are no increments, this is not the area in which to look for them.

 
Alexander_K:

...

I tried computing the standard deviation as SUM(ABS(returns))/Power(N, 0.3333333) or even SUM(ABS(returns))/Power(N, 0.4) instead of SUM(ABS(returns))/Power(N, 0.5).

It seems to work better, but I'm not sure yet. I'll have to look and read more...

It seems to me that if you take ABS(returns)^4 instead of ABS(returns), there will be a sharper reaction to deviations from the average, and deviations are what you seem to be hunting.
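Vladimir's suggestion is easy to illustrate: raising |returns| to the fourth power makes a single outlier dominate the sum, so the measure reacts far more sharply to deviations (a sketch; names and sample data are mine):

```python
def weighted_abs(returns, power=4):
    """Sum of ABS(returns)^power; higher powers emphasize large deviations."""
    return sum(abs(r) ** power for r in returns)

quiet = [0.0001] * 100
spike = [0.0001] * 99 + [0.001]    # one 10x outlier

# With power=1 the outlier barely changes the sum; with power=4 it dominates.
assert weighted_abs(spike, power=1) / weighted_abs(quiet, power=1) < 2
assert weighted_abs(spike, power=4) / weighted_abs(quiet, power=4) > 50
```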

 
Vitaly Muzichenko:

The topic is interesting, but has nothing to do with the market.

Some strange pictures, names and terms, nothing to do with the market. There are no increments, this is not an area in which to look for them.

You have to read from Chapter 2 onwards. And the forms of distributions are the same as in Forex. And the neural network determines parameters of these distributions. All in all, a good book.

 
Vizard_:

Make a proper tool. On the fly: plug in the formula and look at the slice.
I want it to take in a few million observations and have a scalable graph.
That's what I'm talking about...

Understood. Roger that.

 
Vladimir:

It seems to me that if you take ABS(returns)^4 instead of ABS(returns), there would be a sharper reaction to deviations from the average, and you seem to be catching them.

:)

 
Maxim Dmitrievsky:
Damn, it's so interesting, I could sit around all day long... but it doesn't bring any money)

:))) Shocked myself... Something's missing...

But Wizard stubbornly advises me to finish one subject first. Well, if it has to be done, it will be done. I'm afraid it can't be done without a neural network. In the book (see above), the authors use a neural network to determine the parameters of the current distribution "on the fly". Perhaps this is what is missing...

After all, the quantile is always a constant, derived from the quantile function of the Gaussian distribution, and that is a very crude approximation. The quantile should also be calculated dynamically. This is where a neural network could come in very handy.
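One simple way to make the quantile dynamic is to replace the fixed Gaussian constant (e.g. 1.96) with an empirical quantile over a sliding window. A Python sketch with illustrative names and window size (the book's neural-network approach would go further than this):

```python
from collections import deque

def rolling_quantile(values, q=0.975, window=500):
    """Empirical q-quantile over a sliding window, as a stand-in for the
    fixed Gaussian quantile constant (illustrative sketch)."""
    buf = deque(maxlen=window)       # keeps only the last `window` values
    out = []
    for v in values:
        buf.append(v)
        xs = sorted(buf)             # O(w log w) per step; fine for a sketch
        k = min(len(xs) - 1, int(q * len(xs)))
        out.append(xs[k])
    return out
```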
