From theory to practice - page 440

 
Olga Shelemey:
The books I have used to fight the market.

Shelepin L.A.: "Modern science is based on the Markovian paradigm. The review highlights the emergence of a new non-Markovian paradigm (theory of processes with memory)."

That's what I was saying) academics are 50 years behind reality)

The whole financial world has long viewed the market as a process with memory, and this is news to them)))

There is simply no point in viewing it otherwise, because it is impossible to make money on a process without memory, for natural reasons.

 
bas:

Shelepin L.A.: "Modern science is based on the Markovian paradigm. The review highlights the emergence of a new non-Markovian paradigm (theory of processes with memory)."

That's what I was saying) academics are 50 years behind reality)

The whole financial world has long viewed the market as a process with memory, and this is news to them)))

There is simply no point in viewing it otherwise, because it is impossible to make money on a process without memory, for natural reasons.

I haven't studied it in detail, but it seems that Shelepin's Markovian process doesn't quite coincide with the generally accepted definition.

With "memory" the main problem is that it is not clear how it (i.e. multivariate process distributions) can be counted in the case of non-stationary processes - there is usually not enough data for that.

One can also make money on an ordinary random walk with drift (a trend), which is quite Markovian.
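A minimal sketch of that point in R, with made-up drift and volatility values: the process is Markovian by construction, yet simply holding the asset earns the drift on average.

set.seed(1)
n  <- 10000
mu <- 0.0002; s <- 0.01              # illustrative drift and volatility per step
price <- cumsum(mu + s * rnorm(n))   # Gaussian random walk with drift: Markovian
mean(diff(price))                    # buy-and-hold P&L per step, close to mu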

 
Aleksey Nikolayev:

The main problem with "memory" is that it is not clear how it (i.e. the multivariate distributions of the process) can be estimated for non-stationary processes - there is usually not enough data for that.

It seems to be the same thing - the dependence between increments. And what exactly do you see as the problem, why isn't there enough data? I, for example, have no problem finding memory)

By the way, memory is expressed much more clearly in volatility - you can start your research with it if you are looking for "something to grab onto". The after-effects of news and other effects are immediately visible there (see the sketch at the end of this post).

One can also make money on an ordinary random walk with drift (a trend), which is quite Markovian.

Of course, but here we are talking about Forex) and it has no drift.
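A sketch of the volatility remark above, in R on simulated data with made-up GARCH(1,1) parameters (illustrating the mechanism, not any particular instrument): raw increments look nearly memoryless, while their absolute values show slowly decaying autocorrelation.

set.seed(42)
n <- 5000
omega <- 0.05; alpha <- 0.1; beta <- 0.85  # illustrative GARCH(1,1) parameters
s2 <- numeric(n); x <- numeric(n)
s2[1] <- omega / (1 - alpha - beta)        # start at the unconditional variance
x[1]  <- sqrt(s2[1]) * rnorm(1)
for (t in 2:n) {
  s2[t] <- omega + alpha * x[t - 1]^2 + beta * s2[t - 1]
  x[t]  <- sqrt(s2[t]) * rnorm(1)
}
acf(x,      lag.max = 30)   # raw increments: autocorrelation near zero
acf(abs(x), lag.max = 30)   # absolute increments: pronounced, slowly decaying "memory"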

 
bas:

It seems to be the same thing - the dependence between increments. And what exactly do you see as the problem, why isn't there enough data? I, for example, have no problem finding memory)

By the way, memory is expressed much more clearly in volatility - you can start your research with it if you are looking for "something to grab onto". The after-effects of news and other effects are immediately visible there.

Of course, but here we are talking about Forex) and it has no drift.

I hope we are talking about the dependence of increments as random variables? In that case we need their joint distribution: for two random variables, their joint 2-dimensional distribution; for three, a 3-dimensional one, and so on. Two-dimensional histograms are sometimes still constructed, but it is unclear how to even represent higher-dimensional ones, and the amount of data needed grows rapidly with dimensionality. Understandably, this is not usually done (though sometimes it has to be done anyway).

But things are much worse here: for each increment (random variable) we have only a sample of size one (the value taken from the price chart). That is why we have to make all sorts of assumptions and simplifications (which are not always true). For example, without the assumption that the increments are stationary, their sample distribution does not converge to the true one. The same holds for the bivariate distribution needed to determine the pairwise dependence of increments (e.g. to compute the covariance function). In short, a non-stationary process without "memory" (independent increments) may well acquire "memory" (dependent increments) if one applies methods that assume stationarity - see the sketch below.

There is, of course, no drift in general. But there may well be sections where it is present (again, non-stationarity).
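A sketch of the stationarity point, in R with a made-up variance regime change: the increments are independent by construction, yet the sample autocorrelation of their absolute values - a tool that implicitly assumes stationarity - reports strong "memory".

set.seed(7)
n <- 5000
sigma <- c(rep(1, n / 2), rep(3, n / 2))  # variance regime change: non-stationary
x <- sigma * rnorm(n)                     # independent increments, no true memory
acf(abs(x), lag.max = 30)                 # strong spurious autocorrelation

A slowly drifting local mean produces the same artefact in the raw increments themselves.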

 

I can't figure out what's wrong. I calculate the density using the formula

expectation = 0, variance = 55, X = 13.

Density = (1/(MathSqrt(variance) * MathSqrt(2 * 3.14159265358979323846)) * MathPow(2.71828182845904523536, - ((X * X)/(2 * Dispersion) ) );

I got Density = 0.01979.

Checking here

https://planetcalc.ru/4986/

Density = 0.01157


Did I get the formula wrong or is there an error on the calculator website?
 
Evgeniy Chumakov:

I can't figure out what's wrong. I calculate the density using the formula

expectation = 0, variance = 55, X = 13.

Density = (1/(MathSqrt(variance) * MathSqrt(2 * 3.14159265358979323846)) * MathPow(2.71828182845904523536, - ((X * X)/(2 * Dispersion) ) );

I got Density = 0.01979.

Checking here

https://planetcalc.ru/4986/

Density = 0.01157


Did I get the formula wrong or is there an error on the calculator website?

In R:

> dnorm(13, 0, sqrt(55))
[1] 0.01157429
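For comparison, the same density written out term by term (the Gaussian pdf with mean 0, variance 55, at x = 13):

x <- 13; v <- 55
exp(-x^2 / (2 * v)) / sqrt(2 * pi * v)   # 0.01157429, agrees with dnorm(13, 0, sqrt(55))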
 
Aleksey Nikolayev:

In R:

> dnorm(13, 0, sqrt(55))
[1] 0.01157429
I can't figure out where my mistake is then...

 
Alexander_K2:
The only person here with the least education who can nevertheless deliver a colossal speech is bas. Sometimes he puts it rather well - apparently when he is asleep, and the epiphany comes to him in his sleep. Sometimes it is interesting to read.
education doesn't give you brains)
Alexander_K2:

Now, the sum of the increments is the price within the moving observation window, with the starting point taken as 0.


The sum of increments is how far the chart has travelled in n seconds.
A high value means the chart has travelled far, a low value means it has travelled little.
Essentially, it is the speed.
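Taken literally, as a sketch in R on a toy series: the sum of increments over a window of n ticks is exactly the displacement of the chart over that window.

set.seed(3)
price <- cumsum(rnorm(1000))    # toy tick series
n     <- 60                     # observation window, "n seconds"
inc   <- diff(price)            # per-tick increments
speed <- sapply(seq_len(length(inc) - n + 1),
                function(t) sum(inc[t:(t + n - 1)]))        # windowed sums of increments
all.equal(speed, price[(n + 1):1000] - price[1:(1000 - n)]) # TRUE: same displacement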
 

double d = 55 , X = 13;

double p = (1/(MathSqrt(d) * MathSqrt(2 * 3.14159265358979323846)) * MathPow(2.71828182845904523536, - ((X * X)/(2 * d) ) );

Print(p);

0.01157429298384641

 
Aleksey Nikolayev:

double d = 55 , X = 13;

double p = (1/(MathSqrt(d) * MathSqrt(2 * 3.14159265358979323846)) * MathPow(2.71828182845904523536, - ((X * X)/(2 * d) ) );

Print(p);

0.01157429298384641


Then I don't understand it - same formula, why is there a different result? NormalizeDouble to 5 digits can't have that much of an effect...
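One candidate explanation, judging only from the code as posted and not confirmed anywhere in the thread: the normalization uses a variable named variance while the exponent uses Dispersion. If those two identifiers ever hold different values, the result moves away from 0.01157. A sketch in R with a hypothetical helper that keeps the two values separate:

# v_norm feeds the normalizing constant, v_exp the exponent, mirroring the
# "variance" / "Dispersion" pair in the posted formula
density <- function(x, v_norm, v_exp)
  exp(-x^2 / (2 * v_exp)) / (sqrt(v_norm) * sqrt(2 * pi))
density(13, 55, 55)    # 0.01157429 - correct when both hold the variance
density(13, 55, 110)   # 0.0249...  - diverges as soon as they differ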
