From theory to practice - page 106

 
Nikolay Demko:

I'm not talking about the distribution but about the process itself - it's random, there's definitely no pattern there.

Ehhh... Geez, Nikolai, you're answering your own questions. We're trying to find the pattern and reduce all this ticky-tacky stuff to a simple flow. What is left after such a conversion will not be the simplest thing, of course, but it is much easier to study. It is a kind of filter, a very good tick-flow filter.
 
Alexander_K2:
Ehhh... Geez, Nikolai, you're answering your own questions. We're trying to find the pattern and reduce all this ticky-tacky stuff to a simple flow. What is left after such a conversion will not be the simplest thing, of course, but it is much easier to study. It is a kind of filter, a very good tick-flow filter.

So you're saying that if we measure a series with a hundred such filters, the randomness will cancel out and a pattern will emerge?

 
Nikolay Demko:

So you're saying that if we measure a series with a hundred such filters, the randomness will cancel out and a pattern will emerge?

Wow! Nikolai, you're thinking even more abstractly than I am... Well, try it... I'd better get out of here; I'm not comfortable with thinking on this scale...

Where's Vladimir? In the clingy arms of the unforgettable SanSanych, I guess... Well, me too!

Sincerely,

Alexander_K and Schrodinger's cat nearby :)))))))))))))

 
ILNUR777:
It looks like he just ran into the intraday volatility tied to session specifics, and he's smoothing it out over time. He thinks he will make the step sizes linear if he takes a longer time interval from the tick stream at night and a shorter one during the day. By changing the time distribution of the ticks he picks out what he needs, which in the end strengthens some local extrema and weakens others, giving the most significant prices a larger weight when exponential averaging is later applied to them.

His formula doesn't see the session structure, and the exponential distribution doesn't help with that. If he had time-dependent reception rates given by a table function, then yes.

It's purely random intervals.
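A minimal sketch of the difference being described, assuming a toy tick stream (none of this is anyone's actual formula): case (a) reads ticks at exponentially distributed intervals with a constant rate, which is blind to sessions; case (b) takes the mean waiting time from a hypothetical table function of the hour, as suggested above:

```python
# Sketch only: reading a tick stream at random intervals, constant-rate
# exponential vs. a rate taken from a table function of the time of day.
import random

random.seed(1)

# Hypothetical tick stream: one tick per second for a day, price = random walk.
ticks = []
price = 1.1000
for t in range(86_400):
    price += random.gauss(0.0, 0.0001)
    ticks.append((t, price))

def sample_ticks(ticks, mean_wait_fn):
    """Record the last tick seen before each random reading time.
    mean_wait_fn(t) returns the mean waiting time (seconds) at time t."""
    out, t, i = [], 0.0, 0
    while True:
        t += random.expovariate(1.0 / mean_wait_fn(t))  # exponential wait
        if t >= ticks[-1][0]:
            break
        while i + 1 < len(ticks) and ticks[i + 1][0] <= t:
            i += 1
        out.append(ticks[i])
    return out

# (a) Constant rate: read on average every 10 seconds, around the clock.
const = sample_ticks(ticks, lambda t: 10.0)

# (b) Table function: read rarely at night (hours 0-6), often in the day.
table = {h: (60.0 if h < 6 else 5.0) for h in range(24)}
varying = sample_ticks(ticks, lambda t: table[int(t // 3600) % 24])

print(len(const), "readings at constant rate,", len(varying), "with the table")
```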

 

it is a filter that screens out the normal information)))
it overlays the market's intervals with intervals of its own))

 
Alexander_K2:

!!!!!!!!!!!!!

1. And I maintain that it is useful. If you take the increments of the same currency pair in a huge sample (at least 1,000,000 increments), over different time periods, you will see that the parameters of the increment distribution do not change at all.

2. The Cauchy distribution exists as a type, but it does not occur in Forex.

3. !!!!!!!!!!!!!! Yes, you're right - this is definitely a topic for a doctoral dissertation. Look, the equation itself is of course written for continuous time, but numerically we solve it by finite-difference methods in discrete time. No?

PS We are talking about increments between tick quotes, not between OPEN or CLOSE prices or the like.
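For illustration only, since the post does not say which equation is meant: a minimal explicit finite-difference scheme for the simplest continuous-time diffusion (heat) equation u_t = D*u_xx, solved on a discrete grid, is sketched below. The coefficient and initial condition are invented for the example:

```python
# Explicit finite differences for u_t = D * u_xx on [0, 1] (assumed equation).
import math

D = 0.5             # diffusion coefficient (assumed)
nx, nt = 101, 2000  # grid points in space, steps in time
dx = 1.0 / (nx - 1)
dt = 0.4 * dx * dx / D  # respects the stability condition D*dt/dx^2 <= 1/2

# Initial condition: a narrow bump in the middle of the interval.
u = [math.exp(-((i * dx - 0.5) ** 2) / 0.001) for i in range(nx)]

for _ in range(nt):
    u_new = u[:]
    for i in range(1, nx - 1):
        # Euler step: u_t ~ (u_new - u)/dt, u_xx ~ central second difference
        u_new[i] = u[i] + D * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = u_new

print("peak after diffusion:", max(u))  # the bump spreads and flattens
```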

1) It's clear that if you add almost any sample of 1,000 increments to a sample of a million, the changes won't be noticeable. But what we need is something else: for the two samples to be homogeneous (identically distributed).

Besides, if we are talking about dependence between increments, then to study the structure of that dependence we will, one way or another, have to study the joint distributions of the increments (they will no longer equal the products of the univariate ones). In doing so, we will quickly become convinced that a million is not that much.
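As a sketch of the homogeneity point: below, two synthetic increment samples barely move the pooled standard deviation when merged, yet a two-sample Kolmogorov-Smirnov test still rejects that they are identically distributed. The Student-t increments and their scales are invented for the illustration:

```python
# A small sample barely shifts the pooled estimates, but a two-sample test
# can still show the two samples are not identically distributed.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
big   = rng.standard_t(df=4, size=1_000_000) * 1e-5  # "day" increments
small = rng.standard_t(df=4, size=1_000) * 3e-5      # "night", fatter scale

pooled = np.concatenate([big, small])
print("pooled std barely moves:", big.std(), "->", pooled.std())

stat, p = ks_2samp(big, small)
print(f"KS statistic = {stat:.3f}, p-value = {p:.2e}")  # tiny p: not identical
```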

3) We must first make sure that a solution exists at all. For example, we can generate a sample from a Cauchy distribution and calculate its mean, but that is no reason to assume that the distribution has an expectation.
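This point is easy to demonstrate with synthetic data: the running mean of Cauchy draws keeps jumping no matter how large the sample, because the expectation does not exist, while the mean of normal draws settles down:

```python
# The sample mean always exists, but for Cauchy draws it never converges.
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_cauchy(1_000_000)
for n in (100, 10_000, 1_000_000):
    print(f"Cauchy mean, first {n:>9,} draws: {x[:n].mean():+.3f}")

# Compare with normal draws, whose running mean converges to 0:
y = rng.standard_normal(1_000_000)
for n in (100, 10_000, 1_000_000):
    print(f"normal mean, first {n:>9,} draws: {y[:n].mean():+.5f}")
```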

 

A question on an abstract topic. Suppose there is a set of 15,000 units (say, the general population). How many units must the sample contain in order to retain the properties of the general population, and what method should be used to draw the sample?

 
Dennis Kirichenko:

A question on an abstract topic. Suppose there is a set of 15,000 units (say, the general population). How many units must the sample contain in order to retain the properties of the general population, and what method should be used to draw the sample?


No, I have to answer Denis, so I'm coming to the front again like a carpetbagger.

You calculate the variance of the given general population. Then, for the required accuracy, i.e. confidence probability, you calculate the sample size from this variance using the formula N = (Z^2)*(S/E)^2, where

Z - the quantile of the distribution for the chosen confidence probability

S - the standard deviation of the general population

E - the required precision (allowable error)

And the method is as simple as a boot: the sample has to be random, i.e. drawn with a random number generator.
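A minimal sketch of this recipe in Python, using the 15,000-unit population from the question. The population values, the 95% quantile Z = 1.96, and the precision E are all assumptions made up for the example:

```python
# Sample-size formula N = (Z^2)*(S/E)^2, then a random draw of the sample.
import math
import random

population = [random.gauss(100.0, 15.0) for _ in range(15_000)]  # toy data

mu = sum(population) / len(population)
S = math.sqrt(sum((v - mu) ** 2 for v in population) / (len(population) - 1))

Z = 1.96   # quantile for 95% confidence probability (assumed here)
E = 0.5    # required precision, in the same units as the data

N = math.ceil(Z**2 * (S / E) ** 2)  # N = (Z^2)*(S/E)^2 from the post
print(f"S = {S:.2f}, required sample size N = {N}")

# "Simple as a boot": draw the sample with a random number generator.
sample = random.sample(population, min(N, len(population)))
print("sample mean:", sum(sample) / len(sample))
```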

 
Alexander_K2:
It's a bit tricky, isn't it? It's VERY easy in VisSim. :)))
And I kept wondering - what is this wonder being advertised for? I'll give you credit for the PR, but why all the trouble? Although the local "explorers" are happy enough - they have enough toys for another ten years.
 
bas:
The coin is the benchmark, the basic point of reference. A process without memory, on which it is by definition impossible to make money. And if someone claims that "maybe you can, you should build a model in VisSim and see", then he does not understand the most basic fundamentals.

What makes you think it's impossible to win? A coin is a process on which you can win, and even infinitely much. But you can also lose.

So, it is you who do not understand these very basic fundamentals. Nor do those who hold a similar view to yours.
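Both halves of that claim are easy to illustrate. A quick sketch (synthetic fair-coin flips, betting one unit per flip) shows that the average final profit across runs is near zero, while any single path can wander far into profit or loss:

```python
# A fair coin has zero expected profit for any bet that doesn't peek at the
# future, yet individual paths wander arbitrarily far in both directions.
import random

random.seed(7)

def play(n_flips):
    """Cumulative P&L of betting 1 unit on every flip of a fair coin."""
    pnl, best, worst = 0, 0, 0
    for _ in range(n_flips):
        pnl += 1 if random.random() < 0.5 else -1
        best, worst = max(best, pnl), min(worst, pnl)
    return pnl, best, worst

results = [play(100_000) for _ in range(20)]
avg = sum(r[0] for r in results) / len(results)
print("average final P&L over 20 runs (near 0):", avg)
print("single-run extremes (can be large):", results[0][1], results[0][2])
```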
