Suppose I know how to model synthetic series whose distribution of increments, ACF shape, ACF of squared increments and variance behaviour over time are quite similar to those actually observed. How would that help us model price?
That's where the mistake lies:
1. The transition from the continuous to the discrete model must be done correctly.
2. The two models (continuous and discrete) can be the same, but only if one condition is fulfilled: the step in the + direction and in the - direction must have the same magnitude.
3. You took bars for the analysis. For your statement "They are the same. Only one model has integrals and the other has sums" to be true, you would have to prove that all bars are the same. Can you prove it?
4. Only one type of chart has this property, the Renko chart; there you really can replace the moves with +1/-1 (https://www.mql5.com/en/code/9447#25419).
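For illustration, here is a minimal sketch (not the code behind the linked mql5.com entry) of reducing a price series to equal-sized +1/-1 steps in the Renko spirit; the brick size is an assumed parameter:

```cpp
#include <cstdio>
#include <vector>

// Reduce a price series to equal-sized +1/-1 steps (Renko-style bricks).
// The brick size 'box' is an assumed illustrative parameter.
std::vector<int> toRenkoSteps(const std::vector<double>& price, double box)
{
    std::vector<int> steps;
    if (price.empty()) return steps;
    double anchor = price.front();                      // last confirmed brick level
    for (double p : price) {
        while (p - anchor >= box) { steps.push_back(+1); anchor += box; }
        while (anchor - p >= box) { steps.push_back(-1); anchor -= box; }
    }
    return steps;                                       // every step has the same magnitude
}

int main()
{
    std::vector<double> price = {1.2000, 1.2013, 1.1995, 1.2032, 1.2008};
    for (int s : toRenkoSteps(price, 0.0010))           // 10-pip bricks
        std::printf("%+d ", s);
    std::printf("\n");
}
```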
The discrete coin model turns into the pure (familiar to us) market if we set the coin's price to 1 pip and toss the coin 1000 times during each tick.
The continuous model turns into the pure market if we divide continuous time into ticks and round the price to 1 point.
Under these conditions both models converge to the pure market and are the same.
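As a quick sanity check of that limit, here is a minimal sketch where each tick's increment is the sum of 1000 tosses of a 1-pip coin; the pip size and starting price are assumed values for illustration:

```cpp
#include <cstdio>
#include <random>

// Sketch: each "tick" is the sum of 1000 tosses of a 1-pip coin,
// so the discrete coin model approaches a continuous-looking price path.
int main()
{
    std::mt19937 gen(42);
    std::bernoulli_distribution coin(0.5);

    const double pip = 0.0001;          // assumed pip size
    const int tossesPerTick = 1000;
    double price = 1.2000;              // assumed starting price

    for (int tick = 0; tick < 10; ++tick) {
        int net = 0;
        for (int i = 0; i < tossesPerTick; ++i)
            net += coin(gen) ? +1 : -1; // +1 pip or -1 pip per toss
        price += net * pip;
        std::printf("tick %2d  price %.4f\n", tick, price);
    }
}
```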
Thanks for the link to the article. I read it. It suggests using the ARFIMA model; you suggest a random walk. These are different models. It would be interesting to read the next article, both yours and that author's, where it is proved that the market models you propose are adequate. Not just asserted in words, but proved mathematically ... with the calculation of that figure ...
P.S. Many people just latch on to this beautiful word "adequacy" without even knowing how to calculate it. You wrote in your post that a 100% adequate model does not exist. I absolutely agree with you. The question is how adequate the proposed model is to the market: 20%, 30% or 99.999999999%...
Both ARFIMA and the coin model are methods of generating exchange rate-like curves (series). In the next article I will propose a way to evaluate the quality of exchange rate-like curves.
The adequacy of a model to reality is not assessed in a vacuum. A model is built to solve a specific practical problem (making money on the exchange rate, constructing a building). If the problem is solved completely, the model is considered adequate. If the problem is solved by 50%, the model is considered 50% adequate. So the task has to be defined first. The coin model is designed to generate rate-like curves. Well, the model generates curves. The curves are not very similar to a real exchange rate, but the model is simple. So I would settle on 20%.
Try modelling the order book: the book has a clear structure, and it shows orders for a certain number of points above and below the current price.
The generator passes through all the cells (it can generate random volumes rather than just +1/-1); then, once the generator has passed through every cell of the book, we calculate where to move the book's midpoint.
And don't forget to re-seed with srand() after 32768 calls to rand(), otherwise your sequence will repeat.
Are you proposing a pricing model? It all comes down to how we generate the volumes in the cells. Volumes are not random. The further away from the midpoint, the higher the volume. We need a specific model for volumes.
Let's assume the volumes are random with a lag of 1. We generate a random book, add it to the book's previous values, net out the volumes nearest to the middle against each other as executed trades, and then calculate a new middle of the book; that completes the cycle.
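A toy sketch of the loop described in the last two posts; the book depth, the volume range and, in particular, the rule for moving the midpoint (one point towards the side whose best level was wiped out) are my assumptions, not a real exchange algorithm:

```cpp
#include <algorithm>
#include <cstdio>
#include <deque>
#include <random>

// Toy order-book loop: random volumes are added to every cell, the volumes
// nearest to the middle cancel out as executed trades, then the midpoint is
// recomputed. Depth, volume range and the midpoint rule are assumptions.
int main()
{
    const int depth = 10;                              // price levels on each side
    std::mt19937 gen(123);
    std::uniform_int_distribution<int> vol(0, 5);

    std::deque<int> bids(depth, 0), asks(depth, 0);    // index 0 = closest to the middle
    int mid = 0;                                       // midpoint, in points

    for (int cycle = 0; cycle < 50; ++cycle) {
        // 1) the generator passes through all cells, adding random volumes
        for (int i = 0; i < depth; ++i) { bids[i] += vol(gen); asks[i] += vol(gen); }

        // 2) the volumes nearest to the middle are netted as realised trades
        int traded = std::min(bids[0], asks[0]);
        bids[0] -= traded;
        asks[0] -= traded;

        // 3) recompute the middle (assumed rule: shift one point towards the
        //    side whose best level was wiped out)
        if (bids[0] == 0 && asks[0] > 0) {             // bids exhausted -> price falls
            --mid;
            bids.pop_front(); bids.push_back(0);
            asks.push_front(0); asks.pop_back();
        } else if (asks[0] == 0 && bids[0] > 0) {      // asks exhausted -> price rises
            ++mid;
            asks.pop_front(); asks.push_back(0);
            bids.push_front(0); bids.pop_back();
        }
        std::printf("cycle %2d  mid %+d  traded %d\n", cycle, mid, traded);
    }
}
```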
There is one inaccuracy in the article: if we take a drunken sailor as the analogy, the step sizes differ. Roughly speaking, a step away from the pub is 80 cm long and a step back (towards the pub) is 60 cm. Hence the trend; it is also well known that downward market moves are faster than upward ones. In the article, however, all steps are the same, +1 or -1.
So this model cannot be considered adequate. It is just a coin, whose distribution properties have long been known and studied.
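A minimal sketch of that objection, assuming steps in the 80:60 ratio from the post with the larger step on the downside (to mirror "falls are faster than rises") and an up-step probability chosen so the drift stays at zero; all numbers are illustrative assumptions:

```cpp
#include <cstdio>
#include <random>

// "Drunken sailor" walk with unequal step sizes: up steps of +1, down steps
// of -k with k > 1. Choosing p = k/(1+k) keeps the drift at zero, so rises
// are more frequent but falls are larger and therefore "faster".
int main()
{
    const double k = 80.0 / 60.0;               // down step is 80/60 times the up step
    const double pUp = k / (1.0 + k);           // zero-drift probability of an up step

    std::mt19937 gen(7);
    std::bernoulli_distribution up(pUp);

    double x = 0.0;
    for (int t = 1; t <= 1000; ++t) {
        x += up(gen) ? +1.0 : -k;
        if (t % 200 == 0)
            std::printf("t = %4d   x = %+.1f\n", t, x);
    }
}
```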
Mathematicians explain the effect of rapid price falls compared with slower rises by the increased-leverage effect, but in my opinion that is a very weak and clearly insufficient explanation of what is actually going on.
The model could be improved by using a more advanced, logarithmic volatility model instead of slicing the data into equal-volume pieces. A low price produces low volume, which in turn means low volatility and therefore lower risk and lower profitability for trading systems running on such data. Conversely, a high price means high volume and, as a consequence, high volatility, so the risk and profitability of a trading system on those intervals will be higher. The volatility corrections can be quite significant, which means that ignoring them can lead to serious mistakes in one's conclusions. This is especially noticeable with shares: if a trading system earned well in periods of low volatility but was only slightly unprofitable in periods of high volatility, it may look like a complete blow-up even though it is not. This also means that charts on a large time scale should be viewed on a logarithmic rather than a linear scale; any normal stock-charting package has such an option.
In general, any mathematical model should be grounded in economic assumptions; by itself, without economic theory, a model is meaningless. So before reaching for rand(), it would be a good idea to read some economics textbooks.
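A tiny illustration of the log-scale point: the same absolute move corresponds to very different relative (log) returns at different price levels, which is why absolute volatility scales with the price level; the price levels below are made-up numbers:

```cpp
#include <cmath>
#include <cstdio>

// The same absolute move of +0.5 at price 10 and at price 100 gives log
// returns that differ by an order of magnitude (~0.0488 vs ~0.0050).
int main()
{
    const double lowLevel[]  = {10.0, 10.5};     // +0.5 at a low price
    const double highLevel[] = {100.0, 100.5};   // +0.5 at a high price

    double rLow  = std::log(lowLevel[1]  / lowLevel[0]);
    double rHigh = std::log(highLevel[1] / highLevel[0]);

    std::printf("same +0.5 move: log return %.4f at price 10, %.4f at price 100\n",
                rLow, rHigh);
}
```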
The pricing process is much more complex than "random volume generation". Try reading this sometime: http://people.orie.cornell.edu/~sfs33/research.htm
Stop fobbing me off with promises, just point a finger at the one you mean :o)
I skimmed it; from that whole pile of words I gathered that a regression model is used to calculate the size and direction of the order book's shift.
Let's assume the volumes are random with a lag of 1. We generate a random book, add it to the book's previous values, net out the volumes nearest to the middle as executed trades, and then calculate a new middle of the book.
As far as I understand, this model reduces to the coin rate with a variable coin price. We toss the same coin, but at each toss it has a new random price from some bounded range, with some probability distribution of that price.
If the probability distribution of the price is close to normal (and with such an order book it will be), then we get back the old coin rate with a constant price. It is just that now we toss the coin, say, 100 times in a row and only look at the result after those 100 tosses; the coin's price is constant again, only it is a new value.
If the probability distribution of the price is more intricate, then the rate will no longer look like a coin rate: non-random patterns will appear in it. You can try to catch them in the real exchange rate, but you must first specify the price's probability distribution.
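To illustrate the "close to normal" case, a minimal sketch, assuming a uniform range for the random coin price, comparing it with a constant coin price matched in the second moment; after aggregating 100 tosses the two are practically indistinguishable:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Summing many tosses of a coin with a random "price" (step size) looks,
// after aggregation, like tossing a coin with a constant price. The step-size
// distribution below is an assumption chosen only for illustration.
int main()
{
    std::mt19937 gen(2024);
    std::bernoulli_distribution sign(0.5);
    std::uniform_real_distribution<double> stepSize(0.5, 1.5);   // random coin price

    const int tossesPerBlock = 100;
    const int blocks = 20000;

    // constant price matched to the second moment: c = sqrt(E[X^2]) for U(0.5, 1.5)
    const double c = std::sqrt((0.5 * 0.5 + 0.5 * 1.5 + 1.5 * 1.5) / 3.0);

    double varRandom = 0.0, varConst = 0.0;
    for (int b = 0; b < blocks; ++b) {
        double sRandom = 0.0, sConst = 0.0;
        for (int i = 0; i < tossesPerBlock; ++i) {
            double dir = sign(gen) ? +1.0 : -1.0;
            sRandom += dir * stepSize(gen);    // coin with a random price
            sConst  += dir * c;                // coin with a matched constant price
        }
        varRandom += sRandom * sRandom / blocks;   // mean is zero, so this is the variance
        varConst  += sConst  * sConst  / blocks;
    }
    std::printf("std per 100 tosses: random price %.3f, constant price %.3f\n",
                std::sqrt(varRandom), std::sqrt(varConst));
}
```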
Stop fobbing me off with promises, just point a finger at the one you mean :o)
Ok, http://people.orie.cornell.edu/~sfs33/LimitOrderBook.pdf
You have obviously read an article about a model of the order book. A model is a model and does not fully describe what happens in the book.
But the liquidity-provision algorithms do give an idea of the pricing principles (i.e. if you read the article above, you will see how "randomly" the volumes in the book actually move).