From theory to practice

 

What confuses me is this.

Feynman, of course, was a genius. He was looking at the motion of quantum particles at uniform observation intervals, while I am looking at exponential intervals... And he was looking at uniform ones... Hmm...

 
 
Alexander_K2:

Since my beloved daughter and father-in-law are grabbing me by the lapels and demanding immediate improvement of my TS (trading system) so it can turn a profit, I will write briefly.

So, here is the algorithm I came up with (see the attached spreadsheet for AUDCAD):

1. Receive quotes at exponential time intervals.

Column A - Bid price

Column B - Ask price

Column C - the mid price (Ask+Bid)/2 - this is what I work with; maybe I am mistaken.

Comment: I reduce the quote flow to a Markov process with pseudo-states, in which the integral moments of the random variable can be ignored and the equation of motion reduces to that of a quantum particle between two walls. The walls here are the boundary values of the variance of the random variable. A sketch of this sampling follows below.
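A minimal Python sketch of step 1, under my reading that "exponential time intervals" means taking the most recent quote after waiting times drawn from an exponential distribution. The get_tick callable and the 1-second mean interval are placeholders of mine, not something specified in the post:

import random
import time

def read_quotes(get_tick, mean_interval=1.0):
    # Yield (bid, ask, mid) readings at exponentially distributed
    # waiting times. `get_tick` is a hypothetical callable that
    # returns the latest (bid, ask) pair from the broker feed.
    while True:
        time.sleep(random.expovariate(1.0 / mean_interval))
        bid, ask = get_tick()
        yield bid, ask, (bid + ask) / 2   # columns A, B, C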

2. Analyze the Ask and Bid price increments.

Columns D, E and F are the increments for Bid, Ask and (Ask+Bid)/2 respectively.

I work with the raw increment values, without transforming them in any way.

3. Calculate the statistical parameters for column F (see Sheet1 in the spreadsheet). The most important thing is to find the sample size for the sliding window of observations.

This is a very important step!!! Based on Chebyshev's inequality, we find the sample size at which the boundary values of the variance correspond to the confidence level of the forecast. A worked example follows below.
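A worked illustration of this step, under my reading that the "walls" are placed k standard deviations from the mean. Chebyshev's inequality states P(|X - mu| >= k*sigma) <= 1/k^2, so for a 99.5% confidence level we solve 1/k^2 = 0.005. How the specific window size of 15625 quotes follows from this is not spelled out in the post:

import math

def chebyshev_k(confidence):
    # Chebyshev: P(|X - mu| >= k*sigma) <= 1/k**2.
    # For a confidence level p, solve 1/k**2 = 1 - p for k.
    return 1.0 / math.sqrt(1.0 - confidence)

print(chebyshev_k(0.995))   # ~14.14 standard deviations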

4. Return to the AUDCAD tab of the spreadsheet and go to row 15625.

Column M - calculate the length of the particle's run in our sliding observation window of 15625 consecutive quotes.

Columns N and O - the boundary values of the particle's probable deviation (the "walls"). A sketch of these columns follows below.
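A minimal pandas sketch of columns D-F, M, N and O as I understand them. The wall formula (plus/minus K standard deviations of the windowed sum of increments) and the file name are my assumptions, since the post does not give the exact recipe:

import pandas as pd

WINDOW = 15625      # sliding window of consecutive quotes
K = 14.14           # Chebyshev multiplier for the 99.5% level

df = pd.read_csv("audcad_ticks.csv")       # hypothetical file with Bid, Ask
df["Mid"] = (df["Ask"] + df["Bid"]) / 2    # column C
df["dMid"] = df["Mid"].diff()              # column F: mid-price increments

# Column M: the particle's "run" = sum of increments over the window.
df["M"] = df["dMid"].rolling(WINDOW).sum()

# Columns N and O: the "walls" at +/- K standard deviations of the
# windowed sum (a random walk of n steps has std sigma*sqrt(n)).
sigma_run = df["dMid"].rolling(WINDOW).std() * WINDOW ** 0.5
df["N"] = K * sigma_run
df["O"] = -K * sigma_run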

5. Move to Sheet2 of the spreadsheet.

There I have copied columns A, N, M and O from the AUDCAD tab, starting from row 15625.

6. I build charts:

Top chart - actual price values (Ask+Bid)/2

Lower chart - the values from columns B, C and D - here we actually see the movement of the particle between the walls (in the dynamic channel).

A very important point

In my model I calculated the variance (columns C and D) in the same way, but I plotted the channel around an SMA over the 15625-quote sample. Column B was missing.

I was about to switch to a WMA, with time used as the weights.

The results were quite satisfactory: out of 6 trades, 4 were positive and 2 negative, with a total profit of over 400 pips.

And at this crucial moment the warlock (Vizard_) stepped in and, with his hand-drawn chart (!!!), essentially told me: Idiot! Why are you working with some moving average? Look at how the particle itself moves (the sum of the increments over the observation time) - it moves around zero between the walls!!!

Now I calculate column B and see the following picture:

The lower chart shows the motion of the particle in the sliding observation window of 15625 quotes, with the boundary confidence level = 99.5%.

INGENIOUS SOLUTION!

It is possible and necessary to make forecasts when the price goes beyond these confidence levels.

Or, more simply: when the particle leaves the channel borders on the lower chart, open a trade; when it comes back to zero, close it, and so on. But I'm not going to impose my opinion - everyone is free to build their own forecasting algorithm. A sketch of this simple rule follows below.
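A minimal sketch of that simple rule, using the column M and wall values from the earlier sketch. Fading the breakout (selling above the upper wall, buying below the lower one) is my reading of "close when it comes back to zero"; the post deliberately leaves the exact forecast algorithm open:

def signal(m, upper, lower, position):
    # position: 0 = flat, +1 = long, -1 = short
    if position == 0:
        if m > upper:
            return -1      # particle above the upper wall: open short
        if m < lower:
            return +1      # particle below the lower wall: open long
        return 0
    # in a trade: close once the particle returns to zero
    if (position == +1 and m >= 0) or (position == -1 and m <= 0):
        return 0
    return position

print(signal(0.0052, 0.0040, -0.0040, 0))    # -1: open short
print(signal(-0.0001, 0.0040, -0.0040, -1))  # 0: close at zero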

But to be honest, I'm not sure I would have managed it by my own wits - thanks again to Vizard_.

Now I just need to replace the sliding WMA in my TS with, figuratively speaking, column B. Anyone who wants to can work through everything described above, ask questions if necessary, and build this TS for themselves.

Make money on your own! I personally don't begrudge it, and there is no need to look for a catch in my words.

My father-in-law has finally turned to violent and obscene language to make me sit down at last and finish the TS.

I bid farewell, but it's not goodbye. I am always here and yet somehow absent - well, you get the idea. Schrödinger's cat, in a word. :))))

https://yadi.sk/d/Q26c4qoS3RbJRn
Detrending and working from the channel borders back to its centre - you call this the most ingenious solution, something invented back in time immemorial?))) "Oh, how many wondrous discoveries the spirit of enlightenment prepares for us...")
 
Alexander_K2:

What confuses me is this.

Feynman, of course, was a genius. He was looking at the motion of quantum particles at uniform observation intervals, while I am looking at exponential intervals... And he was looking at uniform ones... Hmm...

That's easy to explain - you're simply more of a genius than he was. On this forum it's geniuses sitting on geniuses and driving geniuses along; the Nobel laureates can take a rest).

 
khorosh:
Detrending and working from the channel borders back to its centre - you call this the most ingenious solution, something invented back in time immemorial?)))
I've only been trading forex for 3 months. If this algorithm has been used successfully for a long time, I'm glad. We can close the subject here.
 
Alexander_K2:
I've only been trading forex for 3 months. If this algorithm has been used successfully for a long time, I'm glad. We can close the subject here.
The success of this algorithm is a relative concept. It will succeed during flat (ranging) periods and fail during trends. If you manage to distinguish trend from flat and switch in time from the counter-trend strategy to a trend strategy, then perhaps there will be success.
 

Again on the exponential acceptance of ticks.

Suppose we have built a sequence of intervals at which the ticks are accepted. How do we know it is the most correct sequence? We don't - it comes out of nowhere.

Let's compare it with another similar sequence: neither has any advantage over the other.

Hence we have several parallel variants of the quote flow's evolution. All of them are equal; none is preferable.

Then it would be statistically correct to average the readings of all of them.

All right, not all of them, but a statistically significant number, say 100.

With 100 such sequences, at least one of them will almost certainly be lagging by up to 11 seconds (the maximum interval length in the exponential tick-acceptance method proposed by Alexander), which means that on every tick we must wait 11 seconds before the readings can be averaged.

Therefore the process is potentially incomplete until 11 seconds have elapsed from the current moment, and so on from every second.

A decision cannot be made on the current data: the calculation is incomplete and only becomes possible after 11 seconds, and the data arriving 1 second from now can only be judged after 12 seconds.

Thus we find ourselves in an endless wait for the calculation to complete.

Or, to put it another way, we are working with data that is 11 seconds old. And that is just for ticks.

If we apply the same method to minute bars, we can judge the current situation only after 11 minutes.

If it is hourly bars, the decision comes after 11 hours.

I hope you get the idea. Even a plain moving average lags by only half a period, while the exponential method implies this lag before any averaging has even been done. A small simulation of the point is below.
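A minimal simulation of Nikolay's point, assuming 100 parallel sequences whose next-reading delays are exponentially distributed and capped at the 11 seconds he cites; the rate of one reading per second is my placeholder. The average over all sequences is only complete once the slowest one has reported:

import random

MAX_LAG = 11.0   # seconds: the maximum interval Nikolay cites
N_SEQ = 100      # number of parallel sampling sequences

delays = [min(random.expovariate(1.0), MAX_LAG) for _ in range(N_SEQ)]
# A decision is possible only after the slowest sequence reports;
# in the worst case that is the full MAX_LAG of 11 seconds.
print(f"effective decision lag: {max(delays):.2f} s")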


Let me answer right away the objection that we are not averaging anything: if we do not average the readings, then we are working with only one variant of the multivariate space, and it is by no means a fact that this particular partition is the best. We may have a signal in this space and no signal in another. So which signal is the best one?

In instrumentation and control (I&C) there is the concept of confidence in readings: measurements are taken with three sensors, and two (or more) matching readings out of three are considered correct; if all three show different values, all the sensors are checked (such a reading cannot be trusted). A sketch of this voting scheme is below.
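A minimal sketch of that 2-of-3 voting scheme; the agreement tolerance is an assumption of mine:

def vote(a, b, c, tol=1e-3):
    # Return the agreed reading, or None if all three sensors disagree.
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tol:
            return (x + y) / 2   # two sensors agree: trust them
    return None                  # all three differ: check the sensors

print(vote(1.0000, 1.0005, 3.2))  # 1.00025: first two agree
print(vote(1.0, 2.0, 3.0))        # None: no confidence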

 
Nikolay Demko:

Nikolai, some know-it-alls here are saying that this method is nonsense and has been known for 100 years. Do you happen to know whether it exists as an indicator or an Expert Advisor?

As for the time, it is a matter of principle, and I'll never tire of repeating it.

In my opinion, working with ticks indiscriminately is the worst mistake in time-series analysis. The very notion of time is lost: the same number of ticks corresponds to different amounts of time at different moments, and vice versa. It is sheer nonsense and, as a consequence, the impoverishment and disgrace of the individual.

This leaves us with two ways:

1. Read the data at equal time intervals, taking as the discrete time step a value that guarantees the arrival of a quote.

2. Read at exponential intervals - read up on reducing a non-Markovian process to a Markovian one. This is exactly the trick through which everything works.

 
Nikolay Demko:

Again on the exponential acceptance of ticks.

Suppose we have built a sequence of intervals at which the ticks are accepted.

....

It seems to me that a very interesting nuance is missing in this whole tick story.

We are told that one of the main advantages of the proposed approach is the acceptance of ticks at exponentially increasing intervals.

The advantage of this approach is clear to everyone: in the sample, the most recent ticks are "denser" than those further back in time.

But in practice?

Suppose we take ticks 1, 3, 7, 15, ... We calculate the statistics and all the rest - in particular, we plot the increments with the supposed variance channel.

A new tick arrives. Do we recalculate? Do we recalculate on every tick? The tick that was number 1 becomes number 2 and drops out of the sample. It is quite obvious that an ENTIRELY new sample of ticks will be formed, since the tick indices of two exponential grids shifted by one tick are all different, i.e. all the ticks are new! So what does the presented figure refer to? It turns out that the figures presented to us live for exactly one tick! (A quick check of this is below.)
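A quick check of this claim, assuming the exponential grid of tick indices 1, 3, 7, 15, ... (i.e. 2^k - 1, counted back from the current tick): shifting the whole grid by one tick produces a set of indices that shares nothing with the original, so every recalculation really does use an entirely new sample:

grid = [2**k - 1 for k in range(1, 15)]   # 1, 3, 7, 15, ...
shifted = [i + 1 for i in grid]           # the same grid one tick later
print(sorted(set(grid) & set(shifted)))   # [] : no common ticks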


Is it even possible to test a strategy in which the calculation lives for exactly one tick?

Yes, you can - but not a word about it from the author.

 
Alexander_K2:


2. Read at exponential intervals - read up on reducing a non-Markovian process to a Markovian one. This is exactly the trick through which everything works.

Above I have posted graphs for your data, which show that there is a memory of almost 40,000 ticks!