Comparison of two quotation charts with non-linear distortions on the X-axis - page 11

 
gpwr:

The system is based on the assumption that there are repeating patterns in the quotes. On page 9 of this thread I described a method of finding these patterns (sparse coding). There are other methods. You can also compare prices using the nearest-neighbour method. Read my previous posts. I don't want to repeat myself.
Of course I've read it, that's what I'm saying.
 
gpwr:

To avoid opening a separate thread, I've decided to describe the results of my research on patterns here. Maybe it will save someone's time and give someone new ideas.

In 2006, when I first got interested in Forex, my first idea was to compare the last N bars (the current pattern) with all past patterns of the same quote, using the correlation coefficient as the measure of similarity. This is the familiar nearest-neighbour (NN) method. The advantage of the correlation coefficient over the Euclidean distance is that it is insensitive to distortions of the price axis (shift and scale). I built an Expert Advisor using this method that showed extraordinary profitability over 2-3 months of forward testing (10k into 10M or something similar), but then it lost for the next 2-3 months. And so it went: a huge profit, then a total loss. I returned to this NN method several times, tried committees of neighbours, etc., but the result was the same. In the end I got disappointed and put the code of the NN method into the Code Base on mql5.com.
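For illustration, a minimal sketch of that kind of correlation-based nearest-neighbour search (Python/NumPy; the window length N, the number of neighbours and the series name are placeholders I chose, not gpwr's actual settings):

```python
import numpy as np

def best_matches(prices, N=24, top_k=5):
    """Return the past N-bar windows most similar to the last N bars by correlation."""
    prices = np.asarray(prices, dtype=float)
    current = prices[-N:]
    scores = []
    # slide over the history, leaving room for a "next bar" after each window
    for start in range(len(prices) - 2 * N):
        window = prices[start:start + N]
        corr = np.corrcoef(current, window)[0, 1]   # invariant to price shift and scale
        scores.append((corr, start))
    scores.sort(reverse=True)                        # highest correlation first
    return scores[:top_k]                            # (correlation, start index) pairs

# usage: inspect what happened right after the most similar windows
# for corr, start in best_matches(closes, N=24):
#     print(corr, closes[start + 24] - closes[start + 23])
```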

In 2007-2008 I got interested in PNNs, particularly in GRNN. The essence is the same as NN, but instead of selecting one (or a few, as in a committee) similar neighbours, all past patterns are used automatically and their influence on the prediction is weighted by an exponential function of the form exp(-dissimilarity). Thus more similar parts of the history are weighted exponentially more heavily. You can take the pattern prices (minus their average) and use the Euclidean distance as the dissimilarity measure, or take the difference between vectors of readings of some indicators. Prediction accuracy was slightly higher than with the NN method: 52% instead of 50.5% (I don't remember exactly).
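A sketch of that GRNN-style weighting (Python/NumPy; the window length N and the kernel width sigma are placeholders, and only the de-meaned-price variant is shown, not the indicator-based one):

```python
import numpy as np

def grnn_predict(prices, N=24, sigma=0.5):
    """Weight every past window by exp(-distance) and average the move that followed it."""
    prices = np.asarray(prices, dtype=float)
    current = prices[-N:] - prices[-N:].mean()              # de-meaned current pattern
    num = den = 0.0
    for start in range(len(prices) - 2 * N):
        window = prices[start:start + N]
        window = window - window.mean()                     # remove the price level
        d2 = np.sum((current - window) ** 2)                # squared Euclidean distance
        w = np.exp(-d2 / (2.0 * sigma ** 2))                # exponential (Gaussian) weight
        future = prices[start + N] - prices[start + N - 1]  # move right after the window
        num += w * future
        den += w
    return num / den if den > 0 else 0.0                    # kernel-weighted prediction
```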

My latest idea was to use the methods our brain uses to transform information. I described these methods in detail on mql5. The essence of one of them is to find patterns (or basis functions) into which the current prices can be decomposed, like this:

Price[i] = sum (a[k]*function[i][k], k=1...L) i=1...N

Of course, instead of searching for a basis we can take trigonometric functions and use the Fourier transform. But it is more promising to find the basis functions from the history using the sparse coding method. The essence of this method is to fit the above linear model to prices over various historical intervals of length N by least squares, in such a way that the specified error is achieved with the smallest number of non-zero coefficients a[k], k=1...L. Ideally, each historical price vector contains only one basis function (or pattern). At every step both the coefficients and the functions themselves are optimized. There are a lot of parameters that are not known in advance: for example, the pattern length N, the number of basis functions in the dictionary L, and the number of non-zero coefficients in the decomposition (I chose 3, as if every price segment consists of the tail of the old pattern, the current pattern and the beginning of a new pattern). It is important that N*L be much smaller than the whole history length, otherwise the algorithm will find patterns equal to the past prices themselves and we end up with something like the nearest-neighbours method. For example, a dictionary of 64 patterns, each 64 bars long, for EURUSD H1, trained by sparse coding on the history of 1999-2010 (74k bars), looks like this

I have noticed the following regularity: the longer the patterns and the more of them in the dictionary, the higher the profit in the backtest, which can be explained by overfitting. But in any case, with different N and L, the forward test chatters around zero profit. I am starting to get frustrated with patterns. Apparently they are not constant in forex; in other words, forex has no memory for patterns - new ones are created every time.
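As a rough stand-in for the sparse coding procedure described above, here is a sketch using scikit-learn's dictionary learning (the library solves the sparse fit with orthogonal matching pursuit rather than plain least squares, so it only approximates the scheme; N = 64, L = 64 and the 3 non-zero coefficients come from the post, while the window stride is my own choice):

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def learn_price_patterns(prices, N=64, L=64, nonzero=3):
    """Learn a dictionary of L basis patterns of length N from overlapping price windows."""
    prices = np.asarray(prices, dtype=float)
    windows = np.array([prices[i:i + N] for i in range(0, len(prices) - N, N // 4)])
    windows -= windows.mean(axis=1, keepdims=True)   # remove the level of each window
    dico = MiniBatchDictionaryLearning(
        n_components=L,                      # size of the pattern dictionary (L)
        transform_algorithm="omp",           # sparse solver for the coefficients a[k]
        transform_n_nonzero_coefs=nonzero,   # at most 3 active patterns per window
        random_state=0,
    )
    codes = dico.fit_transform(windows)      # sparse coefficients a[k] for every window
    return dico.components_, codes           # learned patterns and their usage
```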


Do you have any experience with the Echo State Network? http://www.scholarpedia.org/article/Echo_state_network
 
yacoov:

Do you have any experience with the Echo State Network? http://www.scholarpedia.org/article/Echo_state_network

Ask TheXpert. He has experience.
 
gpwr: I've never heard of correlation between binary signals. By the way, I tried coding patterns as a binary sequence using a zigzag. I took the last 6 up knees and 6 down knees.

I googled correlation of binary signals; it seems easier to XOR the two and count the number of 1s in the result.

You took 6 ZZ knees; that's the problem: I don't know how many bars (I use bars with fractals, from 8 to 16) to use for the analysis.
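For what it's worth, the XOR-and-count idea above in a few lines of Python (the sample codes in the usage comment are made up):

```python
def binary_similarity(a, b, nbits):
    """Fraction of matching bits between two nbits-wide binary pattern codes."""
    mismatches = bin(a ^ b).count("1")       # 1-bits of the XOR = positions that differ
    return (nbits - mismatches) / nbits      # 1.0 = identical, 0.0 = every bit differs

# usage with two made-up 6-bit codes:
# print(binary_similarity(0b101101, 0b101001, nbits=6))   # -> 0.833...
```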

gpwr: The system is based on the assumption that there are repeating patterns in quotes.

The assumption that the market has patterns or regularities is correct, but these regularities come and go with no apparent periodicity. That is, technical analysis works, but no one can say at which moments in time. Apparently, the search for and analysis of patterns is similar to the task of optimizing indicator-based Expert Advisors; if so, it turns out we are wasting our time - it would be easier to write a self-optimizing Expert Advisor that selects indicators (strategies) according to the recent history.

I've been searching for such regularities in the euro, but so far I haven't found any - maybe the crosses have more of them?

 
sever32:
Of course I've read it. That's what I'm saying.


Then I don't understand your question: "I couldn't find any justification for why your system should work". Justification for what:

1. A justification for the assumption that there are repeating patterns? Or

2. The rationale behind the sparse coding method for finding these patterns? Or

3. A rationale for something else?

 
IgorM:

I googled correlation of binary signals; it seems easier to XOR the two and count the number of 1s in the result.

You took 6 ZZ knees; that's the problem: I don't know how many bars (I use bars with fractals, from 8 to 16) to use for the analysis.

The assumption that the market has patterns or regularities is correct, but these regularities come and go with no apparent periodicity. That is, technical analysis works, but no one can say at which moments in time. Apparently, the search for and analysis of patterns is similar to the task of optimizing indicator-based Expert Advisors; if so, it turns out we are wasting our time - it would be easier to write a self-optimizing Expert Advisor that selects indicators (strategies) according to the recent history.

I've been searching for such regularities in the euro, but so far I haven't found any - maybe the crosses have more of them?


6 up knees and 6 down knees were quite enough on the H1 timeframe. Judge for yourself. Suppose the last ZZ knee is an up knee. Let's number the up knees 1v-6v and the down knees 1n-6n. Then we get this sequence of bits:

bit 1: -1 = 1v < 2v, 1 = 1v > 2v

bit 2: -1 = 1v < 3v, 1 = 1v > 3v

...

bit 5: -1 = 1v < 6v, 1 = 1v > 6v

bit 6: -1 = 2v < 3v, 1 = 2v > 3v

and so on, for all up and down knees - 30 bits in total. The number of patterns that can be described by 30 bits is 2^30. But not all bits are important. For example, comparing the most recent knee 1v with knees 4v, 5v and 6v is unimportant in most cases. But you cannot determine in advance which bits are important and which are not. You have to optimize over the history so that each pattern is described by the smallest number of non-zero ("important") bits. This is what takes a very long time. Adding more knees to a pattern description leads to an overfitted pattern dictionary and a loss of generalization.
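A sketch of that 30-bit encoding (Python; I assume index 0 of each list is the most recent knee 1v/1n, and I store the bits as 0/1 packed into an integer instead of -1/1 so the XOR comparison above applies directly):

```python
from itertools import combinations

def encode_zigzag(up_knees, down_knees):
    """Encode 6 up knees and 6 down knees into 30 pairwise-comparison bits.

    up_knees[0] is the most recent up knee (1v), up_knees[5] the oldest (6v);
    a bit is 1 when the more recent knee of the pair is higher, else 0.
    """
    code = 0
    for knees in (up_knees, down_knees):            # 15 pairs per side, 30 bits total
        for i, j in combinations(range(6), 2):      # (1v,2v), (1v,3v), ... (5v,6v)
            code = (code << 1) | (1 if knees[i] > knees[j] else 0)
    return code
```

Zeroing out "unimportant" bits, as described next, would then amount to applying a per-pattern mask before the XOR comparison.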

Comparing patterns "not rigidly" and allowing some bits to not match means that those bits are not important to that pattern and they are nulled in my system. Zero bits were not matched at all. Again, this pattern description system based on zz using binary bits has nothing in common with the pattern-finding system based on unloaded coding, which I reported on the previous page. In that system, the pattern-samples consisted of the prices themselves and fit into the current ISC pattern. The similarity of the current pattern to the exemplar pattern was judged by the ISC error (although it was actually more complicated than that).

 
gpwr:

A total of 30 bits. The number of patterns that can be described by 30 bits is 2^30. But not all bits are important. For example, comparing the most recent knee 1v with knees 4v, 5v and 6v is unimportant in most cases.

I got busy again with statistical research on patterns, looking for matches in the history to "8-bit patterns" - how exactly doesn't really matter...

I noticed an interesting feature: there are sequences of bars that repeat (according to my algorithm) in the history, but less than 30% of the history falls under this coding, which immediately suggests the conclusion announced earlier, that trading by patterns is difficult to implement because they occur so rarely...

approximately:

pattern #  count    pattern #  count    pattern #  count
    1        83         11       3          21       2
    2        34         12       3          22       2
    3        19         13       3          23       2
    4        12         14       3          24       2
    5         6         15       3          25       1
    6         5         16       3          26       1
    7         5         17       2          27       1
    8         4         18       2          28       1
    9         4         19       2          29       1
   10         4         20       2          30       1

But if, according to my coding algorithm, there are not many patterns in the history, then more than 60% of the history contains no repeatable parts at all, and we can assume that this 60% of the information will not reappear in the future.
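IgorM's actual encoding isn't shown, so purely to illustrate the counting, here is a sketch that tallies how often each 8-bit up/down bar pattern repeats and what share of the history is covered by patterns that occur more than once (the one-bit-per-bar encoding is my own simplification):

```python
from collections import Counter

def pattern_stats(closes, nbits=8):
    """Count repeats of nbits-long up/down bar patterns and their history coverage."""
    # crude encoding assumption: one bit per bar, 1 = close higher than previous close
    bits = [1 if closes[i] > closes[i - 1] else 0 for i in range(1, len(closes))]
    counts = Counter(tuple(bits[i:i + nbits]) for i in range(len(bits) - nbits + 1))
    repeated = {p: c for p, c in counts.items() if c > 1}      # patterns seen more than once
    coverage = sum(repeated.values()) / max(1, sum(counts.values()))
    return sorted(repeated.values(), reverse=True), coverage   # counts table, share covered
```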

It's a bit chaotic so far, I'll give it some more thought

 
IgorM: It's a bit chaotic so far, I'll give it some more thought.

At first I thought it would be more logical to use the obtained patterns by analogy with how the market behaved in the history, but I decided simply to plot the patterns by their numbers:

The height of an indicator bar is the pattern number; the voids are where there are no similar combinations in the history. So far I've come to the following: it seems the patterns are the so-called attractors; from the Wiki they fit the description: "... and strange (irregular - often fractal and/or in some sections arranged like a Cantor set; the dynamics on them are usually chaotic)". Indeed, at the initial stage of designing the pattern-finding algorithm I used Cantor sets.

PPS: it's still a bit chaotic, I'll think some more :)

 
wmlab:

Have any of the intraday traders noticed that two EURUSD or GBPUSD intraday charts are often similar? Not always, of course, but often yesterday's pattern surprisingly repeats today, and you can try to profit from that. But...

The peaks and troughs, though repeating the pattern, do not coincide in time. For example, yesterday's mid-day dip started at 2:15pm and today's at 1pm. There are many similarity criteria - Spearman, Pearson, least squares - but I don't know of any that compare charts subject to small distortions on the X-axis. Does anyone know of such methods?



You are not alone in this world.
 
IgorM:

At first I thought it would be more logical to use the obtained patterns by analogy with how the market behaved in the history, but I decided simply to plot the patterns by their numbers, and it turned out like this:

The height of an indicator bar is the pattern number; the voids are where there are no similar combinations in the history. So far I've come to the following: it seems the patterns are the so-called attractors; from the Wiki they fit the description: "... and strange (irregular - often fractal and/or in some sections arranged like a Cantor set; the dynamics on them are usually chaotic)". Indeed, at the initial stage of designing the pattern-finding algorithm I used Cantor sets.

PS: it's a bit chaotic so far, I'll think about it some more :)


DTW-based patterns?
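Since dynamic time warping is exactly the kind of measure that tolerates the time-axis shifts wmlab describes (yesterday's dip at 2:15pm vs today's at 1pm), here is a minimal textbook sketch (Python; the series names in the usage comment are placeholders):

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping: compares two series while allowing the X-axis to stretch."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],        # stretch series a
                                 D[i, j - 1],        # stretch series b
                                 D[i - 1, j - 1])    # match point to point
    return D[n, m]

# usage: yesterday's and today's intraday closes (placeholder names)
# print(dtw_distance(yesterday_closes, today_closes))
```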