Machine learning in trading: theory, models, practice and algo-trading - page 434

 
Maxim Dmitrievsky:

At a minimum you need to apply affine transformations to the charts, because patterns occur at different angles of inclination (they are self-affine structures),

i.e. compress or stretch the template vertically? An interesting option. But I think the compression should be no more than 30-50%, otherwise you may end up matching patterns from the volatile American session against random nighttime fluctuations, for example. The two regimes have different patterns and different players.
If you allow stretching/compression of up to 30-50%, the increase in the number of patterns found will probably be small, so it will hardly affect the forecast and can probably be neglected... But this has to be checked.

And it is not at all clear how to implement this compression in MT code without using ready-made external products...

Searching on different TFs?

It seems to me that even M1 and M5 already have different patterns, and it is wrong to look for the same patterns on both. The patterns may look similar, but the reasons that produced these chart shapes will be different.

 
elibrarius:
I don't see any other options for comparing two price charts. What other options do you have?

Suppose there are two price arrays with 5 prices in each
The first is a1,a2,a3,a4,a5.
the second one is b1,b2,b3,b4,b5.

1) The price chart can be detrended, i.e. rotated from its sloped orientation to a horizontal one. This can be done with linear regression: fit it, and use the array of residuals instead of the original price series. Whether this step helps in searching for patterns I don't know; I haven't studied its effect in detail, and so far I haven't used it myself.

2) It is questionable to call a raw series of prices a pattern; there should be a mathematical description of the shape formed by these prices. For example, we can take the price increment on every bar and use these increments as the pattern description:
the first pattern is described by a5-a4, a4-a3, a3-a2, a2-a1
the second by b5-b4, b4-b3, b3-b2, b2-b1.

3) "Similarity" of patterns: either correlation (I did not check it myself) or Euclidean distance via the Pythagorean theorem (I checked it, and it worked very well) -
sqrt( ((a5-a4)-(b5-b4))^2 + ((a4-a3)-(b4-b3))^2 + ((a3-a2)-(b3-b2))^2 + ((a2-a1)-(b2-b1))^2 )
or something else; I think there must be better options.
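The scheme above (increments as the pattern description, Euclidean distance as the similarity measure) can be sketched in a few lines; a minimal illustration with hypothetical function names, not anyone's actual trading code:

```python
import math

def pattern_descriptor(prices):
    # Describe the pattern by its bar-to-bar increments,
    # so the absolute price level drops out.
    return [b - a for a, b in zip(prices, prices[1:])]

def pattern_distance(p1, p2):
    # Euclidean distance between the two increment descriptors;
    # the ordering of the increments does not affect the result.
    d1, d2 = pattern_descriptor(p1), pattern_descriptor(p2)
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(d1, d2)))

# Two patterns with the same shape at different price levels match exactly:
a = [1.0, 2.0, 1.5, 2.5, 3.0]
b = [11.0, 12.0, 11.5, 12.5, 13.0]
print(pattern_distance(a, b))  # → 0.0
```

A distance of 0 means identical shapes; smaller values mean more similar patterns, which makes it easy to rank candidates found in history.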

 
elibrarius:

i.e. compress or stretch the pattern vertically? An interesting option. But I think the compression should be no more than 30-50%, otherwise you may end up matching patterns from the volatile American session against random nighttime fluctuations, for example. The two regimes have different patterns and different players.
If you allow stretching/compression of up to 30-50%, the increase in the number of patterns found will probably be small, so it will hardly affect the forecast and can probably be neglected... But this has to be checked.

And it is not at all clear how to implement this compression in MT code without using ready-made external products...

It seems to me that even M1 and M5 already have different patterns, and it is wrong to look for the same patterns on both. The patterns may look similar, but the reasons that produced these chart shapes will be different.

For a better understanding it is worth studying the properties of fractals, in particular, as I already wrote, scaling and self-affinity.

Scaling, by definition: similar patterns form on different time scales. We can take 1-minute quotes, build an array of synthetic TFs with a given multiplier, and search this array for a pattern similar to the current one.
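As a sketch of the synthetic-TF idea, assuming we only have M1 closes and use a last-close aggregation rule (both function names are hypothetical):

```python
def synthetic_tf(closes_m1, multiplier):
    # Take the last close of each group of `multiplier` M1 bars;
    # an incomplete trailing group is dropped.
    return [closes_m1[i + multiplier - 1]
            for i in range(0, len(closes_m1) - multiplier + 1, multiplier)]

def synthetic_tf_array(closes_m1, multipliers):
    # The "array of synthetic TFs" from the post: one series per multiplier.
    return {m: synthetic_tf(closes_m1, m) for m in multipliers}

closes = list(range(1, 13))        # 12 M1 closes: 1..12
print(synthetic_tf(closes, 3))     # → [3, 6, 9, 12]
```

With real quotes you would aggregate full OHLC bars rather than closes only, but the grouping logic is the same.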

Self-affinity: patterns are similar but never exactly the same. This is the main problem when choosing a "similarity" criterion; correlation is not appropriate here.

The difference shows up more in the slope of the patterns (the slope angle of the regression line) than in their compression/stretching. I fitted a linear regression (LR) to the current pattern, then took quotes from other segments and rotated them to the slope angle of the current pattern; as a result, similar patterns were found more often. And when building a forecast, the forecast curve was transformed back taking into account the slope of the current pattern's LR.
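The slope-matching step could be sketched like this: fit the LR slope of a segment and rotate it to a target slope, keeping the shape of the fluctuations around the trend line (a minimal illustration, not the author's exact code):

```python
def linreg_slope(y):
    # Least-squares slope of y against the bar index 0..n-1.
    n = len(y)
    mx = (n - 1) / 2
    my = sum(y) / n
    num = sum((i - mx) * (v - my) for i, v in enumerate(y))
    den = sum((i - mx) ** 2 for i in range(n))
    return num / den

def rotate_to_slope(y, target_slope):
    # Replace the segment's own LR slope with target_slope,
    # keeping the residual shape around the trend line.
    s = linreg_slope(y)
    return [v + (target_slope - s) * i for i, v in enumerate(y)]
```

A candidate segment would be passed through rotate_to_slope(segment, linreg_slope(current_pattern)) before measuring similarity, and the inverse rotation applied to the forecast afterwards.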

Next: self-affinity (self-similarity) of fractals has another interesting property: the same patterns, only smaller, form inside a large pattern. A search algorithm (for example): take the last 500 bars of the 1-hour timeframe with a 10-bar shift, then in the tester run through the minute or 5-minute quotes looking for patterns similar to the 1-hour one. If one is found, project the last 10 bars of the 1-hour pattern onto the 5-minute pattern - that is the forecast. Do this, too, taking into account the slope angles of the regression lines. This is how I did it.
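A rough sketch of the projection step, under the assumption that the large pattern's tail increments are rescaled by the ratio of the two matched segments' price ranges (that scaling rule is my assumption; the post does not specify one):

```python
def project_forecast(big_pattern, small_pattern, k):
    # The first len(big_pattern) - k bars of the large-TF pattern were
    # matched against small_pattern; project the remaining k increments
    # onto the small TF, rescaled by the ratio of the matched ranges.
    matched = big_pattern[:len(big_pattern) - k]
    rng_big = max(matched) - min(matched)
    rng_small = max(small_pattern) - min(small_pattern)
    scale = rng_small / rng_big if rng_big else 1.0
    tail = [b - a for a, b in zip(big_pattern[-k - 1:], big_pattern[-k:])]
    forecast, last = [], small_pattern[-1]
    for d in tail:
        last += d * scale
        forecast.append(last)
    return forecast

print(project_forecast([0.0, 1.0, 2.0, 3.0, 4.0], [10.0, 10.5, 11.0], 2))
# → [11.5, 12.0]
```

The returned values are the k forecast points continuing the small-TF pattern after its last bar.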

I haven't done cross-validation on a group of consecutive patterns yet, but it seems an interesting topic.

 
Dr. Trader:

Suppose there are two price arrays, with 5 prices in each
the first is a1,a2,a3,a4,a5
the second one is b1,b2,b3,b4,b5.

1) The price chart can be detrended, i.e. rotated from its sloped orientation to a horizontal one. This can be done with linear regression: fit it, and use the array of residuals instead of the original price series. Whether this step helps in searching for patterns I don't know; I haven't studied its effect in detail, and so far I haven't used it myself.

2) It is questionable to call a raw series of prices a pattern; there should be a mathematical description of the shape formed by these prices. For example, we can take the price increment on every bar and use these increments as the pattern description:
the first pattern is described by a5-a4, a4-a3, a3-a2, a2-a1
the second by b5-b4, b4-b3, b3-b2, b2-b1.

3) "Similarity" of patterns: either correlation (I did not check it myself) or Euclidean distance via the Pythagorean theorem (I checked it, and it worked very well) -
sqrt( ((a5-a4)-(b5-b4))^2 + ((a4-a3)-(b4-b3))^2 + ((a3-a2)-(b3-b2))^2 + ((a2-a1)-(b2-b1))^2 )
or something else; I think there must be better options.


I have noticed an interesting feature: patterns can be searched for not on the price chart but on the RSI indicator. Interestingly, no matter how you detrend or rotate the chart, the RSI built on it shows approximately the same thing, i.e. there is no need to rotate the charts by an angle. But the output (the forecast) will still need to be transformed taking into account the slope of the LR. Plus we can build cross-correlations and other things on the obtained indicator values.
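A minimal RSI sketch to make the idea concrete (simple averaging of gains and losses rather than Wilder's smoothing); the pattern search would then run on these values, e.g. with the same Euclidean distance as on price increments:

```python
def rsi(closes, period=14):
    # RSI with simple averaging of gains/losses over each window.
    deltas = [b - a for a, b in zip(closes, closes[1:])]
    out = []
    for i in range(period - 1, len(deltas)):
        window = deltas[i - period + 1:i + 1]
        gain = sum(d for d in window if d > 0) / period
        loss = -sum(d for d in window if d < 0) / period
        if loss == 0:
            out.append(100.0)          # no losing bars in the window
        else:
            out.append(100.0 - 100.0 / (1.0 + gain / loss))
    return out
```

On a steadily rising series the RSI pins at 100; on a series of equal alternating up and down moves it sits at 50. That bounded, level-independent behaviour is what makes it a convenient basis for pattern matching.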
 

Maxim Dmitrievsky and Dr. Trader
It seems you both spent a lot of time searching for patterns in history, like the indicator I made.
Do you still use it, or did you switch to neural networks because the pattern search proved futile? Or is the efficiency of the two approaches the same, and the only difference is speed?

 
elibrarius:

Maxim Dmitrievsky and Dr. Trader
It seems you both spent a lot of time searching for patterns in history, like the indicator I made.
Do you still use it, or did you switch to neural networks because the pattern search proved futile? Or are the results of these approaches the same, and the only difference is speed?

I gave up working with patterns because it didn't give the result I wanted; I'll come back to it later. There is a lot to invent and do, it's time-consuming and not obvious until you do it. Before that a friend and I had some fractal-analysis developments based on the Weierstrass-Mandelbrot function, but they also used correlation, and I found good patterns only once. If I manage to use convolutions or think of some other new way to search for patterns, I'll come back to it... In short, I'm stuck on correlation, and it's not good enough.
 
Maxim Dmitrievsky:
I gave up working with patterns because it didn't give the result I wanted; I'll come back to it later. There is a lot to invent and do, it's time-consuming and not obvious until you do it. Before that a friend and I had some fractal-analysis developments based on the Weierstrass-Mandelbrot function, but they also used correlation, and I found good patterns only once. If I manage to use convolutions or think of some other new way to search for patterns, I'll come back to it... In short, I'm stuck on correlation, and it's not good enough.

Here, if you're interested: ages ago I recorded an introductory video on fractal analysis. From my point of view it is directly related to pattern analysis.


 

And on what principle do simple NNs (a plain MLP) make a prediction?

It seems to me it is ordinary correlation: the weight of a connection between neurons grows with the number of times the signal along that link repeated together with the matching NN response; if the link was sometimes in + and sometimes in -, the weight stays near 0, which is, in fact, ordinary averaging. Then, using these weights, we measure how similar the input combination of predictors is to the average over the training period.
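The intuition above can be illustrated with a single linear neuron trained by the delta rule: a weight fed by a consistently agreeing input grows, while one fed by an inconsistent input hovers near zero. This is only a toy sketch of the simplest case, not a claim about how a real multi-layer MLP trained with backpropagation behaves:

```python
def train_neuron(samples, lr=0.1, epochs=200):
    # samples: list of (inputs, target) pairs; returns learned weights.
    n = len(samples[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        for x, t in samples:
            y = sum(wi * xi for wi, xi in zip(w, x))
            err = t - y
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w

# Input 0 always agrees with the target; input 1 is uncorrelated with it:
samples = [([1, 1], 1), ([1, -1], 1), ([-1, 1], -1), ([-1, -1], -1)]
w = train_neuron(samples)
# w[0] converges toward 1, w[1] stays near 0
```

The repeated, sign-consistent input accumulates weight; the input whose sign flips relative to the target has its updates cancel out, which is exactly the averaging effect described above.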

 

I haven't given up yet; I'm trying different algorithms to squeeze more profit out of patterns.
Compared to a neural network this approach gives me more possibilities; I even wrote earlier that I try to take the influence of time into account (e.g., decreasing similarity depending on how long ago the similar pattern was found), plus various other tricks. You can't do that with a neural network.
My neural network could never learn to trade profitably using prices alone. But the pattern model did, so the choice is obvious :)

A neural network can be used on various indicators, though. And it hardly matters whether it is a neural network, a random forest, or even a linear model: anything will work if the indicators and the training target are chosen correctly.


I.e. if you work with patterns, you have to spend a lot of time creating a method for assessing the "similarity" of patterns; you will not find much useful information on the subject, so you have to experiment a lot.

Whereas with indicators, most of the time goes into selecting indicators and training targets; choosing and training the model (neural network, forest, boosting) takes little time.

 
Maxim Dmitrievsky:
I gave up working with patterns because it didn't give the result I wanted; I'll come back to it later. There is a lot to invent and do, it's time-consuming and not obvious until you do it. Before that a friend and I had some fractal-analysis developments based on the Weierstrass-Mandelbrot function, but they also used correlation, and I found good patterns only once. If I manage to use convolutions or think of some other new way to search for patterns, I'll come back to it... In short, I'm stuck on correlation, and it's not good enough.


the only option left is to ask the stableman for help) he will teach me how a real man should trade... it is not patterns and science that matter, but courage and strength... and you need a real Chechen beard... then the market will not resist an inflexible and principled warrior...

hutch-style trading rules...
