Machine learning in trading: theory, models, practice and algo-trading - page 2645

 
mytarmailS #:
So, the association rules didn't work out?

The idea is generally clear. In any case, we first need to think of an algorithm for partitioning the continuous set of predictors into the discrete items from which the rules are formed. If such good predictors and a good partitioning of them really exist and can be found, the rest is a matter of technique.
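For illustration, a minimal sketch of one such partitioning, assuming quantile binning as the scheme (the choice of scheme, the predictor names and the bin count are all assumptions for the example):

```python
import pandas as pd

def to_items(df: pd.DataFrame, n_bins: int = 5) -> pd.DataFrame:
    """Turn each continuous predictor into discrete items like 'rsi=bin2',
    suitable for mining association rules."""
    items = pd.DataFrame(index=df.index)
    for col in df.columns:
        # qcut gives roughly equally populated bins; duplicates='drop'
        # guards against long constant stretches of a predictor
        binned = pd.qcut(df[col], q=n_bins, labels=False, duplicates='drop')
        items[col] = col + '=bin' + binned.astype(str)
    return items

# two made-up predictors
df = pd.DataFrame({'rsi': [30.1, 55.2, 71.8, 28.4, 64.0, 41.5],
                   'atr': [1.2, 0.8, 1.9, 1.1, 1.5, 0.9]})
print(to_items(df, n_bins=3))
```

Whether the bins are "good" is exactly the open question: any fixed scheme like this one is only a starting point.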

 
Aleksey Nikolayev #:

The idea is generally clear. In any case, we first need to think of an algorithm for partitioning the continuous set of predictors into the discrete items from which the rules are formed. If such good predictors and a good partitioning of them really exist and can be found, the rest is a matter of technique.

At first I wrote something wrong - I was thinking of the wrong thing.
It all depends on what you want to do. If you are looking for clear levels: I just normalised the price and rounded it a bit to find a bounce pattern, but the search space is large and repeatability is low. If it is something else, ordinary clustering is a good solution.
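Roughly, the "normalise and round" idea could look like the sketch below; the min-max normalisation and the grid step are my assumptions, not the exact code:

```python
import numpy as np

def quantise_price(close: np.ndarray, step: float = 0.01) -> np.ndarray:
    """Normalise price to [0, 1] over the window shown, then snap it to a
    coarse grid so repeated visits to a level become exact repeats.
    (In live use the normalisation would be over a rolling window.)"""
    lo, hi = close.min(), close.max()
    norm = (close - lo) / (hi - lo)        # normalise to [0, 1]
    return np.round(norm / step) * step    # round to the grid

close = np.array([1.1012, 1.1034, 1.1011, 1.1035, 1.1012])
levels = quantise_price(close, step=0.01)
# identical rounded values now mark candidate support/resistance levels
print(levels)
```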
 

Experimenting with symbolic regression...

Basically, sequential association rules are implemented, but with logical rules instead of static items. This gives the algorithm more depth: it can interpret its observations much more subtly. This concept makes it possible to describe any kind of regularity, because the complexity and the type of the rules are not limited by anything.
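A minimal sketch of what "logical rules instead of static items" can mean, with invented predicates: each item is an arbitrary condition on the raw series, including comparisons of values with each other rather than only with fixed thresholds:

```python
from typing import Callable, Sequence

Rule = Callable[[Sequence[float]], bool]

# items are no longer fixed symbols but arbitrary logical conditions,
# including comparisons of prices with each other
rules: list[Rule] = [
    lambda w: w[-1] > w[-2],                     # last bar closed up
    lambda w: w[-1] > max(w[-5:-1]),             # breakout above the recent window
    lambda w: w[-1] - w[0] > 2 * (w[1] - w[0]),  # acceleration vs the first step
]

window = [1.10, 1.12, 1.11, 1.13, 1.14]
observation = [r(window) for r in rules]   # the "observation" the algorithm sees
print(observation)
```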

There is a fly in the ointment: the algorithm cannot handle large data arrays, because it takes very long due to the peculiarities of its architecture.

Therefore, I came up with some approaches to reduce the dimensionality of the search.

1) I am interested only in extrema; concentrating on them shrinks the search space 10-20 times, and really all we need from the market is to know whether this is a reversal or not. Trends-schmends, flats-schmats... that is subjective nonsense that keeps us from concentrating on the main thing (see the sketch right after this list).

2) I invented and implemented something like "one-shot learning" as I see it: now I don't need to crunch the whole history to learn something. It's not some cool know-how, more an act of desperation, because learning on the whole history will not work, at least not yet.
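A minimal sketch of point 1, assuming local extrema are taken with scipy's argrelextrema (the detection method and window size are illustrative assumptions):

```python
import numpy as np
from scipy.signal import argrelextrema

close = np.array([1.10, 1.12, 1.11, 1.15, 1.13, 1.16, 1.12, 1.14])

order = 1   # bars on each side that must be lower (higher) for a high (low)
highs = argrelextrema(close, np.greater, order=order)[0]
lows = argrelextrema(close, np.less, order=order)[0]

# only these candidate reversal bars are fed into the rule search,
# which is what shrinks the search space 10-20 times
print("highs at", highs, "lows at", lows)
```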

So far these are only the first experiments, but I can already say with certainty that the algorithm is not completely stupid and is able to learn something.


The trading algorithm itself consists of patterns; a pattern is a set of rules for a specific situation.

This is what a pattern looks like for one situation.

The rules are primitive, but we are just getting warmed up.)

A pattern is traded like a forest: there are many rules in a pattern, and if some threshold number of rules fires - hurrah, we recognise the reversal and trade it.

It looks something like this.
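A minimal sketch of that forest-style vote; the rules and the threshold are invented for the example:

```python
from typing import Callable, Sequence

Rule = Callable[[Sequence[float]], bool]

def pattern_fires(rules: list[Rule], window: Sequence[float],
                  threshold: int) -> bool:
    """A pattern is a bag of rules; it fires when at least
    `threshold` of them are triggered on the current window."""
    votes = sum(r(window) for r in rules)
    return votes >= threshold

rules: list[Rule] = [
    lambda w: w[-1] < w[-2],          # last bar down
    lambda w: w[-2] < w[-3],          # previous bar down
    lambda w: w[-1] < min(w[:-2]),    # new local low
]

window = [1.14, 1.13, 1.12, 1.11]
if pattern_fires(rules, window, threshold=2):
    print("reversal recognised - trade it")
```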


What is the beauty of the algorithm?

1) It digs deep into the pattern, if I may say so.

2) It is not tied to indices and does not work with tabular data, so it is resistant to non-stationarity, just like association rules.

 

By the way, this may be interesting to someone.

Very often, if the bounce doesn't work out, resistance then becomes support,

like in the picture.

And this can be explained, so the levels exist - they can't not be there.

 
Aleksey Nikolayev #:

I am thinking about the possibility of combining my idea with the idea of the PRIM algorithm. I don't have much to brag about.

Curiously, this PRIM contains the same ideas that I am trying to implement.

I have read the article, but a few points are unclear:

1. What quantisation process is used there to partition into boundaries? Is it a uniform partitioning with a fixed step?

2. The boundaries are clear - I do the same myself - but in the picture they have an additional cut-off. Is the second cut-off simply an exclusion of part of the sample?

3. If I understood correctly, they, like me, consider each predictor separately, finding so-called "boxes", but I did not understand from the description how these different predictors are then combined.

The disadvantage of this method is that it evaluates the stability of indicators through bootstrap sampling (randomly drawing a given fraction of the whole sample), which gives no insight into how that stability evolves over time. This matters for trading, because a pattern may exist at the beginning of the sample but completely disappear by its end.
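A small sketch of this point, assuming hit rates over consecutive chronological blocks as the alternative to bootstrap (data are simulated for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
hits = rng.integers(0, 2, size=1000)   # 1 = the rule's prediction was correct

# chronological blocks instead of random bootstrap draws
for i, block in enumerate(np.array_split(hits, 5)):
    print(f"block {i}: hit rate = {block.mean():.2f}")
# a hit rate that decays from block to block signals a dying pattern,
# which one bootstrap estimate over the whole sample would average away
```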

Have you made any improvements to this method?

 
mytarmailS #:

Experimenting with symbolic regression...

Basically, sequential association rules are implemented, but with logical rules instead of static items. This gives the algorithm more depth: it can interpret its observations much more subtly. This concept makes it possible to describe any kind of regularity, because the complexity and the type of the rules are not limited by anything.

Do I understand correctly that it is the same table of predictors, but the inequalities are constructed not only between predictor values and thresholds, but also between the predictors themselves?

mytarmailS #:


2) I invented and implemented something like "one-shot learning" as I see it: now I don't need to crunch the whole history to learn something. It's not some cool know-how, more an act of desperation, because learning on the whole history will not work, at least not yet.

I.e. take one example, generate many variants of leaves (patterns) consisting of inequalities, then test them on a larger sample and keep those that show acceptable results - right?
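If that reading is right, a rough sketch of the generate-and-test loop might look like this (the jitter, thresholds and acceptance criteria are all invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

example = np.array([1.0, 1.2, 0.9])   # one observed reversal, 3 features

def make_candidate():
    """One random leaf: inequalities with thresholds jittered around the example."""
    thresholds = example + rng.normal(0, 0.05, size=example.shape)
    signs = rng.choice([-1, 1], size=example.shape)  # direction of each inequality
    return thresholds, signs

def leaf_fires(x, thresholds, signs):
    # the leaf fires when every inequality sign_i * (x_i - t_i) > 0 holds
    return np.all(signs * (x - thresholds) > 0)

# test candidates on a larger sample, keep those with acceptable results
sample = rng.normal(1.0, 0.2, size=(500, 3))
labels = rng.integers(0, 2, size=500)   # 1 = a reversal actually followed

kept = []
for _ in range(200):
    thr, sg = make_candidate()
    mask = np.array([leaf_fires(x, thr, sg) for x in sample])
    if mask.sum() >= 10 and labels[mask].mean() > 0.6:
        kept.append((thr, sg))

print(len(kept), "candidate patterns survived the test")
```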

mytarmailS #:

What is the beauty of the algorithm?

1) It digs deep into the pattern, if I may say so.

2) It is not tied to indices and does not work with tabular data, so it is resistant to non-stationarity, just like association rules.

And here I don't understand: if the data is not tabular, then what do you feed it as input?

 
Aleksey Vyazmikin #:
1. Anything at all - the limit is your imagination
2. Yes
3. The same as association rules, but deeper
 
mytarmailS #:
1. Anything at all - the limit is your imagination
2. Yes
3. The same as association rules, but deeper

1. Can you be more specific - what else could there be, for example?

2. I see. And how quickly are these rules generated? Maybe it makes sense to port them to MQL5 and run them through history - that may be faster thanks to the agents. I already did something similar, which I wrote about long ago, but there I took leaves from genetic trees.

3. I don't understand the answer. What do you feed to the input - that is the question.

 
If a secret grail is published here, people will start explaining to the author in response what a fool he is)

There is some truth in this explanation, because there is NO definition of the concept of a "GRAIL in trading" that ALL of us could agree on...

And where there is no definition, the "swan, crayfish and pike" begins - everyone pulling in their own direction...

 
Aleksey Vyazmikin #:

Curiously, this PRIM contains the same ideas that I am trying to implement.

I have read the article, but a few points are unclear:

1. What quantisation process is used there to partition into boundaries? Is it a uniform partitioning with a fixed step?

2. The boundaries are clear - I do the same myself - but in the picture they have an additional cut-off. Is the second cut-off simply an exclusion of part of the sample?

3. If I understood correctly, they, like me, consider each predictor separately, finding so-called "boxes", but I did not understand from the description how these different predictors are then combined.

The disadvantage of this method is that it evaluates the stability of indicators through bootstrap sampling (randomly drawing a given fraction of the whole sample), which gives no insight into how that stability evolves over time. This matters for trading, because a pattern may exist at the beginning of the sample but completely disappear by its end.

Have you made any improvements to this method?

As I understand it, it is a modification of what is usually done when building a decision tree. At each step, a variable is sought together with a piece that can be bitten off from it - they call this peeling. From all such possible steps, those are chosen that build an "optimal trajectory" - the dependence of the mean target on the number of remaining points. The trajectory is also used to determine when the algorithm stops (when shrinking the box no longer gives a noticeable improvement).
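A minimal sketch of that peeling loop, assuming a peel fraction alpha and the mean target as the score (both are illustrative choices):

```python
import numpy as np

def prim_peel(X, y, alpha=0.1, min_points=30):
    """Shrink a box around the points with the highest mean target."""
    inside = np.ones(len(y), dtype=bool)
    trajectory = [(inside.sum(), y.mean())]   # mean target vs points left
    while inside.sum() > min_points:
        best = None
        for j in range(X.shape[1]):
            xj = X[inside, j]
            # two candidate peels per variable: cut the bottom or the top alpha
            for lo, hi in [(np.quantile(xj, alpha), np.inf),
                           (-np.inf, np.quantile(xj, 1 - alpha))]:
                cand = inside & (X[:, j] > lo) & (X[:, j] < hi)
                if min_points <= cand.sum() < inside.sum():
                    score = y[cand].mean()
                    if best is None or score > best[0]:
                        best = (score, cand)
        if best is None:
            break                              # no admissible peel left
        inside = best[1]
        trajectory.append((inside.sum(), best[0]))
    # stopping point: read the trajectory and cut where it stops improving
    return inside, trajectory

rng = np.random.default_rng(2)
X, y = rng.normal(size=(500, 3)), rng.normal(size=500)
box, traj = prim_peel(X, y)
print(traj[:3])
```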

The approach is interesting first of all because it shows that tree-building algorithms can and should be modified.
