Machine learning in trading: theory, models, practice and algo-trading - page 1175

 
Aleksey Vyazmikin:

I have 3 classes, i.e. the tree has a signal to buy, sell, and wait; now I'm experimenting with CatBoost and have simplified the targets by compressing them into one class...
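(For reference, such a compression of the target might look like this with CatBoost - a sketch, not the actual pipeline: the label encoding, column names and file are my assumptions; depth 4, 1000 iterations and the Precision metric are the settings mentioned later in the thread.)

import pandas as pd
from catboost import CatBoostClassifier

df = pd.read_csv("train_sample.csv")   # hypothetical training sample
# 3-class target (1 = buy, -1 = sell, 0 = wait) compressed to one class:
# "enter the market" vs "wait"
df["target_bin"] = (df["target"] != 0).astype(int)

X = df.drop(columns=["target", "target_bin"])
model = CatBoostClassifier(depth=4, iterations=1000,
                           eval_metric="Precision", verbose=False)
model.fit(X, df["target_bin"])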

I don't know anything about the ctree and cnode classes, or about OOP in general - I'm not much of a programmer, so it's hard for me to understand class code without a programmer's help...

 
Maxim Dmitrievsky:

It's more a question of how to build them, and why. You could branch them through genetics in the optimizer, but it wouldn't be like yours. Rather, it would be something like a tree of algorithms that gets optimized, where the layers add complexity on their own... when the optimum is reached, the optimizer weeds out the unnecessary ones. It's just an option - it may not work.

 
Aleksey Vyazmikin:

It's hard to discuss when you don't understand the essence of the code...

I've done an experiment with rotating groups of predictors - exactly what I was talking about: some predictors split the sample well and go to the root, but spoil the whole picture.

So I got 9 groups - I divided the predictors by their logic, in a somewhat more generalized way than the logic alone. That gives 512 combinations of groups (2^9 on/off states); the chart below shows the scatter of financial results depending on the combination of groups. For selection I used the "Precision" metric, configured the other day - no changes; the target is columns_100, tree depth 4, and only 1000 iterations were run.

The table shows the result of the whole set with all predictors - 1710 units - as well as the maximum (3511) and the minimum (607).
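Here is how such an enumeration could be organized - a sketch under my own assumptions: the group contents and the scoring function are placeholders; only the 0 = used / 1 = blocked convention and the 2^9 = 512 combinations come from the text.

from itertools import product

# 9 groups of predictors; the actual column lists are placeholders
GROUPS = {g: [f"group{g}_col{j}" for j in range(3)] for g in range(1, 10)}

def fin_result(columns):
    # placeholder for: train CatBoost on these columns, run the strategy,
    # return the financial result of the combination
    return float(len(columns))

results = {}
for mask in product((0, 1), repeat=9):        # 0 = group used, 1 = group blocked
    used = [col for bit, cols in zip(mask, GROUPS.values()) if bit == 0
            for col in cols]
    results[mask] = fin_result(used)          # 512 combinations in total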


Next I did a calculation with a penalty: I multiplied each group's blocking flag by the financial result of the combination. If the flag is zero (the group was used), we record the combination's result for the group as a positive; if the flag is not zero (the group was blocked), we multiply the result by -1. Then the values are summed for each group. The idea is that the group which collects more penalties while blocked is the one that is worse than the others overall - and the groups can, of course, be ranked this way for further research.
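Continuing the sketch above, the penalty sum itself (sign convention as described: a blocked group receives the combination's result with a minus):

# For each group: add the combination's result when the group was used,
# subtract it when the group was blocked. The lowest totals mark the groups
# whose blocking coincided with the best results, i.e. the worst groups.
group_score = {g: 0.0 for g in GROUPS}
for mask, res in results.items():
    for g, bit in zip(GROUPS, mask):
        group_score[g] += -res if bit else res

ranking = sorted(GROUPS, key=group_score.get)  # worst groups first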

Here is how the accumulated indicators look on the chart:

Of course, the shape of the curves depends on the combination ordering, but since all groups have the same number of zeros and ones, the final result is still fair. And here it is, in the table below.



The table shows that turning off groups 1, 8, and 9 noticeably improves the financial result.

Let's look at the combination with the best financial result:



And now at the combination with the worst result:



In general, the evaluation method worked - the best result does have groups 3, 4, 5, and 7 unblocked and the worst groups blocked, while the worst combination is almost a mirror image of it.

Conclusion: this approach has a right to life and can quickly identify negatively influencing groups of predictors.

From here there are two paths: either look for the cause of such poor results by splitting up the negative groups, or split up the positive groups and try to find where the magic predictors that give the good results are hiding. Or you can go down both paths at once...

The ten best and worst combinations:

Well, and we still need to see what happens with the other target...

And here's another chart - it shows clearly that the more correct decisions there are (Proc_All is the delta of correct minus incorrect decisions relative to all decisions, zeros and ones together), the greater the profit, especially when the correct decisions are trade entries relative to all entries (Proc_1_V02).
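As I read those definitions, the indicators could be computed roughly like this (a sketch from standard confusion counts; the exact formulas behind Proc_All, Proc_1_V01 and Proc_1_V02 are my interpretation of the descriptions in this thread, not the author's code; Proc_1_V01 is defined further down):

import numpy as np

def proc_all(y_true, y_pred):
    # delta of correct vs incorrect decisions relative to all decisions
    right = np.sum(y_pred == y_true)
    return (right - (len(y_true) - right)) / len(y_true)

def proc_1_v01(y_true, y_pred):
    # share of correctly classified target ones among all ones in the sample
    ones = np.sum(y_true == 1)
    return np.sum((y_pred == 1) & (y_true == 1)) / ones if ones else 0.0

def proc_1_v02(y_true, y_pred):
    # share of correct entries among all predicted entries
    entries = np.sum(y_pred == 1)
    return np.sum((y_pred == 1) & (y_true == 1)) / entries if entries else 0.0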


 
Maxim Dmitrievsky:

So then how do you process these predictors in production to get signals? With the same model?

 
Aleksey Vyazmikin:

I'm still far from production (real trading), but I plan to pair models and build forests from them - that's the next stage, though I don't know how to automate this process yet...

The predictors are obtained in their final form in the Expert Advisor, so it's easy to choose which ones to use.
 
Maxim Dmitrievsky:

ok. good luck ) if you get the MT and CatBoost bundle tuned, write an article :)

Just today I was doing some brain work on the algo - the mountains are getting higher, the mountains are getting steeper... oh boy, what an effort all of this takes in a low-level language.

The last 3 months are the training period; the rest worked right back to early 2018, on 15-minute bars.

I think I'll set up monitoring and relax... although there's still a lot of work to do. I made childish mistakes - one of the arrays was originally flipped the wrong way (as series), so all this time I was training on inverted features and worrying that the model wasn't learning well.

 
Aleksey Vyazmikin:

Of course, the question of implementing the CatBoost model in an EA is still open. On the one hand, the code can be exported to C++, which I don't understand at all; on the other hand, I want to get the data on the leaves in order to correct the model, and for that I need an interpreter, which I can't write myself...

Yes, you have your own approach - the models work over relatively short horizons. But why not try real conditions on history: train on 3 months of the past, trade 1 month into the future, and then stitch the results together - you might get an interesting result, suitable for use.
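What's being suggested is essentially walk-forward testing. A minimal sketch of the window arithmetic (3-month training and 1-month trading windows as in the message; what happens inside each window - training, trading, scoring - is left out):

import pandas as pd

def walk_forward_windows(index, train_months=3, trade_months=1):
    # Yield ((train_start, train_end), (trade_start, trade_end)) pairs that
    # slide forward one trading period at a time until the data runs out.
    start = index.min()
    while True:
        train_end = start + pd.DateOffset(months=train_months)
        trade_end = train_end + pd.DateOffset(months=trade_months)
        if trade_end > index.max():
            return
        yield (start, train_end), (train_end, trade_end)
        start += pd.DateOffset(months=trade_months)

Train a model on each first slice, trade the second, and stitch the per-window results into a single out-of-sample curve.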

I made a funny mistake - I forgot to mark one of the target columns in the training sample as "do not use", and was happy for a couple of minutes while evaluating the results :)

 
Ivan Negreshniy:

It reads smoothly, but I can't grasp the meaning - wordplay or a figure of speech that is beyond me... And I second Maxim's suggestion about an article :)

As for connecting EAs to the Python console, I already offered my own engine, which allows sending and executing blocks of Python code from MQL in real time; it works even in the tester.
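Ivan's engine itself isn't shown here, but as a generic illustration of the idea (my sketch, arbitrary port and protocol): a loopback server that executes received blocks of Python code; on the MQL side an EA could connect to it with the built-in socket functions.

import socketserver

class ExecHandler(socketserver.StreamRequestHandler):
    # Receive a block of Python source terminated by an empty line, exec it,
    # and send back whatever the block assigned to 'result'.
    def handle(self):
        lines = []
        for raw in self.rfile:
            line = raw.decode("utf-8")
            if not line.strip():
                break
            lines.append(line)
        scope = {}
        exec("".join(lines), scope)   # no sandboxing: trusted local input only
        self.wfile.write(str(scope.get("result", "")).encode("utf-8") + b"\n")

if __name__ == "__main__":
    with socketserver.TCPServer(("127.0.0.1", 9090), ExecHandler) as server:
        server.serve_forever()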

 
Aleksey Vyazmikin:

And you talk about an article - if I can't express a simple thought, what kind of article could there be...

The point is that tree construction uses the greedy principle, which prevents logical relationships between predictors from emerging once they are initially divided into two large groups (you understand how a decision tree is built, right?). That is why the model can turn out better on a smaller number of predictors - both for this reason and because more combinations of predictors get checked in the same amount of time, though the latter matters less.
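A toy illustration of that greediness (my sketch, not code from the thread): each split is chosen purely by its immediate gain, so predictors that are informative only in combination - the classic XOR case - may never be chosen at all.

import numpy as np

def gini(y):
    # Gini impurity of a 0/1 label vector
    p = y.mean()
    return 2 * p * (1 - p)

def best_greedy_split(X, y):
    # Choose the single (feature, threshold) with the best immediate gain;
    # the tree never looks ahead to what a pair of splits could achieve.
    best_f, best_t, best_gain = None, None, 0.0
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f])[:-1]:
            mask = X[:, f] <= t
            left, right = y[mask], y[~mask]
            gain = gini(y) - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if gain > best_gain:
                best_f, best_t, best_gain = f, t, gain
    return best_f, best_t, best_gain

# XOR target: neither feature helps alone, so no split shows any gain
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0])
print(best_greedy_split(X, y))   # (None, None, 0.0)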

Ask a specific question if my verbiage doesn't get the meaning across.

 
Ivan Negreshniy:

A cursory expression of thoughts on a forum and the publication of an article are different genres.

How much has already been discussed in this thread, yet with no effect and no practical results.

And there's no sense in discussing such fundamental questions, especially ones like tree structure.

It's better to write articles and/or code - to compare the effectiveness of ML models and the libraries used, and to test and monitor Expert Advisors...

 
Aleksey Vyazmikin:

There are articles on ML where everything is scientifically sound; here it's more that beginners can ask questions or discuss ideas worth trying. In general, an article should be written when you are already confident in the results of your actions - and I am still far from that.

 

Yesterday I wrote about the different financial results when using different groups of predictors, and said that sometimes it is not rational to throw everything into one pot. In that post I used profit as the indicator for selecting significant groups. Today, mindful of Maxim's question, I decided to look at the models' results on the exam sample (before, I had looked only at the test sample), performed the same manipulations, and was upset - the significance of the groups had turned upside down. How come, I thought... Comparing the results of the two samples, it became clear that pure profit is not a suitable criterion - I need to look for other ways to solve the problem.

So I had an idea: what if I don't get greedy, and instead count as a good result the selection of those models that together give higher profit than the average profit of all 512 models? I decided to look for the best way to do this. The methodology is simple: on the test sample we determine the average value of an indicator, then for each model variant we check whether it is above the average (mark it 1) or below (mark it 0) - I did this in Excel, and the intermediate calculations are useful for understanding. Then we do the same with the exam sample and compare the marks from both samples, obtaining statistics on whether each model stays in the same class (above or below the average) in both. After that I also looked at how much average profit each indicator retains when the sample is divided by the same above-average / below-average principle.
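The check itself is tiny. A sketch with hypothetical names, where test_df and exam_df hold one row per model variant (512 rows) with the indicator columns:

import pandas as pd

def consistency(test: pd.Series, exam: pd.Series) -> float:
    # Mark each model 1/0 by "above that sample's mean", then measure how
    # often the marks agree between the test and exam samples.
    return ((test > test.mean()) == (exam > exam.mean())).mean()

# for col in ("FinRez", "Proc_All", "Proc_1_V01", "Proc_1_V02"):
#     print(col, consistency(test_df[col], exam_df[col]))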

We got an interesting result



The table shows that selection by profit (FinRez) gave the worst matching percentage between the two samples; moreover, with the "below average" filter (the column labeled "0") this group contained mostly models that were profitable on the exam sample, rather than the group that showed above-average results on the test. If you think about it, this makes sense: training is done by classification, not regression, and since my strategy has a fixed take profit while the stop loss floats, the results get distorted even with the same number of correctly classified signals. The other indicators have much higher predictive ability, meaning that on average a model's tendency holds whether on the test sample or the exam sample. For now I decided to settle on the indicator Proc_1_V01 - the percentage of correctly classified target ones out of all ones in the sample. This indicator keeps its relationship in, so to speak, two-dimensional space (above/below the average value) in 87.3% of cases, which is very good in my opinion; in addition, dividing the sample by this method gives approximately the same average profit, even slightly above the overall average - 1488. It follows that we have a better chance of a good financial result when choosing models by this indicator - or am I missing something?

For now I have decided to review the groups using the Proc_1_V01 indicator instead of FinRez (profit); the methodology remains the same, and the result is very impressive.


The significance of the groups kept its consistency on the exam sample as well as on the test sample, with the exception of group 7 - unlike the situation where group significance was determined by the financial result, shown in the table below.



The conclusion I draw here is that stability of the indicator is more important than absolute profit values, which are more likely to be random.

Here's one more chart, showing the distribution of profit across the sample (normalized to 100% for the test and exam samples respectively): on the left are the filtered-out results, on the right the retained ones, when the cutoff is the average value of Proc_1_V01 increased by a factor of 1.25.

And this is for comparison - if the selection is done by profit, the density increased, but we also got a fat tail from 20% down to -15%, which is not good.

All in all, I need to come up with an indicator that captures the regularity best.

But by combining the two indicators - adding to Proc_1_V01 a filter on the percentage of profit, >20% (since there are many losses at lower values) and <80% (since extreme values are often random) - we can get a more satisfactory picture.
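That combined filter as a sketch (the 20%/80% bounds are from the text, the 1.25x-average cutoff for Proc_1_V01 is taken from the chart above; the DataFrame and its column names are hypothetical):

import pandas as pd

def select_models(models: pd.DataFrame) -> pd.DataFrame:
    # Keep models whose Proc_1_V01 is at least 1.25x the average and whose
    # profit percentage lies strictly between 20% and 80% (extreme values
    # are treated as likely random).
    strong = models["Proc_1_V01"] >= 1.25 * models["Proc_1_V01"].mean()
    moderate = models["ProfitPct"].between(20, 80, inclusive="neither")
    return models[strong & moderate]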

