Discussion of article "Grokking market "memory" through differentiation and entropy analysis" - page 10
If we take his book specifically, fractional differentiation is not very appealing in practice; besides, it is an old idea, going back to ARIMA-family modifications. Taking such series on their own, without other features, I did not get satisfactory results.
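For context, a minimal sketch of what fractional differentiation does (the fixed-window variant popularized in that book): the weights come from the binomial series for (1-B)^d, and d between 0 and 1 interpolates between the raw series and the first difference. Function names here are illustrative, not from any particular library.

```python
import numpy as np

def frac_diff_weights(d, size):
    # Binomial-series weights for fractional differencing of order d:
    # w_0 = 1, w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def frac_diff(series, d, window=20):
    # Fixed-width-window fractional differencing; the first
    # (window - 1) observations are dropped for lack of history.
    w = frac_diff_weights(d, window)[::-1]  # oldest weight first
    out = [w @ series[i - window + 1:i + 1]
           for i in range(window - 1, len(series))]
    return np.array(out)
```

Sanity checks: d = 0 returns the series unchanged, d = 1 returns the plain first difference, and intermediate d keeps some "memory" of levels while making the series closer to stationary.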
I liked his thoughts on meta-labeling, i.e. training a second model to correct the results of the first one, improving the resulting confusion matrix. There are very few losing trades on the training subsets. Again, this is not purely his approach, it is well known, but it is the first time I have seen it applied to financial series. It does not get rid of overfitting, but it produces a nice picture on the training set.
You can see here, Meta-Labeling section: https://towardsdatascience.com/financial-machine-learning-part-1-labels-7eeed050f32e
More here: https://www.quantopian.com/posts/meta-labeling-advances-in-financial-machine-learning-ch-3-pg-50
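The meta-labeling idea described above can be sketched in a few lines; this is a toy illustration on synthetic data, not the book's exact procedure, and the model choices (random forest primary, logistic meta-model) are assumptions for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)  # toy labels

# Split chronologically: the first half trains the primary model,
# the second half trains the meta-model on the primary's mistakes.
X1, y1, X2, y2 = X[:500], y[:500], X[500:], y[500:]

primary = RandomForestClassifier(n_estimators=100, random_state=0).fit(X1, y1)

side = primary.predict(X2)                 # predicted direction
meta_y = (side == y2).astype(int)          # 1 = the primary call was right
meta_X = np.column_stack([X2, side])       # features + the primary's call
meta = LogisticRegression().fit(meta_X, meta_y)

# At trade time: take the primary's side only when the meta-model
# assigns a high probability that the call is correct.
p_correct = meta.predict_proba(meta_X)[:, 1]
act = p_correct > 0.6
```

The meta-model does not change the direction, only the bet/no-bet decision and, potentially, the position size; that is why it filters out losing trades rather than generating new ones.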
__________________________________________________________________________________________________________________
As for the overfitting problem, I'm looking towards meta-learning and attention mechanisms in machine learning.
You can look at:
1. https://medium.com/towards-artificial-intelligence/a-simple-neural-attentive-meta-learner-snail-1e6b1d487623
2. https://stats.stackexchange.com/questions/344508/what-are-attention-mechanisms-exactly
Thanks for the detailed reply, I'm looking into it...
I forgot to add that meta-labelling from his book is a special case of stacking, if you apply the meta-model not to a single primary model but to an ensemble of several models trained on different subsets. This could give more room for research as applied to such series.
Here's a pretty good article.
Plus, attention and stacking mechanisms weave just as well into meta-learning. So there's a lot to try.
This should all lead to some global generalisation and help fight overfitting.
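The stacking generalisation mentioned above, with several primary models feeding one meta-model, can be sketched like this; the base-model choices are arbitrary for the example, and in practice scikit-learn's ready-made StackingClassifier does the same wiring.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 6))
y = (X[:, 0] - X[:, 1] > 0).astype(int)  # toy labels

base_models = [
    RandomForestClassifier(n_estimators=50, random_state=0),
    GradientBoostingClassifier(random_state=0),
]

# Out-of-fold predictions of each base model become the meta-features,
# so the meta-model never sees in-sample base predictions.
meta_X = np.column_stack([
    cross_val_predict(m, X, y, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
meta_model = LogisticRegression().fit(meta_X, y)
```

Meta-labeling is the special case where the meta-target is "was the base call correct" instead of the original label.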
In my multi-agent RL library from the article there is a stub for multiple agents, but their results are simply averaged, not weighted in any way. I make all sorts of variations on the theme for myself.
Maxim Dmitrievsky:
In my multi-agent RL library from the article there is a stub for multiple agents, but their results are simply averaged, not weighted in any way. For myself I make all sorts of variations on the theme.
Don't take this as impudence (you are the author of the library and you decide which direction to take it), but in the discussion of your first article on RL I raised the question of replacing the averaging with a more complex method, and back then you took it with hostility. In principle, when all agents are created by the same random forest algorithm it is indeed not relevant: with some degree of convention, two such agents taken together can be considered one forest with a larger number of trees. It would just take one more step in the RL library: put the trees into a separate standardised wrapper class, so that the algorithm inside an agent can easily be replaced with another one.
On the basis of your library I tried to create an extended one with the following structure: a class for primary data preparation (the pair's series, auxiliary series, since EURUSD affects many pairs, a set of indicator readings, additional parameters such as day of the month, day of the week, etc.), which prepares data for all chains of algorithms.
The chain itself: several standard pre-processing classes, then a method (model) class.
At the end, a decision-making class that combines the results of the chains; in the simplest form, an average.
So I ended up with four base wrapper classes: data, pre-processing, processing, decision making.
In this form it is possible to mix both different methods and one method on different data. Probably something is not taken into account, but this is the minimum I arrived at.
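The four-wrapper structure described above might look like this as a skeleton; all class and method names are illustrative, not from the actual library, and the point is only that the model inside Method is swappable.

```python
import numpy as np

class DataSource:
    """Primary data preparation: the pair's series plus auxiliary
    features (a related series, calendar fields, etc.)."""
    def __init__(self, price, aux):
        self.price, self.aux = price, aux
    def features(self):
        # one feature matrix shared by all chains
        return np.column_stack([self.price, self.aux])

class Preprocessor:
    """A standard pre-processing step in a chain."""
    def transform(self, X):
        # simple standardization as a placeholder step
        return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

class Method:
    """Wraps any model exposing fit/predict, so the algorithm inside
    an agent can be swapped without touching the rest of the chain."""
    def __init__(self, model):
        self.model = model
    def fit(self, X, y):
        self.model.fit(X, y)
        return self
    def predict(self, X):
        return self.model.predict(X)

class Decision:
    """Combine the chains' outputs; plain averaging in the simplest form,
    replaceable by weighting or a meta-model later."""
    def combine(self, predictions):
        return np.mean(predictions, axis=0)
```

With fixed fit/transform/predict/combine contracts, mixing different methods on the same data (or one method on different data) is just a matter of assembling different chains.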
But where to get other algorithms? There is nothing else in AlgLib, so third-party algorithms will have to be added. In the library from the article you can change the parameters of each agent, i.e. give it other features, a different number of trees and other settings; so you can get, say, many weak classifiers with a small number of trees, each trained on different features. The only thing missing there, as you noted, is replacing the averaging with a meta-model. It is also possible to split the training into folds, so that each agent is trained on its own subsample. I have not experimented with stacking yet.
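The setup described above, many weak learners, each on its own fold and its own feature subset, can be sketched as follows; this is a toy reimplementation of the idea, not the library's code, and the depth/subset sizes are arbitrary.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import KFold

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # toy labels

agents = []
# Each "agent" is a weak learner trained on its own disjoint fold and
# its own random feature subset, so the ensemble members decorrelate.
for _, fold_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    cols = rng.choice(X.shape[1], size=4, replace=False)
    clf = DecisionTreeClassifier(max_depth=3, random_state=0)
    clf.fit(X[np.ix_(fold_idx, cols)], y[fold_idx])
    agents.append((clf, cols))

# Simple averaging of agent votes; a meta-model could replace this step.
votes = np.mean([clf.predict(X[:, cols]) for clf, cols in agents], axis=0)
pred = (votes > 0.5).astype(int)
```

Replacing the last two lines with a trained meta-model over the agents' outputs is exactly the stacking step discussed earlier.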
That's where I got stuck... I decided to port a boosting algorithm, but I'm scattered and can't pull it all together. C++ libraries lean too heavily on templates and function overloading; since MQL does not support all overloads, the algorithm breaks and it is easier to write it from scratch. Python algorithms need a proper matrix library (what is in AlgLib is truncated, some things are simply stubbed out, no good as a basis). The easiest route was porting from C#; it seems the MQL developers are oriented more towards it than towards C++, down to matching method names. I tried to take the AlgLib trees as a basis, but there the trees are built on matrices, and the indexing and deletion are hard to deal with. Now I will either finish it and post it, or I hope someone gets interested and shares theirs...
Or do it in Python, but then suffer with the MT5 bindings :) I want to make a similar library in Python; the possibilities there are through the roof in terms of models. Is there any point in bothering with an article?
The only problem is that I have developed this library a lot; it looks much more complicated than in the article... well, or just different, although the concept remains the same.
There is definitely a point to the article.
Python has become the ML standard, and the MQL developers have also moved in this direction, so you have to master Python in any case. My attempt to port the algorithms was connected with avoiding DLLs for auto-trading in MQL, but that is not a strict requirement, and if the algorithms are easier to use in Python, then why not.
Without flattery: I read your articles with pleasure. We can argue till we are blue in the face about the content, but there is no question that they set new directions.
I am in favour of a new article.
For an article, I propose making an analogue of the RL library in Python, only with boosting instead of random forest, for example CatBoost,
and then developing the topic further later. Start with something simple.
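The simple starting point could look like this: lagged returns as features, next-bar direction as the label, a boosting classifier on top. Sketched here with scikit-learn's GradientBoostingClassifier as a stand-in, since CatBoostClassifier exposes a near-identical fit/score interface; the data is synthetic and the feature choice is an assumption for the example.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(3)
prices = np.cumsum(rng.normal(size=600))  # synthetic random-walk prices

# Features: the last `lags` returns; label: sign of the next return.
ret = np.diff(prices)
lags = 5
X = np.column_stack([ret[i:len(ret) - lags + i] for i in range(lags)])
y = (ret[lags:] > 0).astype(int)

# Chronological split: no shuffling, so the test set is strictly later.
split = 400
model = GradientBoostingClassifier(random_state=0).fit(X[:split], y[:split])
acc = model.score(X[split:], y[split:])
```

Swapping in CatBoost is a one-line change of the model class; on this random-walk data the out-of-sample accuracy should hover near 0.5, which is itself a useful baseline check.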
As an option, XGBoost, a library with source code, or even a simplified one:
https://habr.com/ru/company/mailru/blog/438562/
By the way, the article describes boosting and boosted bagging.