Machine learning in trading: theory, models, practice and algo-trading - page 2626
Just let me know if you understood it; then I'll show you my results with the method. I didn't have time to finish it.
The importance of features in a moving window (indicators and prices)
At one moment an indicator may have 10% importance, and at another moment only 0.05%; such is life )
If you think this solves everything, you can be proud of it.
This is what the four features of Fisher's iris dataset look like.
Or like this, if you enlarge the sliding window.
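The "floating" importance described above can be reproduced with a toy sketch. This is not the poster's FSelector/R setup; it is a minimal Python illustration on synthetic data, scoring each feature by its absolute correlation with the target inside each sliding window (a crude stand-in for FSelector-style scores):

```python
import numpy as np

def rolling_importance(X, y, window):
    """For each sliding window, score every feature by the absolute
    Pearson correlation with the target. This is only a stand-in for
    proper feature-importance scorers such as those in FSelector."""
    n, k = X.shape
    scores = []
    for start in range(0, n - window + 1):
        xs = X[start:start + window]
        ys = y[start:start + window]
        row = [abs(np.corrcoef(xs[:, j], ys)[0, 1]) for j in range(k)]
        scores.append(row)
    return np.array(scores)  # shape: (n - window + 1, k)

# Synthetic data: feature 0 tracks the target cleanly in the first
# half, feature 1 in the second half, so the ranking flips over time.
rng = np.random.default_rng(0)
t = np.arange(400)
y = np.sin(t / 20.0)
f0 = y + rng.normal(0, 1.0, 400) * (t > 200)   # noisy in 2nd half
f1 = y + rng.normal(0, 1.0, 400) * (t <= 200)  # noisy in 1st half
X = np.column_stack([f0, f1])

imp = rolling_importance(X, y, window=100)
print(imp[0].round(2), imp[-1].round(2))  # ranking flips between ends
```

Plotting the rows of `imp` over time would give the kind of chart the post describes: the same indicator jumping between high and near-zero importance.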
It is already clear that the irises (and similar problems) have a stable pattern, and everyone who has experimented with quotes knows that everything there "floats".
I wonder how the importance of the indicators can differ at every point of the chart: it is determined once for the whole model, built on all the training rows at once. Or do you have 5,000 models there?
And in general, explain your graphs: what is on them and how were they built?
There are many ways to estimate how informative a feature is; for some of them you don't even have to train a model. I used FSelector. https://www.r-bloggers.com/2016/06/venn-diagram-comparison-of-boruta-fselectorrcpp-and-glmnet-algorithms/
I did online training in a moving window; if you take all of it together, without filtering by time, the performance is poor. It didn't occur to me to add the filtering at the time. There is an example of such a bot in my article about entropy.
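A minimal walk-forward sketch of the "online training in a window" idea, using a synthetic random-walk price series and a deliberately trivial momentum rule in place of a real model (both are illustrative assumptions, not the poster's bot):

```python
import numpy as np

def walk_forward(prices, train_len=100, test_len=20):
    """Walk-forward loop: fit a trivial rule on each training window,
    evaluate it on the next out-of-sample chunk, then slide forward.
    The per-chunk scores show how performance 'floats' over time."""
    returns = np.diff(prices)
    scores = []
    start = 0
    while start + train_len + test_len <= len(returns):
        train = returns[start:start + train_len]
        test = returns[start + train_len:start + train_len + test_len]
        # "Model": trade in the direction of the mean training return.
        direction = 1.0 if train.mean() > 0 else -1.0
        scores.append(direction * test.sum())  # out-of-sample P&L
        start += test_len
    return np.array(scores)

rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100  # synthetic quotes
pnl = walk_forward(prices)
print(pnl.round(2))  # some chunks profitable, others losing
```

Taken "all together", the sum of `pnl` is what the post calls poor performance; the time-filtering idea amounts to discarding or down-weighting chunks by some condition.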
What is that?
Well, there are all sorts of recurrent networks; one was posted here.
You should go straight to the model and look for where it behaves in a regular, pattern-like way :)
Put very simply: train it, test it on the test set, identify the periods where it was losing and where it was working, draw conclusions / try to filter the losing ones out, and identify a pattern.
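The "filter out the losing periods" step above can be sketched as a simple equity-curve filter; the synthetic P&L series and the 20-trade lookback here are hypothetical choices, not the poster's actual rule:

```python
import numpy as np

def equity_filter(trade_pnl, lookback=20):
    """Equity-curve filter: take a new trade only when the strategy's
    trailing P&L over the last `lookback` trades is positive.
    Returns the filtered P&L series (0 where trading is switched off)."""
    filtered = np.zeros_like(trade_pnl, dtype=float)
    for i in range(lookback, len(trade_pnl)):
        if trade_pnl[i - lookback:i].sum() > 0:
            filtered[i] = trade_pnl[i]
    return filtered

# Toy P&L: a strategy that works in the first half and loses afterwards.
rng = np.random.default_rng(2)
pnl = np.concatenate([rng.normal(0.1, 1, 250), rng.normal(-0.1, 1, 250)])
print(pnl.sum().round(2), equity_filter(pnl).sum().round(2))
```

The point is only to show the mechanism: once the losing regime starts, the trailing sum turns negative more often and the filter keeps the strategy out of the market.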
Yes, in principle that is possible, even better; done in this order, it can be automated.
Or just not lose ))
For me there is no need to build complicated models; a simple rule is enough, otherwise you can't call it a pattern.
I always want to do better)))
The fast methods don't agree with the benchmark, and they don't agree with each other either. FSelector is even faster; I don't think it will agree with anything either.