Machine learning in trading: theory, models, practice and algo-trading - page 401

I hope this wasn't compiled manually? Was it done in a loop somehow? Doing it by hand would have taken hours...
In a loop, yes.
A little off topic, but still: is it possible to run Java on a GPU?
I would ask a broader question: who is able to rewrite all of this in MQL5 for the GPU? :)
It's just that in the process of rewriting it is possible to miss something or make mistakes, so I would like at least to run it on a GPU, and there you could also attach some co-processor, or combine processing power over a network on the Internet. So the task is not trivial. Increase the number of cores to about 20-30 and build the model... just like that...
The problem is that you'll still want such a library for MT sooner or later, because it would open up more possibilities like auto-training, less patchwork, less headache. Or at least have it as a DLL.
And for the Java one you can just rent a multi-core server for an hour and do everything you need, because the distribution across threads is already implemented there... so is it worth the extra work?
Personally, even if I wouldn't rewrite it myself, I'd pay money for it (for an MQL5 version).
I once considered the renting option for this case, but my hands never really got around to it; we'll see...
Reshetov:
VS Two-class decision forest & logistic regression:
Reshetov won by a landslide.
It's like a Reshetov fan club... (sorry, but it's funny)) You should also take one point for buy, one for sell, and 1000 features...
Actually, it's a clean overfit. For such a sampling step you need training samples of at least 5 years, not 3 months; 3 months is not enough even for ultra-HFT, where a million points a day accumulates. And there's no need to reinvent the wheel: trivial XGB with the right set of features gives quasi-optimal results.
No, we're not fans, we're just looking for the optimum :) 3 months is more than enough for HFT. What is this trivial XGB? I only know naive Bayes :)
Even if it is an overfit, other models can't manage it either, no matter how I tweaked them.
It is better to define the importance of predictors as follows
As I understand it, predictors with high correlation between them are eliminated.
I do it a little differently: I compute the full correlation matrix, and then, starting from the first predictor, I check each remaining predictor's correlation with the ones already kept; if it is higher than 0.9 (the threshold is adjustable), I discard it. The method was described in the articles on ML.
You have a lot of predictors being eliminated... Apparently your function cuts them off somewhere around the 0.5 level.
XGB - https://en.wikipedia.org/wiki/Xgboost - is the thermonuclear weapon of machine learning. "Trivial" because it is the most popular.
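XGBoost itself is a large library, but the gradient-boosting idea behind it can be sketched in a few lines of numpy: repeatedly fit a weak learner (here a one-split regression stump) to the residuals of the current ensemble and add it with a shrinkage factor. This is only an illustration of the principle under squared loss, not the actual XGBoost algorithm, which adds second-order gradients, regularization, and much more:

```python
import numpy as np

def fit_stump(x, r):
    """Best single-feature threshold split minimizing squared error on residuals r."""
    best = None
    for j in range(x.shape[1]):
        for t in np.unique(x[:, j]):
            left = x[:, j] <= t
            if left.all() or (~left).all():
                continue
            lv, rv = r[left].mean(), r[~left].mean()
            err = ((r[left] - lv) ** 2).sum() + ((r[~left] - rv) ** 2).sum()
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    _, j, t, lv, rv = best
    return lambda z: np.where(z[:, j] <= t, lv, rv)

def boost(x, y, n_rounds=50, lr=0.1):
    """Plain gradient boosting for squared loss: each stump fits the residuals."""
    pred = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_rounds):
        s = fit_stump(x, y - pred)  # residuals are the negative gradient
        pred = pred + lr * s(x)
        stumps.append(s)
    base = y.mean()
    return lambda z: base + lr * sum(s(z) for s in stumps)

# Toy regression: the ensemble of stumps approximates a noisy step function.
rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, size=(300, 1))
y = np.where(x[:, 0] > 0, 1.0, -1.0) + 0.1 * rng.normal(size=300)
model = boost(x, y)
mse = ((model(x) - y) ** 2).mean()
print(round(mse, 3))
```

With shrinkage `lr=0.1`, each round removes only a fraction of the remaining residual, which is what makes boosted ensembles resistant to fitting any single weak learner too hard.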
3 months is not quite enough for HFT, at least not for a full simulation cycle, since the model needs to be tested on different markets, regime changes, flash crashes and various swans; synthetic stress testing cannot replace that the way the real market can. The final model will mostly use no more than the previous week's data, but to configure it you will need to run it over 1-3 year samples to make sure it doesn't blow up anywhere. On 3 months of data it can be trained, and if the data scientists know their stuff it will turn out to be a regular money mower, but one day, maybe in 3 months or in half a year, everything may break abruptly for an "unknown" reason, or rather a known one: the model has not encountered such a meta-state of the market and has become irrelevant.