Machine learning in trading: theory, models, practice and algo-trading - page 3166

It's time to end this epic endeavour of trying to find patterns in random data.
Yeah.
Or it's time to stop thinking by inertia and take a sober look at the results.
The results of training on a coreset are often not bad.
For the years 2010 to 2021 a coreset was found with a 30% fraction (30% of the randomly sampled history from this site took part in training); the other years are pure OOS.
In the terminal it looks like this:
There are many methods for constructing a coreset. Here are some of the most popular:
It is important to note that there is no universal way of obtaining a coreset that suits all machine learning tasks. The choice of method depends on the specific task and the available computational resources.
*Bard
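The post above doesn't show which coreset method was used. As a minimal illustration of the simplest one, here is a sketch of uniform random subsampling with importance weights, so that weighted statistics on the small subset approximate the full data (the `fraction=0.3` mirrors the 30% figure from the post; everything else is an assumption for the example).

```python
import numpy as np

def uniform_coreset(X, fraction=0.3, seed=0):
    """Illustrative coreset by uniform random subsampling: keep a
    `fraction` of the rows and give each kept row a weight so that
    weighted sums approximate full-data sums."""
    rng = np.random.default_rng(seed)
    n = len(X)
    m = max(1, int(n * fraction))
    idx = rng.choice(n, size=m, replace=False)
    weights = np.full(m, n / m)  # each kept point stands in for n/m points
    return X[idx], weights

X = np.arange(1000, dtype=float).reshape(-1, 1)
Xc, w = uniform_coreset(X, fraction=0.3)
# the weighted mean on the coreset approximates the full-data mean
approx = np.average(Xc[:, 0], weights=w)
```

More sophisticated methods (importance sampling, clustering-based coresets) replace the uniform `rng.choice` with a data-dependent sampling distribution, but the weighted-subset structure stays the same.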
Well, there are also drawdown periods lasting six months to a year. Are you ready for that? Especially if the drawdown starts right after going live?
Usually you diversify.
You just need to build a portfolio of instruments that maximises the recovery factor. These segments will be profitable on other instruments, and if the overall trend of all of them matches the presented chart, that is guaranteed investment stability.
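To make the diversification argument concrete: the recovery factor is net profit divided by maximum drawdown, and combining instruments whose drawdowns offset each other raises it. A minimal sketch with two hypothetical equity curves (the numbers are invented for illustration):

```python
import numpy as np

def recovery_factor(equity):
    """Net profit divided by the maximum drawdown of an equity curve."""
    equity = np.asarray(equity, dtype=float)
    net_profit = equity[-1] - equity[0]
    running_max = np.maximum.accumulate(equity)   # peak so far
    max_dd = np.max(running_max - equity)         # deepest drop from a peak
    return net_profit / max_dd if max_dd > 0 else np.inf

# two hypothetical instruments whose drawdowns offset each other
a = np.array([100, 110, 105, 120, 115, 130])
b = np.array([100,  95, 110, 105, 120, 125])
portfolio = a + b  # combined equity curve
```

Here each instrument alone has a finite recovery factor, while the combined curve is monotonic, so the portfolio's recovery factor is strictly better than either component's.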
Well, there are also drawdown periods lasting six months to a year. Are you ready for that? Especially if the drawdown starts right after going live?
I'm not ready to bet on 20 years :) this is more of a case study.
Ten years of training and one year of OOS is fine with me.
But there is a lot of noise; sometimes the model throws out almost all samples as useless and only 3 trades remain.
There are also stretches of history that are never predicted properly.
All in all, it's not a very rewarding activity.
It's like turning the dial on an old radio receiver and accidentally catching some wave through the noise.
Once again I am convinced that to forecast you need a model.
The model removes the unnecessary (noise), leaving the necessary (signal) and amplifying it where possible; a model is also more deterministic, with more repeatability in its patterns...
As an example:
high-low prices on minute bars.
Next we build the simplest simplification of the price (we create a model).
Then we remove the excess (improve the model) with a simple, well-known dimensionality-reduction algorithm; the model becomes more repeatable.
And the last, perhaps decorative, touch.
I wonder how ML will train on such data?
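The steps above (high-low minute prices, a simple price "model", then dimensionality reduction) can be sketched as follows. The poster does not name the algorithm or show the data, so this assumes synthetic prices, a moving average as the "simplification", and PCA as the reduction step:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# synthetic minute high/low prices: slow trend plus noise
# (assumption: the poster's actual data is not available)
t = np.arange(600)
mid = 100 + 0.01 * t + np.sin(t / 50.0)
high = mid + 0.2 + 0.1 * rng.standard_normal(len(t))
low = mid - 0.2 - 0.1 * rng.standard_normal(len(t))

# step 1: the simplest "model" of the price - a moving-average simplification
window = 20
kernel = np.ones(window) / window
smooth_high = np.convolve(high, kernel, mode="valid")
smooth_low = np.convolve(low, kernel, mode="valid")

# step 2: remove the excess with a standard dimensionality-reduction
# algorithm - compress the two correlated channels into one latent channel
X = np.stack([smooth_high, smooth_low], axis=1)
pca = PCA(n_components=1)
compressed = pca.fit_transform(X)         # 2 channels -> 1 channel
reconstructed = pca.inverse_transform(compressed)  # denoised high/low
```

Because smoothed highs and lows are strongly correlated, one principal component retains almost all the variance, which is the "more repeatable" representation the post alludes to.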
This is a test sample.
Have you ever seen numbers like this before?
What's the exact name? Or is it homemade?
I have been using various "wooden" (tree-based) models for many years and have never seen anything like this.
What do you mean, homemade? There's a theoretical justification and a good article. There's a package called RLTv3.2.6. It works well. Pay attention to the version.
About ONNX for tree-based models in Python: see the skl2onnx package.
It supports scikit-learn models; the latest supported opset is 15.
Good luck
Have you ever seen numbers like this yourself?
0.99 on train/test, with the model truncated to a couple of iterations. Only a few rules remain, and they predict the classes well.