Machine learning in trading: theory, models, practice and algo-trading - page 3608

I'm not busy with ML (machine learning) at all right now, so I won't be testing it any time soon. It will probably be enough for you to test the idea yourself with a small adjustment of the code, if comparing the results with and without the final model is of interest, of course.
A method for correcting labels in the dataset was suggested. After that, the final model is trained: an ordinary classifier. It will not work without the final model.
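As I understand the two-stage idea (this is my own minimal sketch, not the poster's actual code; all thresholds and model choices are illustrative assumptions): first correct suspicious labels using out-of-fold predictions, then fit an ordinary "final" classifier on the corrected labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)  # noisy toy labels

# Stage 1: flag labels that disagree strongly with out-of-fold predictions
proba = cross_val_predict(
    RandomForestClassifier(n_estimators=100, random_state=0),
    X, y, cv=5, method="predict_proba",
)[:, 1]
flipped = np.abs(proba - y) > 0.75          # confident disagreement -> flip the label
y_corrected = np.where(flipped, 1 - y, y)

# Stage 2: the "final model" is just a regular classifier on corrected labels
final_model = RandomForestClassifier(n_estimators=100, random_state=0)
final_model.fit(X, y_corrected)
print(int(flipped.sum()), "labels corrected")
```

The 0.75 disagreement threshold is an arbitrary placeholder; the actual correction rule discussed in the thread may be entirely different.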
Here it is much more important to understand the meaning of the approach and why it should work than the details of the implementation. So either it only fits my training system, which has been in the works for a couple of years, or you are doing something wrong.
Is there a difference between immediately checking for probability bias on a lagged sample and building a model and estimating the effect through the model?
I've thought about it and I don't see a difference: if the probability bias is present on the new data, the model will work on it, and if it isn't, it won't.
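The "check the bias directly" alternative can be sketched like this (my illustration, with made-up data): compare the class balance of the selected rows on new, lagged data against 50/50, without training any model.

```python
import numpy as np
from scipy.stats import binomtest

# hypothetical labels of the selected rows observed on the lagged sample
labels_new = np.array([1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1])
k = int(labels_new.sum())
res = binomtest(k, n=len(labels_new), p=0.5)
print(f"share of class 1: {k / len(labels_new):.2f}, p-value: {res.pvalue:.3f}")
# If the share stays clearly biased away from 0.5 on new data, a model built
# on that same bias should also keep working; if not, it won't.
```

With such a tiny sample the test has little power; in practice one would need far more rows before trusting either the direct check or the model-based one.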
It's better to test the model right away, or better still the TS (trading system). Test whatever is closest to actual trading.
If it tells you nothing, why are you using it to select labels for relabelling?
Are you splitting the test sample? Into how many chunks?
I don't discard labels based on it; you're misunderstanding.
Aren't the labelled rows selected through the clusters, each of which is labelled 0 or 1 depending on the probability bias?
It's a generalisation that guarantees nothing. That's why there are additional settings for how much and what to correct.
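My reading of the cluster-labelling step being debated, as a rough sketch (cluster count, the 0.1 bias cutoff, and the synthetic data are all my own assumptions): cluster the rows, give every row in a cluster the class that dominates it, and drop clusters with no clear probability bias.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))
y = (X[:, 0] > 0).astype(int)

clusters = KMeans(n_clusters=10, n_init=10, random_state=1).fit_predict(X)
y_cluster = np.empty_like(y)
keep = np.zeros(len(y), dtype=bool)
for c in np.unique(clusters):
    mask = clusters == c
    share = y[mask].mean()                 # cluster's bias towards class 1
    y_cluster[mask] = int(share > 0.5)     # whole cluster gets the majority class
    keep[mask] = abs(share - 0.5) > 0.1    # discard clusters with no clear bias
print(int(keep.sum()), "of", len(y), "rows kept")
```

This also shows the failure mode raised below: if a cluster's bias was random, the 0/1 label assigned to it on the training data will not be confirmed on new data.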
It guarantees nothing, but if the probability bias is not confirmed on the new data, then you've aggregated random rubbish into a cluster, which means training is guaranteed to be worse than if that rubbish weren't there.
That's why I think that checking right away would reduce the time spent on this whole loop.
I come up with strategies purely from observing the market, for labelling and training.
The strategies are simple: mainly a couple of parameters that are significant.
You run this in the tester and get results that are profitable on both train and test, on forward and backward history alike. I realise that all of this could be statistical error... but this is with just a couple of parameters; who knows what is going on inside the model...
No
Very different principle, very different goals.