Machine learning in trading: theory, models, practice and algo-trading - page 2115

Oversampling has not given anything yet, but the "volume" feature improved the result a little. That means there is something in the data; the main thing is to dig it out properly.
Histogram of models with different quantization settings on the sample.
It makes a better boundary between classes. You have to prepare the data the same way: so that the separation into classes is clear and the examples do not overlap. I even know how to do it... I'm pretty smart, but I haven't done it yet.

I wonder how? In our field the classes are usually evenly mixed.

Add clustering to the label sampling: cluster by the same features, then sample within the clusters. The classes will be separated, but it is not clear what will happen on new data. In theory it should improve.
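One way to read "cluster by the same features, then sample within the clusters" is: keep only the examples whose label agrees with the dominant label of their cluster, so the training classes stop overlapping in feature space. Below is a minimal sketch under that assumption, using scikit-learn; the function name, the purity threshold and the number of clusters are illustrative, not the poster's actual method.

```python
# Minimal sketch: cluster the features, then keep only examples that agree
# with their cluster's dominant label. Assumes numpy arrays X (features) and y (labels).
import numpy as np
from sklearn.cluster import KMeans

def cluster_filter(X, y, n_clusters=50, purity=0.6, random_state=0):
    km = KMeans(n_clusters=n_clusters, random_state=random_state, n_init=10)
    cluster_id = km.fit_predict(X)

    keep = np.zeros(len(y), dtype=bool)
    for c in range(n_clusters):
        idx = np.where(cluster_id == c)[0]
        if len(idx) == 0:
            continue
        labels, counts = np.unique(y[idx], return_counts=True)
        major = labels[np.argmax(counts)]
        share = counts.max() / len(idx)
        if share >= purity:                    # skip hopelessly mixed clusters
            keep[idx[y[idx] == major]] = True  # keep only the dominant class
    return X[keep], y[keep]

# usage (hypothetical variable names):
# X_clean, y_clean = cluster_filter(X_train, y_train)
```

The open question from the thread remains: the filtered training set is cleaner, but nothing guarantees the same separation holds on new data.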
So I covered this idea here this week :)
Only I suggest reducing the number of majority-class examples instead.

I have not seen it.
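For reference, the "reduce the majority class" suggestion is plain random undersampling. A minimal sketch, assuming numpy arrays and binary labels; the function name and the ratio parameter are illustrative.

```python
# Minimal sketch of random majority-class undersampling.
import numpy as np

def undersample_majority(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class rows until n_major <= ratio * n_minor."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    minor = classes[np.argmin(counts)]
    major = classes[np.argmax(counts)]

    minor_idx = np.where(y == minor)[0]
    major_idx = np.where(y == major)[0]
    n_keep = min(int(ratio * len(minor_idx)), len(major_idx))
    keep_major = rng.choice(major_idx, size=n_keep, replace=False)

    keep = np.sort(np.concatenate([minor_idx, keep_major]))
    return X[keep], y[keep]
```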
Are there any methods/tools that can do this automatically?
I don't know, I'll have to check - maybe I'll look this weekend.
Please let me know if you find anything, otherwise I'll start reinventing the wheel myself :)
Elibrarius suggested an idea: just grow a decision tree and use it instead of the clustering, taking the information from its leaves to decide which majority-class examples to drop.
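One possible reading of that idea, as a sketch: treat the tree's leaves as the clusters, and drop majority-class rows from leaves where the minority class dominates (i.e. the overlapping examples). Assumes scikit-learn; the function name, the leaf size and the 0.5 threshold are illustrative, not the suggestion's exact form.

```python
# Sketch: use decision-tree leaves instead of clusters to thin out the majority class.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def leaf_undersample(X, y, majority_label, min_samples_leaf=100, seed=0):
    tree = DecisionTreeClassifier(min_samples_leaf=min_samples_leaf, random_state=seed)
    tree.fit(X, y)
    leaf_id = tree.apply(X)                     # leaf index for every row

    keep = np.ones(len(y), dtype=bool)
    for leaf in np.unique(leaf_id):
        idx = np.where(leaf_id == leaf)[0]
        major_share = np.mean(y[idx] == majority_label)
        if major_share < 0.5:                   # leaf is dominated by the minority class
            keep[idx[y[idx] == majority_label]] = False  # drop the overlapping majority rows
    return X[keep], y[keep]
```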
Current futures contract - training ended in 2018. It looks too good.
And here is the same pattern on the previous futures contract. Sadder, but bearable.
Now a contract even closer to the end of the training period - and that is where the trouble starts.
I do not understand what is going on: you would think that the closer to the end of training, the better the results should be, but it turns out to be the opposite - an anomaly!
It seems the answer lies in the trend itself - the current futures contract without ML,
the previous one,
and another one as well.
So what does ML have to do with it?