Machine learning in trading: theory, models, practice and algo-trading - page 3520
Other sampling and markup
Is the indicator not affected by class balance?
Measure the entropy of labels before training. Then compare it with the results on the OOS of the trained model using your own estimates. Reducing entropy should improve trading on the OOS.
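As an illustration of that first step, here is a minimal sketch of measuring the Shannon entropy of a label column. This is hypothetical code, not from the thread; the function name and sample labels are made up for illustration:

```python
import math
from collections import Counter

def label_entropy(labels):
    """Shannon entropy (in bits) of a sequence of class labels."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(label_entropy([0, 1, 0, 1]))  # balanced labels -> 1.0 bit
print(label_entropy([0, 0, 0, 1]))  # skewed labels  -> ~0.811 bits
```

The same number computed on out-of-sample labels (or model outputs) can then be compared against the pre-training value.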
I think this requires Recall close to one, and I don't have any such models.
Subjectively, reducing entropy should help on any data and for any model. There should be such a dependence, but it hasn't been proven yet.
My point is that with low Recall there will be very few ones (class-1 predictions) after classification, while for a correct estimate you probably need roughly as many ones as zeros. Or do you propose to take only the region where the model classified an example as a one and estimate entropy from that data? Then the amount of data will be very small.
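To make the "very few ones" concern concrete, here is a toy sketch; the function name and the example labels are made up for illustration:

```python
# At low Recall, only a small slice of the data is classified as "one",
# so any entropy estimated on that slice rests on very few examples.
def positive_subset(y_true, y_pred):
    """Return the true labels of the examples the model classified as 1."""
    return [t for t, p in zip(y_true, y_pred) if p == 1]

y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]  # recall = 1/4
subset = positive_subset(y_true, y_pred)
print(len(subset))  # only 2 of 10 examples survive the filter
```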
Subjectively, entropy reduction should affect any melons and for any models. If there is such dependence, but it is not proved yet.
The translator on the English forum rendered it as "should affect any melons". The word there was "data" :)
Although it can affect some melons too :)
The prediction horizon has a big impact on label entropy in my case.
Here is the best result so far, when predicting 7 bars ahead.
It works pretty well on new data too, but I still need to run all the tests in numbers later.
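The horizon effect can be sketched like this; the price series and the "price higher in N bars" labeling rule are a hypothetical illustration, not the poster's actual data or method:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a label sequence."""
    n = len(labels)
    probs = [c / n for c in Counter(labels).values()]
    return sum(-p * math.log2(p) for p in probs)

def make_labels(prices, horizon):
    """Label 1 if price is higher `horizon` bars ahead, else 0."""
    return [1 if prices[i + horizon] > prices[i] else 0
            for i in range(len(prices) - horizon)]

prices = [100, 101, 99, 102, 103, 101, 104, 106, 105, 108]
for h in (1, 7):
    print(h, round(entropy(make_labels(prices, h)), 3))
# prints: 1 0.918
#         7 0.0
```

On this toy uptrending series, the longer horizon collapses the labels toward a single class, and entropy drops accordingly.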
How do you compare after splitting into clusters (judging by the log)? I assume they have different numbers of sample elements.
Apparently, the number of elements doesn't affect the metric that much.
It's affected much more by how the labels are marked up.