Machine learning in trading: theory, models, practice and algo-trading - page 3723
What do you mean, "patterns"?
In a tree, the patterns are knots, and they don't repeat. But there's another observation about random forest.
With a sample of 1500 bvr, 70 trees are sufficient, after which the error stabilises. Another interesting thing: if we take more than 1500 bvr, for example 5000, 70 trees are still enough. But it is unclear whether these are the same trees or different ones. I suspect they are different, i.e. we observe "non-stationarity" of the trees, which is why the model fails when a window is run outside the sample.
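The "error stabilises after ~70 trees" effect is easy to reproduce. A minimal sketch (not the poster's setup: synthetic data from scikit-learn stands in for the 1500-bar sample, and out-of-bag error is used to see where adding trees stops helping):

```python
# Sketch: watch out-of-bag error flatten as the forest grows.
# Assumptions: scikit-learn is available; the data is synthetic,
# not real market bars.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1500, n_features=20,
                           n_informative=5, random_state=0)

oob_error = {}
for n_trees in (10, 70, 200):
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True,
                                random_state=0, n_jobs=-1).fit(X, y)
    oob_error[n_trees] = 1.0 - rf.oob_score_

# typically the error drops sharply at first, then flattens:
# going from 70 to 200 trees changes it very little
```

Whether the trees grown on a 5000-bar sample are "the same" as those on 1500 bars is a separate question; with bootstrap sampling they will certainly differ tree by tree.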
It's all dark and hopeless....
I agree that the problem of nonstationarity as such is probably not solvable today.
But it should be solved in parts.
Predictors.
Do we take them in their original form, or is preprocessing required? What is the purpose of preprocessing, and what are the quality criteria?
The relationship between predictors and the target.
Has anyone made a histogram of this relationship? What are the criteria for a "good" relationship?
Probability of class prediction.
Has anyone plotted histograms of these probabilities? I have, and got some surprising and unexplained results.
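A quick way to look at such probability histograms (a sketch; assumes scikit-learn and synthetic data rather than real market features):

```python
# Sketch: histogram of a classifier's predicted class probabilities.
# Assumption: synthetic data; real features would replace X, y.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# predicted probability of class 1 for every sample
proba = rf.predict_proba(X)[:, 1]

# 10 equal-width bins over [0, 1]: a peak near 0.5 means the model
# is unsure; peaks near 0 and 1 mean confident (or overfit) answers
counts, edges = np.histogram(proba, bins=10, range=(0.0, 1.0))
```

On training data a random forest tends to pile probabilities near 0 and 1; the interesting (and often surprising) shape is the one you get on out-of-sample data.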
There is no constructive definition of non-stationarity - it is simply the absence of stationarity.
There are no universal methods for dealing with non-stationarity. For example, constructing a histogram from a sample makes no sense if the sample is not i.i.d., i.e. non-stationary - if there is no distribution common to all elements of the sample, the histogram cannot be an approximation for anything.
The only possibility is to empirically guess a model of non-stationarity suitable in practice. The main problem here is that the overwhelming majority of nonstationarity is unmodellable in principle.
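The point about histograms can be made concrete with a toy example (a sketch with an assumed drifting-mean series, not real prices): the two halves of a non-stationary sample come from visibly different distributions, so one pooled histogram describes neither.

```python
# Sketch: why a pooled histogram is meaningless for a non-i.i.d. sample.
# Assumption: non-stationarity is modelled here as a linear drift in the mean.
import numpy as np

rng = np.random.default_rng(1)

# non-stationary sample: the mean drifts linearly over "time"
x = rng.standard_normal(4000) + np.linspace(0.0, 5.0, 4000)

# the two halves of the sample have clearly different means,
# so no single distribution fits the whole sample
mean_first = x[:2000].mean()    # early regime
mean_second = x[2000:].mean()   # late regime
drift = mean_second - mean_first
```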
Unfortunately, it is the nonstationarity of the series that kills the result of any strategy over time. Doesn't it?
Decent ML models are obtained on markup with divergences (trades are marked only when there is confirmation from divergences). It is stable on OOS. I checked it today; I have not optimised the parameters yet.
Inexplicable efficiency of this partitioning method, with different parameters min_val, max_val:
Useful information for ML experimenters about trending and flat trading systems. It didn't seem to get attention before, but it came up in the course of experimentation.
That's a great observation! It is indeed true, and here's why.
<Edited by moderator : Please refrain to post ChatGPT generated message, it's not allowed on this forum>
Did you use ChatGPT?
because this opus looks too much like its output.
Statistical validation
Predictability can be objectively assessed using the Hurst exponent (H):
H < 0.5 - mean reversion (flat): past data has predictive power;
H = 0.5 - random walk: past data does not predict the future;
H > 0.5 - trending: past data is less informative for prediction.
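One common way to estimate H (a sketch, using the scaling of lagged differences, std(x[t+lag] - x[t]) ~ lag**H; the series below are synthetic, not market data):

```python
# Sketch: estimate the Hurst exponent from the scaling of lagged
# differences. Assumption: synthetic series stand in for prices.
import numpy as np

def hurst(series, max_lag=20):
    """Estimate H from how the spread of lagged differences
    grows with the lag: std(x[t+lag] - x[t]) ~ lag**H."""
    lags = np.arange(2, max_lag)
    tau = [np.std(series[lag:] - series[:-lag]) for lag in lags]
    # slope of the log-log fit is the estimate of H
    return np.polyfit(np.log(lags), np.log(tau), 1)[0]

rng = np.random.default_rng(0)
walk = np.cumsum(rng.standard_normal(5000))   # random walk: H near 0.5
noise = rng.standard_normal(5000)             # mean-reverting: H well below 0.5
```

On short, noisy real series the estimate is unstable, so in practice H is usually computed over a rolling window and treated as a rough regime indicator rather than a precise number.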
Key paradox
In a trending market, past prices signal "the price will go up", but do not indicate when the trend will end. It is this uncertainty that makes forecasting in trends much more difficult than in flat markets.
For this reason, ML models in flat markets are often successfully trained on historical patterns, while in trending markets they are often ineffective due to the rapid loss of data relevance.
And according to it, are flat markets supposed to be eternal or something?