Machine learning in trading: theory, models, practice and algo-trading - page 3211

And you can simply not run it on the out-of-sample file and live happily, as Maxim does with his very beautiful pictures.
I'm not interested in solving other people's mental problems.
You're in charge of pretty pictures, even in the market. So that's your main problem.
There is a simple arithmetic here: feature selection is done from a pile of heterogeneous information, often irrelevant to the subject of the study.
The derived time series are all related to the original one, so among them you can only choose better or worse, and often even that makes no sense.
I am not talking about peeking; those are childish problems. Obviously, such fiddling has led to nothing over the years, yet people persist in repeating it.
And the errors show up even in-sample, because you simply can't mark up the trades properly.
Cross-validation does not cure these errors, by its very meaning: it is a search for optimal parameters by error minimisation. If the target and its predictors are spuriously correlated, CV will certainly find something "better" in that rubbish, but it will not solve the rubbish problem.
The rubbish problem is solved by "predictive ability", i.e. the ability of a predictor's values to predict one class or the other. The classification error is then determined by the fact that the same predictor values predict one class at some moments and the other class at other moments. Rattle even has pictures on this topic.
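This is easy to demonstrate. Below is a minimal, purely illustrative Python sketch (not anyone's actual setup; it assumes numpy and scikit-learn and uses synthetic data): the features and the target are independent noise, yet grid-search CV still crowns some parameter set as "best", while the apparent edge evaporates on held-out data.

```python
# Purely illustrative: CV "optimising" on rubbish features.
# Assumes numpy and scikit-learn; all data is synthetic noise.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)

# 500 samples, 50 noise features; the binary target is independent noise,
# so no feature has any genuine predictive ability.
X = rng.normal(size=(500, 50))
y = rng.integers(0, 2, size=500)

X_tr, X_ext, y_tr, y_ext = train_test_split(X, y, test_size=0.5, random_state=0)

# CV dutifully searches for "optimal" parameters (that is its job:
# error minimisation), and some combination will come out on top.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"max_depth": [2, 4, 8], "n_estimators": [50, 200]},
    cv=5,
)
search.fit(X_tr, y_tr)

print("best CV accuracy :", round(search.best_score_, 3))         # typically a bit above 0.5
print("external accuracy:", round(search.score(X_ext, y_ext), 3))  # ~0.5: rubbish stays rubbish
```

The point is not that CV is broken, but that minimising error over rubbish can only rank rubbish.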
The problem mentioned above is that a model can show excellent results both on the training file and on the OOS file. As I understand it, the training file can be obtained even by random sampling from the data, with the OOS being whatever remains after that sampling.
But when the model is run on an external file, the result is catastrophically bad.
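One plausible mechanism for exactly this picture is sketched below, under stated assumptions (synthetic random-walk data, numpy and scikit-learn; the model and features are hypothetical): with a random split, each test bar's temporal neighbours sit in the training set, so even a model that merely memorises looks good, while a chronologically later "external file" takes that advantage away.

```python
# Purely illustrative: random split vs. a chronologically later "external file"
# on an autocorrelated series. Assumes numpy and scikit-learn; data is synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=4000))  # random walk: no real signal to find

window, horizon = 10, 5

# Features: the last `window` prices; target: is price higher `horizon` bars ahead?
# Neighbouring rows are nearly identical and usually share the same label.
X = np.lib.stride_tricks.sliding_window_view(price, window)[:-horizon]
y = (price[window - 1 + horizon:] > price[window - 1:-horizon]).astype(int)

# Hold out the LAST quarter as the "external file".
cut = int(len(X) * 0.75)
X_in, y_in, X_ext, y_ext = X[:cut], y[:cut], X[cut:], y[cut:]

# Random split inside the in-sample part: the temporal neighbours of each test
# bar end up in training, so a 1-NN model can simply "remember" them.
X_tr, X_te, y_tr, y_te = train_test_split(X_in, y_in, test_size=0.3, random_state=1)
model = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)

print("random-split OOS accuracy:", round(model.score(X_te, y_te), 3))   # optimistic
print("external-file accuracy  :", round(model.score(X_ext, y_ext), 3))  # near 0.5
```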
I think I've mentioned OOS a few times recently, but there a good OOS was, in your terminology, a "separate file".
And how does one detect look-ahead?
If learning is multi-pass (the next stage uses the computations of the previous one), the probability of look-ahead is high. There is no general recipe, but here is what I did in one case.
To speed up the computation, I needed to get rid of unnecessary ticks. For example, if you reduce the number of ticks tenfold, the calculations are accelerated by roughly the same factor, which is a much-demanded step.
In my case I knew which ticks I needed and which I hardly needed at all. So I built a custom symbol and ran backtests on both the custom symbol and the original one.
Here it was important to switch on the pedantry and achieve a >99% match. It turned out that at first I was throwing out too much and getting a different result (better than on the original, of course).
Eventually I started throwing out less, and everything began to match. So in effect I use a two-pass method when training.
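The consistency check itself can be prototyped outside the terminal. Here is a hypothetical Python sketch of the idea (synthetic ticks; compute_equity_curve, thin_ticks and the thresholds are stand-ins for the real second-pass computation, not the actual code used):

```python
# Hypothetical sketch of the two-pass consistency check: thin out the ticks,
# recompute the same quantity on the original and reduced streams, and only
# trust the reduction while the results match by more than 99%.
import numpy as np

def compute_equity_curve(ticks: np.ndarray) -> np.ndarray:
    """Stand-in for the real backtest/indicator computation."""
    return np.cumsum(np.sign(np.diff(ticks)))

def thin_ticks(ticks: np.ndarray, min_step: float) -> np.ndarray:
    """Drop ticks that moved less than `min_step` from the last kept tick."""
    kept = [ticks[0]]
    for t in ticks[1:]:
        if abs(t - kept[-1]) >= min_step:
            kept.append(t)
    return np.asarray(kept)

def match_ratio(full: np.ndarray, reduced: np.ndarray) -> float:
    """Crude score comparing only the curves' final values; the real check
    compared whole backtests on a custom symbol vs. the original one."""
    scale = max(float(np.abs(full).max()), 1.0)
    return 1.0 - abs(float(full[-1]) - float(reduced[-1])) / scale

ticks = np.cumsum(np.random.default_rng(2).normal(scale=0.1, size=100_000))
full = compute_equity_curve(ticks)

for step in (0.05, 0.2, 0.5):  # progressively harsher tick filters
    reduced = compute_equity_curve(thin_ticks(ticks, step))
    print(f"min_step={step}: match {match_ratio(full, reduced):.2%}")
    # Keep the harshest filter whose match stays above 99%; a filter that
    # "improves" the result instead is a red flag for distorted data.
```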
So, to detect peeking after a previous pass, you can probably run the check described above even before any serious calculations. And there is also the grandfather's method of detecting look-ahead: "too good to be true". Beginners rejoice at the cool results, while mature ones get upset, because they realise they will have to hunt for their own error for a long time.
And the professionals look at both of them with pity and condescension, quietly saying to themselves: when will you finally think of changing the concept, not the scenery?
And professionals...
I haven't met any.
I haven't.
It happens.