Machine learning in trading: theory, models, practice and algo-trading - page 3684
If you take into account that bagging reduces variance while boosting reduces bias (and often variance as well), then boosting is certainly better. In terms of fitting the curve, that is, but not in terms of generalisation. The previous TSs would then have to be fed in as parameters for the subsequent TSs. But you have to get rid of the intermediate parameters somehow, because otherwise the complexity of the model will keep growing.
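A minimal sketch of this bagging-versus-boosting trade-off (my illustration, not from the thread), using scikit-learn on synthetic noisy data: boosting fits the training curve almost perfectly, while the gap to the test score is exactly the generalisation question raised above.

```python
# Illustrative only: compare bagging and boosting on synthetic noisy data.
# flip_y=0.2 injects label noise, mimicking the unpredictable part of market data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, flip_y=0.2,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

bag = BaggingClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
boost = GradientBoostingClassifier(n_estimators=300, max_depth=4,
                                   random_state=0).fit(X_tr, y_tr)

for name, model in [("bagging", bag), ("boosting", boost)]:
    print(name,
          "train:", round(model.score(X_tr, y_tr), 3),
          "test:", round(model.score(X_te, y_te), 3))
```

With enough rounds, the boosted model's training score approaches 1 while the test score stays pinned near the noise ceiling: a "better curve" that buys no extra generalisation.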
You can memorise history almost perfectly with a single tree. Not a word about success on new data.
I have the same situation as SanSanych.
You don't need to memorise perfectly, you need to find regularities and generalisation. But how do you get generalisation rather than mere underfitting?
Well, generally speaking, ensembling in ML makes sense when you have weak algorithms that work at least somewhat.
Accordingly, the TSs being ensembled should each give at least a small profit. It is common practice to collect such models into portfolios, which is, in essence, bagging.
The (theoretical) question was, in fact, whether one could invent some more elaborate procedure, along the lines of boosting, instead of simply building a portfolio of these weak TSs.
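As a toy illustration of why such a portfolio behaves like bagging (the numbers are assumptions, not real TS returns): averaging several weak, roughly independent return streams keeps the small edge but shrinks the noise by roughly 1/sqrt(N).

```python
# Hypothetical example: 10 weak strategies, each with a tiny positive edge
# (mean 0.01) buried in noise (std 1.0), assumed independent of each other.
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 10, 5000
returns = rng.normal(loc=0.01, scale=1.0, size=(n_strategies, n_days))

portfolio = returns.mean(axis=0)  # equal-weight portfolio = bagging the TSs

print("single-strategy std:", round(returns[0].std(), 3))
print("portfolio std:      ", round(portfolio.std(), 3))  # ~ 1/sqrt(10) smaller
```

The mean return survives the averaging while the standard deviation drops by about a factor of sqrt(10); the independence assumption is doing all the work here, just as it does in bagging.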
What do you mean by "work"?
The criterion for "working" is that performance on the training/test file and on a walk-forward run are approximately the same.
What is NOT interesting is performance on the training/test file alone.
Speaking figuratively, the teacher-predictor pair has, in theory, some performance as a mathematical expectation. The task now is to approximate this theoretical expectation by sample means.
The algorithm for building the model appears to be as follows:
1. Compute the performance on the training/test file and on a walk-forward run.
2. If they approximately match, training is finished.
3. If they do not match, coarsen the model(s) and compare with the walk-forward run again.
4. Keep coarsening the models until they match.
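The four steps could be sketched roughly like this (a hypothetical illustration; the data, the model, and the "coarsening" knob are my assumptions, with tree depth standing in for model complexity and a later hold-out segment standing in for the walk-forward run):

```python
# Sketch of the coarsening loop: lower max_depth until training and
# walk-forward performance roughly agree (steps 1-4 above).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(1500, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=1500) > 0).astype(int)  # noisy target

train, walk = slice(0, 1000), slice(1000, 1500)  # walk-forward = later segment

for depth in range(12, 0, -1):                   # steps 3-4: coarsen the model
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X[train], y[train])
    acc_train = model.score(X[train], y[train])  # step 1: both performances
    acc_walk = model.score(X[walk], y[walk])
    if abs(acc_train - acc_walk) < 0.05:         # step 2: approximate match
        print("stopped at depth", depth,
              "train", round(acc_train, 3), "walk", round(acc_walk, 3))
        break
```

A deep tree memorises the noisy labels and shows a large train/walk-forward gap; as the depth shrinks, the two scores converge and the loop stops.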
I mentioned the mathematical expectation for a reason: it does not always exist! I.e. is there even some acceptable performance value for a particular teacher-predictor set?
This is the same hatchet approach as training/retraining in a sliding window and over-optimisation of parameters. That is, it does not describe the process of creating a TS.
ML is falsely perceived as a way to find patterns. This is a beginner's mistake.
The main thing is to be a professional at drawing beautiful balance curves with R-squared = 1.
Where is at least one balance curve from the tester?
What do you mean, "work"?
You obviously want to talk about your own issues, using my post as an excuse to do so. In principle, I understand what you wanted to say and I agree that it is very important, but I am talking about something else.
In the context of my post, we are just talking about the standard conditions for the algorithms being ensembled: each has an error lower than that of a naive algorithm (and we also need independence, of course).
But a suitable platform for ML trading would be nice. With custom events and proper synchronisation of events on the history.
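The standard condition mentioned above (each weak algorithm beats the naive one, plus independence) can be checked with a back-of-the-envelope majority-vote calculation, a standard ensemble result rather than anything from this thread:

```python
# With n independent classifiers, each correct with probability p > 0.5,
# the majority vote is wrong less often than any single classifier.
from math import comb

def majority_error(p, n):
    """P(majority of n independent voters is wrong), each correct w.p. p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(0, n // 2 + 1))

p = 0.55  # weak, but better than the naive 0.5
print(round(majority_error(p, 1), 3))   # single model: 0.45
print(round(majority_error(p, 25), 3))  # ensemble of 25: noticeably lower
```

Both conditions matter: if p drops to 0.5 the ensemble gains nothing, and if the voters are correlated the error stops shrinking long before this independent-vote bound.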
...
Could methods "from there" somehow be applied, turning an agent loose on the chart?
The agent would constantly see an endlessly changing picture, lots of input data (essentially a bare chart), but would somehow have to learn to "win" in this environment.
Come to think of it.
What do we see in the real world around us? Trees, houses, and lots of other things. This world has its own physical laws: gravity, inertia of bodies, the laws of wave refraction, and many others. Now, trees and houses are not laws; they are data. Roughly speaking, from the location of trees and houses on the current street and several previous ones, try to predict the location of trees and houses on the next street. Absurd? Of course it is. But that is exactly what happens in most applications of ML to market data: trying to predict the trees on the next street (whatever you like, price values or increments, direction or range) in no way reveals the laws of the time series. And some even go as far as: "here, if you slightly undertrain the model to predict the trees and houses on the next street, you will definitely get generalisation and will be able to work out the law of gravity in this flat world of the price chart...". No, you won't.
So yes, an agent like this on a chart should see the "trees" as effects, not causes; the causes are the laws. It must literally live in this candlestick world, learning its laws and improving its skill at navigating them.
There don't seem to be any fixed price patterns at all. I can't say for sure, but it's like looking for familiar shapes in the clouds in the sky (they are just clumps of vapour; there are no patterns in their shapes). The real patterns are elsewhere: temperature, pressure and humidity are what form the clouds.