Machine learning in trading: theory, models, practice and algo-trading - page 3649

A question about variable-length features.
Would this be any different from splitting the time series into multiple states?
After all, in essence, features of different lengths should somehow differentiate these states for the model.
From that point of view, there is no point in bothering, is there, if a division into states already exists?
Maybe what is meant is this: if you train on 20 years, the first 10 years share one feature set. Then the second 10 years amount to extrapolation, provided the set was found in a way that did not involve the second period (regarding retraining: as an example of a set that works over a long period).
Ivan, you know this yourself, but I'll write it for those who don't know.
This is what approximation and extrapolation imply: approximation of a process whose true nature is unknown. There are many methods besides neural networks for approximating a process or its properties, such as random forests or boosting. Ultimately, all methods, statistical ones included, are designed to approximate a process. The goal is to extract information about the process and apply it to new data, extrapolating the process itself or its properties. That is what ML does by definition.
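As a minimal sketch of "approximate on old data, extrapolate to new data": fit a straight line on an in-sample period, then apply it outside the training range. The toy process below (y = 2x + 1) is a hypothetical example, not anything from this thread; the point is that a parametric model like a line extrapolates freely beyond the training range.

```python
# Sketch: approximation on an in-sample period, then extrapolation.
# The "process" here is a made-up linear one; in reality its nature
# is unknown and we only observe samples from the first period.
xs = list(range(10))           # "first 10 years"
ys = [2 * x + 1 for x in xs]   # toy process observed in-sample

# Ordinary least-squares fit of y = w*x + b (closed form, no libraries).
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
b = my - w * mx

print(w, b)        # recovers 2.0, 1.0
print(w * 15 + b)  # x = 15 is outside the training range: 31.0
```

A line happily produces values it never saw in training; as the posts below argue, tree-based models do not.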
Neural networks can't extrapolate at all.
What does it mean to have different feature lengths?
Why not? Boosting can do even more than neural nets can.
I gave Dick a link to simple tree-training code. If he had spent 20 minutes studying it, he wouldn't have spent 4 days writing his fantasies here about extrapolation in ML.
Well, he obviously came here to troll....
But it turns out that the regulars of this thread have not seen the code.
I'll explain it very simply. There was an example of teaching a forest the multiplication table from 1 to 9. Using that example:
If the largest product in training was 9*9, then the corresponding leaf will always answer 81. For the queries 10*9, 9*10, 11*20, and even 1000*2000 the answer will still be 81. There are no leaves with answers 90, 100, 220, or 2,000,000. What extrapolation? The answers can only range from 1 to 81.
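The multiplication-table argument can be checked directly. Below is a minimal regression tree written from scratch (a sketch, not the code from the link mentioned above): it is trained on the table 1..9 x 1..9, and since every split threshold comes from the training data, any query beyond the trained range falls into a boundary leaf whose value never exceeds 81.

```python
# Minimal regression tree demonstrating why trees cannot extrapolate.
def build_tree(rows):
    """rows: list of ((a, b), y). Splits greedily until each leaf is pure."""
    ys = [y for _, y in rows]
    if len(set(ys)) == 1:
        return ("leaf", ys[0])

    def sse(part):  # sum of squared errors around the part's mean
        m = sum(y for _, y in part) / len(part)
        return sum((y - m) ** 2 for _, y in part)

    best = None
    for feat in (0, 1):
        # Thresholds are drawn ONLY from training feature values (<= 9 here).
        for thr in sorted({x[feat] for x, _ in rows}):
            left = [r for r in rows if r[0][feat] < thr]
            right = [r for r in rows if r[0][feat] >= thr]
            if not left or not right:
                continue
            score = sse(left) + sse(right)
            if best is None or score < best[0]:
                best = (score, feat, thr, left, right)
    if best is None:
        return ("leaf", sum(ys) / len(ys))
    _, feat, thr, left, right = best
    return ("node", feat, thr, build_tree(left), build_tree(right))

def predict(tree, x):
    while tree[0] == "node":
        _, feat, thr, left, right = tree
        tree = left if x[feat] < thr else right
    return tree[1]

train = [((a, b), a * b) for a in range(1, 10) for b in range(1, 10)]
tree = build_tree(train)

print(predict(tree, (9, 9)))        # 81, seen in training
print(predict(tree, (10, 9)))       # still 81
print(predict(tree, (1000, 2000)))  # still 81: no extrapolation
```

Every out-of-range query routes to the rightmost leaf, because 10 or 1000 satisfies exactly the same `>= threshold` comparisons as 9 does. Leaf values are averages of training targets, so no output can leave the interval [1, 81].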