Machine learning in trading: theory, models, practice and algo-trading - page 495

 
Alyosha:
It's fine that the result is zero; you're lucky the result isn't statistically biased on such a small sample. And you don't even need to look at the equity curve in the backtest: it can easily be made into a smooth exponential with no drawdowns.

Then what should we use as a guide when choosing a set for a forward?

 
Alyosha:

Alas, they are wrong, and that is normal not only for "ignoramuses" but for snobs too; remember Minsky and his authoritative opinion on the "futility" of multilayer perceptrons)))

I'm not even talking about the articles on Habr; like the trash on forums, they are 99.9% advertising, pop-sci and outright garbage, and 0.1% intelligible thoughts hidden "between the lines".

Personally, I'm in favor of understanding how an algorithm works, implementing it myself, and using libraries from the net only for cross-checking.

And online it's mostly just reposts and the like: lots of videos, but few examples of a concrete implementation in code, or the code is in an unfamiliar programming language.

 
Oleg avtomat:

everyone is a loser except the FA

only the FAs are counted.

;))


I see I'm not letting you breathe... take a breath and calm down

 
Maxim Dmitrievsky:

What does all this have to do with extrapolation...

Are the people who wrote RF in the ALGLIB library also uneducated?

and r bloggers are clueless too, apparently

https://www.r-bloggers.com/extrapolation-is-tough-for-trees/


When we cite reputable people, it means we trust the result. You can only do that with very reputable people who publish results in good journals with qualified editors.


What are you talking about? About the blog? Is it an authority?


Your link is a classic example of those I call ignorant.

The author takes linear regression, a model extremely limited in application, and argues something on that basis.

For linear regression the properties of the input data are extremely important, and it is crucial to justify that the results can be trusted. Where is that in the article?


It is the basics of statistics that apply to any model.


It is very succinctly formulated as an axiom of statistics (and of all mathematics, for that matter): GARBAGE IN means GARBAGE OUT.

If you do not know this, or do not apply it in practice, then in my opinion you are a dense ignoramus, regardless of whether you know the word "perceptron" or not.
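The garbage-in, garbage-out point can be illustrated with a small sketch (my own illustrative code, not anything from the thread or the linked article): the same OLS fit gives a near-perfect in-sample R-squared on an informative predictor and a near-zero one on a pure-noise predictor.

```python
import random

def ols_fit(xs, ys):
    # Ordinary least squares for y = a + b*x (closed-form solution)
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

def r2(xs, ys, a, b):
    # Coefficient of determination of the fitted line on (xs, ys)
    my = sum(ys) / len(ys)
    ss_res = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

random.seed(1)
y = [2 * i + random.gauss(0, 1) for i in range(50)]          # target with a real trend
good_x = list(range(50))                                     # informative predictor
junk_x = [random.gauss(0, 1) for _ in range(50)]             # pure-noise "garbage" predictor

a1, b1 = ols_fit(good_x, y)
a2, b2 = ols_fit(junk_x, y)
print(r2(good_x, y, a1, b1))   # close to 1: signal in, signal out
print(r2(junk_x, y, a2, b2))   # close to 0: garbage in, garbage out
```

The model and the math are identical in both cases; only the input quality differs, which is exactly the axiom stated above.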

 
SanSanych Fomenko:

When we cite reputable people, it means that we trust the result. We can only do this for very reputable people who publish results in good journals with qualified editors.


What are you talking about? About the blog? Is it an authority?


Your link is a classic example of those I call ignorant.

The author takes linear regression, a model extremely limited in application, and argues something on that basis.

For linear regression the properties of the input data are extremely important, and it is crucial to justify that the results can be trusted. Where is that in the article?


It is the basics of statistics that apply to any model.


It is very succinctly formulated as an axiom of statistics (and of all mathematics, for that matter): GARBAGE IN means GARBAGE OUT.

A person who does not know this, or does not apply it in practice, is in my opinion a dense ignoramus, regardless of whether he knows the word "perceptron" or not.


Geez, you all are drunk or something.

 

Does the forest know how to extrapolate? Yes.
Does it do it well? No.

 
Dr. Trader:

Can the forest extrapolate? Yes.
Does it do it well? No.


RF is ABSOLUTELY unable to extrapolate; this follows from the decision tree structure, as shown in the article above
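What "cannot extrapolate" means for a tree can be shown with a minimal pure-Python sketch (my own illustration, not the code from the linked article): a one-split regression tree trained on y = x predicts the constant mean of its nearest leaf outside the training range, instead of continuing the trend. A random forest averages many such trees, so it inherits the same flat behavior.

```python
def fit_stump(xs, ys):
    # Greedy one-split regression tree: pick the threshold that
    # minimises squared error; each leaf predicts the mean of its side.
    best = None
    pairs = sorted(zip(xs, ys))
    for i in range(1, len(pairs)):
        t = pairs[i][0]
        left = [y for x, y in pairs if x < t]
        right = [y for x, y in pairs if x >= t]
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        err = (sum((y - ml) ** 2 for y in left)
               + sum((y - mr) ** 2 for y in right))
        if best is None or err < best[0]:
            best = (err, t, ml, mr)
    _, t, ml, mr = best
    return lambda x: ml if x < t else mr

xs = list(range(10))   # train on y = x for x in 0..9
ys = list(xs)
tree = fit_stump(xs, ys)
print(tree(100))       # the right leaf's constant mean (7.0), NOT ~100
print(tree(-50))       # the left leaf's constant mean (2.0), NOT ~-50
```

Inside the training range the tree tracks the data piecewise; outside it, every query collapses to a leaf constant, which is exactly what the r-bloggers post demonstrates for full forests.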

 
Maxim Dmitrievsky:

RF ABSOLUTELY does not know how to approximate, it has to do with the decision tree structure as shown in the article above


Bummer!

Extrapolation and approximation are ABSOLUTELY different.


Are you not sober at all?

 
SanSanych Fomenko:

Bummer!

Extrapolation and approximation are ABSOLUTELY different.


Are you not sober at all?


Yes, I accidentally mixed up the words, because I was reading about approximation at that moment

 

Here's an interesting example, I posted it in this thread some time ago.
Extrapolation in this case would be prediction outside the "cloud of known points"

If the known points are well clustered, you can see that extrapolation is not a problem for most models.
But if the known points were arranged more randomly, without obvious clusters, then the prediction itself would be worse, and the extrapolation would not be credible.

It's all about the predictors: if you feed all kinds of garbage into the model, you really won't get good extrapolation.
For forex you are unlikely to find ideal predictors; I would never trade by extrapolation on financial data.
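One hedged way to act on this in practice (my own sketch, not anything posted in the thread) is to flag query points that fall outside the bounding box of the training "cloud" before trusting any model's prediction there:

```python
def in_training_range(point, train_points, margin=0.0):
    # Crude per-feature bounding-box check: flags queries that fall
    # outside the "cloud of known points", where a tree ensemble's
    # answer is just a nearest leaf's constant and should not be trusted.
    dims = len(point)
    for d in range(dims):
        vals = [p[d] for p in train_points]
        lo, hi = min(vals) - margin, max(vals) + margin
        if not (lo <= point[d] <= hi):
            return False
    return True

train = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.4)]
print(in_training_range((0.6, 0.5), train))   # True: inside the cloud
print(in_training_range((2.0, 0.5), train))   # False: this would be extrapolation
```

A bounding box is a coarse proxy for the real point cloud (it ignores holes and cluster shape), but as a first filter it cheaply separates interpolation queries from extrapolation queries.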
