Machine learning in trading: theory, models, practice and algo-trading - page 494

 
Yuriy Asaulenko:

Read Haykin's Neural Networks and Bishop's theory in English; there is no translation, though it looks like one is being prepared.

It is all simple. Random trades go in as the input, their results as the output. This is called the Monte Carlo method, and it is not very fast in itself. Systematizing it all is the NS's job.


Well, is there a special name for such an NS? Something like "a stochastic annealing neural network with vaguely supervised (or unsupervised) learning that optimizes inputs instead of outputs" :))) I'll go read some books.

Haykin's "Neural Networks: A Complete Course", 2nd edition, is available in Russian.
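To make the "random trades in, results out" idea from the quote above concrete, here is a minimal sketch of how such a Monte Carlo training set could be assembled; the price series, the trade simulation, and the feature choices are all assumptions for illustration, not the poster's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical price series; in practice this would be real quotes.
prices = np.cumsum(rng.normal(0, 1, 10_000)) + 100.0

def random_trade(prices, rng, max_hold=50):
    """Open a random long/short at a random bar, hold for a random time,
    return (features at entry, trade profit). Purely illustrative."""
    entry = rng.integers(100, len(prices) - max_hold)
    hold = rng.integers(1, max_hold)
    direction = rng.choice([-1, 1])                 # -1 short, +1 long
    profit = direction * (prices[entry + hold] - prices[entry])
    window = prices[entry - 100:entry]
    features = np.array([
        direction,
        window[-1] - window.mean(),                 # distance from 100-bar mean
        window[-1] - window[-10],                   # 10-bar momentum
        window.std(),                               # recent volatility
    ])
    return features, profit

# Monte Carlo pass: thousands of random trades -> (inputs, outcomes).
X, y = zip(*(random_trade(prices, rng) for _ in range(5000)))
X, y = np.asarray(X), np.asarray(y)

# Whatever model does the "systematization" is then trained on this set,
# e.g. a regressor predicting the expected profit of a proposed trade.
```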

 
Maxim Dmitrievsky:

Well, is there a special name for such an NS? Something like "a stochastic annealing neural network with vaguely supervised (or unsupervised) learning that optimizes inputs instead of outputs" :))) I'll go read some books.

Heikin "NS Complete Course Second Edition" is available in Russian

Haykin is available in Russian; Bishop is not.

The NS is an ordinary MLP and the training is ordinary BP, only with regular manual readjustments as you go along. If you do not make those readjustments, or simply shuffle the sample, it learns very quickly, but it works well (even perfectly)) only on the training sequence.
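A minimal sketch of what "regular readjustments along the way" could look like in practice: walk the series in chronological order and periodically re-fit on the most recent chunk instead of shuffling everything. The toy data, network size, and chunk length below are assumptions for illustration only, not the actual setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

# Toy sequential data whose relationship drifts over time (assumption).
n = 5000
X = rng.normal(size=(n, 4))
drift = np.linspace(0.5, 2.0, n)
y = drift * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n)

mlp = MLPRegressor(hidden_layer_sizes=(16, 8), solver="adam",
                   learning_rate_init=1e-3)

chunk = 250  # "readjust" every 250 observations (arbitrary choice)
for start in range(0, n - chunk, chunk):
    X_recent, y_recent = X[start:start + chunk], y[start:start + chunk]
    for _ in range(20):                      # a few BP passes over the recent chunk
        mlp.partial_fit(X_recent, y_recent)
    # Check how the freshly readjusted net does on the next, unseen chunk.
    nxt = slice(start + chunk, start + 2 * chunk)
    print(f"bars {start:5d}-{start + chunk:5d}  next-chunk R^2 = {mlp.score(X[nxt], y[nxt]):.3f}")
```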

 
Yuriy Asaulenko:

Haykin is available in Russian; Bishop is not.

The NS is an ordinary MLP and the training is ordinary BP, only with regular manual readjustments as you go along. If you do not make those readjustments, or simply shuffle the sample, it learns very quickly, but it works well (even perfectly)) only on the training sequence.


Haykin is old anyway :) For now I'll get by without it. I have always limited myself to articles and model descriptions; books have a lot of superfluous material (padding to make up a volume for sale).

 
Maxim Dmitrievsky:

Haykin is old anyway :) For now I'll get by without it. I have always limited myself to articles and model descriptions; books have a lot of superfluous material (padding to make up a volume for sale).

Well, I would not say so. Theory does not get old, and it gives a deeper understanding of the subject. Articles are fine, of course, but without the general theory they are poorly understood and get taken superficially and uncritically, and a lot of nonsense gets written).
 
Alyosha:

That is a false statement. Ordinary and boosted forests are no different from an NS when it comes to extrapolation.


  • The model can only interpolate, but not extrapolate (the same is true for random forests and boosting on trees). That is, a decision tree makes a constant prediction for objects that lie, in feature space, outside the parallelepiped covering all objects in the training sample. In our example with yellow and blue marbles, this means the model gives the same prediction for all marbles with coordinate > 19 or < 0.

Every article I come across says the same thing

https://habrahabr.ru/company/ods/blog/322534/


  • like decision trees, the algorithm is completely incapable of extrapolation
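The quoted claim is easy to verify on toy data; here is a minimal self-contained check (the linear toy target and the 0-19 training range are my own choices, picked to mirror the marbles example):

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor

# Train on a simple linear relation over the range [0, 19].
x_train = np.arange(0, 20, 0.5).reshape(-1, 1)
y_train = 3.0 * x_train.ravel() + 5.0

tree = DecisionTreeRegressor().fit(x_train, y_train)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(x_train, y_train)

# Ask for predictions well outside the training range.
x_new = np.array([[-5.0], [10.0], [25.0], [100.0]])
print("tree   :", tree.predict(x_new))     # constant beyond the [0, 19] box
print("forest :", forest.predict(x_new))   # same: clipped to values seen in training
print("true   :", 3.0 * x_new.ravel() + 5.0)
```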
 
Maxim Dmitrievsky:

  • The model can only interpolate, but not extrapolate (the same is true for random forests and boosting on trees). That is, a decision tree makes a constant prediction for objects that lie, in feature space, outside the parallelepiped covering all objects in the training sample. In our example with yellow and blue marbles, this means the model gives the same prediction for all marbles with coordinate > 19 or < 0.

Every article I come across says the same thing

https://habrahabr.ru/company/ods/blog/322534/


  • like decision trees, the algorithm is completely incapable of extrapolation

This nonsense is written by uneducated people. They have never heard of overfitting, have no idea about data mining, have never heard of noise predictors, and do not know how to evaluate models. They are just a bunch of overgrown snobs playing intellectual games.

 
SanSanych Fomenko:

This nonsense is written by uneducated people. They have never heard of overfitting, have no idea about data mining, have never heard of noise predictors, and do not know how to evaluate models. They are just a bunch of overgrown snobs playing intellectual games.


What does all this have to do with extrapolation...

Are the people who wrote the RF in the alglib library also uneducated?

And the R bloggers are clueless too, apparently:

https://www.r-bloggers.com/extrapolation-is-tough-for-trees/

Extrapolation is tough for trees!
Peter's stats stuff - R (www.r-bloggers.com)
This post is an offshoot of some simple experiments I made to help clarify my thinking about some machine learning methods. In this experiment I fit four kinds of model to a super-simple artificial dataset with two columns, x and y; and then try to predict new values of y based on values of x that are outside the original range of y. Here's the...
 

everyone is a loser, except the FA

only the FA counts.

;))

 
Oleg avtomat:

everyone is a loser, except the FA

only the FA counts.

;))


That is how people use RF without understanding how it works, and then say it does not work... It is obvious from that last article that RF cannot extrapolate, so it should be applied only to familiar data.
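If RF is only to be trusted on familiar data, one crude way to enforce that (my own suggestion, nothing from the article) is to check whether a new point falls inside the bounding box of the training features before acting on the prediction:

```python
import numpy as np

class RangeGuard:
    """Flags feature vectors that fall outside the per-feature min/max
    seen in training, i.e. points where a tree model can only repeat
    boundary predictions. A crude bounding-box check, nothing more."""

    def fit(self, X_train):
        X_train = np.asarray(X_train)
        self.lo_ = X_train.min(axis=0)
        self.hi_ = X_train.max(axis=0)
        return self

    def inside(self, X_new):
        X_new = np.asarray(X_new)
        return np.all((X_new >= self.lo_) & (X_new <= self.hi_), axis=1)

# Usage sketch (hypothetical names): skip predictions for unfamiliar points.
# guard = RangeGuard().fit(X_train)
# ok = guard.inside(X_live)
# signals = np.where(ok, model.predict(X_live), 0.0)   # 0.0 = "no trade"
```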

 
Alyosha:

Alas, they are mistaken, and that is normal not only for "ignoramuses" and snobs; remember Minsky and his authoritative opinion on the "futility" of multilayer perceptrons)))

I am not even talking about the articles on Habr; they are about the same as forum chatter: 99.9% pop-science advertising, 0.1% outright trash, and 0.1% of sensible thoughts "between the lines".

The man gave an example in R; where exactly did he make a mistake? Unfortunately I do not use R, but I could even reproduce it myself.
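You do not actually need R to reproduce it; here is a quick reconstruction of the blog's idea in Python (my own toy data and model choices, not the author's code): fit a forest and a plain linear model on x in one range and predict outside it.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# Super-simple dataset in the spirit of the blog post: y is roughly linear in x.
x_train = rng.uniform(0, 100, 400).reshape(-1, 1)
y_train = 0.5 * x_train.ravel() + rng.normal(0, 2, 400)

forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(x_train, y_train)
linear = LinearRegression().fit(x_train, y_train)

# Predict far beyond the training range of x.
x_out = np.array([[150.0], [200.0], [300.0]])
print("forest:", forest.predict(x_out))   # stuck near max(y_train), about 50
print("linear:", linear.predict(x_out))   # keeps following the trend: ~75, 100, 150
```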
