Machine learning in trading: theory, models, practice and algo-trading - page 580

 
Yuriy Asaulenko:
Thank you. I wonder whether any monographs on this even exist?

I didn't find any... I only came across one on forests, by Breiman, the creator of the random forest.

 
Yuriy Asaulenko:

...

I'd like something big and detailed.)


... and thorough.


Zykov, A. A. Fundamentals of Graph Theory. Moscow: Nauka, Main Editorial Office for Physical and Mathematical Literature, 1987.

A systematic introduction to graph theory, organized according to the internal logic of its development.


There are links around the web where you can download it.

 

Why not a tractor assembly manual?

 

A new version of the library for connecting Python to MT5 has been posted. Here is the link again: https://github.com/RandomKori/Py36MT5. But there are problems. In Visual Studio the test project works as it should, but in MT there are some unclear issues. The library now works correctly with the directory where the Python script is located. I don't know how to debug it under MT; MT is protected from the debugger. Maybe someone knows how to debug it?

 
Maxim Dmitrievsky:

Why not a tractor assembly manual?


Is that your idea of a joke?

Man, I give you useful information, and in response... you're rude like a teenager, and you think you're the ultimate wit... it's pathetic.

One book was probably enough for you, like for some of the characters here...

 
Oleg avtomat:

Is that your idea of a joke?

Man, I give you useful information, and in response... you're rude like a teenager, and you think you're the ultimate wit... it's pathetic.

One book was probably enough for you, like for some of the characters here...


what's useful in there? how to build a graph tree? very useful...

and you have to read a whole book for that?

 
Maxim Dmitrievsky:

what's useful in there? how to build a graph tree? very useful... stick, stick, cucumber

and you have to read a whole book for that?


That's why you keep skimming over the surface: you have no solid knowledge, and you don't want any. You lack knowledge and understanding, and one book and a few articles you once read are not enough for that.

 
Oleg avtomat:

That's why you keep skimming over the surface: you have no solid knowledge, and you don't want any. You lack knowledge and understanding, and one book and a few articles you once read are not enough for that.


How to live, how to live... panic-panic... go learn the multiplication table and the theory and ontology of knowledge

 
Yuriy Asaulenko:
Thank you. I wonder whether any monographs on this even exist?

Stop fooling around and take up R: there, code must be accompanied by a link to a source that describes the theory behind the code.

Here are references to Breiman's classical algorithm:

Breiman, L. (2001), Random Forests, Machine Learning 45(1), 5-32.

Breiman, L. (2002), "Manual On Setting Up, Using, And Understanding Random Forests V3.1", http://oz.berkeley.edu/users/breiman/Using_random_forests_V3.1.pdf.


Also, if one uses R, a wide variety of forests is already collected there, and one would see that besides randomForest there are other forests, refining a wide variety of nuances of the original idea.

For example, randomForestSRC, randomUniformForest.

The most interesting and efficient algorithm of the same breed is ada.

Here are the references (all taken from the documentation of the R packages):

Friedman, J. (1999). Greedy Function Approximation: A Gradient Boosting Machine. Technical Report, Department of Statistics, Stanford University.

Friedman, J., Hastie, T., and Tibshirani, R. (2000). Additive Logistic Regression: A Statistical View of Boosting. Annals of Statistics, 28(2), 337-374.

Friedman, J. (2002). Stochastic Gradient Boosting. Computational Statistics & Data Analysis, 38.

Culp, M., Johnson, K., Michailidis, G. (2006). ada: an R Package for Stochastic Boosting. Journal of Statistical Software, 16.


There are several varieties of this ada.
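
For a sense of how little ceremony this takes, here is a minimal R sketch, assuming the randomForest and ada packages are installed; the two-class cut of the built-in iris data and all parameter values are placeholders for illustration, not anything from the posts above.

# Minimal sketch: Breiman's random forest and the ada boosting
# variant on a two-class toy problem (placeholder data/settings).
library(randomForest)
library(ada)

d <- droplevels(iris[iris$Species != "setosa", ])  # two classes only

set.seed(42)
rf <- randomForest(Species ~ ., data = d, ntree = 500)
print(rf)                         # includes the OOB error estimate

fit <- ada(Species ~ ., data = d, iter = 50)
print(fit)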


And here are R's own thematic selections (the CRAN task views).

On trees:

  • Random Forests: The reference implementation of the random forest algorithm for regression and classification is available in package randomForest. Package ipred has bagging for regression, classification and survival analysis as well as bundling, a combination of multiple models via ensemble learning. In addition, a random forest variant for response variables measured at arbitrary scales based on conditional inference trees is implemented in package party. randomForestSRC implements a unified treatment of Breiman's random forests for survival, regression and classification problems. Quantile regression forests quantregForest allow to regress quantiles of a numerical response on exploratory variables via a random forest approach. For binary data, LogicForest is a forest of logic regression trees (package LogicReg). The varSelRF and Boruta packages focus on variable selection by means of random forest algorithms. In addition, packages ranger and Rborist offer R interfaces to fast C++ implementations of random forests (see the short ranger sketch below). Reinforcement Learning Trees, featuring splits in variables which will be important down the tree, are implemented in package RLT. wsrf implements an alternative variable weighting method for variable subspace selection in place of the traditional random variable sampling.
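
A minimal sketch of one of the fast C++ implementations mentioned above, assuming the ranger package; the data and settings are placeholders:

# Minimal sketch: random forest via ranger with Gini-based
# ("impurity") variable importance (placeholder data/settings).
library(ranger)

set.seed(42)
rg <- ranger(Species ~ ., data = iris, num.trees = 500,
             importance = "impurity")
rg$prediction.error                     # OOB misclassification rate
sort(rg$variable.importance, decreasing = TRUE)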

On trees' very close relatives:

  • Boosting and Gradient Descent: Various forms of gradient boosting are implemented in package gbm (tree-based functional gradient descent boosting). Package xgboost implements tree-based boosting using efficient trees as base learners for several and also user-defined objective functions (see the short xgboost sketch below). The Hinge-loss is optimized by the boosting implementation in package bst. Package GAMBoost can be used to fit generalized additive models by a boosting algorithm. An extensible boosting framework for generalized linear, additive and nonparametric models is available in package mboost. Likelihood-based boosting for Cox models is implemented in CoxBoost and for mixed models in GMMBoost. GAMLSS models can be fitted using boosting by gamboostLSS. An implementation of various learning algorithms based on Gradient Descent for dealing with regression tasks is available in package gradDescent.
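
Since xgboost comes up again further down the thread, here is a minimal sketch of its R interface, assuming the classic xgboost() entry point; the toy binary target and all hyperparameters are placeholders:

# Minimal sketch: tree-based gradient boosting with xgboost on a
# binary target (placeholder data and hyperparameters).
library(xgboost)

d <- droplevels(iris[iris$Species != "setosa", ])
X <- as.matrix(d[, 1:4])
y <- as.numeric(d$Species) - 1          # 0/1 labels

set.seed(42)
bst <- xgboost(data = X, label = y, nrounds = 50,
               objective = "binary:logistic",
               max_depth = 3, eta = 0.1, verbose = 0)
head(predict(bst, X))                   # predicted class-1 probabilities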

There are also wrappers; for example, a very interesting one for Maxim, with predictor-estimation algorithms:

  • CORElearn implements a rather broad class of machine learning algorithms, such as nearest neighbors, trees, random forests, and several feature selection methods (a short sketch follows below). Similarly, package rminer interfaces several learning algorithms implemented in other packages and computes several performance measures.
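
The predictor-estimation side of CORElearn is a single function call; a minimal sketch, where the estimator name and data are just illustrative choices:

# Minimal sketch: scoring predictors with CORElearn::attrEval;
# "ReliefFequalK" is one of many available estimators (Gini,
# InfGain and others also exist).
library(CORElearn)

scores <- attrEval(Species ~ ., data = iris,
                   estimator = "ReliefFequalK")
sort(scores, decreasing = TRUE)         # higher = more informative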



And when I write that you are using home-made village crafts, I mean exactly the following circumstances:

  • a large number of users
  • that large number of users has debugged the code well
  • that large number of users has documented it well
  • that large number of users has scrutinized the theory
  • that large number of users has produced publications, from mutual disputes to practical applications.
 

AdaBoost is no better than bagging for forex because it overfits badly, especially on high-dimensional data... moreover, it is already obsolete within its own class; there is xgboost, and the rest is still a long way behind :)

I don't really believe in importances (predictor-importance estimates) for forex either... but it's good to get familiar with them for general education, for example by bolting Gini onto ALGLIB
