Machine learning in trading: theory, models, practice and algo-trading - page 3727
In articles on applying ML to markets, do the authors report the result of applying the described techniques to random series?
I don't remember it in the articles, but I ran the experiment myself. On new data the result is, naturally, 50/50.
The model will memorise the random data it was trained on and will often give correct answers there, but on new random data it will be 50/50.
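A minimal sketch of that experiment (using scikit-learn as an illustrative stand-in; the posts above refer to Alglib): train a forest on purely random labels, then compare its score on the training set against fresh random data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 10 random features, random 0/1 labels: no signal at all
X_train = rng.normal(size=(1000, 10))
y_train = rng.integers(0, 2, size=1000)
X_test = rng.normal(size=(1000, 10))
y_test = rng.integers(0, 2, size=1000)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

train_acc = model.score(X_train, y_train)  # near 1.0: memorisation
test_acc = model.score(X_test, y_test)     # near 0.5: no edge
print(train_acc, test_acc)
```

The fully grown trees memorise the training rows, so the training score looks excellent while the out-of-sample score stays at coin-flip level.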
Think of ML as a database in which some of the data has been merged and averaged. For example, when a tree is split, a single leaf may contain 100 rows, of which 80 give the same answer (0/1). The algorithm chooses the best way to divide the data into two parts/leaves: as many 0s as possible on one side and as many 1s as possible in the other leaf. It might end up with, say, a leaf holding 85 identical answers out of 97.
It is possible to grow the tree down to 1 example per leaf (the forest in Alglib does this), and then the training accuracy will be almost 100%. Less than 100% because in reality not all rows can be separated: there may be no features that would allow it, and some leaves will still contain rows with different answers. The more features there are, the better the memorisation accuracy.
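A hedged sketch of that contrast, using scikit-learn's DecisionTreeClassifier as an illustrative substitute for the Alglib forest: an unrestricted tree splits random data down to near-pure leaves, while forcing at least 100 rows per leaf (as in the example above) caps how much it can memorise.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))       # continuous features, no duplicates
y = rng.integers(0, 2, size=2000)    # random labels

# Unrestricted tree: splits until leaves are (almost) pure -> memorisation
full = DecisionTreeClassifier(random_state=0).fit(X, y)

# Restricted tree: at least 100 rows per leaf, majority vote inside each leaf
coarse = DecisionTreeClassifier(min_samples_leaf=100, random_state=0).fit(X, y)

print(full.score(X, y))    # close to 1.0
print(coarse.score(X, y))  # only modestly above 0.5
```

With continuous features every row can be isolated, so the unrestricted tree scores near 100% on random labels; the 100-row leaves can only report the majority answer of each leaf.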
Maybe it makes sense to under-train models for market data: let it be 60% in the leaves on training data, but at least 55% on new data. I select the degree of training myself during optimisation.
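A sketch of that under-training idea on synthetic data with a deliberately weak signal (the 0.6 coefficient, the `min_samples_leaf=200` setting, and scikit-learn's RandomForestClassifier are my own illustrative assumptions, not the poster's setup): the fully grown forest nearly memorises the training set, while the coarser one trains to a lower score yet still beats chance out of sample.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)

def make_weak(n):
    """10 features; only the first carries a weak logistic signal."""
    X = rng.normal(size=(n, 10))
    p = 1.0 / (1.0 + np.exp(-0.6 * X[:, 0]))  # class prob tilted by feature 0
    y = (rng.random(n) < p).astype(int)
    return X, y

X_tr, y_tr = make_weak(5000)
X_te, y_te = make_weak(5000)

deep = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
shallow = RandomForestClassifier(n_estimators=50, min_samples_leaf=200,
                                 random_state=0).fit(X_tr, y_tr)

print(deep.score(X_tr, y_tr), shallow.score(X_tr, y_tr))  # deep memorises more
print(shallow.score(X_te, y_te))                          # above 0.5 on new data
```

The point is the spirit of "60% on training, 55% on new data": deliberately coarse leaves trade training-set accuracy for out-of-sample stability.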
In articles on applying ML to markets, do the authors report the result of applying the described techniques to random series?
It is practically standard to test the approach under study on random data with given probabilistic characteristics. It often takes the form of Monte Carlo simulation, which you once declared fit only for writing useless articles.
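As a sketch of what such a Monte Carlo check can look like (the SMA-crossover rule and window lengths are my own illustrative choices): run a simple trading rule over many simulated random walks with the desired volatility and confirm its average PnL is indistinguishable from zero.

```python
import numpy as np

rng = np.random.default_rng(3)

def sma_crossover_pnl(prices, fast=5, slow=20):
    """PnL of a long/short SMA-crossover rule on one price path (no lookahead)."""
    f = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    s = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    f = f[-len(s):]                      # align the two moving averages
    pos = np.where(f > s, 1, -1)         # +1 long, -1 short
    rets = np.diff(prices[-len(s):])     # next-bar price changes
    return float(np.sum(pos[:-1] * rets))

# Monte Carlo: many random walks with the specified step volatility
pnls = []
for _ in range(500):
    prices = 100 + np.cumsum(rng.normal(0, 1, size=500))
    pnls.append(sma_crossover_pnl(prices))

print(np.mean(pnls))  # hovers around zero: no edge on a pure random walk
```

If the same rule shows a mean PnL on real data that sits far outside the Monte Carlo distribution, that is at least weak evidence of structure beyond a random walk.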
It feels like a vicious circle.
Think of ML as a database in which some of the data has been merged and averaged.
Where exactly in a neural network should one look for that data?
Tree-based models are clearer in that respect: each leaf gives the average of the examples that fell into it.
Obviously in the neurons' weights and biases: multiply and sum, multiply and sum, and out comes the answer. It's hard to interpret; that's one of the reasons I switched to tree-based models.
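A minimal sketch of "leaves give the average", using scikit-learn's DecisionTreeRegressor as an illustrative stand-in: the prediction for a row equals the mean target of the training rows that landed in the same leaf.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = X[:, 0] + rng.normal(0, 0.1, size=500)

tree = DecisionTreeRegressor(min_samples_leaf=50, random_state=0).fit(X, y)

# tree.apply maps each row to the id of the leaf it falls into;
# the prediction for a row is the mean target of that leaf's training rows
leaf_ids = tree.apply(X)
leaf_mean = y[leaf_ids == leaf_ids[0]].mean()
print(tree.predict(X[:1])[0], leaf_mean)  # the two values coincide
```

So the "database" reading of a tree is literal: each leaf stores one averaged record built from the rows routed into it.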
Has anyone tried this?
https://www.mql5.com/en/articles/19519