Machine learning in trading: theory, models, practice and algo-trading - page 1492

Yes, then detect them in the future and choose a model that works well on this particular cluster
https://radfiz.org.ua/files/temp/Lab1_16/alglib-3.4.0.csharp/csharp/manual.csharp.html#example_mcpd_simple1
found examples here, search for "markov"
Interesting article on NN training on Habr (just a read): Restoring photos with neural networks
The most valuable part, as always, is the user comments: throughout the article the author talks about a major breakthrough in his work, with the usual comparison against third-party software, but users immediately pointed out the flaw: the dominance of green.
Conclusion: preparing the input data is more important than the NN training technology itself.
Sorcerer said this a billion pages ago.
Gentlemen, when will you learn to listen to each other? What is this redneck narrow-mindedness and stupidity? It is really annoying and infuriating.
A picture does not move or change, and the same goes for fingerprints; in pattern recognition, machine learning is indispensable.
But when ML is applied to historical data in forex, it only creates the illusion that it has found the best option.
In reality, it has simply learned from history to bypass all the dangerous areas, resulting in a great growth curve.
But pricing is a dynamic process that is always changing and moving forward, and no one can guess where the price will go.
Read all of this: https://habr.com/ru/post/443240/
If you want to know what I think, ML is no better and no worse than the GA-based strategy tester.
But all the same, the topic is very interesting, it's fascinating ))))
Here are some more examples https://csharp.hotexamples.com/examples/-/mcpd.mcpdreport/-/php-mcpd.mcpdreport-class-examples.html
Well, it turns out to be pretty easy to apply, judging by the 1st example.
We set the number of states and the sequences of transitions (historical ones, say), i.e. the probabilities of being in a given state, and then it computes the total probability for all states.
Or make a simple example on MAs (moving averages) for a start, but so far I still haven't figured out how to do it; maybe @mytarmailS will explain.
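A minimal sketch of the same idea in plain Python (this is not the ALGLIB MCPD API; the state labels and the history sequence are made up for illustration): estimate a transition-probability matrix by counting observed transitions in a historical state sequence and normalizing each row.

```python
# Estimate a Markov transition matrix from a historical state sequence
# by counting transitions and normalizing each row to probabilities.
# (Illustrative sketch only; ALGLIB's MCPD solver fits the matrix from
# aggregate "tracks" instead, but the underlying idea is the same.)

def transition_matrix(states, n_states):
    counts = [[0] * n_states for _ in range(n_states)]
    for a, b in zip(states, states[1:]):
        counts[a][b] += 1            # count each observed transition a -> b
    matrix = []
    for row in counts:
        total = sum(row)
        matrix.append([c / total if total else 0.0 for c in row])
    return matrix

# Hypothetical 2-state history: 0 = "flat" regime, 1 = "trend" regime
history = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
P = transition_matrix(history, 2)
```

Each row of `P` then answers "given that we are in state i now, with what probability do we land in state j next".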
Such a strategy with MAs (very simple) is described here
https://www.quantstart.com/articles/market-regime-detection-using-hidden-markov-models-in-qstrader
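As a self-contained sketch of the "simple example on MAs" asked about above (the prices are made up, and this is not the QSTrader code from the linked article): label each bar by whether a fast moving average is above a slow one, which yields a 2-state regime sequence that could then be fed to a Markov model.

```python
# Label market regimes with a fast/slow moving-average comparison:
# state 1 when the fast MA is above the slow MA ("up" regime), else 0.
# Prices are hypothetical, purely for illustration.

def sma(series, window):
    # Simple moving average; None until the window has filled.
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)
        else:
            out.append(sum(series[i + 1 - window:i + 1]) / window)
    return out

prices = [1.0, 1.1, 1.2, 1.15, 1.1, 1.0, 0.95, 1.0, 1.1, 1.2]
fast, slow = sma(prices, 2), sma(prices, 4)
states = [1 if f > s else 0
          for f, s in zip(fast, slow)
          if f is not None and s is not None]
```

The resulting `states` list is exactly the kind of discrete sequence the transition-counting or MCPD machinery expects as input.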
In ALGLIB, as I understand it (probably incorrectly), you have to assemble the tracks yourself for the required number of clusters. In those Python examples, you just specify the desired number of clusters, and it redistributes the data itself.
Although with classification into 2 classes, you could probably build chains like this: starting with 0 until it becomes 1, and starting with 1 until it becomes 0, since we have no intermediate values like 0.95, 0.8, etc.
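A sketch of the run-based chains described above (the 0/1 sequence is invented for illustration): split a hard-classified 0/1 series into runs, "starting with 0 until it becomes 1" and vice versa. This only works because the outputs are hard labels with no intermediate 0.95 or 0.8.

```python
# Split a binary classification output into runs: each run starts with
# one class label and lasts until the label flips to the other class.
from itertools import groupby

def runs(labels):
    # Returns a list of (label, run_length) pairs.
    return [(k, len(list(g))) for k, g in groupby(labels)]

signals = [0, 0, 0, 1, 1, 0, 1, 1, 1, 0]  # hypothetical classifier output
chains = runs(signals)
```

The run lengths themselves could then be treated as the "tracks" mentioned above, one chain per class.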
I don't understand how that works at all. Then why not just use classification via a neural network?
I also don't understand how it outputs results in Python, including on new data, or how to get the predicted state on new data in ALGLIB, and for each dimension separately. That's too much to take in at once.
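On the question of getting a predicted state on new data: once a transition matrix has been estimated (by MCPD or by simple counting), the one-step prediction is just the current state distribution multiplied by that matrix. A generic sketch with a made-up 2-state matrix, not tied to any particular library:

```python
# One-step Markov prediction: next distribution = current distribution x P.
# The transition matrix P here is invented purely for illustration.

def next_distribution(dist, P):
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

P = [[0.9, 0.1],   # from state 0: stay with prob 0.9, switch with 0.1
     [0.3, 0.7]]   # from state 1: switch with prob 0.3, stay with 0.7
current = [1.0, 0.0]              # we are certainly in state 0 now
pred = next_distribution(current, P)
```

Applying `next_distribution` repeatedly gives multi-step-ahead forecasts; the predicted state is simply the index with the highest probability.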
And it looks like ALGLIB has something else there, a different model.