Machine learning in trading: theory, models, practice and algo-trading

Any action, from visually inspecting charts and hunting for patterns to training neural networks, is nothing but trading on statistics, the very statistics that supposedly does not work in the market. Do you see what I am getting at?
The market moves against the crowd's trades; the crowd acts on statistics; so all you need is to predict the crowd's future actions and do the opposite, and the only way to predict them is statistics.
If by statistics we mean tools that work ONLY on stationary random processes, then such statistics does not work, because financial markets are non-stationary processes, on which the much-loved concepts of "average", "correlation" and so on are meaningless.
But machine learning is usually classed not under statistics but under artificial intelligence.
As for Mihail Marchukajtes' idea, which I also came up with a few days before it appeared in this thread: maybe someone will be interested in the result. I think this approach is also correct and even viable. I noticed a technical pattern that works from time to time. The pattern is purely a sell pattern (but I let the net buy as well, just for fun). I coded it up, and when the price reaches a certain point "X" in the pattern, I let the net decide to buy, sell or rest. The net does not analyze all quotes constantly; it acts only when that condition is reached.
The target had three classes, i.e. when point "X" is reached the network sets mental (virtual) stop-losses and take-profits for a buy and for a sell:
buy - if the buy take-profit was hit and the buy stop was not knocked out
sell - if the sell take-profit was hit and the sell stop was not knocked out
rest - if both the buy and the sell were stopped out and neither take-profit was hit
The take was 2 or 3 times the stop; I don't remember exactly, I think it was 3 times.
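For concreteness, here is a minimal sketch in R of that three-class labeling. The function name, point sizes and look-ahead horizon are my own assumptions for illustration, not the original poster's code:

# Hypothetical sketch of the buy/sell/rest labeling described above.
# `price` is a numeric vector of quotes, `i` the bar where point "X" is hit;
# stop/take sizes and horizon are illustrative placeholders.
label_at_x <- function(price, i, stop = 10, take = 3 * stop, horizon = 100) {
  if (i >= length(price)) return("rest")
  entry <- price[i]                                        # point "X"
  path  <- price[(i + 1):min(i + horizon, length(price))]
  first_hit <- function(cond) {                            # first bar a level is touched
    k <- which(cond); if (length(k)) k[1] else Inf
  }
  buy_take  <- first_hit(path >= entry + take)
  buy_stop  <- first_hit(path <= entry - stop)
  sell_take <- first_hit(path <= entry - take)
  sell_stop <- first_hit(path >= entry + stop)
  if (buy_take  < buy_stop)  return("buy")   # buy take hit before its stop
  if (sell_take < sell_stop) return("sell")  # sell take hit before its stop
  "rest"                                     # otherwise: stay out
}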
Although in real trading the network performed much worse than on validation (about 63% correct answers on validation versus about 20% in real trading), the algorithm was nevertheless profitable,
though more often it sold all the same; the pattern itself is a short one, so the buys, unsurprisingly, were neither accurate nor profitable.
And what if we coded up 10 such patterns instead of one? Interesting, no? ;)
Anything used by the vast majority would then count as statistics, if I understood the remark correctly.
And I'll add one more thing up front:
You write about non-stationarity, markets and machine learning, but did you know that there are generally accepted tools for forecasting non-stationary processes, and there are not many of them: GMDH (the group method of data handling), hidden Markov models, and recurrent neural networks (maybe I'm wrong about the recurrent nets, maybe they are just for time series).
Ordinary neural networks, forests of all kinds, etc. are NOT designed for non-stationary data, so why do all of us, including me, use tools outside their intended purpose? A question.)
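For illustration, a minimal hidden-Markov sketch in R with the depmixS4 package. The two-state setup and the toy data are my assumptions, not something from the thread:

# Toy regime detection with a 2-state Gaussian HMM (package depmixS4).
# The data are synthetic: a calm half followed by a volatile half.
library(depmixS4)

set.seed(1)
r <- c(rnorm(500, 0, 0.5), rnorm(500, 0, 2))  # toy "returns"
d <- data.frame(r = r)

mod  <- depmix(r ~ 1, data = d, nstates = 2, family = gaussian())
fm   <- fit(mod)            # EM estimation of the hidden states
post <- posterior(fm)       # most likely state for every bar
table(post$state[1:500])    # the calm half should sit mostly in one state

The point is not this toy example but the class of tool: the model is allowed to switch regimes instead of assuming one stationary distribution for the whole series.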
100% for a tree is absolute nonsense!
Even if all your predictors are noise, it is hard to avoid a very good-looking result: the error will still be only 3%-5%. Noise always gives very good results under all the crossvalidations and other tricks.
An accuracy of 100% means only one thing: among the predictors you have a duplicate of the target variable (some modification of it). That is, the model looks into the future.
I remember an example on your data from ALL_cod.RData, the TF1 dataset or something like that: with the first target variable the forest gave almost 100% even with a small number of trees, and by increasing the number of trees the accuracy rose to an absolute 100%. A forest can memorize every single training example if its parameters are large enough for that.
With mytarmailS it is the opposite: a forest with a small number of parameters gives good results, but as the number of trees grows, accuracy drops. He doesn't use crossvalidation, so we are talking about the training data itself. It doesn't work that way: the accuracy of a forest on its training data can only fall as its parameters shrink, not the other way around. How is that possible?
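The memorization point is easy to demonstrate. A sketch in R with the randomForest package, on deliberately random data of my own making:

# A forest can memorize pure noise: refit accuracy on the training rows looks
# near-perfect, while out-of-bag (honest) accuracy stays at coin-flip level.
library(randomForest)

set.seed(1)
n <- 500
x <- as.data.frame(matrix(rnorm(n * 20), n, 20))          # 20 noise predictors
colnames(x) <- paste0("p", 1:20)
y <- factor(sample(c("buy", "sell"), n, replace = TRUE))  # random target

rf <- randomForest(x, y, ntree = 500)
mean(predict(rf, x) == y)   # ~1.0: predictions on the training rows (memorized)
mean(predict(rf) == y)      # ~0.5: out-of-bag predictions, no real skill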
A heavy bell.
And on its very edge
a butterfly dozes. (A Japanese hokku.)
I have been watching this thread from the audience and cannot decide: do I not understand it, or do I simply not need it?
On the one hand: machine learning, artificial intelligence, neural networks. The main issue, as I understand it, is the art of identifying and combining precedents (regularities) and predictors (prognosticators). On the other hand: the market is a substance with an infinite number of factors, and the question is whether they can be divided into major and minor ones at all.
A few sketches. 2011. Japan. Fukushima. The tsunami caused an accident at the nuclear power plant. Whether the tsunami was caused by an earthquake or by the flap of a butterfly's wing does not matter, according to chaos theory. What matters is that it was impossible to predict, as was its effect on the market. One would expect: an accident, evacuation of the population, radiation, get off the island. But no. The accident happened on March 11, and on March 16 the Nikkei index showed unprecedented growth. It turned out that the Japanese did not flee like rats from a sinking ship; on the contrary, they began returning capital to the motherland to help it recover.
A year ago. Germany. Wolfsburg. The VW plant, and the city itself, were built on Hitler's orders to create the German people's car. Here a programmer played the butterfly, programming the diesel engines to suppress harmful emissions only during bench tests. Scandal. VW stock plummeted. The DAX was rattled.
These days. Japan again. Nintendo shares are climbing; its capitalization now exceeds, for example, U.S. arms exports. Who would have thought that something like "Pokémon Go" would become so popular?
This thread discusses systems based on weekly, monthly and even yearly data, searching for the one main, stable forex signal. It puzzles me. Within a single day some butterfly, a statistics release, a statement can reshape the market to any degree. Building a stable system that works for even a week is about as likely as building a Boeing out of parts found in a dumpster.
What was the point of that post about the forest? I don't get it.
Your misunderstanding goes to the very foundations of the subject.
1. Everything you write is absolutely true for extrapolation-type forecasts, which work for stationary time series. You cite real examples that testify to the non-stationarity of financial series, and there you are absolutely right. Moreover, news is not the only cause of non-stationarity.
2. Here, however, we are discussing forecasts based on classification, which does not take the previous state into account when forecasting the next bar. Predictions based on classification are predictions based on patterns. If some past news caused a change that does NOT follow from previous values (cannot be extrapolated), the classification will still capture that change as a pattern, and if a similar change (not identical, but similar) occurs in the future, it will be recognized and a correct prediction will be made.
So when classifying, Pokémons are not scary.
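To make point 2 concrete, a minimal sketch in R of a pattern-based classifier: it predicts the next bar's direction from a window of recent returns rather than extrapolating the price level. The window length, the toy data and the choice of randomForest are my illustration, not a claim about what anyone in the thread actually runs:

# Classify the sign of the next return from the k previous returns ("pattern"),
# instead of extrapolating the series itself.
library(randomForest)

set.seed(2)
p <- cumsum(rnorm(2000))                       # toy price series (random walk)
r <- diff(p)
k <- 5                                         # pattern = last 5 returns
M <- embed(r, k + 1)                           # row: r[t], r[t-1], ..., r[t-k]
y <- factor(ifelse(M[, 1] > 0, "up", "down"))  # class: sign of the next return
X <- as.data.frame(M[, -1])                    # features: the k previous returns

train <- 1:1500
rf <- randomForest(X[train, ], y[train], ntree = 300)
mean(predict(rf, X[-train, ]) == y[-train])    # out-of-sample hit rate
# On this random-walk toy the hit rate is ~0.5, as it should be; the scheme
# only pays off if real patterns actually repeat, which is the whole argument.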