Discussion of article "Thomas DeMark's Sequential (TD SEQUENTIAL) using artificial intelligence"
So where is the "artificial intelligence" you claim? Where is the neural network?
1. Two NN directions? The problems being solved are (mostly) two: regression and classification. Less frequently we use clustering and ranking. And there are dozens, if not hundreds, of neural network types. Which kind of neural network did you use?
2. Overfitting is not only well defined, but methods have been developed to reduce the probability of its occurrence. For neural networks these are regularisation (L1/L2) and stabilisation (dropout, dropconnect and many others); see the first sketch after this list. Therefore, to paraphrase a well-known expression: all models overfit, but some are far less likely to do so.
3. Classifiers can be "hard", i.e. never refusing to predict, and "soft", which may refuse to predict and say "I don't know". A "hard" classifier becomes "soft" after calibration (see the second sketch after this list). There are other ways to "soften" a classifier.
4. Well, the recommendation to reverse signals after the first error at the beginning of the day is super.
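To make point 2 concrete, here is a minimal sketch (Python/PyTorch, my illustration, not the article's code; layer sizes and rates are arbitrary) of the two techniques named above: an L2 penalty via the optimiser's weight_decay, and a dropout layer.

```python
import torch
import torch.nn as nn

# A small binary classifier; the Dropout layer randomly zeroes
# activations during training, which stabilises the network.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

# weight_decay adds an L2 penalty on the weights; an L1 penalty
# would instead be added as an explicit term in the loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
```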
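And for point 3, a sketch of how a "hard" classifier can be "softened": wrap it in probability calibration, then abstain whenever the top class probability falls below a threshold. The dataset and the 0.7 threshold are illustrative.

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LinearSVC is a "hard" classifier: it has no predict_proba of its own.
# Calibration wraps it so that it outputs class probabilities.
clf = CalibratedClassifierCV(LinearSVC(dual=False)).fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)
# "Soften": answer -1 ("I don't know") when confidence is below 0.7.
pred = np.where(proba.max(axis=1) >= 0.7, proba.argmax(axis=1), -1)
```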
If it were not for the author's numerous preliminary announcements promising a revolution in the use of neural networks, one could pass this by.
But as it stands, for me, it's a fail.
Well... excellent!!! The first comments are in, and it's good to see them.
1. Yes indeed, there are many types of NN, but the main directions are two: classification and prediction. Clustering applies when there are not just two classes, 0 and 1, but more; that is a different level of problem.
2. Indeed, nothing stands still, and there are already methods to reduce overfitting, but the degree of overfitting is not easy to identify.
3. A committee of two networks is used. Each network separately "never refuses to predict", as you put it, but we are only interested in the moments when the committee simultaneously says "Yes" or "No" (a sketch of this follows the list). Committees of networks have been known for a long time and are widely used. According to their author, Yuri Reshetov, a committee has a higher generalisation ability than a single network.
4. Yes indeed, this should be done with caution, because by reversing the network you can be wrong a second time. The point is that we compare two signals, the current and the previous one, and knowing the result of the previous one, we can draw a conclusion about the current signal and whether it is true. Indeed, the network can simply make a mistake!
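A minimal sketch of the two-network committee rule described in point 3 (my illustration, not the article's code; the function name and threshold are hypothetical): the committee acts only on unanimous votes and abstains otherwise.

```python
def committee_signal(p_buy_net1: float, p_buy_net2: float,
                     threshold: float = 0.5) -> int:
    """Two-network committee: act only on unanimous votes.

    Returns +1 (buy) when both networks say "Yes", -1 (sell) when both
    say "No", and 0 (abstain) when they disagree.
    """
    yes1 = p_buy_net1 > threshold
    yes2 = p_buy_net2 > threshold
    if yes1 and yes2:
        return 1
    if not yes1 and not yes2:
        return -1
    return 0
```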
Well, I didn't promise any revolution, only a method for building a TS CORRECTLY, because sometimes people write such heresy that it gets a bit scary!
Sorry, but the article is very bad, really, really... alas. It is an advertisement for Reshetov's optimiser and nothing more; nothing useful. Take as an example Mr Fomenko's article about forests and trends: although aimed at beginners, it is very informative. Yours is done very badly, with all due respect.
I'll be honest, this is my first attempt. What exactly didn't you like? Can you be more specific? Maybe something is unclear? After all, the point of the article was to describe methods that anyone can use when building their own TS; you don't have to adopt my particular TS.
1. A classifier can predict 10, 100 or 1000 classes and it remains a classifier. Clustering is the division of an unlabelled data set into groups based on certain attributes.
2. Determining the moment when overfitting sets in is not merely easy, it is very easy (see the sketch after this post).
3. Indeed, a committee of models gives the best result (though not always!). But in your case it is not really a committee.
Good luck
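If point 2 means what it usually means in practice, the standard recipe is to hold out a validation set and find the epoch where the validation score stops improving. A minimal sketch with scikit-learn (the data and network size are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# early_stopping holds out validation_fraction of the data and stops
# once the validation score fails to improve for n_iter_no_change epochs.
clf = MLPClassifier(hidden_layer_sizes=(64,), early_stopping=True,
                    validation_fraction=0.2, n_iter_no_change=10,
                    max_iter=500, random_state=0)
clf.fit(X, y)

# Per-epoch validation scores; the epoch where they peak marks the
# onset of overfitting.
print(clf.validation_scores_)
```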
Unfortunately, my criticism refers not to the literary quality of the article, which is quite normal, but to its semantic content. You have not said anything interesting.
You take some random "DeMark" money-drainer and want to teach it to earn with the help of a NN; this is a dead-end approach. Instead of DeMark you could insert any indicator and the meaning would not change, so what is the point? And if this particular filter is a good feature, you have not explained why. And everything that is actually about ML and feature engineering is supposedly handled by Reshetov's optimiser... mmm... well, that makes this an article for a specific audience, for "meat"; but the "meat" cannot even work out how to run a jar, they just need to press run in MT and watch the account drain ))).
Explain succinctly, with pictures and formulas, how Reshetov's optimiser works, since the whole point is in it: how it builds features, how it classifies, and why it does this better than one or another mainstream method of feature extraction and classification, with examples and datasets so that everyone can verify it. In short, rework it; for now it is bad.
And throw out all the lyricism, especially the comparisons of Reshetov with Stradivarius, otherwise it looks as if you and Reshetov are one and the same person; nobody even praises LeCun like that ))))) I downloaded Reshetov's sources, but I haven't got around to working out the algorithm from them; that could take up to a week, and I don't have that much time for what is probably a strange algorithm, which for some reason refuses to eat even two-feature datasets ))))) I wanted to see what partition mask it produces for 2-dimensional distributions...
I said in the article that you can take any TS as a basis; I took DeMark for the reasons described there (there is a window, and signals at peaks and troughs). The point is to build a polynomial that generalises the output variable, and it makes no difference what tool you build it with. The main thing is the input data, which should be a cause of the price, since the work of any TS is ultimately interpreted against the price.
I came across this resource after finishing the article. There the workings of the optimiser are explained more clearly. It will be very useful for AI developers, which, please note, I am not: https://sites.google.com/site/libvmr/home/theory/method-brown-robinson-resetov
1. I agree; in that case, the difference between classification and clustering is learning with a teacher versus without one (supervised versus unsupervised).
2. It would be interesting to hear from a future Nobel laureate in overfitting, because the question is genuinely interesting, and not only to me. By what method? And is it possible to detect the degree of overfitting?
3. What makes you so sure that this is not a committee? How so? Interesting...
OK, you're implying that if Reshetov is a Stradivarius, then you are none other than Mozart; let's check it!
I propose this: I'll give you a dataset, you train on it and send me a trained classifier (a jar or a serialised model, it doesn't matter, as long as it can be used in a couple of clicks), and I'll run it on a test set you haven't seen. If the classifier is worthwhile (comparable with XGB, for example), then we'll continue the conversation about Reshetov's creations; I'll simply parse the code of his sources, since that will be easier than working through the Brown-Robinson method, the Shapley vector, and so on.
I will publish the results together with the data and your model. For now it is just another black box with plain advertising, of which there are tens of thousands, and I'm sorry, but I cannot afford to spend a week analysing it without proof that it is no worse than XGB.
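A sketch of the proposed protocol, assuming a CSV with a "target" column (the file name, split and XGB settings are placeholders): train on a shared set, hold out a test set the model's author never sees, and score the submitted classifier against an XGBoost baseline on the same hold-out.

```python
import pandas as pd
import xgboost as xgb
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

data = pd.read_csv("dataset.csv")  # hypothetical file name
X, y = data.drop(columns=["target"]), data["target"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0)

# Baseline; the submitted classifier would be scored on the same X_te.
baseline = xgb.XGBClassifier(n_estimators=200, max_depth=4)
baseline.fit(X_tr, y_tr)
print("XGB test accuracy:", accuracy_score(y_te, baseline.predict(X_te)))
```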
Great plan!!! I've been wanting to try this for a while. You send me a file for training, I'll send you back a model, then you test it and post the results. OK?
Hmm, "Mozart" sounds proud. However, it is the preparation of the data and the choice of the output variable that play the MAIN role. How do I know that your inputs describe the output well? BUT, as they say, the optimiser itself will determine how well your inputs match the output. The point is this: I can select inputs that interpret the output well on the training set but perform poorly out of sample (OOS). That suggests the input is not a cause of the output. It is another matter when the input really is a cause of the output; then the network's performance on the training set and OOS will be about the same. Please take this into account.
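The check described above reduces to comparing in-sample and out-of-sample accuracy; a small gap is consistent with inputs that genuinely explain the output. A minimal sketch (all argument names are placeholders for any fitted scikit-learn-style classifier):

```python
from sklearn.metrics import accuracy_score

def train_oos_gap(model, X_train, y_train, X_oos, y_oos):
    """Compare a fitted model's accuracy on training and OOS data.

    A large positive gap suggests the inputs merely fit noise rather
    than being a cause of the output.
    """
    train_acc = accuracy_score(y_train, model.predict(X_train))
    oos_acc = accuracy_score(y_oos, model.predict(X_oos))
    return train_acc, oos_acc, train_acc - oos_acc
```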

New article Thomas DeMark's Sequential (TD SEQUENTIAL) using artificial intelligence has been published:
In this article, I will tell you how to trade successfully by merging a very well-known strategy with a neural network. It will be about Thomas DeMark's Sequential strategy with the use of an artificial intelligence system. Only the first part of the strategy will be applied, using the Setup and Intersection signals.
In other words, we obtained a steadily losing anti-model, reversed it and got a profitable model! I call this method "model orientation". As a rule, it is carried out on a single signal: it is enough to wait for one buy signal and one sell signal at the beginning of the day, orient them, and use them for work. In this way, at least 3-4 signals are obtained per day. The point is not to check all past signals and their performance; instead, the two latest signals are compared with each other, it is checked whether they belong to the same group, and the action to take is derived from the known result of the previous signal. At the same time, do not forget that the neural network may produce an error.
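One possible reading of this "model orientation" rule, as a minimal sketch (an interpretation of the paragraph above, not the article's code; the signal encoding and names are illustrative): keep the current signal if the previous signal of the same group proved correct, reverse it otherwise.

```python
def orient_signal(current: int, previous: int, previous_correct: bool) -> int:
    """Orient the current signal (+1 buy, -1 sell) by the previous result.

    If the two latest signals belong to the same group, trust the current
    one only when the previous one worked; otherwise reverse it. Signals
    from different groups are taken as given. The network may still err.
    """
    if current == previous:  # same group
        return current if previous_correct else -current
    return current           # different group
```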
Fig. 2. Indicators BuyVOLDOWNOPNDOWN.mq5 and SellVOLDOWNOPNDOWN.mq5
Author: Mihail Marchukajtes