Discussion of article "Neural Networks Cheap and Cheerful - Link NeuroPro with MetaTrader 5" - page 3

This is about the article itself, not about NS in general. What's the catch? The number of coefficients being fitted is comparable to the amount of history.
Take the number of coefficients equal to the amount of history and the fit becomes simply perfect: not a single losing trade, the maximum possible squeezed out of the history.
If building a NS comes down to fitting a wild number of coefficients, nobody needs such a "good thing".
There is one other use here: lossy compression of information. There was a lot of history; now there are fewer coefficients that roughly describe it. On the other hand, there are plenty of compression algorithms, even lossless ones, that do this far better.
You probably just did not read the article carefully. The number of inputs is 24 (one-hour timeframe), there are 20 neurons per layer and 3 layers.
The history in the example is 5k bars. Set 10k bars and the number of coefficients stays the same.
If you do not understand what we are talking about, then you really do not need it.
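A rough back-of-the-envelope check of that claim (an editorial sketch, not taken from the article): assuming three hidden layers of 20 neurons, 24 inputs, a single output and bias terms, the coefficient count is fixed by the architecture alone and does not depend on the length of the history.

```python
# Hypothetical layer sizes: 24 inputs, three hidden layers of 20 neurons, 1 output.
# The single output and the presence of biases are assumptions, not taken from the article.
layers = [24, 20, 20, 20, 1]

total = sum(n_in * n_out + n_out            # weights plus biases for each layer
            for n_in, n_out in zip(layers, layers[1:]))

print(total)  # ~1.4k coefficients, whether the history is 5k or 10k bars
```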
You can fool yourself as much as you like!
Look at the source code and count the coefficients being adjusted. The description of the NS is blah-blah-blah; the essence is in the source code.
Double the amount of history and watch the performance figures collapse, and so it goes with every further increase.
The feigned ecstasy over the article comes from the result it shows. That this result is obtained in a dreadful way is something nobody mentions.
Let's make it simple. I give you the source code of an Expert Advisor with a thousand coefficients, plus a comparable piece of history. I will not say whether it is a NS or something else: just the source and a piece of history.
Will your opinion of this crap change the moment I tell you it is a NS or some advanced scientific method? Look at the bottom line.
Take the unique dimeon. His Expert Advisor contains no more than a dozen adjustable coefficients, while the amount of history is thousands of times larger than what is used to tune them. So the NS built into dimeon's head can sometimes produce great results. That is why I am not raging against all NS; it is the article that misleads the reader.
Our cool scalper, on the other hand, does not use the neural principle of building a trading algorithm at all, nor the primitive add-and-multiply principle used in a NS. Perhaps that is the reason for the striking difference between his results and the classics in the form of NS.
Funny name: ENCOG - machine learning... That's a good one.
The tools listed here are only a part of machine learning.
Laughter without reason is a sign of stupidity © Folk saying
For especially gifted machine learning experts:
1. There is no problem hooking CRAN up, none at all. It has all been in the CodeBase for more than two years.
2. The quantity reflects the diversity of approaches and the rapid development. The quality of the packages in CRAN is excellent.
3. WEKA is just one of many. If we are talking about machine learning packages that can actually be used in trading, then caret, and Rattle to begin with. Starting from scratch, you can get it up and running in about 15 minutes. I posted the results of comparing NS and random forests above; NS gives more than modest results. I even wrote an article about it. Try Rattle: pick two or three packages at most and you will be happy, and you will abandon your NS forever. To start with, I can recommend one more attachment.
2. The quantity shows the diversity of approaches and the rapid development. The quality of packages in CRAN is excellent.
Rather the opposite: some packages simply duplicate methods from other packages. For example, all the SVM implementations are just ports of the same Taiwanese libsvm library, so it makes absolutely no difference whether the SVM sits in CRAN, Weka, Encog or any other package: with the same settings the results will be identical.
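A small sketch of that point (my own illustration, on synthetic data): scikit-learn's SVC is itself a thin wrapper around libsvm, so with the same kernel, C and gamma it should reproduce what e1071 in R, Weka or any other libsvm port returns on the same data.

```python
# Illustrative only: synthetic data, arbitrary hyperparameters.
from sklearn.datasets import make_classification
from sklearn.svm import SVC  # wraps libsvm internally

X, y = make_classification(n_samples=500, n_features=10, random_state=1)

# The same kernel/C/gamma passed to any other libsvm port should give the same model.
model = SVC(kernel="rbf", C=1.0, gamma=0.1)
model.fit(X, y)
print(model.score(X, y))
```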
If we are talking about the choice of machine learning packages that can be used in trading, then caret.
Once again, you should choose specific tools for specific tasks. Trading is just a blanket term for many market strategies and tactics, which is why it is impossible to lump everything together under one approach.
I posted the results of comparing NS and random forests above.
Those are not results but bullshit, like the average temperature across a hospital, fitted to the training sample.
Results are when, at a minimum, the sample is split into training and test sets and, at best, cross-validation is applied.
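For the record, a minimal sketch of the kind of evaluation being asked for, with synthetic data standing in for whatever features and labels one would actually build from price history:

```python
# Hold out a test set and cross-validate instead of scoring on the training sample.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=7)

clf = RandomForestClassifier(n_estimators=200, random_state=7).fit(X_train, y_train)
print("train    :", clf.score(X_train, y_train))   # optimistic: the model has seen this data
print("test     :", clf.score(X_test, y_test))     # the minimum honest estimate
print("5-fold CV:", cross_val_score(RandomForestClassifier(random_state=7), X, y, cv=5).mean())
```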
I think I will stick up for NS. Just because random forests have suddenly become fashionable does not mean NS are worse; it is the same thing seen from a different angle. For a more or less fair comparison, take a committee of nets, switch on boosting, and you get essentially the same random forest. NS are known to be able to reproduce almost any other algorithm.
In any case, 99% of success lies not in the tool but in the choice and preparation of the data.
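A rough sketch of the "committee of nets" idea next to a random forest, for anyone who wants to check the claim; bagging is used here instead of boosting, since a random forest is itself a bagged ensemble, and all sizes and data are illustrative:

```python
# Committee of small MLPs via bagging vs. a random forest on the same synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=3)

committee = BaggingClassifier(                     # 'base_estimator=' in older scikit-learn
    estimator=MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000),
    n_estimators=10, random_state=3)
forest = RandomForestClassifier(n_estimators=100, random_state=3)

print("NN committee :", cross_val_score(committee, X, y, cv=5).mean())
print("random forest:", cross_val_score(forest, X, y, cv=5).mean())
```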
I think I'll stick up for the NS. Just because random forests have suddenly become fashionable, it doesn't mean that NS is worse.
Random forest is not a fashion phenomenon but a tool that can give acceptable results at the first attempt. This classifier is used by beginners and experienced users alike: beginners use it as a basic tool because the method is very simple, while more experienced users start a problem with RF in order to understand which direction to move in next.
In any case, 99% of success is not in the tool, but in the choice and preparation of data.
You can't make a big deal out of a big deal © People's saying
It would be interesting to see how you would solve a multiple regression problem with some binary classifier.
Those are not results but bullshit, like the average temperature across a hospital, fitted to the training sample.
Results are when, at a minimum, the sample is split into training and test sets and, at best, cross-validation is applied.
I don't do bullshit.
Proof.
The posted results always refer to data outside the training sample. In Rattle this is done as follows:
1. The original set is divided into three parts: 70/15/15%.
2. Training is performed on the 70% part, which is called the training set. There is a very significant nuance here: from this 70%, roughly 2/3 of the rows are selected at random (i.e. 70% x 2/3) and training is carried out on them. The model performance figures are obtained on the remaining 70% x 1/3 of the training rows, which of course are also a random set; this part is called OOB, out of bag. So although formally the same data set is used for training and evaluation, different rows are taken from it for each.
After that you go to the Evaluate tab, apply the trained model to each of the remaining 15% parts and compare the results with OOB. If they agree, there is hope. It follows that although Rattle is a tool for testing ideas, the quality of that testing is much higher than in the article under discussion (may the author forgive me).
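A minimal sketch of the same scheme outside Rattle, assuming synthetic data in place of a real feature set: a 70/15/15 split, a random forest whose trees each see a bootstrap sample of the training part (the left-out rows give the OOB estimate), and the OOB figure compared with the two held-out parts.

```python
# 70/15/15 split with an OOB estimate from the forest itself; data is illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=24, random_state=0)

# 70% training, then the remaining 30% halved into validation and test.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.30, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.50, random_state=0)

rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
rf.fit(X_train, y_train)

print("OOB estimate:", rf.oob_score_)
print("validation  :", rf.score(X_val, y_val))
print("test        :", rf.score(X_test, y_test))
# If the three numbers agree, there is at least hope that the model is not overfitted.
```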
And personally for you, sweetie: the results obtained in my article and in this one cannot be trusted, because there is no proof that the model is not overtrained (overfitted), and the three out-of-sample test sets I listed above are not such a proof. What is needed are criteria on the set of input variables such that a model built on those variables can be tested by the scheme above and the results of that testing can be trusted.