Discussion of article "Programming a Deep Neural Network from Scratch using MQL Language" - page 5

I modified the "bodyPer" input. Instead of simply loading the relative length of the body, I compute this value: bodyPer=0.5+((close-open)/p100)/2;
So that, beyond the relative length, the variable also captures the direction of the candle. It frees up a slot for a 4th variable, I guess.
int error=CandlePatterns(rates[0].high,rates[0].low,rates[0].open,rates[0].close,rates[0].close-rates[0].open,_xValues);
Here we pass the data of a candle that has not yet formed. In reality, at the moment the candle opens all of these parameters will be equal: high, low, open, and close will all be rates[0].open.
Incorrect!
Here the copying is performed not from the zero bar but from the first one, so here:
CandlePatterns(rates[0].high,rates[0].low,rates[0].open,rates[0].close,rates[0].close-rates[0].open,_xValues);
will receive the values of the last completed bar...
I don't think it is necessary to copy 5 bars; it would be enough to copy 1 past bar, like this:
Hi Anddy, great job!!!
I'm analysing your code to adapt it to my strategy, and so far I can say that your DNN is amazing! Thanks for sharing.
I just have one question: I can't see "yValues[2]>0.6" being used in any situation. After several attempts with different assets, no trade was ever closed because of this condition. Is this correct?
Thank you!
Best,
Alexandre
In this forum, please comment in Portuguese. Use the automatic translation tool or comment in one of the forums in another language.
There is a logic error in the code!
Strange behaviour: whenever the trend variable is changed, the training results are always different. Why?
There is a logic error in the code!
Specificity of the activation function.
The more values reach or cross the threshold boundary, the more positions are opened, i.e. the more opportunities the neural network has to fit (memorise the path of) the price chart. The more layers, the more attenuation: the values will be pushed closer to 0.
An offset (bias) fixes this problem somewhat.
So when the threshold is set to 0.6, most of the possible sets are discarded. And if you feed a huge number, or several big numbers, to the input, then even a straight pass will carry more possible values through to the end of the neural network.
Anyway, the training results are always highly variable with any type of optimisation, which raises certain doubts about its applicability to real trading: re-sorting the combinations will always turn up still-better weight parameters. What is the explanation for this peculiarity of this NN?
You give this NN a lot of importance; in fact all NNs, and everything related to ML in general, anywhere numbers are multiplied by numbers and summed into an activation function, amount to a fit to the chart. A totally unstable system.
It is interesting to dig into it, build architectures, add neurons and layers. But it is absolutely useless, no better than a moving-average crossover. Moreover, pricing is a non-stationary process: there is new data every time, and if you divide the chart into patterns, they will tend to work out 50/50 on the history.
NNs are for stationary, repetitive systems.
But in Forex and the like you need more advanced, intelligent systems: something like several NNs connected with each other, somehow magically adapting to changes in pattern statistics, etc.
An NN by itself is a memorisation of the price path, or an averaging of results when the amount of new data exceeds the possible combinations of numbers obtained by multiplication (simply put, the simplest NN architecture with two or three inputs).
Ivan, thank you for the clarification. Any statistic has a tendency to repeat itself. In principle, if an integral indicator is used when optimising (training) the NN, then we can see from the points how and when the transition from ignorance to knowledge occurs, and how best to trade it. Searching for a significant variable is a separate conversation. Did you manage to solve the problem of scaling inputs by more than 4x?