So far, it is the absence of those details I asked about above that has left me, an ordinary dummy, in a stupor. I reread the article three times, but found the necessary answers only on the forum.
After some deliberation, it was decided to write the second part of the article.
At the moment, the second part will cover the work with multilayer neural networks.
If you have any wishes about its content, please write them briefly.
Those ideas that I manage to explain in simple terms will be described in the article.
Thank you.
I "naively assume" that among native Russian speakers it is not customary to call a process of independent learning "parameter fitting", just as it is not customary to call the selection of parameters for a system (by means of external processes) "learning".
No matter how you call fitting, it will not cease to be fitting.
Optimisation, fitting and learning are synonyms for neural networks dealing with non-stationary data, because all three terms mean the same thing: selecting weighting coefficients on past historical data (the training sample) so as to minimise the error at the neural network's output. If it were possible to feed the network future data, that would be a different matter. But time machines are not yet sold in office-equipment shops, so we have to fit the past.
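The point above, that "training" is exactly fitting weights to past data to minimise output error, can be shown with a minimal sketch (my own illustration, not code from the thread):

```python
# Minimal sketch (illustrative): "training" a single linear neuron is
# exactly fitting its weights to past data so that the output error
# over the historical (training) sample is minimised.

def fit_weights(samples, targets, lr=0.01, epochs=500):
    """Plain gradient descent on squared error over past data."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(samples, targets):
            out = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = out - t
            # Nudge each weight against the error gradient.
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

# "Historical" data generated by a simple linear rule the neuron can recover.
xs = [[0.0], [1.0], [2.0], [3.0]]
ts = [1.0, 3.0, 5.0, 7.0]   # t = 2*x + 1
w, b = fit_weights(xs, ts)
```

Whether we call the loop above an optimiser, a fitter, or a trainer changes nothing about what it does.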
Only one question is of interest: how to create a self-learning program that can do without an "external" optimiser, if such a thing is possible at this stage, of course.
It's simple. The EA code can contain both the network itself and its weight optimiser, which is launched automatically when new data arrives. In most cases, "neural networks" means exactly such self-learning networks; networks trained externally, for example by the tester's optimiser, are toys.
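The scheme described (the trading code owns both the network and its optimiser and refits automatically on fresh data) can be sketched like this; all names here are hypothetical, not taken from any real EA:

```python
# Sketch of the "self-learning" scheme: the object owns both the network
# (here a single linear neuron) and its optimiser, and refits the weights
# whenever a new bar of data arrives. All names are hypothetical.

class SelfTrainingNeuron:
    def __init__(self, window=50, lr=0.05):
        self.window = window     # how many recent bars to refit on
        self.lr = lr
        self.w, self.b = 0.0, 0.0
        self.history = []        # (input, target) pairs

    def _refit(self):
        # Built-in optimiser: gradient steps over the recent window only.
        for _ in range(100):
            for x, t in self.history[-self.window:]:
                err = self.w * x + self.b - t
                self.w -= self.lr * err * x
                self.b -= self.lr * err

    def on_new_bar(self, x, t):
        # Called when fresh data arrives: store it and retrain automatically,
        # with no external tester/optimiser involved.
        self.history.append((x, t))
        self._refit()

    def predict(self, x):
        return self.w * x + self.b
```

In a real EA the `on_new_bar` role would be played by the new-bar branch of the tick handler; the point is only that training lives inside the program, not in the tester.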
Guys, help me out! Did I understand correctly that normalisation of the input data should be done over the whole training period of the network? That is, the maximum and minimum values of xi should be taken from the whole period?
I wrote this owl. Can we say that it is a neural network? Because I have my doubts.
Owl for trading in the channel.
The algorithm is as follows: extremums are taken over Fibonacci numbers of bars (2, 3, 5, 8, 13, ...). For each "buy" neuron, for example: if the price is below or equal to the LOW extremum over that period, return 1, otherwise 0. Then proceed as in the NeuronMACD example. For selling, mirror the logic.
I am waiting for criticism of the code and algorithm.
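The input scheme described above can be sketched as follows; this is my reconstruction from the post's description, and all names are hypothetical:

```python
# Reconstruction of the described inputs (hypothetical names): for each
# Fibonacci lookback (2, 3, 5, 8, 13, ...), the "buy" input fires (1) when
# the current price is at or below the lowest low over that many bars.

FIB_PERIODS = [2, 3, 5, 8, 13, 21]

def buy_inputs(lows, price):
    """One 0/1 input per Fibonacci period; lows[-1] is the latest bar."""
    inputs = []
    for p in FIB_PERIODS:
        low_extremum = min(lows[-p:])           # lowest low over p bars
        inputs.append(1 if price <= low_extremum else 0)
    return inputs

def sell_inputs(highs, price):
    """Mirror image: fires when price is at or above the highest high."""
    return [1 if price >= max(highs[-p:]) else 0 for p in FIB_PERIODS]
```

Note that these inputs are pure threshold comparisons: with no trainable weights in front of them, the construction is closer to a fixed rule set than to a neural network, which may be the source of the author's doubt.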
In your case, the neuron activation function can be thrown out; it is an unnecessary brake on performance.