Discussion of article "Programming a Deep Neural Network from Scratch using MQL Language" - page 6

 
Nikolai Kalinin #:

Were you able to solve the problem with inputs scaling more than 4x?

Yes, I dug into it and got to the bottom of it. I scaled up not only the inputs but also the architecture: I added layers, added neurons, added an RNN (remembering the previous state and feeding it back to the inputs), tried swapping the activation function for the best-known ones, and tried all kinds of inputs from the "What to feed to the neural network input" topic, all to no avail.

To my great regret. But that doesn't stop me from coming back from time to time and tinkering with simple neural networks, including this author's.

I tried LSTM, BiLSTM, CNN, CNN-BiLSTM, and CNN-BiLSTM-MLP, all to no avail.

I am amazed myself. Every success comes down to one observation: it was a lucky stretch of the chart. For example, 2022 on EURUSD is almost an exact repeat of 2021, so by training on 2021 you get a positive forward on 2022 up to November (or October, I don't remember exactly). But as soon as you train any(!) neural network on 2020, it fails cleanly on 2021, right from the first month! And if you switch it to other currency pairs, it behaves randomly there too.

But we need a system that is guaranteed to show signs of life on the forward after training, right? Starting from that premise, the search is fruitless. If someone believes he is a lucky person and that after today's training he will get a profitable forward for the next six months or a year, then good luck to him).
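The train-on-year-N, check-on-year-N+1 procedure described above is a walk-forward test. As a minimal sketch (in Python rather than MQL, and with a hypothetical helper name of my own), pairing each training year with the following forward year looks like this:

```python
# Sketch of the walk-forward pairing discussed above: each year is used
# for training and the next year is the out-of-sample forward period.
# "walk_forward_pairs" is an illustrative name, not from the article.

def walk_forward_pairs(years):
    """Yield (train_year, forward_year) pairs for a rolling walk-forward test."""
    return [(years[i], years[i + 1]) for i in range(len(years) - 1)]

print(walk_forward_pairs([2019, 2020, 2021, 2022]))
# → [(2019, 2020), (2020, 2021), (2021, 2022)]
```

The point of the discussion is that a network can pass one of these pairs (2021 → 2022) and still fail another (2020 → 2021), so a single lucky pair proves nothing.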

 
Ivan Butko #:


But we need a system that is guaranteed to show signs of life on the forward after training, right? Starting from that premise, the search is fruitless. If someone believes he is a lucky person and that after today's training he will get a profitable forward for the next six months or a year, then good luck to him).

Then can we assume that the necessary "grail" NN parameters were missed during the search, or were insignificant from the start and not accounted for by the tester? Maybe the system lacks event factors rather than just pattern proportions.

 
Nikolai Kalinin #:

Then can we assume that the necessary "grail" NN parameters were missed during the search, or were insignificant from the start and not accounted for by the tester? Maybe the system lacks event factors rather than just pattern proportions.

Of course, "grail" sets sometimes slip through during optimisation, and it's almost impossible to spot them (one might sit at, say, row 150 of the sorted results) until you check everything. Sometimes there are tens of thousands of them.

I don't understand the second part of your post.

 
Ivan Butko #:

Of course, "grail" sets sometimes slip through during optimisation, and it's almost impossible to spot them (one might sit at, say, row 150 of the sorted results) until you check everything. Sometimes there are tens of thousands of them.

I don't understand the second part of your post.

I mean feeding in data captured at the moment of a specific event, for example High[0] > High[1] on the current bar. Viewed in that context, the market is an entirely event-driven model and correlates with those events. Controlling the chaotic elements then falls to the methods of fine-tuning and optimisation outside the NN's "memory". An integral indicator shows well how such event additions to the code work: this indicator (an integrated criterion) improves and shifts toward the most profitable optimiser passes.
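To make the idea concrete, here is a minimal sketch (in Python rather than MQL) of event-coded inputs of the kind described: binary flags that fire when a condition such as High[0] > High[1] holds, fed to the network instead of raw prices. The feature names and the extra flags beyond the High[0] > High[1] example are my own illustration, not from the post.

```python
# Event-coded inputs: binary flags derived from bar data, using MQL-style
# indexing where index 0 is the current bar and index 1 the previous bar.
# Feature names are hypothetical; only High[0] > High[1] comes from the post.

def event_features(high, low):
    """Return binary event flags computed from the two most recent bars."""
    return {
        "high_breakout": int(high[0] > high[1]),   # High[0] > High[1]
        "low_breakdown": int(low[0] < low[1]),     # Low[0] < Low[1]
        "inside_bar":    int(high[0] <= high[1] and low[0] >= low[1]),
    }

print(event_features(high=[1.0850, 1.0840], low=[1.0820, 1.0810]))
# → {'high_breakout': 1, 'low_breakdown': 0, 'inside_bar': 0}
```

Such flags would be recomputed on every new bar and appended to the network's input vector, so the model sees "an event occurred" rather than the raw price levels.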

 
This is just what I was looking for!  Great article!