Experts: Examples from the book "Neural networks for algorithmic trading with MQL5" - page 3

 
Vrajeshbhai #:
I tried to run it in the Strategy Tester, but it doesn't work: the tester stops with the message "Tester stopped because OnInit returns non-zero code 1". Please update it so everybody can test it and understand exactly what it is doing.
We have the same problem.
 
Luiz Godoy #:
When running the gpt_test_not_norm.mq5 script, I got an "out of range" error.

On line 40 of the program:

if(!loss_history.Resize(0,Epochs))

Change to:

if(!loss_history.Resize(Epochs))

 

Hi. That's a lot of writing; it turned out to be a whole book. I started reading it, thinking I'd write something for the Market.

It's a good piece of work, because there is nothing else this comprehensive that explains how to approach machine learning with MQL5 tools.

However.

The sample is divided into three parts: 60% training, 20% validation and 20% test. But the patterns are built from overlapping windows: of the 40 bars in each bar sequence, 35 are repeated in the neighbouring sequence. The result is that you train and validate on essentially the same data. I've drawn a picture. This is not only this book's problem; I run into it over and over again.
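The leakage described above can be sketched in a few lines. This is my own illustration, not the book's code; I'm assuming a window of 40 bars advanced in steps of 5, which is what produces 35 shared bars between neighbouring windows:

```python
# Sketch of the data-leakage problem: overlapping 40-bar windows split
# chronologically 60/20/20 by window index (assumed step = 5 bars).
def make_windows(n_bars, length=40, step=5):
    """Return (start, end) index ranges, one per pattern."""
    return [(s, s + length) for s in range(0, n_bars - length + 1, step)]

def split_indices(n_windows):
    """60/20/20 chronological split by window index."""
    a = int(n_windows * 0.6)
    b = int(n_windows * 0.8)
    return (0, a), (a, b), (b, n_windows)

windows = make_windows(n_bars=1000)
(tr0, tr1), (va0, va1), _ = split_indices(len(windows))

# Bars used by the last training window and the first validation window:
last_train = set(range(*windows[tr1 - 1]))
first_val  = set(range(*windows[va0]))
shared = last_train & first_val
print(len(shared))  # 35 of 40 bars appear in both training and validation
```

With a step of 5, every window at the split boundary shares 35 of its 40 bars with a window on the other side, so the validation set sees almost exactly the data the network trained on.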

At first I thought the ZigZag indicator was exactly the tool to isolate non-repeating movements: peak to trough, trough to trough, trough to peak. Those would be unique patterns, as the book calls them. But no, the author builds patterns bar by bar, and that is of course a gross mistake. And you don't need a ZigZag for labelling either: you can simply look 10 bars ahead and calculate where the market went.
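The look-ahead labelling suggested above can be sketched as follows; this is my illustration, with a 10-bar horizon as mentioned in the comment:

```python
# Label each bar by where the market went `horizon` bars later:
# 1 if the close is higher 10 bars ahead, 0 otherwise. The last
# `horizon` bars get no label because their future is unknown.
def label_by_lookahead(closes, horizon=10):
    return [1 if closes[i + horizon] > closes[i] else 0
            for i in range(len(closes) - horizon)]

closes = [1.10, 1.11, 1.12, 1.11, 1.13, 1.14, 1.12, 1.15, 1.16, 1.14,
          1.17, 1.13, 1.18]
print(label_by_lookahead(closes))  # → [1, 1, 1]
```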

That is the first point, a practical one. The second point is technical, and also wrong: the author demonstrates training with validation only in TensorFlow. That's fine; TensorFlow is a machine learning library made by Google. But the purpose of this work was to show how to do it with MQL5 tools, wasn't it?

There are no examples of training with validation in MQL5 - I haven't found any yet. I'll write one up later if you want. Of course you should try to do it yourself; there is a lot of work in preparing the history and selecting the training parameters. TensorFlow seems to have everything ready-made, yet there is still so much to do, and here it turns out the technical part is simply not finished.

Is this the 95% of the work that loses its value without the remaining 5%?

Still, I'll leave it at this: it's good work, because there's nothing else like it.


 
No matter how many times I've tried, history can't be selected by date if there is more than one year between the dates. Sometimes two years get through, but if you load into an empty chart, only one year comes through. To reliably collect data for any period, you need an "idle" run in the Strategy Tester from one date to the other; then you get any number of years, straight into a file.

Doesn't the Adam optimisation method adjust the effective learning rate by itself during training? The learning_rate you pass in acts as a "ceiling" rather than a constant step size.

If you are building a recurrent network, why add hidden fully connected layers? The fully connected layers destroy what the LSTM provides. Of course, at the end of the network you do need one fully connected neuron with an activation for the output.

Dropout is a layer property in any neural network. For LSTM the dropout property is practically mandatory; otherwise the network overfits the data within a few epochs. In PyTorch the dropout parameter of a recurrent layer only applies in the transition between two stacked layers, so you need at least two recurrent layers for it to take effect; in TensorFlow any layer can have dropout. An obvious advantage of TensorFlow over PyTorch.
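For readers unfamiliar with what the dropout property actually does, here is a minimal framework-free sketch of inverted dropout (my illustration, not the book's code): each unit is zeroed with probability `rate` during training, and the survivors are scaled by 1 / (1 - rate) so the expected activation is unchanged at inference time.

```python
import random

# Inverted dropout, plain Python sketch.
def dropout(activations, rate=0.2, training=True, rng=random.Random(0)):
    if not training or rate == 0.0:
        return list(activations)  # inference: pass through unchanged
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

acts = [1.0, 2.0, 3.0, 4.0]
print(dropout(acts, rate=0.5))                  # some units zeroed, rest scaled x2
print(dropout(acts, rate=0.5, training=False))  # unchanged at inference
```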

Batch processing in backpropagation. Why is a batch, for the author, any arbitrary number? It turns out that at each epoch he picks a random amount of data to train on. A batch is a fixed quantity! For example, with a sample of 1000 items you get 10 batches of 100 values each. Each epoch works through different batches, but a batch always holds 100 values. I train batch by batch and adjust the weights at the end of each batch. That helps avoid getting stuck in a local minimum, and it uses all of the sample data, not just part of it.
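The batching scheme described above can be sketched as follows (my illustration, assuming the sample size is a multiple of the batch size, as in the 1000 / 100 example): shuffle once per epoch, then walk through the whole sample in equal-sized batches, updating the weights after each one.

```python
import random

# Fixed-size mini-batching: every item used exactly once per epoch,
# every batch the same size.
def iter_batches(indices, batch_size, rng):
    shuffled = indices[:]
    rng.shuffle(shuffled)            # new order each epoch
    for i in range(0, len(shuffled), batch_size):
        yield shuffled[i:i + batch_size]

sample = list(range(1000))           # 1000 training items
rng = random.Random(42)
for epoch in range(2):
    for batch in iter_batches(sample, batch_size=100, rng=rng):
        pass                         # forward/backward pass, then weight update
    # each epoch: 10 batches of 100 items, covering the whole sample
```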
 

Trade optimisation does not work yet: a strange error pops up in a dialogue window, and once the computer even switched off. This happens if you train the network and then try to optimise trading with it.

In my opinion, it is wrong to use a graphics accelerator for forex: no matter how you look at it, there is not much data. More correct, in my opinion, would be to use MetaTrader agents in the optimiser to run different training streams for different networks. For example, in TensorFlow I train 7 streams in parallel; when one finishes, another one starts, and so on through about 100 variants. Then I run them on history.

The point here is that a neural network may be good, but not every network can even pass a test on its own training history. That's why you need many variants of networks.

 

ChatGPT suggests that the best way to classify between two classes is to use a sigmoid. And that's exactly what I couldn't find here. The activation functions vary, but the loss function is always the same - MSE?

Tanh is still regression, not classification: a value from -1 to 1 is not a probability, it's a value estimate.
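A small sketch of the distinction being made here (my illustration): sigmoid maps a raw output to (0, 1), which can be read as a probability, while tanh maps to (-1, 1), which cannot; and the usual loss for a sigmoid classifier is binary cross-entropy rather than MSE.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def binary_cross_entropy(p, y, eps=1e-12):
    """y is the true class (0 or 1), p the predicted probability."""
    p = min(max(p, eps), 1.0 - eps)
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

p = sigmoid(2.0)
print(round(p, 4))                           # 0.8808 -> probability of class 1
print(round(binary_cross_entropy(p, 1), 4))  # low loss: confident and correct
print(round(binary_cross_entropy(p, 0), 4))  # high loss: confident and wrong
```

Cross-entropy punishes a confident wrong answer far more heavily than MSE would, which is precisely why it is preferred for classification.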

In short, it'll do; there is nothing else like it, so the work is certainly good. I'll use it.