"New Neural" is an Open Source neural network engine project for the MetaTrader 5 platform. - page 15

 
TheXpert:

I don't believe it :)

And think about it: since neurons share common memory, as soon as a value is assigned to a cell it immediately becomes available to all connected neurons. It follows that all the difference between training methods lies in specifying the formula for the backward activation (the derivative); the rest is the same for everyone, the only other difference being the direction of traversal, forward or backward. A rather meager list of differences :o)

P.S. The activation formula and its derivative are specified when a neuron is created (more precisely, when its type is selected from those available); the direction of the training pass is likewise chosen when the network is created.
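For instance, a neuron type could carry its activation and derivative as a paired definition. A minimal sketch in C++ (whose syntax MQL5 closely follows); tanh is only an illustrative choice, not a function the thread fixes:

```cpp
#include <cassert>
#include <cmath>

// One possible activation pair, selected per neuron type at creation time.
// tanh is an example; any differentiable function with a known derivative works.
double Activate(double x)   { return std::tanh(x); }
double Derivative(double x) { double t = std::tanh(x); return 1.0 - t * t; }
```

The forward pass uses `Activate`; the backward (training) pass uses `Derivative`, which is why both are fixed together when the neuron type is chosen.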

 
Avals:

Yes, you can do it that way, but it comes out in a roundabout way :)

Um, in what way is it roundabout?

________________

Nikolai, the simplest, clearest and fastest representation is to link vectors and matrices.

 
TheXpert:

Um, in what way is it roundabout?

________________

Nikolai, the simplest, clearest and fastest representation is to link vectors and matrices.

But that representation is rigidly tied to the topology, or (if you zero out some cells, which gives universality) it wastes memory.

How would you use a matrix to represent a 1000x1000 echo network in which 95% of the neurons don't exist? The question is rhetorical: to build such a network the matrix way, every neuron must be able to connect to any other neuron. That means 10^6 potential connections per neuron; multiply by 10^6 neurons and you get a 10^6 x 10^6 matrix, which will not work in MQL.
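To make the memory argument concrete, here is a back-of-the-envelope calculation in plain C++ (the numbers come straight from the example above; `DenseMatrixTB` is a made-up helper name):

```cpp
#include <cstdint>

// Storage for a dense every-to-every connection matrix over `neurons`
// neurons, in whole terabytes, assuming 8-byte double weights.
std::uint64_t DenseMatrixTB(std::uint64_t neurons) {
    const std::uint64_t entries = neurons * neurons;  // every-to-every connections
    return (entries * 8ULL) >> 40;                    // bytes -> TB (2^40 bytes)
}
```

`DenseMatrixTB(1000ULL * 1000ULL)` evaluates to 7: a 10^6-neuron grid needs 10^12 matrix cells, roughly seven terabytes, even though 95% of the grid is empty. A sparse adjacency list that stores only the connections that actually exist avoids this blow-up, at the cost of losing the uniform matrix form.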

 
TheXpert:

Um, in what way is it roundabout?

Well, so that you don't have to invent a training algorithm for each particular variant of a TS (trading system) with an NS (neural network). I.e., it should happen automatically: there is an Expert Advisor with an NS, and we don't care how it is trained; we shouldn't need to gather input values in certain neighborhoods and so on. It may be that in the example we considered the algorithm would be as you described, while if, say, the NS is a different part of the system, it would be different. In general, I would like it to be part of the formalization and hidden from the user. It probably comes down to preprocessing the NS's output data, or in other cases the input, and in most cases it can be automated rather than dumped on the user's delicate shoulders))) I don't know how to formalize it :)
 
Avals:

Well, so that you don't have to invent a training algorithm for each particular variant of a TS with an NS.

It won't work that way :) At the very least you need to know what to feed in, what to train on, and how to evaluate the result. And these things have to be organized by hand.

In general, I would like it to be part of the formalization and hidden from the user. I don't know how to formalize it :)

Exactly, and I don't know either. What's more, there are setups that are very hard to combine at all. Neural networks are just a tool; in skilled hands (take Leonid, for example) a very powerful one.

I wonder whether he would be willing to consult?

 
TheXpert:

It won't work that way :) At the very least you need to know what to feed in, what to train on, and how to evaluate the result. And these things have to be organized by hand.

Exactly, and I don't know either. What's more, there are setups that are very hard to combine at all. Neural networks are just a tool; in skilled hands (take Leonid, for example) a very powerful one.

Well, then at least provide the standard variants (for example, the one discussed on the previous page). You proposed a formal solution for it. Why do you think there isn't one for the others?

It may well be that they all come down to a few variants of the same type.

 
Avals:
Why do you think there isn't one for the others?

There is one :), but the whole point lies in the inputs and outputs :) the network itself is secondary. You can recognize letters any way you like, with an MLP, a PNN, a SOM or an echo network, but the principle will be almost identical.

Avals:

It may well be that they all come down to a few variants of the same type.

Here's an example: putting together a trade filter is easy.

But deciding what to feed even a simple TS is not a task for average minds. And the first one is almost 100% a curve fit.

 
TheXpert:

There is one :), but the whole point lies in the inputs and outputs :) the network itself is secondary. You can recognize letters any way you like, with an MLP, a PNN, a SOM or an echo network, but the principle will be almost identical.

So yes, pre- and post-processing of the NS data takes up most of the time and is the most delicate part. If this is systematized and partially automated for typical TS variants, it will be an advantage over third-party packages. Otherwise it is easier to do it all in them (since they are more specialized anyway, except for working with the TS) and transfer the finished models to MT5.
 

A few thoughts on organizing a class for use in Expert Advisors:

Properties:

1. The minimum number of training patterns the network needs before it can be used.

2. The maximum number of patterns. When a new pattern is added to the training set, the oldest one is removed and the network is retrained.

External methods:

1. Submit a pattern for training. When used in an Expert Advisor, a new pattern can be sent to the network on indicator signals.

2. Ask the network whether it is ready, i.e. whether it has been trained on a sufficient number of patterns.

3. The main method: send a pattern to the network and get a result.


When a new pattern is sent to the network for training, preprocess it:

1. Scale it.

2. Check correlations, so that there are no two oppositely correlated patterns with the same output and no two identical patterns with different outputs.
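The two preprocessing steps above can be sketched as follows (C++ for illustration; the names `Scale`, `Correlation` and `IsContradictory` are invented, and the 0.999 thresholds are an arbitrary choice for "identical"/"opposite"):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Step 1: scale a pattern into [-1, 1] (min-max normalization).
std::vector<double> Scale(const std::vector<double>& p) {
    double lo = p[0], hi = p[0];
    for (double v : p) { if (v < lo) lo = v; if (v > hi) hi = v; }
    std::vector<double> out;
    for (double v : p)
        out.push_back(hi == lo ? 0.0 : 2.0 * (v - lo) / (hi - lo) - 1.0);
    return out;
}

// Pearson correlation between two patterns of equal length.
double Correlation(const std::vector<double>& a, const std::vector<double>& b) {
    const std::size_t n = a.size();
    double ma = 0, mb = 0;
    for (std::size_t i = 0; i < n; ++i) { ma += a[i]; mb += b[i]; }
    ma /= n; mb /= n;
    double cov = 0, va = 0, vb = 0;
    for (std::size_t i = 0; i < n; ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}

// Step 2: reject a new pattern that contradicts an existing one:
// identical inputs with different targets, or opposite inputs with the same target.
bool IsContradictory(const std::vector<double>& a, double targetA,
                     const std::vector<double>& b, double targetB) {
    const double r = Correlation(a, b);
    if (r >  0.999 && targetA != targetB) return true;  // identical, different outputs
    if (r < -0.999 && targetA == targetB) return true;  // opposite, same output
    return false;
}
```

An Expert Advisor would run every incoming pattern through `Scale`, then check it with `IsContradictory` against the current training window before adding it.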


All this is easily added to an Expert Advisor: as the indicators give signals, the network receives patterns; as soon as enough patterns have accumulated, the network is trained; then, when a signal to open a position appears, we ask the network to confirm it or not. That's for the tester. On a live account, the network must be trained after testing, so there must be a means of saving and loading the network.

Which patterns to feed the network is the user's choice: prices, an indicator, or true/false values. The network parameters (number of layers, inputs, outputs) are set when the network is initialized.
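Put together, the class described above might look like this in outline (written in C++, whose syntax MQL5 closely follows; all names such as `CNetWrapper`, `AddPattern` and `IsReady` are invented for illustration, and the actual training and forward pass are left as stubs):

```cpp
#include <cstddef>
#include <deque>
#include <vector>

// Hypothetical sketch of the EA-side network wrapper described above.
class CNetWrapper {
public:
    // Properties 1 and 2: minimum patterns before use, maximum window size.
    CNetWrapper(int minPatterns, int maxPatterns)
        : m_min(minPatterns), m_max(maxPatterns), m_trained(false) {}

    // External method 1: submit a pattern for training.
    // The oldest pattern is dropped once the window is full, then the net retrains.
    void AddPattern(const std::vector<double>& input, double target) {
        m_inputs.push_back(input);
        m_targets.push_back(target);
        if ((int)m_inputs.size() > m_max) {
            m_inputs.pop_front();
            m_targets.pop_front();
        }
        if ((int)m_inputs.size() >= m_min) {
            Train();
            m_trained = true;
        }
    }

    // External method 2: has the network seen enough patterns to be usable?
    bool IsReady() const { return m_trained; }

    // Main method: feed a pattern, get the network's answer.
    double Run(const std::vector<double>& input) const {
        (void)input;
        // ... forward pass of the trained network would go here ...
        return 0.0;  // placeholder
    }

private:
    void Train() { /* retrain on the current window of patterns */ }

    int m_min, m_max;
    bool m_trained;
    std::deque<std::vector<double>> m_inputs;  // sliding training window
    std::deque<double> m_targets;
};
```

In the EA, `AddPattern` is called on indicator signals, `IsReady` gates trading, and `Run` confirms or rejects an entry signal; saving and loading the trained state for live accounts would be added on top of this.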

 

A graphical network constructor can be done in MQL5.

It seems to me that a single layer can contain different kinds of neurons; that is not a problem. How to train such a network is another question.

One neuron in one layer? What's the point? Unless it is a connection bypassing an additional layer.
