A quick and free library for MT4, much to the delight of neuralnetworkers - page 11

 
marinat >> :

Good afternoon, everyone. The optimization graph is not drawn; after optimization the following line is printed:

2009.12.21 15:52:54 There were 897 passes done during optimization, 897 results have been discarded as insignificant
Can someone help?

I tried another terminal and the same thing happened; I have no idea what to do :(

 
marinat wrote >>

I tried another terminal and the same thing happened; I have no idea what to do :(

Right-click on the optimization results during optimization and uncheck the "Skip useless results" option. In general, a forum search solves this problem in one minute, instead of all this "What to do, what to do?"

Search https://www.mql4.com/ru/search/have%20been%20discarded%20as%20insignificant, one of the results is https://forum.mql4.com/ru/24644/page7#191364

 

Actually, it was about something else: I explicitly set it to use the dates from 20.12.08 to 20.12.09 and everything was OK. But thanks anyway, and I did find those posts.


Yuri, I wanted to ask you a question: on the demo account mentioned on page 3, do you only use your EA, or do you also make trades manually? And another question: is the EA set up for multi-currency trading?

 
VladislavVG >> :

In this EA, all of the committee's networks are given the same input signal and are required to produce the same response, so it is not surprising that the nets converge to the same solution. In this example you could either keep just one net, or modify the input system so that different nets receive different inputs; the outputs can be left the same.

The whole point of a committee is precisely to feed it the same data and obtain the result by averaging (preferably over the best members of the committee). A single net can be kept where the input data is simple, i.e. the signal-to-noise ratio is large (which does not apply to markets). Yes, here we get the impression that one net is enough, but that is because it is trained on a deliberately limited (incorrectly assembled) dataset, coded in highly dependent variables, and so the result of the training will not carry over to other data segments.

Feeding different inputs to different nets is a good idea, but one has to choose how to split the total set into subsets for the individual nets (by what principle is a separate question: it could be market regime, type of trade, etc.), and the quality of the inputs for each net still has to be calibrated.
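A minimal sketch in MQL4 of what the plain averaging described above can look like. AnnsArray[], AnnsNumber, InputVector[] and the per-net ann_run() wrapper are illustrative names modelled on the posts in this thread, not the thread's actual EA code:

// Plain committee averaging: run every net on the same input
// and average the answers; ann_run() is assumed to return one
// net's output for the given input vector.
double CommitteeAverage(int &AnnsArray[], int AnnsNumber, double &InputVector[])
{
   double sum = 0.0;
   for (int i = 0; i < AnnsNumber; i++)
      sum += ann_run(AnnsArray[i], InputVector); // one member's answer
   return (sum / AnnsNumber);                    // simple average
}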

 
marketeer wrote >>

The whole point of a committee is precisely to feed it the same data and obtain the result by averaging (preferably over the best members of the committee) .....

Yes, here we get the impression that one net is enough, but that is because it is trained on a deliberately limited (incorrectly assembled) dataset, coded in highly dependent variables, and so the result of the training will not carry over to other data segments.

So it turns out that with 16 nets initialized with random weights from -1 to 1, after the first execution of ann_runs(...) with a single InputVector[], we get (judging by the logs) 16 outputs that are identical to 8 decimal places? No. There is a bug of some kind here.

You wrote yourself that the subject of neural networks is not easy to pick up. So we have to figure it out...
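One way to check this, assuming the same illustrative per-net wrapper as in the sketch above: print every member's raw output for a single input vector right after initialization. If all 16 lines really coincide to 8 decimal places, the weights (or the net handles) are being shared somewhere.

// Diagnostic sketch: dump each net's output for one input so you
// can see whether the randomly initialized nets actually differ.
void DumpCommitteeOutputs(int &AnnsArray[], int AnnsNumber, double &InputVector[])
{
   for (int i = 0; i < AnnsNumber; i++)
   {
      double out = ann_run(AnnsArray[i], InputVector);
      Print("net #", i, ": ", DoubleToStr(out, 8)); // 8 decimal places, as in the logs
   }
}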

 

The outputs shouldn't coincide to 8 decimal places...

About the meaning of a committee:

There are different strategies for forming committees (algorithmic compositions, ensembles).

The simplest one is averaging...

Here you can read about it in detail. I'll tell you right away that building supercomplex compositions won't give you any special gain; the point lies elsewhere.
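For illustration only, one step up from plain averaging: average over the k best members, ranked by some per-net quality estimate. The fitness[] array is a hypothetical input here, e.g. each net's score on a validation section:

// Average only the k best committee members by stored fitness.
double BestKAverage(int &AnnsArray[], double &fitness[], int AnnsNumber,
                    double &InputVector[], int k)
{
   double f[];
   ArrayResize(f, AnnsNumber);
   ArrayCopy(f, fitness);
   double sum = 0.0;
   if (k > AnnsNumber) k = AnnsNumber;
   for (int n = 0; n < k; n++)
   {
      int best = ArrayMaximum(f);                 // index of the best remaining net
      sum += ann_run(AnnsArray[best], InputVector);
      f[best] = -1.0e308;                         // exclude it from the next pass
   }
   return (sum / k);
}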

 
Do you think that if you feed the values of the extrema and the durations between them to a neural network, the result will be more or less satisfactory?
 
marinat wrote >>
Do you think that if you feed the values of the extrema and the durations between them to a neural network, the result will be more or less satisfactory?

I have checked: there is not much point in the raw form. Although the data contained there seems exhaustive, the results are not great; serious preprocessing of this data is required, as always and everywhere with NNs, and even then it sometimes works and sometimes doesn't.
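For what it's worth, a rough sketch of what "feeding extrema and durations" can look like in MQL4, built on the standard ZigZag indicator. The (12,5,3) parameters and the vector layout are assumptions, and the values are left raw, which is exactly the problem described above:

// Pack the last N ZigZag extrema and the bar distances between them
// into an input vector; the caller must size InputVector to 2*N.
bool BuildExtremaInputs(double &InputVector[], int N)
{
   int found = 0;
   int lastExtremumBar = -1;
   for (int shift = 0; shift < Bars && found < N; shift++)
   {
      double zz = iCustom(NULL, 0, "ZigZag", 12, 5, 3, 0, shift);
      if (zz == 0.0) continue;                    // not a ZigZag extremum bar
      InputVector[2*found] = zz;                  // extremum price, raw
      if (lastExtremumBar < 0)
         InputVector[2*found+1] = 0;              // first extremum: no duration yet
      else
         InputVector[2*found+1] = shift - lastExtremumBar; // bars between extrema
      lastExtremumBar = shift;
      found++;
   }
   return (found == N);                           // false if history was too short
}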

 
Figar0 >> :

I have checked: there is not much point in the raw form. Although the data contained there seems exhaustive, the results are not great; serious preprocessing of this data is required, as always and everywhere with NNs, and even then it sometimes works and sometimes doesn't.

In general, what type of data gives the most stable results? Has anyone obtained at least a more or less decent result? And by raw form, do you mean completely raw, or scaled from 0 to 1?

 
marinat wrote >>

In general, what type of data gives the most stable results? Has anyone obtained at least a more or less decent result? And by raw form, do you mean completely raw, or scaled from 0 to 1?

Until you try them, it's hard to assess the benefit of particular inputs; one may work better in one area and another in a different one. And you can get an average result with almost any input. Raw means without preprocessing; "0-1" is just one particular kind of normalization, it's good, but it may not be enough... Preprocessing is a whole science, imho more complicated than the neural networks themselves: compression, whitening, coding, and probably much more. You can start by looking at the articles by V.A. Krisilov, available at http://neuroschool.narod.ru/. What you have in mind, feeding a phase into the NN, I use only as one component of a complex combination of inputs, nothing more.
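Just to make the "0-1" option concrete, a min-max rescaling sketch with illustrative names; scaling each vector by its own min and max is only one of several possible choices:

// Rescale a vector into [0,1] by its own min and max; one simple
// normalization, not necessarily sufficient preprocessing by itself.
void ScaleTo01(double &v[], int n)
{
   int i;
   double lo = v[0], hi = v[0];
   for (i = 1; i < n; i++)
   {
      if (v[i] < lo) lo = v[i];
      if (v[i] > hi) hi = v[i];
   }
   if (hi == lo) return;                          // constant vector: nothing to scale
   for (i = 0; i < n; i++)
      v[i] = (v[i] - lo) / (hi - lo);
}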
