Advisors on neural networks, sharing experiences. - page 3

 
Maxim Dmitrievsky:

No one put me up to it, I just thought it was interesting.)

No one is arguing that it's interesting!

But you can't skip over several steps when learning a specialty - it won't do any good...

A neural net, by itself, will not give the "right output" - it's not a magic wand. You need the right idea of a trading strategy, which will give a decent result even without a neural net...

 
Serqey Nikitin:

No one is arguing that it is interesting!

But you can't skip over several steps when learning a specialty - it won't do any good...

A neural network by itself will not give you the "right output" - it is not a magic wand. You need the right idea of a trading strategy, which will give a decent result even without a neural network...

I'm not arguing about the idea - of course it's necessary. But a neural network also helps to test an idea quickly, or at least to find directions to move in. For example, I already see that 11 inputs from standard oscillators, which all show roughly the same thing, give nothing: the net trades exactly the same way with a single input.

You're right about skipping steps - this isn't my specialty at all; I'm not even a mathematician or a programmer. Fortunately, it's enough to understand in broad terms what a neural network is and what it does, because ready-made solutions, like this class, are already available. After that you just experiment, using your understanding of the market to test ideas.

 
Алексей:

Let me look for my neural net topic... It was on the MQL4 forum, a long time ago.

http://forum.mql4.com/ru/38550

The input was the price difference at some lag (on the order of a few hours). The output was a binary forecast - plus or minus - several hours ahead.

All the nets were obtained by brute-forcing architectures.

The main problem was stitching multiple forward tests together; ideally, that should be automated.

From all this I understood that the most important thing is correct inputs - that is what we should think about... A hidden dependence is quite difficult to find with a neural network; you need to have an idea of some dependence to begin with, and then process it with a net.
 
Maxim Dmitrievsky:
From all this I understood that the most important thing is correct inputs - that is what we should think about... A hidden dependence is quite difficult to find with a neural network; you need to have an idea of some dependence to begin with, and then process it with a net.

About the inputs, yes. The thing is, you can't assemble a set of "good" inputs by eye. How do you find them? You generate a large pool of candidate inputs, then run a procedure that selects an informative subset, and train the net on that. And if you do find informative inputs (which is 90% of the job), the network often isn't needed at all, because the model can be built on some rule-forming algorithm instead of a black box.

About dependence: the net will not give any insight into the form of the dependence - it isn't designed for that in the first place. Which brings us back to the previous point: you need to find informative features and use them to build statistically significant rules.
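The selection procedure described above can be sketched roughly like this. Everything here is illustrative: the feature names, the toy data, and the 0.9 redundancy threshold are all made up - the idea is just to rank candidates by correlation with the target, then drop candidates that duplicate ones already kept.

```python
# Hypothetical sketch: rank candidate inputs by absolute correlation
# with the binary target, then greedily keep only inputs that are not
# strongly correlated with inputs already selected.

def corr(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return 0.0 if vx == 0 or vy == 0 else cov / (vx * vy)

def select_informative(features, target, redundancy_limit=0.9):
    """features: dict name -> list of values; target: list of 0/1 labels.
    Returns feature names ranked by relevance, with redundant ones dropped."""
    ranked = sorted(features, key=lambda f: -abs(corr(features[f], target)))
    kept = []
    for f in ranked:
        if all(abs(corr(features[f], features[k])) < redundancy_limit
               for k in kept):
            kept.append(f)
    return kept

# Toy data: rsi_a and rsi_b are near-duplicate oscillators.
target = [1, 0, 1, 1, 0, 0, 1, 0]
features = {
    "rsi_a": [0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.9, 0.1],
    "rsi_b": [0.8, 0.2, 0.9, 0.8, 0.1, 0.2, 0.8, 0.2],
    "noise": [0.5, 0.4, 0.5, 0.6, 0.5, 0.4, 0.6, 0.5],
}
print(select_informative(features, target))  # one near-duplicate oscillator is dropped
```

This is exactly the situation mentioned earlier in the thread: of 11 oscillators that all show roughly the same thing, a procedure like this would keep only one.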

If you want, I can explain in more detail.) I'm on duty right now and can't spend much time on explanations, but luckily today is Friday. ))

 
Алексей:

About the inputs, yes. The thing is, you can't assemble a set of "good" inputs by eye. How do you find them? You generate a large pool of candidate inputs, then run a procedure that selects an informative subset, and train the net on that. And if you do find informative inputs (which is 90% of the job), the network often isn't needed at all, because the model can be built on some rule-forming algorithm instead of a black box.

About dependence: the net will not give any insight into the form of the dependence - it isn't designed for that in the first place. Which brings us back to the previous point: you need to find informative features and use them to build statistically significant rules.

If you want, I can explain in more detail.) I'm on duty right now and can't spend much time on explanations, but luckily today is Friday. ))

Actually, my original idea was to train networks on different timeframes and then filter the signals of the smaller timeframes with the larger ones, i.e. to gain a statistical edge that way, allowing for the networks producing some percentage of wrong entries - say, 50/50. Neural networks are built for processing large amounts of data - that is their advantage - so instead of starting from specific, well-defined patterns, you can throw a large number of signals at the net and let it try to order them all; what happens inside doesn't matter. But without OpenCL it would take too long. We probably need third-generation networks, as described in the article.
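The filtering part of this idea - before any neural network is involved - can be sketched very simply. The signal values and series below are made up for illustration: a small-timeframe signal is passed through only when the larger timeframe agrees with it.

```python
# Hedged sketch of filtering small-timeframe signals by a larger
# timeframe. Signals are +1 (buy), -1 (sell) or 0 (flat), aligned by bar;
# the actual series here are invented purely for demonstration.

def filter_by_higher_tf(small_tf, large_tf):
    """Pass a small-TF signal only when the large TF confirms it, else 0."""
    return [s if s == l else 0 for s, l in zip(small_tf, large_tf)]

small = [1, -1, 1,  1, -1, 0]
large = [1,  1, 1, -1, -1, 0]
print(filter_by_higher_tf(small, large))  # contradicted signals become 0
```

If each unfiltered signal is right only about 50% of the time but the errors on the two timeframes are not perfectly correlated, requiring agreement trades fewer signals for a higher hit rate - which is the statistical edge described above.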

Still, your comments are of course interesting. I will try to tinker with the previously suggested indicator. Paired with a zigzag (2 outputs), it produces something incomprehensible; I will reduce the number of inputs and leave just that one.

 
Maxim Dmitrievsky:
I haven't heard of anyone making steady money on moving averages.)
It's true that you can hardly make money on moving averages these days. But as a signal filter, they are quite suitable.
 
Serqey Nikitin:

You do realize that all patterns lag behind linear indicators? In that case the neural network is useless.

Hmm... In my opinion, patterns are the fastest input. Any oscillator is slower, and moving averages even more so.
 
Maxim Dmitrievsky:

Actually, my original idea was to train networks on different timeframes and then filter the signals of the smaller timeframes with the larger ones, i.e. to gain a statistical edge that way, allowing for the networks producing some percentage of wrong entries - say, 50/50. Neural networks are built for processing large amounts of data - that is their advantage - so instead of starting from specific, well-defined patterns, you can throw a large number of signals at the net and let it try to order them all; what happens inside doesn't matter. But without OpenCL it would take too long. We probably need third-generation networks, as described in the article.

Still, your comments are of course interesting. I will try to tinker with the previously suggested indicator. Paired with a zigzag (2 outputs), it produces something incomprehensible; I will reduce the number of inputs and leave just that one.

Read up on the selection of informative features. Feeding the network obscure information is not the best approach.

Here is an example:

At work I developed a binary classifier model with 10 discrete variables as inputs, selected in a clever way from 76 features. The prediction is a so-called majority vote: if the fraction of ones strongly outweighs, the answer is one. The quality of this classifier turned out to be no worse than a random forest of 150 trees using my full feature vector of 76 variables! Moreover, the simple model produces human-readable rules, while the forest is a black box.
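The majority-vote rule described here is simple enough to sketch in a few lines. The 0.7 threshold below is an assumption, not the one used in the post; the point is only that prediction reduces to counting ones among the selected binary features.

```python
# Hypothetical illustration of a "majority vote" binary classifier over
# 10 discrete 0/1 inputs: predict 1 only when the share of ones clearly
# outweighs the zeros. The threshold value is an assumption.

def majority_predict(bits, threshold=0.7):
    """bits: sequence of 0/1 features. Returns 1, 0, or None (abstain)."""
    share = sum(bits) / len(bits)
    if share >= threshold:
        return 1
    if share <= 1 - threshold:
        return 0
    return None  # no clear majority: abstain rather than guess

print(majority_predict([1, 1, 1, 1, 1, 1, 1, 0, 1, 1]))  # clear majority of ones
print(majority_predict([1, 0, 1, 0, 1, 0, 1, 0, 1, 0]))  # ambiguous, abstains
```

A rule this simple is trivially human-readable, which is exactly the contrast drawn with the random forest above.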

By the way, a regular multilayer perceptron can itself be used to select features, by analysing the weights of a trained network. You probably know that a network learns worse on correlated inputs and on input-output pairs that contradict each other, so to weed out the bad inputs you have to go through them systematically.
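One simple version of this weight analysis: score each input by the total magnitude of its outgoing first-layer weights, and treat low-scoring inputs as pruning candidates. The weight matrix below is invented for illustration - in practice it would come from a trained perceptron.

```python
# Hedged sketch of ranking inputs by the first-layer weights of a
# trained perceptron: an input whose outgoing weights are all near zero
# contributes little to the hidden layer and can be pruned first.
# The weight matrix here is made up for illustration.

def input_importance(weights):
    """weights[h][j] = weight from input j to hidden unit h.
    Importance of input j = sum over hidden units of |weights[h][j]|."""
    n_inputs = len(weights[0])
    return [sum(abs(row[j]) for row in weights) for j in range(n_inputs)]

# 3 hidden units, 4 inputs; input index 2 has near-zero weights everywhere.
w = [
    [ 0.8, -1.1,  0.02,  0.6],
    [-0.9,  0.7, -0.01, -0.5],
    [ 1.2, -0.6,  0.03,  0.9],
]
print(input_importance(w))  # the third input scores lowest
```

This heuristic only looks at weight magnitudes; it does not detect the correlated-input problem mentioned above, for which pairwise correlation checks on the inputs themselves are still needed.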

 
Алексей:
Read up on the selection of informative features. Feeding the network obscure information is not the best approach.

Here's an example:

At work I developed a binary classifier model with 10 discrete variables as inputs, selected in a clever way from 76 features. The prediction is a so-called majority vote: if the fraction of ones strongly outweighs, the answer is one. The quality of this classifier turned out to be no worse than a random forest of 150 trees using my full feature vector of 76 variables! Moreover, the simple model produces human-readable rules, while the forest is a black box.

By the way, a regular multilayer perceptron can itself be used to select features, by analysing the weights of a trained network. You probably know that a network learns worse on correlated inputs and on input-output pairs that contradict each other, so to weed out the bad inputs you have to go through them systematically.

Yes, that seems to be called the curse of dimensionality. :) Indeed, in my case there is a bunch of identical oscillators at the input; all of that needs to be removed, leaving just one.

Another question: when normalizing the input data, is it better to normalize all the vectors together, in one pass, taking the max and min over the whole set, or to normalize each input separately, taking the max and min of that particular vector?
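The usual answer to this question is per-input (per-feature) min-max scaling, so that each input spans the full range regardless of its natural scale; a single global min/max would squash small-scale inputs into a narrow, nearly constant band. A minimal sketch, with made-up data mixing an RSI-like input in [0, 100] and a price-difference input near zero:

```python
# Sketch of per-feature min-max normalization: each input column is
# scaled to [0, 1] using its own min and max. The data is illustrative.

def minmax_per_feature(rows):
    """rows: list of samples, each a list of feature values.
    Scales each feature column to [0, 1] independently."""
    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [
        [(v - l) / (h - l) if h != l else 0.0
         for v, l, h in zip(row, lo, hi)]
        for row in rows
    ]

# An RSI-like input next to a tiny price-difference input:
data = [[30.0, -0.01], [70.0, 0.0], [50.0, 0.01]]
print(minmax_per_feature(data))  # each column spans [0, 1] on its own scale
```

With a single global min/max over the whole set, the second column here would collapse to values near 0.5 everywhere and carry almost no signal into the network.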

 
Serqey Nikitin:

No one is arguing that it is interesting!

But you can't skip over several steps when learning a specialty - it won't do any good...

A neural net, by itself, will not give the "right output" - it's not a magic wand.

You need the right idea of a trading strategy, which will give a decent result even without the neural net...

It will. And a magic one at that. You just have to know how to prepare the input data.

Then you don't need a neural net.