Hybrid neural networks.

 
rip >> :

MSE

Strangely, when I started applying this same tool to forex with MSE, it didn't work. I think you have to use a completely different error measure.

 
registred >> :

Strangely, when I started applying this same tool to forex with MSE, it didn't work. I think you have to use a completely different error measure.



OK, which one? For supervised learning, I think MSE is enough.
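For reference, a minimal sketch of the MSE being discussed, in plain NumPy; the toy numbers are purely illustrative:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error between teacher targets and network outputs."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean((y_true - y_pred) ** 2))

# Toy targets vs. network outputs
print(mse([0.5, -0.2, 0.1], [0.4, -0.1, 0.0]))  # 0.01
```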

 
rip >> :

OK, which one? For supervised learning, I think MSE is enough.


I'm thinking about it. But I know for sure that MSE is no good, at least for me, for the reason stated above. I believe that for a neural network the teacher (the rule) is the error function: since the network is a universal approximator, we must somehow measure the extent and quality of that approximation while taking market returns into account, so as to bring the series to a more stationary form, so to speak. If you have some ideas about this, we can discuss.
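One way to read this idea, sketched here as a purely hypothetical loss rather than anything proposed in the thread: weight each squared error by the magnitude of the realized return, so that mistakes on large market moves cost more than mistakes on quiet bars. All names are illustrative:

```python
import numpy as np

def return_weighted_mse(y_true, y_pred, returns):
    """Hypothetical error: squared error weighted by the absolute
    realized return, so large market moves dominate the objective."""
    w = np.abs(np.asarray(returns, dtype=float))
    w = w / (w.sum() + 1e-12)  # normalize weights to sum to 1
    err = (np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)) ** 2
    return float(np.sum(w * err))
```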

 
registred >> :


I'm thinking about it. But I know for sure that MSE is no good, at least for me, for the reason stated above. I believe that for a neural network the teacher (the rule) is the error function: since the network is a universal approximator, we must somehow measure the extent and quality of that approximation while taking market returns into account, so as to bring the series to a more stationary form, so to speak. If you have some ideas about this, we can discuss.


If the series could be brought to a stationary form, I think there would be no forex as such :)
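For what it's worth, the usual first step toward a "more stationary form" is to feed the network differences or log returns instead of the price itself; a minimal sketch with made-up quotes:

```python
import numpy as np

prices = np.array([1.3050, 1.3062, 1.3041, 1.3075, 1.3068])  # made-up quotes

diffs = np.diff(prices)                # first differences
log_returns = np.diff(np.log(prices))  # log returns

print(diffs)
print(log_returns)
```

Neither transform makes a real price series truly stationary, of course; it only removes the most obvious trend component.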

 

In short, there have already been examples of successful applications of neural networks; even more briefly, we need to keep working on this.

 
registred >> :

In short, there have already been examples of successful applications of neural networks; even more briefly, we need to keep working on this.

Why "have been"? There still are. If memory serves me correctly, in 2007 Better won precisely with a probabilistic NN.

The task of the network was to predict the movement of the exchange rate.

 
rip >> :

Why "have been"? There still are. If memory serves me correctly, in 2007 Better won precisely with a probabilistic NN.

The task of the network was to predict the movement of the exchange rate.



Yes, I read about that. The probabilistic network has one big problem: it is sensitive to noise in the data. Put simply, unlike BackProp and other iterative methods, it involves little empirical fitting.

 
registred wrote >>

The probabilistic network has one big problem: it is sensitive to noise in the data.

Can you elaborate on that? I think I've seen different views in the literature.

 

A PNN is less sensitive to noise in the data than similar methods (e.g. k-NN), but relative to an MLP and similar methods the picture is likely to be reversed...

 
lea >> :

Can you elaborate on that? I think I've seen different views in the literature.


And what parameter is there to tune? Sigma? How would you tune it? How do you find the optimal value? These questions are not quite clear to me. Another matter is that for an MLP the type of error function is the one essential parameter; I still insist on that. Of course, an MLP gets stuck in local minima, and there are methods of fighting that. In any case, for many tasks the optimal solution for an MLP may be found without reaching the global minimum. If you've got something working with a PNN, that's a very good thing.
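To make the sigma question concrete: a minimal Parzen-window PNN sketch, where sigma is the Gaussian kernel width, chosen here by leave-one-out accuracy over a grid. This is a generic textbook construction, not anything from this thread, and all names are illustrative:

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma):
    """PNN (Parzen window) classifier: each test point gets the class
    whose training examples give the largest mean Gaussian kernel."""
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)  # squared distances
        k = np.exp(-d2 / (2.0 * sigma ** 2))     # kernel responses
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def tune_sigma(X, y, grid=(0.05, 0.1, 0.2, 0.5, 1.0)):
    """Pick sigma by leave-one-out accuracy -- one plausible answer
    to 'how will you adjust it?'."""
    best_sigma, best_acc = grid[0], -1.0
    for s in grid:
        hits = 0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i  # hold out point i
            pred = pnn_predict(X[mask], y[mask], X[i:i + 1], s)
            hits += int(pred[0] == y[i])
        acc = hits / len(X)
        if acc > best_acc:
            best_sigma, best_acc = s, acc
    return best_sigma
```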
