Neural network in the form of a script - page 12

 
TheXpert wrote >>

However, many problems are solved with a 5-layer perceptron; the existence of the theorem does not mean that a 3-layer perceptron is a panacea.

It simply states that all (with very few exceptions) problems can be solved by a 2-layer perceptron with one hidden layer! Yes, about the terminology: it seems that you count the input nodes of the NN (the ones that don't contain neurons) as a layer, and I don't.

Which is better -- a 5-6-6-2 network or its 3-layer replacement, 5-25-2? Such a large number may well turn out to be necessary to provide the proper non-linearity.

I would use an X-Y-1 architecture - it solves the problem. I would pick the number of neurons Y in the hidden layer experimentally, starting from 2 and increasing it until the generalization properties of the network stop improving. In my modest experience, two neurons in this layer are enough for many practical cases. Increasing the number of neurons further lengthens the training, and because of the growing number of synapses you have to enlarge the training sample or the input dimension, which leads to "processing" insignificant information or to a degradation of the approximating properties of the NN (according to Ezhov, these properties fall as 1/d, where d is the number of inputs), etc., etc. Which is not good.

Of course, it is possible to build a ten-layer perceptron, and it will work... but what's the point?
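As an illustration of the hidden-layer search described above (not code from the thread): a minimal Python sketch, assuming scikit-learn is installed and hypothetical arrays X_train, y_train, X_val, y_val supplied by the user. It grows the hidden layer Y from 2 upward and keeps the last size that still improved the validation score.

# Sketch only: grow Y until the generalization estimate stops improving.
# X_train, y_train, X_val, y_val are hypothetical arrays, not from the thread.
from sklearn.neural_network import MLPClassifier

def pick_hidden_size(X_train, y_train, X_val, y_val, max_neurons=20):
    best_score, best_y = -1.0, None
    for y in range(2, max_neurons + 1):
        net = MLPClassifier(hidden_layer_sizes=(y,), activation='tanh',
                            max_iter=2000, random_state=0)
        net.fit(X_train, y_train)
        score = net.score(X_val, y_val)   # estimate of generalization
        if score > best_score:
            best_score, best_y = score, y
        else:
            break                         # stopped improving, keep the previous Y
    return best_y, best_score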

 
Neutron wrote >>

It simply states that all (with very few exceptions) problems can be solved by a 2-layer perceptron with one hidden layer! Yes, about the terminology: it seems that you count the input nodes of the NN (the ones that don't contain neurons) as a layer, and I don't.

I would use an X-Y-1 architecture - it solves the problem. I would pick the number of neurons Y in the hidden layer experimentally, starting from 2 and increasing it until the generalization properties of the network stop improving. In my modest experience, two neurons in this layer are enough for many practical cases. Increasing the number of neurons further lengthens the training, and because of the growing number of synapses you have to enlarge the training sample or the input dimension, which leads to "processing" insignificant information or to a degradation of the approximating properties of the NN (according to Ezhov, these properties fall as 1/d, where d is the number of inputs), etc., etc. Which is not good.

Let's say we have 10 inputs. Are 2 neurons in the hidden layer enough? I don't believe so; it does not converge even on a fairly simple task.

About the input layer: sometimes it is worth making an input layer with thresholds, so it is better to treat it as one more layer, an integral part of the whole system.

 
sergeev wrote >>


Hmmm... is there any way to substantiate this optimum? And I wonder about the 5- and 3-layer ones too. Where is the theory?



About the optimum - this is my personal, possibly mistaken, experience. About the number of layers - I have come across it in practice: it depends on the non-linearity of the input-output mapping, and most problems can be solved with a 3-layer network. As for the theory, sorry, it was a long time ago...

 
TheXpert wrote >>

However, many problems can be solved by a 5-layer perceptron; the existence of the theorem does not imply that a 3-layer perceptron is a panacea.


Which is better -- a 5-6-6-2 network or its 3-layer replacement, 5-25-2? Such a large number may well turn out to be necessary to provide the proper non-linearity.

By the way, do you know the most suitable architecture for XOR?


4 neurons in the middle -- sigmoid.


There is an analytical solution for XOR:


outXOR = in1 + in2 - 2*in1*in2


where the inputs in1 and in2 take values from 0 to 1


Convergence is instantaneous.
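The formula is easy to verify on the four binary input combinations; a quick Python check (plain arithmetic, nothing assumed beyond the formula itself):

# Evaluate outXOR = in1 + in2 - 2*in1*in2 on all binary inputs.
for in1 in (0, 1):
    for in2 in (0, 1):
        outXOR = in1 + in2 - 2 * in1 * in2
        print(in1, in2, '->', outXOR)   # 0 0 -> 0, 0 1 -> 1, 1 0 -> 1, 1 1 -> 0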

 
Reshetov wrote >>


There is an analytical solution for XOR:


outXOR = in1 + in2 - 2*in1*in2


where the inputs in1 and in2 take values from 0 to 1


The convergence is instantaneous.

LOL, there is an analytical solution for every function, but finding it... Sometimes it's very, very difficult.

I gave this example to show once again that a 3-layer perceptron is not always the best option.

 
TheXpert wrote >>

I gave this example to show once again that a 3-layer perceptron is not always the best option.

This problem can also be solved by a 3-layer perceptron with thresholds in the neurons, and a NN based on radial basis functions handles it as well:

In general, there are many variants; the task is to find an adequate one.
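To illustrate the radial-basis idea mentioned above (an assumed construction, not the exact network from the thread): two Gaussian units centred on the "true" XOR patterns (0,1) and (1,0), with an arbitrary width of 0.5, make the problem separable for a simple thresholded read-out. A minimal Python sketch:

# Minimal RBF sketch for XOR; the centres and width are illustrative choices.
import numpy as np

centres = np.array([[0, 1], [1, 0]])
width = 0.5

def rbf_features(x):
    # Gaussian activation of each hidden unit
    return np.exp(-np.sum((x - centres) ** 2, axis=1) / (2 * width ** 2))

for x in np.array([[0, 0], [0, 1], [1, 0], [1, 1]]):
    h = rbf_features(x)
    out = 1 if h.sum() > 0.5 else 0   # linear read-out with a threshold
    print(x, '->', out)               # prints 0, 1, 1, 0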

 
Neutron wrote >>

This problem can also be solved by a 3-layer perceptron with thresholds in the neurons, and a NN based on radial basis functions handles it as well:

In general, there are many variants; the task is to find an adequate one.

Thanks for the picture.

 
Please point me to these books, or at least tell me who the authors are.
 
This is from Haykin. The details are on the previous page of the topic.
 

There is a specific task of writing a script which, at a given depth of history, will produce a solution -

Then it is necessary to determine the specific minimal network configuration and the minimum required number of inputs. That means defining the terms of reference and then bringing the work to implementation together, so that there is a concrete product ready to be attached to a chart to see the result. I have seen something similar in the form of a neuro-indicator on Klot's site.

http://www.fxreal.ru/forums/topic.php?forum=2&topic=1
