Article: Price forecasting with neural networks - page 12

 
Another thing we could discuss is the optimal number of synapses in a NN. General considerations suggest there should not be too many, otherwise the NN will overfit; but there should not be too few either, otherwise the network will be under-trained. I have encountered differing opinions on this in the literature, so I would like to hear what esteemed forum members think.
 
Neutron:
Another thing we could discuss is the optimal number of synapses in a NN. General considerations suggest there should not be too many, otherwise the NN will overfit; but there should not be too few either, otherwise the network will be under-trained. I have encountered differing opinions on this in the literature, so I would like to hear what esteemed forum members think.

The number of synapses strongly affects how trainable a network is. I have tried several times to develop a learning method that takes the network's topology into account, but I failed.

 

OK, but I'd like a theory.

Here is some reasoning out loud. For a single-layer NN with N inputs, we have N synapses whose weights are uniquely determined, in the general case, by a system of N non-linear equations. Clearly, to solve such a system we need a training sample of N vectors, each consisting of N elements; it cannot work any other way. For a two-layer NN, the number of inputs must be less than the total number of training vectors N by n, where n is the number of synapses in the second layer, so each training vector has length N - n.

For a 3-layer NS, the order of reasoning is the same.

Thus:

1. Start from the immersion depth we need and determine the dimensionality of the NN input.

2. Then, given the architecture (number of layers) of the NN, count the number of synapses to get the optimal size of the training sample.
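If I've understood the reasoning above correctly, it amounts to: count the synapses (weights) in the chosen architecture, and take that many training vectors. A minimal sketch of that bookkeeping; the layer sizes below are just an example, not from the thread:

```python
def count_synapses(layer_sizes):
    """Total number of weights in a fully connected feed-forward net
    with the given layer sizes (bias terms ignored, as in the text)."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def optimal_sample_size(layer_sizes):
    # One training vector per synapse, per the reasoning above.
    return count_synapses(layer_sizes)

# Example: 5 inputs -> 3 hidden neurons -> 1 output
# gives 5*3 + 3*1 = 18 synapses, hence about 18 training vectors.
```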

 

One of the most important things (in my opinion) is data preparation. To do this:

1. Try to reduce the correlation of the inputs; in other words, the inputs should be as statistically independent as possible.

2. When normalizing the input vectors, aim to increase their entropy, thereby increasing the amount of information presented to the NN while keeping the volume of input data the same.

An obligatory check of data quality is needed as well, e.g. by the Kolmogorov-Smirnov test and/or the Hurst exponent.
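As an illustration of point 1 (decorrelating the inputs), here is a sketch of PCA whitening in NumPy; after the transform, the sample covariance of the inputs is approximately the identity matrix. This is one standard way to do it, not necessarily what the poster had in mind:

```python
import numpy as np

def whiten(X):
    """PCA-whiten the rows of X: centre the data, rotate onto the
    eigenvectors of the covariance matrix, and rescale each component
    to unit variance, so the whitened inputs are uncorrelated."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)
    return Xc @ vecs / np.sqrt(vals + 1e-12)
```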


Selection of the network architecture can reduce the error.

 

Whitening and normalising the inputs is straightforward. It's elementary. But how do we determine the optimal number of inputs (immersion depth)? Can it only be done experimentally? I have some thoughts on this. I can show that there is a local optimum in the number of inputs: there must not be too few, but not too many either. As a rule, the optimum is 3-5 inputs. What does the theory have to say about this?

 
Neutron:

Whitening and normalising the inputs is straightforward. It's elementary. But how do we determine the optimal number of inputs (immersion depth)? Can it only be done experimentally? I have some thoughts on this. I can show that there is a local optimum in the number of inputs: there must not be too few, but not too many either. As a rule, the optimum is 3-5 inputs. What does the theory have to say about this?

The theory says - only by gut feeling, i.e. experimentally. The main thing is informativeness.
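In the "experimental" spirit of that answer, one can simply sweep the immersion depth and keep the one with the lowest out-of-sample error. A toy sketch using a one-layer linear predictor fitted by least squares; all names here are mine, not from the thread:

```python
import numpy as np

def embed(series, depth):
    """Lagged embedding: each row holds `depth` consecutive past values,
    the target is the value that follows them."""
    X = np.array([series[i:i + depth] for i in range(len(series) - depth)])
    y = series[depth:]
    return X, y

def validation_error(series, depth, split=0.7):
    """Fit a linear predictor on the first `split` fraction of the data
    and return its mean squared error on the remainder."""
    X, y = embed(series, depth)
    n = int(len(y) * split)
    w, *_ = np.linalg.lstsq(X[:n], y[:n], rcond=None)
    return float(np.mean((X[n:] @ w - y[n:]) ** 2))

# Sweep depths 1..8 and pick the argmin of the validation error.
```

On real price data the curve of error versus depth is what would reveal the local optimum the poster describes.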

 

What you are saying is sad.

The size of the problem (taken globally), unfortunately, does not allow the question of optimal NN parameters to be answered satisfactorily in a reasonable time. Shall we apply some intellect and come up with criteria?


In general, watching the network work is fascinating! Just as an experiment, I threw together a small (single-layer) net with four inputs and no non-linearity at the output.

I feed ticks to its input, retrain it at every tick, and make predictions one tick ahead. Oh man, it's making predictions! In the picture, red is the tick series, blue is the prediction.

Of course, I understand that the series chosen for prediction is dead simple, but then the network is elementary too.
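A sketch of what such a setup could look like: a single-layer linear "net" with four inputs, updated online at every tick with a normalized LMS rule, predicting one tick ahead. The tick series here is synthetic (a noisy sine), since the original data and code aren't given:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "tick" series: a slow sine plus noise.
ticks = np.sin(np.arange(300) * 0.1) + rng.normal(0.0, 0.05, 300)

n_in, mu = 4, 0.5          # four inputs, NLMS step size
w = np.zeros(n_in)
preds = []
for t in range(n_in, len(ticks)):
    x = ticks[t - n_in:t]               # last four ticks as inputs
    preds.append(w @ x)                 # predict the next tick...
    err = ticks[t] - preds[-1]          # ...then learn from the truth
    w += mu * err * x / (x @ x + 1e-9)  # normalized LMS update
```

Even this trivial model starts tracking a smooth series closely after a few dozen updates, which matches the "oh, it's making predictions!" effect described above.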

 

I work in NeuroSolutions, and it has an option for selecting the network size and the immersion depth.

Better to ICQ (see profile)

 
I have repeatedly found that if I do not thoroughly understand how and why something works, then achieving a positive result is like finding a needle in a haystack: you might find it, but there is no guarantee. That's why I prefer to build everything myself and from scratch; the result is usually better.
 

Good day to you, and sorry for the transliteration.


I am a novice trader, but with a lot of programming experience. I have a question for you regarding neural networks.

Having read this whole forum topic, it is more or less clear to me where everyone is heading with neural networks, so my questions, I think, are fairly specific.


The data:

1. There is a heap of signals, each with the same number of parameters, for a specific situation - this would be the input of the neural network.

2. There is a result - along the lines of good/bad.


So: is it possible, using a neural network trained on this model, to get the network's answer in the future - new data in, answer out: good or bad? What difficulties in training the NN are possible in this scheme?

A satisfactory result for me would be an expression of the strength (weak or strong) of a set of signals with different parameters in a specific situation.
