Neural network - page 2

 
xweblanser >> :

Thank you very much. Sorry there aren't many comments, but I'll try to figure it out...

There is a good collection of PDFs with research on predicting financial markets using neural networks here: www.nnea.net/downloads. You need to register. Have a look at the research section as well.

 
marketeer >> :

A trader does not really need to understand the inner workings of a neural network. For him it is a black box with inputs and outputs. There are many ready-made networks in the public domain, including on this website - just type "neural networks" into the search box. One of the latest publications, for example, is Predictor based on a self-learning neural network. The main problem in using a neural network is choosing what data to feed in and train on, how to prepare that data, what the structure and size of the network should be, and so on. For example, we take the already mentioned network and try to train it the way Yezhov and Shumsky did (see Neurocomputing and its Application in Economics and Business, which I recommend)... and we end up with a flop. There may be many reasons for this. And here the trader's work begins: to figure out what may have changed since then (or what the authors omitted ;-) ), and what to change in the settings and input data.

Well, I'm a trader, but mostly I'm a programmer... I wanted to write a neural network for myself, and at the same time prove to myself that I can...

 
njel >> :

There is a good collection of PDFs with research on predicting financial markets using neural networks here: www.nnea.net/downloads. You need to register. See also the research section.

>> thank you.

 
xweblanser >> :

1. As far as I understand, each neuron of the network is the same function... but I don't understand how the same function can produce different values when the same data comes in...

The inputs are multiplied by different weights in each neuron, so the function values will be different. After thoroughly studying neural networks and trying various learning algorithms, from gradient descent to genetic optimization, I came to the conclusion that the mathematical apparatus of neural networks is not perfect. A neural network is designed to approximate a non-linear function. According to Kolmogorov's theorem, such a network is capable of realizing any continuous function. In practice, however, the parallelism of the network leads to a multitude of local minima and realizations of the function being modelled. Take the network shown below as an example. It has one input, one output, and one hidden layer with two neurons. Each hidden neuron multiplies the input x by its weight (w1 or w2) and passes the result through the activation function (say tanh); the obtained values are summed at the network output. For simplicity, assume the bias inputs are zero and the output neuron weights are both equal to 1.


Now let's set up a function approximation problem. Suppose our target function is t = cos(x) (t stands for target). The network computes its approximation by the formula

y = tanh(w1*x) + tanh(w2*x)
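(For the programmers in the thread, here is a minimal sketch of that output formula in Python with NumPy. This is just my own illustration, not code from the attached MathCAD file.)

```python
import numpy as np

def net_output(x, w1, w2):
    # One input, two tanh hidden neurons with weights w1 and w2,
    # zero biases, output weights fixed at 1:  y = tanh(w1*x) + tanh(w2*x)
    return np.tanh(w1 * x) + np.tanh(w2 * x)
```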

Training (or teaching) the network means finding the weights w1 and w2 at which the network output y is closest to the value of our function t. This is achieved by minimising the sum of squared errors

E(w1,w2) = sum((t[k]-y[k])^2,k=0...p-1)

where the summation runs over the training data x[k], t[k]. Let's see what the surface of the objective function E(w1,w2) looks like when there is no noise in the measurements, i.e. t[k] = cos(x[k]):


This graph shows that there is an infinite set of solutions (w1,w2) minimizing our objective function E (notice the flat valleys). It is not difficult to understand why: the network is symmetric with respect to w1 and w2. The result of training the network will be different for different choices of the initial values of w1 and w2. Since these initial values are always chosen randomly, repeated training of the network on the same training data x[k], t[k] will lead to different values of the optimized weights w1 and w2. There is essentially no single global minimum here. Or, put differently, an infinite set of local minima are also global minima.
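To make the symmetry argument concrete, here is a continuation of the Python sketch above (again my own illustration; the 50-point training set and the weight grid range are arbitrary choices, not taken from the MathCAD file). It implements the error E(w1,w2) from the formula above and checks that swapping w1 and w2 leaves it unchanged, which is exactly why every solution has a mirror twin:

```python
def error(w1, w2, x, t):
    # Sum of squared errors over the training set (x[k], t[k]):
    # E(w1,w2) = sum((t[k] - y[k])^2, k = 0..p-1)
    y = net_output(x, w1, w2)
    return np.sum((t - y) ** 2)

# Noise-free training data: t[k] = cos(x[k])
x = np.linspace(0.0, 2.0 * np.pi, 50)
t = np.cos(x)

# The network output is symmetric in (w1, w2), so E(a, b) == E(b, a):
print(error(1.2, -0.4, x, t), error(-0.4, 1.2, x, t))

# Scanning E over a grid of weights reproduces the kind of surface shown above
w = np.linspace(-3.0, 3.0, 121)
E = np.array([[error(w1, w2, x, t) for w1 in w] for w2 in w])
print(E.min())   # lowest value found on the grid (compare the flat valleys in the plot)
```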

Now let us complicate the problem by adding noise to the series: t[k] = cos(x[k]) + rnd. This noisy series is statistically more similar to a price series than the perfect cosine:


Now the surface of our objective function E(w1,w2) looks like this:


Notice the many local peaks, both on the ridges and in the valleys. Let's zoom in on one of the valleys:


Here the set of local minima is clearly visible. Now imagine optimizing E(w1,w2) by gradient descent. Depending on the initial values of w1 and w2, the descent will lead to a different minimum. Moreover, that local minimum could be either on a ridge or in a valley. Genetic optimization here will only help to descend from a ridge into one of the valleys and then get stuck at one of its local minima. The situation becomes much more complicated if, besides w1 and w2, the output neuron weights (which were fixed at one above) are also optimized. In that case we have a 4-dimensional space with a huge number of local minima with coordinates (w1,w2,w3,w4).
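To see that sensitivity to the starting point directly, here is a rough continuation of the same Python sketch (my own illustration; the noise level, learning rate and iteration count are arbitrary): plain gradient descent is started from several random initial weights on the noisy data, and different runs can settle at different (w1, w2) with different final errors.

```python
rng = np.random.default_rng(0)
t_noisy = np.cos(x) + 0.3 * rng.standard_normal(x.size)   # t[k] = cos(x[k]) + rnd

def grad_E(w1, w2, x, t):
    # Analytic gradient of E for y = tanh(w1*x) + tanh(w2*x)
    d = -2.0 * (t - net_output(x, w1, w2))                 # dE/dy[k]
    g1 = np.sum(d * x * (1.0 - np.tanh(w1 * x) ** 2))      # dE/dw1
    g2 = np.sum(d * x * (1.0 - np.tanh(w2 * x) ** 2))      # dE/dw2
    return g1, g2

for trial in range(5):
    w1, w2 = rng.uniform(-2.0, 2.0, size=2)                # random initial weights
    for _ in range(5000):                                  # fixed-step gradient descent
        g1, g2 = grad_E(w1, w2, x, t_noisy)
        w1 -= 1e-4 * g1
        w2 -= 1e-4 * g2
    print(f"trial {trial}: w1={w1:+.3f}  w2={w2:+.3f}  E={error(w1, w2, x, t_noisy):.3f}")
```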

With this simplified description of neural network behaviour I wanted to show that the parallelism of the network (or the symmetry of its output with respect to the weights of neurons of the same layer) makes its training (the optimization of these weights) difficult because of the presence of an infinite set of local minima, especially for chaotic series like price series.

I attach the MathCAD file in which the above calculations were performed.

Files:
nnrsimplea2.zip  699 kb
 
gpwr wrote >> With this simplified description of neural network behaviour I wanted to show that the parallelism of the network (or the symmetry of its output with respect to the weights of neurons of the same layer) makes its training (the optimization of these weights) difficult because of the presence of an infinite set of local minima, especially for chaotic series like price series.

One question - how does this affect profitability?

 
LeoV >> :

One question - how does this affect profitability?

Do you have a network that generates consistent profits?

 
LeoV >> :

One question - how does this affect profitability?

Profitability is absolutely affected. There is no guarantee of finding the right, sufficiently deep local minimum that would be adequate for implementing a profitable neural-network-based TS.

 
gpwr >> :

Which MathCad do you use? Your calculations do not open in Mathcad 13.

 

The point of minimizing/maximizing the objective function E(w1,w2) is to find a global extremum. And if there are a million of these global extrema, what difference does it make to us which one the NN falls into?

It is worse if it gets stuck at one of the local minima/maxima. But that is no longer the NN's problem; it is a problem of the optimization algorithm.


LeoV >> :

>> One question - how does this affect profitability?

What gpwr described doesn't affect it.

 
Urain >> :

Which MathCad do you use? I can't open your calculations in Mathcad 13.

>> Mathcad 14. I attach the same file saved in version 11 format.

Files:
nnosimplem2.zip  14 kb