Article: Price forecasting with neural networks - page 15

 
Neutron:

Addendum.

It carried on and on, and finally delivered ;-)

This is the "EXACT" analytical solution for the weights of a single-layer nonlinear NN:

...

It made me smile.

But the whole calculation takes 1 millisecond.

Theoretically, any NN is equivalent to a system of equations. And if it is simple enough, it is cheaper to write this system out in analytical form and solve it for the weights. Problems, and with them the need for cunning training methods, only arise as the network (i.e. its equivalent system of equations) becomes more complex. It was on realising this that I once suspended my acquaintance with networks, which I had started out of curiosity. I simply decided that I must first come up with a model and only then look for the most economical way of solving it. It still seems to me that NN methods are justified only for very complex models :) .
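To make the idea concrete: for a single-layer perceptron with output y = tanh(w·x), applying atanh to the targets turns the network equations into a system that is linear in the weights, which can then be solved directly instead of trained. A minimal numpy sketch of this idea (the data and the tanh activation are illustrative assumptions; the post's actual derivation is not shown):

import numpy as np

# Toy data: 100 samples, 3 inputs (purely illustrative)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([0.5, -0.2, 0.1])
y = np.tanh(X @ w_true)              # targets generated by the "network" itself

# The network equation y = tanh(X @ w) becomes LINEAR in w after atanh:
#     atanh(y) = X @ w
# so the weights are found by solving an ordinary (least-squares) system.
z = np.arctanh(np.clip(y, -0.999999, 0.999999))   # guard atanh's domain
w, *_ = np.linalg.lstsq(X, z, rcond=None)
print(w)   # recovers w_true up to numerical noise

As the text says, this only works while the equivalent system stays simple; with hidden layers the system becomes nonlinear in the weights and no such direct inversion exists.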

 
Jhonny:

Explain this to me as a novice "neuroscientist"... I understand that the network in question is a multilayer perceptron.

What is the reason for choosing this type of network, why not Hopfield or Kohonen or something else?

It's more of a philosophical question. The point is that there are infinitely many "complex" NN architectures, while the simplest one is unique! It's the single-layer perceptron, here implemented with a non-linear activation function.


to Candid

Theoretically, any NN is equivalent to a system of equations. And if it is simple enough, it is cheaper to write this system out in analytical form and solve it for the weights. Problems, and with them the need for cunning training methods, only arise as the network (i.e. its equivalent system of equations) becomes more complex. It was on realising this that I once suspended my acquaintance with networks, which I had started out of curiosity. I simply decided that I must first come up with a model and only then look for the most economical way of solving it. It still seems to me that NN methods are justified only for very complex models :) .

Right. We can also mention the non-stationarity inherent in market time series, which kills traditional statistical methods and gives one more point in favour of NNs.


By the way, I have worked out the analytical solution of the system of equations for the NN weights. The figure shows in black the usual training of the NN by error backpropagation (1000 epochs), and in blue the analytical solution. The NN is retrained at each step; the prediction is one step ahead.
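For readers who want to reproduce this setup: "retrained at each step, prediction one step ahead" is a walk-forward loop. A minimal sketch, reusing the analytical least-squares solution from the earlier example; the window size, lag count and tanh-squashed increments are illustrative assumptions, not the author's actual settings:

import numpy as np

def fit_weights(X, y):
    # Analytical weight solution for y = tanh(X @ w): linearise with atanh
    z = np.arctanh(np.clip(y, -0.999999, 0.999999))
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    return w

rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(size=500))   # stand-in for a price series
dx = np.tanh(np.diff(price))              # squashed increments, kept in (-1, 1)

lags, train_len = 4, 100
preds = []
for t in range(train_len + lags, len(dx)):
    window = dx[t - train_len - lags : t]             # trailing training window
    X = np.array([window[i : i + lags] for i in range(train_len)])
    y = window[lags:]                                 # each row predicts the next value
    w = fit_weights(X, y)
    preds.append(np.tanh(dx[t - lags : t] @ w))       # one-step-ahead forecast of dx[t]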


It can be seen that the analytical variant is incorrect on the trending section of the series, for reasons I do not understand. But the calculation takes 10 seconds with backpropagation versus 0.001 seconds for the analytical method.

A performance gain of 10,000 times!

In general, watching the network work is fascinating. It seems to sense the price trend and is drawn to it in its forecast, like a magnet!

 
Neutron:
Jhonny:

Explain this to me as a novice "neuroscientist"... I understand that the network in question is a multilayer perceptron.

What is the reason for choosing this type of network, why not Hopfield or Kohonen, or something else?

It's more of a philosophical question. The point is that there are infinitely many "complex" NN architectures, while the simplest one is unique! It's the single-layer perceptron, here implemented with a non-linear activation function.

...

In general, watching the network work is fascinating. It seems to sense the price trend and is drawn to it in its forecast, like a magnet!

The question is of course philosophical. I am a beginner as well. But why not a probabilistic network, for example? Especially after the well-known events of last year. True, the task there is somewhat different: if prediction is close to filtering an observable parameter, then probability estimation is closer to filtering an unobservable one, but it immediately solves the problem of moving into the decision domain.

P.S. By the way, the running time of a probabilistic network is incomparably shorter than that of a multilayer perceptron, and its trainability is no worse.

P.P.S. When I watched Better's balance curve during the championship, I was fascinated too. :-)

 
Neutron:

I want to dig deeper... For example, how justified is the complexity of the NN (hidden layers)?

It depends on what is fed to the input. If you feed in bars (their increments) straight from the market, then it will be very justified.


You don't see it in your examples because your charts are smooth - piecewise monotonic. Even a single-input NN does the job there, i.e. the analogue of the most primitive trend-following trading system. That problem is solved perfectly well without an NN at all, let alone with multilayer networks...

Neutron:

Anyway, I'm going crazy! The point is that if you represent the nonlinearity of the NN in a certain form, you can get an exact analytical solution for the weights. This, in turn, means that it will be possible to do away with the backpropagation method for training the network and obtain the most accurate possible result in a single step, without any 1000 epochs of training!!!

It must be understood that NN training is a function optimisation problem, and its analytical solution is much more complicated than solving a system of equations. Look at the analytical derivation of linear regression in one variable (the analogue of the simplest perceptron)... What do you think? Now imagine what the solution would look like with many variables and a function nonlinear to the nth degree (the analogue of a multilayer NN)... :-)
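For reference, the closed-form solution of one-variable linear regression that the post points to; a short sketch of the textbook formulas (data made up):

import numpy as np

# Closed-form (analytical) simple linear regression y ≈ a*x + b,
# the "analogue of the simplest perceptron" mentioned above.
rng = np.random.default_rng(2)
x = rng.normal(size=200)
y = 2.0 * x + 1.0 + 0.1 * rng.normal(size=200)

a = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # slope = cov(x, y) / var(x)
b = y.mean() - a * x.mean()                     # intercept
print(a, b)   # ≈ 2.0, 1.0

Once the model is nonlinear in its parameters (hidden layers), no such closed form exists in general, and one is back to iterative optimisation.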


NNs were invented to simplify life. And you don't have to train the network by backpropagation at all. It is the simplest - but also the slowest - algorithm; there are algorithms that are orders of magnitude faster.
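The post does not name the faster algorithms; quasi-Newton methods such as L-BFGS are one common example. A minimal sketch training a one-hidden-layer net with scipy's optimiser (the architecture and data are placeholders, not anything from the thread):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 4))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)   # toy target

n_hidden = 5
n_w1 = 4 * n_hidden                                # hidden-layer weight count

def mse(params):
    W1 = params[:n_w1].reshape(4, n_hidden)        # input -> hidden weights
    w2 = params[n_w1:]                             # hidden -> output weights
    pred = np.tanh(X @ W1) @ w2
    return np.mean((pred - y) ** 2)

# L-BFGS-B approximates the gradient by finite differences when jac is omitted
res = minimize(mse, 0.1 * rng.normal(size=n_w1 + n_hidden), method="L-BFGS-B")
print(res.fun)   # training error after convergence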

 
ds2:
Neutron:

I want to dig deeper... For example, how justified is the complexity of the NN (hidden layers)?

It depends on what is fed to the input. If you feed in bars (their increments) straight from the market, then it will be very justified.


You don't see it in your examples because your charts are smooth - piecewise monotonic. Even a single-input NN does the job there, i.e. the analogue of the most primitive trend-following trading system. That problem is solved perfectly well without an NN at all, let alone with multilayer networks...

ds2, could you please give an argument in favour of a multilayer perceptron over a single-layer one for forecasting time series such as prices, if their immersion depth (number of lagged inputs) is the same? I'd like to see the argument in the form of a graph of predictive ability.

 
Neutron:
ds2:

It depends on what is fed to the input. If you feed in bars (their increments) straight from the market, then it will be very justified.

You don't see it in your examples because your charts are smooth - piecewise monotonic.

ds2, could you please give an argument in favour of a multilayer perceptron over a single-layer one for forecasting time series such as prices, if their immersion depth (number of lagged inputs) is the same? I'd like to see the argument in the form of a graph of predictive ability.

Well, it's obvious if you understand how the network works. A single-layer network can only describe monotonic dependencies, while a multilayer network can describe any dependency.
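This claim is easy to check numerically: a single-layer perceptron tanh(w*x + b) is monotonic in its input for any w and b, so it cannot fit a non-monotonic target such as a parabola, while one hidden layer can. A sketch under those assumptions (sizes and data are illustrative):

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
x = np.linspace(-2, 2, 100)
y = x ** 2 / 4.0                     # non-monotonic target, scaled into [0, 1]

def fit(model, n_params):
    loss = lambda p: np.mean((model(p) - y) ** 2)
    return minimize(loss, 0.5 * rng.normal(size=n_params), method="L-BFGS-B").fun

# Single layer: tanh(w*x + b) is monotonic in x for any w, b
err_single = fit(lambda p: np.tanh(p[0] * x + p[1]), 2)

# One hidden layer of 3 tanh units can bend in both directions
def mlp(p):
    W1, b1, w2 = p[0:3], p[3:6], p[6:9]
    return np.tanh(np.outer(x, W1) + b1) @ w2

err_multi = fit(mlp, 9)
print(err_single, err_multi)   # the multilayer error typically comes out far
                               # smaller; the single layer is stuck near the
                               # best monotone approximation of the parabola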


If we compare the capabilities of NNs with known types of mechanical trading systems, a single-layer network can imitate the operation of MACD, but only a multilayer network can recognise Pesavento patterns.


You can experiment and see for yourself. You have all the tools and data for that.

 

Has anyone written the Kolmogorov-Smirnov test for evaluating inputs, or the Spearman rank correlation, in MQL?
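No ready MQL version appears in the thread, but both statistics are one-liners in scipy, which at least gives a reference output to validate an MQL port against; a sketch with made-up sample data:

import numpy as np
from scipy.stats import ks_2samp, spearmanr

rng = np.random.default_rng(4)
a = rng.normal(size=300)             # e.g. an indicator on one group of trades
b = rng.normal(loc=0.3, size=300)    # e.g. the same indicator on another group

# Kolmogorov-Smirnov: do the two samples come from the same distribution?
ks_stat, ks_p = ks_2samp(a, b)

# Spearman rank correlation between a candidate input and a target
rho, rho_p = spearmanr(a, b)

print(ks_stat, ks_p, rho, rho_p)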

 
I haven't got around to the test yet, and the rank correlation isn't hard to find - "Spearman's Rank Correlation Coefficient".
 
Curious whether anyone has it? Or has this info already been posted and I missed it? Point me to the link, please.

That link isn't accessible even after registering. Or maybe it describes the path to the Grail ;-).


http://www.maikonline.com/maik/showArticle.do?auid=VAF0AMGLSL&lang=en PDF 274kb

A NEURAL NETWORK FRAMEWORK FOR MODELING A COMPLEX FUNCTION OF MANY VARIABLES ON SAMPLED DATA
E. V. Gavrilova, O. A. Mishulina, M. V. Shcherbinina. Izvestiya RAN: Theory and Control Systems, No. 1, January-February 2007, pp. 73-82.

We consider the problem of approximating a function of many variables that is measured with additive noise and has qualitatively different dynamic properties in certain subdomains of its domain of definition. We propose a solution to this problem using the specialised modular neural network structure LINA, and formulate the rules for its training and operation.
 

When training in anfisedit, the following error keeps appearing:

??? Error using ==> anfisedit
Invalid handle object.

??? Error while evaluating uicontrol Callback.

I prepared the data, but not from the absolute rate as in the article - from the candle type instead: if the closing price of the daily candle is above the opening price, then 1; if the opposite, -1. At first I had 10 inputs, but my laptop couldn't handle it. I reduced the number of inputs to 4 as described in the article; the architecture is displayed and everything looks normal, but when I press Train, after a couple of epochs the training stops, "Train Error" is written over the chart, and the command line shows the error quoted at the beginning. What is the problem? Maybe the Matlab version is buggy? I have Matlab 7.0.1.24704 (R14) Service Pack 1.
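The candle encoding described above is easy to reproduce; a tiny sketch, assuming arrays of open and close prices (the values are made up):

import numpy as np

# Encode each daily candle as +1 (close above open) or -1 (close below),
# as described above; zeros mark doji candles where close == open.
open_ = np.array([1.10, 1.12, 1.11, 1.11])
close = np.array([1.12, 1.11, 1.13, 1.11])
direction = np.sign(close - open_)
print(direction)   # [ 1. -1.  1.  0.]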

ANFIS didn't work out, so I decided to try NNTool. I loaded the inputs (4 of them) and the outputs separately, specified the parameters and created a network - but when viewing the network architecture it shows 2 inputs. I pressed Train anyway and got an error:

Error using ==> network/train
Inputs are incorrectly sized for network.
Matrices must all have 2 rows.
Was there somewhere I was supposed to set the number of inputs?
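A likely cause, offered with the caveat that the exact nntool session isn't shown: in the MATLAB NN toolbox the number of inputs is inferred from the number of rows of the input matrix (one row per input, one column per sample), so a table loaded as samples x inputs defines the wrong number of inputs and must be transposed first. A numpy stand-in sketch of the required layout:

import numpy as np

# MATLAB's NN toolbox reads input matrices as (inputs x samples):
# one ROW per input variable, one COLUMN per sample.  A table loaded
# as (samples x inputs) therefore creates a network with the wrong
# number of inputs and later fails with a size mismatch at train time.
data = np.arange(20.0).reshape(5, 4)   # stand-in for 5 samples of 4 inputs
P = data.T                             # 4 x 5: the layout the toolbox expects
print(P.shape)                         # (4, 5) -> a 4-input network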

Reason: