Article: Price forecasting with neural networks - page 11

 
slava1:
I have known for a long time what to feed in, and where. I wanted to discuss possible models, so to speak. It usually works out better as a joint effort. I myself have been working on a robot for a year now. There are results, but they are not very stable.

Well, a year is not enough :)


OK, let's try it, but it will be a one-way game and not yours.


Attached is a pseudo-random series formed by x(t) = 4 * x(t-1) * (1 - x(t-1)), with x(0) = 0.2 (the logistic map at r = 4).

Does it remind you of anything? To a first approximation, this series resembles a data stream in the market.
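A minimal sketch in C (the language the thread later uses for the exported net) that reproduces the series, assuming the formula is the logistic map with its time indices garbled above:

    #include <stdio.h>

    /* Generate the series x(t) = 4*x(t-1)*(1 - x(t-1)), x(0) = 0.2:
       the logistic map at r = 4, deterministic but chaotic. */
    int main(void)
    {
        double x = 0.2;                      /* x(0) from the post */
        for (int t = 0; t < 1000; ++t) {     /* 1000-element sample */
            printf("%d\t%.10f\n", t, x);
            x = 4.0 * x * (1.0 - x);         /* next value of the map */
        }
        return 0;
    }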


The task is to predict the value at t+1. Network architecture: MLP 1-5-1, with an additional synaptic connection running directly from the input neuron to the output neuron.

A quadratic error of 10e-3 is reached at around 60-70 thousand epochs, with a training sample of 1000 elements. Training uses the antigradient (plain gradient-descent) method.
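
Since the post gives the topology but not the training details, here is a minimal self-contained sketch of what such a setup could look like: a 1-5-1 MLP with the direct input-to-output coupling, sigmoid hidden units, a linear output, and online gradient descent. The learning rate and activation choices are assumptions, so the epoch counts will not match the figures above exactly.

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define H 5        /* hidden neurons */
    #define N 1000     /* training samples */

    static double sigmoid(double z) { return 1.0 / (1.0 + exp(-z)); }

    int main(void)
    {
        /* build the training series from the logistic map */
        double x[N + 1];
        x[0] = 0.2;
        for (int t = 0; t < N; ++t)
            x[t + 1] = 4.0 * x[t] * (1.0 - x[t]);

        /* weights: input->hidden (w1,b1), hidden->output (w2,b2),
           plus the direct input->output coupling w_skip */
        double w1[H], b1[H], w2[H], b2 = 0.0, w_skip = 0.0;
        for (int j = 0; j < H; ++j) {
            w1[j] = (double)rand() / RAND_MAX - 0.5;
            b1[j] = 0.0;
            w2[j] = (double)rand() / RAND_MAX - 0.5;
        }

        double lr = 0.1;                         /* assumed learning rate */
        for (int epoch = 0; epoch < 70000; ++epoch) {
            double sse = 0.0;
            for (int t = 0; t < N; ++t) {
                /* forward pass: sigmoid hidden layer, linear output */
                double h[H], y = b2 + w_skip * x[t];
                for (int j = 0; j < H; ++j) {
                    h[j] = sigmoid(w1[j] * x[t] + b1[j]);
                    y += w2[j] * h[j];
                }
                double e = y - x[t + 1];
                sse += e * e;

                /* antigradient step (online backpropagation) */
                b2     -= lr * e;
                w_skip -= lr * e * x[t];
                for (int j = 0; j < H; ++j) {
                    double g = e * w2[j] * h[j] * (1.0 - h[j]);
                    w2[j] -= lr * e * h[j];
                    w1[j] -= lr * g * x[t];
                    b1[j] -= lr * g;
                }
            }
            if (sse / N < 1e-3) {                /* quadratic-error target */
                printf("reached MSE < 1e-3 at epoch %d\n", epoch);
                break;
            }
        }
        return 0;
    }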


By modelling different architectures it is very easy to show that the result is topology-independent: the error does not decrease, nor is it significantly influenced by the complexity of the network, including additional layers.


Now apply the method of artificial examples (padding the training set with synthetic samples). The result: learning speeds up by a factor of about 2.5, i.e. an acceptable error is reached in the region of 30-40 thousand epochs.
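
The post does not spell out what the "artificial examples" method is. One plausible reading (an assumption, not a quote) is padding the observed points with synthetic (input, target) pairs drawn from the known generating law, so the optimiser sees a denser sampling of the curve y = 4x(1-x):

    #include <stdlib.h>

    /* Hypothetical reading of "artificial examples": synthesise extra
       (input, target) pairs directly from the generating law. */
    void add_artificial_examples(double *in, double *target, int n_extra)
    {
        for (int k = 0; k < n_extra; ++k) {
            double xa = (double)rand() / RAND_MAX;   /* uniform in [0,1] */
            in[k]     = xa;
            target[k] = 4.0 * xa * (1.0 - xa);       /* exact t+1 value  */
        }
    }

These extra pairs would simply be appended to the 1000 real samples in the training loop above.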


That's the first example; you can play with it and see the results ...

 
slava1:
I have known for a long time what to feed in, and where. I wanted to discuss possible models, so to speak. It usually works out better as a joint effort. I myself have been working on a robot for a year now. There are results, but they are not very stable.

And about models, as I've already said, look at Reshetov's description of implementations. And don't go looking for the grail, it doesn't exist ;)

 
slava1:
I have known for a long time what to feed in, and where. I wanted to discuss possible models, so to speak. It usually works out better as a joint effort. I myself have been working on a robot for a year now. There are results, but they are not very stable.

I can only say one thing: read the theory once more, carefully (the scientific theory, not the popular literature), and maybe you will find something you missed or did not take into account. If you think you do not need to know the theory when using programs like NeuroShell Day Trader, then there is only one thing to do: leave neural networks alone.

I'll take my leave now.

 
With a 1x5x1 network I'm sure you'll never get any results. The network must have at least two hidden layers. For the network inputs I normalise the readings of 20 indicators. I tried various topologies and settled on 20x140x140x4. The network may be cumbersome, but it produces well-interpreted signals, and of course it can be scaled up. Next I plan to select the topology using a genetic algorithm. The whole process of building and training the net is done in JAVANNS; the trained net is converted to C code, and that code is used to build a decision function in a DLL that can be called from MetaTrader. That is only a rough description of the process, and it is why I have been at it for a year - it is a very large amount of work. I think it is stupid to use tools such as NeuroShell Day Trader if you can build your own network and do it your way. But this is not what I wanted to talk about. I am interested in the approach to the problem of creating a training sample.
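
For concreteness, here is a rough sketch of what the final link of that chain (the DLL decision function called from MetaTrader) might look like. Everything in it is illustrative: net_forward() is a placeholder for whatever routine the generated C code actually exposes, and reading the 4 outputs as competing signals is an assumption based on the 20x140x140x4 topology, not JAVANNS's real API.

    /* Illustrative DLL wrapper around the C code generated from the
       trained net; compiled into a DLL so MetaTrader can call it
       via #import. */
    extern void net_forward(const float *in, float *out);  /* generated net */

    __declspec(dllexport) int __stdcall NetDecision(const double inputs[20])
    {
        float in[20], out[4];
        for (int i = 0; i < 20; ++i)
            in[i] = (float)inputs[i];      /* 20 normalised indicators */

        net_forward(in, out);              /* run the 20x140x140x4 net */

        int best = 0;                      /* pick the strongest of the */
        for (int k = 1; k < 4; ++k)        /* 4 output signals          */
            if (out[k] > out[best]) best = k;
        return best;                       /* signal index for the EA */
    }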
 
slava1:
With a 1x5x1 network I'm sure you'll never get any results. The network must have at least two hidden layers. For the network inputs I normalise the readings of 20 indicators. I tried various topologies and settled on 20x140x140x4. The network may be cumbersome, but it produces well-interpreted signals, and of course it can be scaled up. Next I plan to select the topology using a genetic algorithm. The whole process of building and training the net is done in JAVANNS; the trained net is converted to C code, and that code is used to build a decision function in a DLL that can be called from MetaTrader. That is only a rough description of the process, and it is why I have been at it for a year - it is a very large amount of work. I think it is stupid to use tools such as NeuroShell Day Trader if you can build your own network and do it your way. But this is not what I wanted to talk about. I am interested in the approach to the problem of creating a training sample.

The training sample is what you feed to the inputs; in this case you are feeding 20 indicators. An indicator is a transformation of the initial time series, the price series {H,L,O,C}. If you look at the indicators used in TA from a mathematical point of view, you can assign each to one or another group of mathematical methods - an MA, say, is the simplest frequency filter, and so on. But who said that data prepared with classic TA methods is best for a neural network? I would say the opposite: it is practically unsuitable. I did not give the example of a shallow network built to extrapolate a pseudo-random function for nothing.
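
The "MA is the simplest frequency filter" remark can be made concrete: an N-period simple moving average is a finite impulse response (FIR) filter whose N taps all equal 1/N, i.e. a crude low-pass filter - which is exactly where both its smoothing and its lag come from. A minimal sketch:

    /* An N-period simple moving average as a FIR low-pass filter with
       N equal taps of 1/N; its magnitude response is
       |H(f)| = |sin(pi*f*N) / (N*sin(pi*f))|, f in cycles per bar. */
    void sma(const double *close, double *out, int len, int period)
    {
        double sum = 0.0;
        for (int i = 0; i < len; ++i) {
            sum += close[i];
            if (i >= period)
                sum -= close[i - period];        /* slide the window */
            out[i] = (i >= period - 1) ? sum / period
                                       : close[i];  /* warm-up: raw price */
        }
    }

Note that this response has nulls at f = k/N: cycles whose length divides the window are removed entirely, which is one reason indicator outputs carry less information than the raw series.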


If you investigated it a bit further, you would find a number of very interesting properties that would let you look at the preparation of training samples somewhat differently. A stream of quotes can also be thought of as a pseudo-random function with a complex underlying law. Neural networks are a mathematical method, but the technology of applying them is more of an art.


Yes, by the way, you are mistaken if you think that network size affects the ability to solve the problem.

 
No, on the contrary, I stressed that size is not important (meaning the number of hidden layers and neurons) :-)) Let's drop the demagogy and try to imagine what could serve as training material. I build my strategy precisely on indicator readings, and after a long battle with them I admit that this approach... is not suitable... So the question is... purely philosophical: what else can one try?
 
Once again, what you call art is perhaps predictable. That's exactly my point: the number of input neurons plays a huge role. The more inputs, the higher the probability of correct predictions. That's obvious.
 
slava1:
Once again, what you call art is perhaps predictable. That's exactly my point: the number of input neurons plays a huge role. The more inputs, the higher the probability of correct predictions. That's obvious.


I disagree. The example I gave you, a 1-5-1 network, allows you to predict a pseudo-random sequence with high accuracy.

I would recommend that you re-read the theory a bit, as I get the impression that you don't really understand the mechanism.


Tell me, what do you think a neural network is?

 
Well, we can argue for a long time about who understands what and who doesn't. The discussion was about data preparation, and I see that no one here wants to discuss that issue. A pity.
 
slava1:
Well, we can argue for a long time about who understands what and who doesn't. The discussion was about data preparation, and I see that no one here wants to discuss that issue. A pity.

Dear Sir, it is possible to discuss it. But what is there to discuss? I raised the question of what TA indicators are and how suitable they are for preprocessing data for a NS, and you did not want to discuss it - or rather, I think you missed the point :)

And if you want to give a lecture on how to prepare data for training a NS, this forum is hardly the place for it - few people here would be interested.
