Neuro-forecasting of financial series (based on one article)

 

I think we need to work toward finding an optimisation function, i.e. a search for target functions. What do we want the network to find during optimization? Parameters for maximum balance on the same segment? No, because practice shows that this does not hold up in the future. So we need to look for key moments that do not work now but will work in the future. And for that we need to collect statistics and estimate which moments work, when, and how....

Well, look what came to my mind. It may not be realistic, but it is a direction, the goal so to speak.

I do the following.

Everything as I did in the NS screenshots.

1. Train the network on a stretch of 6 months. If the NS is trained on this segment, it will learn these parameters very well and will trade normally.

2. Take the first 3 months and look at the result of the network's trading. (In theory the result should be good, because the network has seen this data.)

3. Optimize the network so that it learns exactly those parameters we already know.

4. Search for, adapt, and develop a function whose optimization leads to these parameters.

5. Having found the function, test it on a segment that was never shown to the network........
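The five steps above amount to a walk-forward scheme: fit on a known window, then score on a segment the network has never seen. A minimal sketch of the splitting logic, where the day counts and the `prices` series are illustrative assumptions, not data from any real terminal:

```python
# Walk-forward split: train on ~6 months, validate on an unseen segment.
# Window lengths (180 / 90 days) are illustrative assumptions.

def walk_forward_split(series, train_len, test_len):
    """Yield (train, test) windows sliding over the series."""
    step = test_len
    for start in range(0, len(series) - train_len - test_len + 1, step):
        train = series[start:start + train_len]
        test = series[start + train_len:start + train_len + test_len]
        yield train, test

prices = list(range(360))          # stand-in for ~12 months of daily data
splits = list(walk_forward_split(prices, train_len=180, test_len=90))
# Each test window starts right after its training window,
# so the network is always scored on data it has never seen.
```

Each later fold retrains on fresher data, which is exactly the "collect statistics on which moments work when" idea from the post.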

Now I'll try to look at this, at least partially, in the NS....

 
Reshetov:

The signal of a neural network trained only on trending sections will not be random; it will be whatever the network was trained to produce. Namely, it will follow the trend and lose money in sideways markets.


I meant sharp changes, unpredictable movements etc.
 
But the bummer is that the target functions in NS are complicated... You can't just pick them out like that...
 
nikelodeon:

I meant sharp changes, unpredictable movements etc.

The net can be taught to sit on the fence when such movements occur. Like this for example: https://www.mql5.com/ru/code/10151

 
nikelodeon:
But the bummer is that in NS the target functions are complicated... You can't just dig them out like that...


Hmm. I usually train neural networks in a statistical package, in which the target function is not the balance value over the training or testing period but the error value at the network's output. This is the classic machine-learning setup, in principle. You can experiment with the error measure: take the sum of squared errors; the sum of squares divided by the number of examples; the absolute error. And that is not the limit.

The trained network then exchanges signals with the robot in dll format. The leeway is enormous...
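The error variants listed above are the standard regression losses. A quick sketch of all three; the function names are mine, not from any particular statistical package:

```python
def sse(y_true, y_pred):
    """Sum of squared errors."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred))

def mse(y_true, y_pred):
    """Sum of squares divided by the number of examples."""
    return sse(y_true, y_pred) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error (the error taken modulo)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.5, 2.0, 2.0]
# sse = 0.25 + 0 + 1.0 = 1.25; mse = 1.25 / 3; mae = 1.5 / 3 = 0.5
```

The choice matters: squared errors punish large misses much harder than the absolute error does, which changes what the trained network considers "close enough".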

 
Reshetov:

The net can be taught to sit on the fence when such movements occur. Like this: https://www.mql5.com/ru/code/10151


You can also use a self-organizing map. On the training examples, you first train the map and measure the maximum error in determining whether an example belongs to a cell or a cluster. Then one or more neural networks are trained on the same data, making, for example, a price-increment prediction one step ahead. In live work, new, previously unseen examples are first checked against the tuned map, and if an example exceeds a predefined error threshold, the prediction network is NOT turned on, or the trading robot ignores the signal from the network. In short, we use the map to detect anomalies and do not trade on them.

A practical example: it is autumn 2008. We have trained the neural network and decided to trade through the quarter. The dynamics collapse on all the major pairs; the map simply filters out the lion's share of examples and does not allow trading decisions. But to me this is all theory, blah blah blah. I haven't tested it in practice.
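The filter described above can be sketched without committing to any particular clustering tool: record the worst quantization error on the training set, then refuse to trade on any new example whose distance to the nearest learned center exceeds that threshold. The toy cluster centers and training points below are illustrative assumptions:

```python
import math

def nearest_distance(x, centers):
    """Distance from example x to its nearest cluster center."""
    return min(math.dist(x, c) for c in centers)

def fit_threshold(train_examples, centers):
    """Maximum quantization error observed on the training set."""
    return max(nearest_distance(x, centers) for x in train_examples)

def allow_trade(x, centers, threshold):
    """Reject the signal (sit on the fence) if x looks anomalous."""
    return nearest_distance(x, centers) <= threshold

centers = [(0.0, 0.0), (1.0, 1.0)]            # toy cluster centers
train = [(0.1, 0.0), (0.9, 1.1), (0.0, 0.2)]  # toy training examples
threshold = fit_threshold(train, centers)

# An autumn-2008-style outlier falls far outside the learned clusters:
allow_trade((5.0, 5.0), centers, threshold)   # -> False
```

The prediction network itself never sees the filtering step; the robot simply drops signals whose inputs the map has flagged as unlike anything seen in training.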
 
f.t.:

What can be interesting about it (apart from the task of training your brain)?

No NS can work without retraining (in the sense of learning from new data). The market is changing and the grid has to learn it. The question is: when to start a new training? ;)

And then, what can you "fix" in the grid when it "breaks"? Change the layers, the number of neurons, use another transfer function... But you will never know exactly what, how, and where to change. Until you re-fit the net to the new market, it won't work. And it's not like debugging if ( Price == Ask ) and seeing that Ask = 1.2345 while Price, for some reason, turns out to be 1.23449999999.
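The Price == Ask point is the classic floating-point pitfall: two prices that print identically can differ in the last binary digits, so equality must be tested within a tolerance rather than exactly. A sketch in Python, where the tick size is an assumed 4-digit quote step:

```python
# Comparing prices by exact equality fails on floating-point noise.
price = 1.23449999999
ask = 1.2345

exact = (price == ask)             # False, despite looking equal on screen

# Compare within half of the instrument's tick size instead.
POINT = 0.0001                      # assumed 4-digit quote step
close_enough = abs(price - ask) < POINT / 2
```

That bug is mechanically findable; a net that has drifted off the market is not, which is exactly the contrast the post is drawing.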

Now imagine a dialogue with a potential investor asking what happened. Guess which answer he would prefer to hear:

1) I'll start training the NS again, and when (if) it learns, it will earn again (if the market hasn't changed again by then);

2) I'll add some debug printing, find the error, and fix it.

So if it's "interest" you're after, you're welcome; but what if you want to earn money? ;)


Already started role-playing? I'm no help here )))) I won't discuss it in this thread; you're on your own.

Because of your limited experience with NS, you regard this tool a priori as curve fitting, a black box, a shot in the dark, etc. The number of layers, neurons, and transfer functions is all useless baggage in the wrong hands. First you must have an idea of why the network should work, and that idea must be tested with extensive forward tests on different parts of the time series. Risk assessment is done in advance, not at the onset of a drawdown. I'm telling you: I personally have not lost a single penny using NS; I have lost only on demo. )

The second thing I wanted to say: a serious neuro adept, after doing all the necessary tests, also takes on the problem of extraction, in plain terms, understanding the heuristics formed inside the NS during training: the rules by which the output signals are generated depending on what is at the input. Black boxes are out. You have to get into the logic of the neural network and understand what it does. Joo also wrote about this, and it is logical. Otherwise it turns out that, in Soviet Russia, the NS controls you. Ha.

 
TheXpert:
How are you getting on with the lectures? :)


Just so-so. I hope they will be available later for gradual mastering. I've been really busy with work lately.

I'll try to pull myself together and watch a few more. I stopped at the lectures on gradient descent, which is almost at the very beginning.
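For reference, the gradient-descent step those introductory lectures start from is just: move the parameters a small step against the gradient of the loss. A minimal one-dimensional sketch on a toy quadratic (the function and learning rate are illustrative):

```python
# Minimize f(w) = (w - 3)^2 by gradient descent; f'(w) = 2 * (w - 3).

def grad(w):
    return 2.0 * (w - 3.0)

w = 0.0                 # starting guess
learning_rate = 0.1
for _ in range(100):
    w -= learning_rate * grad(w)   # step against the gradient
# w converges toward the minimum at 3.0
```

Training a network is the same loop with w replaced by all the weights and grad computed by backpropagation.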

 

Another article on the successful application of neuro-technology to forecasting.

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.184.7175&rep=rep1&type=pdf

It uses chaos-theory techniques to generate the lags of the input vector. It also penalizes the network for wrong predictions. The test sample is 100 days. The results are impressive: 80% or more hits. But here the prediction is only directional: trend up (+2%) or trend down. By the way, from personal experience I would say that trends in the stock market are predicted well; the pitfall is elsewhere: when you are wrong, the loss is big, and it eats up the statistical advantage. Believe it or not, I also got 80% accuracy in my tests.
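The point about large losses eating the statistical advantage is plain expectancy arithmetic: at an 80% hit rate, the average loss per wrong trade only needs to exceed four times the average win for the edge to turn negative. A sketch with illustrative numbers:

```python
def expectancy(hit_rate, avg_win, avg_loss):
    """Average profit per trade given accuracy and win/loss sizes."""
    return hit_rate * avg_win - (1.0 - hit_rate) * avg_loss

# 80% hits with +2% wins is profitable only while losses stay moderate:
good = expectancy(0.8, 2.0, 4.0)    # 0.8*2 - 0.2*4  = +0.8 per trade
bad = expectancy(0.8, 2.0, 10.0)    # 0.8*2 - 0.2*10 = -0.4 per trade
```

So a high hit rate on its own says nothing; the win/loss asymmetry has to be measured on the same test sample.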
