Neural network folks, don't pass by :) need advice - page 7

 
Summer:
..

The first picture, if I understand what it shows correctly, corresponds to the Overflowing Patterns ideology.

 
alexeymosc:

IMHO - a test sample to monitor network training is essential.

That may be so, but where can we get it from without wasting time?

Mathemat:

Segment B is implicitly involved in training, because B determines when training ends (at the minimum of its error).

In my case, it is not applicable because there is no training as such. The only thing to do is to reconfigure the network based on the results.
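Mathemat's quoted point - that segment B steers training through early stopping - can be sketched with a toy example (all data synthetic; polynomial degree stands in for training epochs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy series: segment A is used for fitting, segment B only for monitoring.
x = np.linspace(0.0, 6.0, 200)
y = np.sin(x) + rng.normal(0.0, 0.3, x.size)
x_a, y_a = x[:150], y[:150]          # segment A: training
x_b, y_b = x[150:], y[150:]          # segment B: "out-of-sample" monitor

best_err, best_deg, patience = np.inf, None, 0
for deg in range(1, 15):             # model capacity stands in for epochs
    coefs = np.polyfit(x_a, y_a, deg)
    err_b = np.mean((np.polyval(coefs, x_b) - y_b) ** 2)
    if err_b < best_err:
        best_err, best_deg, patience = err_b, deg, 0
    else:
        patience += 1
        if patience >= 3:            # B's error stopped improving: stop
            break

# Because B chose when to stop (and which model to keep), B has leaked
# into model selection - it is no longer a fully independent sample.
print(best_deg, round(best_err, 3))
```

The printed degree is the one segment B "voted" for, which is exactly why B cannot later serve as an unbiased estimate of out-of-sample performance.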

Figar0:

Why pairs?) Try some indexes, some gold... I wonder what the result will be.

Probably within the spread. I'll try gold. Where can I get a proper history for indices? The spliced (continuous) series is not traded.

 

I keep forgetting to ask: what would a test of the TC over its training period look like? I wonder "how well" it learns with this NN. As I understand it, least squares gives the best possible result for the system of equations it is fed (I sense some heavy matrix algebra in there :)). But what would that optimum look like if transferred to trading? How smooth would everything be there?

Could you produce the same kind of picture for at least one month of its own training period? The results the NN obtains over the training period also matter and speak for themselves.
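What "the best possible result for the given system of equations" means can be shown on a small synthetic system (all numbers hypothetical): the least-squares solution of the overdetermined system X·w ≈ y minimizes the residual over those equations, which is the "matrix algebra" part.

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined system X w ~ y: more equations (bars) than unknowns (weights).
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -1.0, 2.0]) + rng.normal(0.0, 0.1, 100)

# lstsq returns the in-sample optimum: no other w gives a smaller
# residual sum of squares on *these* equations.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The same optimum via the normal equations (X'X) w = X'y.
w_ne = np.linalg.solve(X.T @ X, X.T @ y)
print(np.allclose(w, w_ne))
```

Note that this optimality is strictly in-sample; how it transfers to new bars is exactly the question being asked above.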

 
Figar0:

I keep forgetting to ask: what would a test of the TC over its training period look like?

https://www.mql5.com/ru/forum/132692/page2#454397

The second picture is one year on a test sample.

 
TheXpert, try, as already suggested here, testing on data from another source. And check for a peek into the future. It all looks very much like self-deception...
 
TheXpert:

It may be, but where to get it from without wasting time?

In my case, it is not applicable because there is no training per se. I can only reconfigure the network based on the results.

Probably within the spread. I'll try gold. Where can I get a proper history for indices? The spliced (continuous) series is not traded.

I have several neural network EAs on my demo account now. I build the nets in a statistical package and connect them via dll files.

So I feel that the selection issues - the training sample size, the control sample size and how it is formed, and the length of the out-of-sample trading period - are very important. I get different results, and it is mainly the drawdown that can be improved. So far the system is trading at a profit (knock on wood), but the optimal parameters can be determined and worked with. Of course it needs forward testing, and that will take some time, but I think the result will be worth it.

 
Belford:

TheXpert try, as has already been said here, to test with data from another source.

What does that mean? The same history from a different broker (DC)?

And check for a peek into the future. It all looks very much like self-deception...

That was the very first thing I checked, and thoroughly. I'd even say such advice offends me. And what exactly looks like self-deception here?

alexeymosc:

Well, I feel with all my being that the selection issues - the training sample size, the control sample size and how it is formed, and the length of the out-of-sample trading period - are very important. I get different results, and it is mainly the drawdown that can be improved. So far the system is trading at a profit (knock on wood), but the optimal parameters can be determined and worked with. Of course it requires some time and forward testing, but I think the result will be worth it.

Good for you, but there is nothing of substance here. Perhaps you could share your experience?
 
TheXpert:

What does that mean? The same history from a different broker (DC)?

That was the very first thing I checked, and thoroughly. I'd even say such advice offends me. And what exactly looks like self-deception here?

Good for you, but there is nothing of substance here. Maybe you can share your experience?


A couple of articles on the topic under discussion: http://www.google.com/url?sa=t&source=web&cd=1&ved=0CBwQFjAA&url=http%3A%2F%2Fmadis1.iss.ac.cn%2Fmadis.files%2Fpub-papers%2F2005%2Flncs-05-whuang-1.pdf&ei=oYOVTarTOYvzsgaEsuGzCA&usg=AFQjCNHZycjABySFlxSQ4sFAVgNK4FXrpQ&sig2=t1p0qXv35VTdnuhetNaTtQ

http://www.google.com/url?sa=t&source=web&cd=3&ved=0CCgQFjAC&url=http%3A%2F%2Fciteseerx.ist.psu.edu%2Fviewdoc%2Fdownload%3Fdoi%3D10.1.1.23.6904%26rep%3Drep1%26type%3Dpdf&rct=j&q=An%20Empirical%20Analysis%20of%20Data%20Requirements%20for%20Financial%20Forecasting%20with%20Neural%20Networks&ei=K4SVTdvoFsbDtAbl9dy7CA&usg=AFQjCNHAlj21APE3Nnc9MJQWI9EUYR7Ug&sig2=Mbp5sVdyCDOhnG3lkQiLw

To summarise the research results: you don't need a very large sample for training. For the D1 timeframe, 1-3 years is suitable... For hourly bars I take up to a year, for 15-minute bars up to half a year, for 5-minute bars up to a quarter. I take the data from the trading server by scrolling back with Page Up.

Your two years on the 15-minute timeframe may be excessive, although I read that you have tried shorter periods. I think no more than half a year is needed.

I'll write later about the test sample (as it is called in the Russian literature; in English it is the validation sample) - I want to run a series of experiments this weekend. General observations: if the test sample is taken immediately before the trading period, the neural network "fine-tunes" to that period while learning on the larger sample. The plus is that, since the test sample is not mixed into the training sample, we give the network data it has not even approximately seen, and that data can be said to reflect the actual state of the market. If the test sample is mixed into the training sample, the error on it is usually smaller, because the network sees the examples surrounding the test examples and the algorithm therefore finds deeper error minima - but it does not follow that new data will reach even a similar result. I have personally obtained and observed this repeatedly.
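The mixed-versus-chronological effect described above can be reproduced with a toy random-walk series and a deliberately neighbour-sensitive model (1-nearest-neighbour; all data synthetic): when test examples are drawn from inside the training span, their temporal neighbours sit in the training set and the measured error is flattered.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic trending "price": a random walk with drift.
n = 600
y = np.cumsum(rng.normal(0.02, 0.1, n))

# Features: the 3 previous values; target: the next value.
X = np.column_stack([y[i:n - 3 + i] for i in range(3)])
tgt = y[3:]

def nn1_err(train_idx, test_idx):
    # 1-nearest-neighbour regressor: rewarded heavily for having
    # "seen" examples adjacent to the test points.
    preds = [tgt[train_idx][np.argmin(np.abs(X[train_idx] - X[i]).sum(axis=1))]
             for i in test_idx]
    return np.mean((np.array(preds) - tgt[test_idx]) ** 2)

idx = np.arange(len(tgt))
# (1) Mixed: test examples drawn from inside the training span.
mixed_test = rng.choice(idx, 100, replace=False)
mixed_train = np.setdiff1d(idx, mixed_test)
# (2) Chronological: the test block sits strictly after the training block.
chron_train, chron_test = idx[:-100], idx[-100:]

err_mixed = nn1_err(mixed_train, mixed_test)
err_chron = nn1_err(chron_train, chron_test)
print(err_mixed < err_chron)
```

The mixed error comes out smaller not because the model generalizes better, but because the evaluation leaks neighbourhood information - which is the observation being reported.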

 
alexeymosc:

To summarize the research results: you don't need a very large sample to train.

Let's leave the training sample aside - I haven't told you the full construction scheme, and there is nothing wrong with it.

General considerations: if the test sample is taken before the trading period, the neural network will make "fine-tuning" for this period, learning on a larger sample space. Plus, because the test sample is not mixed with the training sample, we give the network data it has not even approximately seen yet and that data, we can say, reflects the current state of the market.

And how is that different from extending the training sample window? You are speaking in the context of your particular implementation. Mine is fundamentally different, so I don't understand what you are talking about.

What do you mean by mixed and unmixed? How is the mixing done? And what "fine-tuning" can there be if the network has never seen this data before?

If test sample is mixed with training sample, the error on it is usually less, because the network sees examples surrounding the test sample and thus the algorithm finds deeper error minima - but not the fact that the new data will yield at least similar results. This I have personally obtained and observed repeatedly.

I'm at a loss here, maybe we shouldn't pursue this line of discussion.
 
TheXpert:

What does this mean? The same history from a different broker (DC)?


Preferably on the same history, but from different quote providers.

You should not use the DC's quotes (nor MetaQuotes') because the lower timeframes, especially 1999-2005, are of very poor quality.

Those quotes were smoothed not with a sliding window but over the entire history. In other words, a peek into the future is already embedded in the quotes themselves. Neural networks find it without any trouble.
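The look-ahead effect can be demonstrated on a synthetic series (hypothetical data): a centered smoother uses future bars, so its one-bar change correlates with the next return even when returns are pure noise, while a causal sliding window does not.

```python
import numpy as np

rng = np.random.default_rng(3)
r = rng.normal(0.0, 1.0, 5000)   # i.i.d. returns: genuinely unpredictable
p = np.cumsum(r)                 # synthetic "price"

k, h = 5, 2                      # k = 2*h + 1 bars per window
ts = range(k - 1, len(p) - h - 1)

# Causal MA at bar t: mean of p[t-k+1 .. t] - past bars only.
causal = np.array([p[t - k + 1:t + 1].mean() for t in ts])
# Centered MA at bar t: mean of p[t-h .. t+h] - peeks h bars ahead.
centered = np.array([p[t - h:t + h + 1].mean() for t in ts])

nxt = r[k:len(p) - h]            # the next bar's return r[t+1] for each t

# "Signal": one-bar change of each smoothed series vs the next return.
corr_causal = np.corrcoef(np.diff(causal), nxt[1:])[0, 1]
corr_centered = np.corrcoef(np.diff(centered), nxt[1:])[0, 1]
print(round(corr_causal, 3), round(corr_centered, 3))
```

The centered correlation comes out clearly positive (about 1/√k in theory) while the causal one hovers near zero - exactly the kind of embedded "peek" a network will happily exploit.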