Machine learning in trading: theory, models, practice and algo-trading - page 870

 
Elibrarius:
I mean how many rows of training data (or training examples).
For example, 10,000 rows with 15 inputs.

From memory. Point 1: about 5-10 thousand. Every few epochs the data is shuffled.

After a certain number of epochs, the training sequence is replaced by another one, and we go back to point 1. And so on, several times.

The total number of epochs is around 1000. Training time, with intermediate reconfigurations and tests, is about one day.

The chart above is from my first experiments; everything was simpler back then.
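As an illustration, here is a minimal Python sketch of that schedule; scikit-learn's MLPRegressor and the random arrays are stand-ins (my assumptions), not the actual network or data from the thread:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Stand-ins for the real history: several training sequences,
# each ~5000 rows x 15 inputs (per the numbers quoted above).
sequences = [(rng.normal(size=(5000, 15)), rng.normal(size=5000))
             for _ in range(4)]

net = MLPRegressor(hidden_layer_sizes=(30, 30), learning_rate_init=1e-3)

EPOCHS_PER_SEQ = 250   # 4 sequences x 250 epochs ~ 1000 epochs total
SHUFFLE_EVERY = 5      # re-shuffle the rows every few epochs

for X, y in sequences:            # swap in a new training sequence
    order = np.arange(len(X))
    for epoch in range(EPOCHS_PER_SEQ):
        if epoch % SHUFFLE_EVERY == 0:
            rng.shuffle(order)    # mix the data
        net.partial_fit(X[order], y[order])   # one pass = one epoch
```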

 
Yuriy Asaulenko:

From memory. Point 1: about 5-10 thousand. Every few epochs the data is shuffled.

After a certain number of epochs, the training sequence is replaced by another one, and we go back to point 1. And so on, several times.

The total number of epochs is around 1000. Training time, with intermediate reconfigurations and tests, is about one day.

The chart above is from my first experiments; everything was simpler back then.

Interesting method: you get primary training plus some additional training.
 
elibrarius:
Interesting method: you get primary training plus some additional training.

I wouldn't call it that. It's just a replacement of the training sequence in the course of training, so that the network doesn't get used to the same data. :)

Yes, plus annealing: since I use the standard BP (backpropagation) algorithm, I change the NS training parameters manually every few epochs.
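What that manual "annealing" might look like in code: a sketch with a step-decay learning rate and plain gradient descent on a single linear neuron (the halving schedule and all the numbers are illustrative assumptions, not Yuriy's actual settings):

```python
import numpy as np

def annealed_lr(epoch, lr0=0.1, drop=0.5, every=50):
    # Step-decay "annealing": halve the learning rate every 50 epochs.
    return lr0 * drop ** (epoch // every)

rng = np.random.default_rng(1)
X, y = rng.normal(size=(1000, 15)), rng.normal(size=1000)
w = np.zeros(15)   # weights of a single linear neuron

for epoch in range(200):
    lr = annealed_lr(epoch)            # training parameter changed on a schedule
    grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5 * MSE
    w -= lr * grad                     # BP-style weight update with annealed rate
```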

PS: Read this thread; it has a few more details about the structure of a system like mine: https://www.mql5.com/ru/forum/239508

Linked thread: ТС на нейросети (оч. краткое руководство) / "A trading system on a neural network (a very brief guide)", 2018.04.22, www.mql5.com
 
Yuriy Asaulenko:

I wouldn't call it that. It's just a replacement of the training sequence in the course of training, so that the network doesn't get used to the same data. :)

In my opinion, that is exactly what retraining on new data is. After all, you don't reset the network weights after training on the first block of data.

 
elibrarius:

In my opinion, that is exactly what retraining on new data is. After all, you don't reset the network weights after training on the first block of data.

Of course I don't. But there is no new data, just a new sequence over the same history. It is a continuation of the same training, a single process, so every epoch can be treated as additional training. Well, it's more a question of terminology.

 
elibrarius:
Regression without hidden layers, it seems...
It's time to switch to R. I tried it with the alglib NS: the same network is calculated about ten times slower than in R (something like 24 hours vs. 30-60 minutes). Plus, alglib allows a maximum of 2 hidden layers, and according to your observations you need 3 consecutive transformations, i.e. 3 layers.
The regression is linear. I don't want to use R; I've been digging into Python, but I don't see much point.
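For what it's worth, outside alglib the 2-layer limit disappears; for example, in Python's scikit-learn a 3-hidden-layer network is a one-liner (the layer sizes here are arbitrary assumptions):

```python
from sklearn.neural_network import MLPClassifier

# Three hidden layers = the "3 consecutive transformations" mentioned above;
# alglib stops at 2, but sklearn (like R packages or Keras) takes any depth.
net = MLPClassifier(hidden_layer_sizes=(64, 32, 16), activation='tanh',
                    max_iter=500)
# net.fit(X_train, y_train)   # X_train: n_samples x 15 inputs (hypothetical data)
```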
 
Yuriy Asaulenko:

Of course I don't. But there is no new data, just a new sequence over the same history. It is a continuation of the same training, a single process, so every epoch can be treated as additional training. Well, it's more a question of terminology.

Well, understanding each other correctly (having the same notion of the terms) is important.
Every epoch on the same data is training; for us that would be rote learning.
On new data, without resetting the weights, it is retraining. And for you the data is new, because the NS did not see it during the initial training.
 

Regression is the prediction of the next price. In contrast to classification, it predicts not the direction or type of trade, but the price itself, with all its decimal places.

There is linear regression and there is non-linear regression. ARIMA, GARCH, and even a neural network with the right configuration (e.g. one output without activation) are all regression too.
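That output-layer distinction maps directly onto, for example, scikit-learn's two MLP classes (a sketch for illustration, not code from the thread):

```python
from sklearn.neural_network import MLPClassifier, MLPRegressor

# Same hidden architecture, different task:
# regression -> one output with identity (no) activation, predicts the price
reg = MLPRegressor(hidden_layer_sizes=(30,))    # output activation: identity
# classification -> logistic/softmax output, predicts direction / trade type
clf = MLPClassifier(hidden_layer_sizes=(30,))   # output activation: logistic
```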

 
Dr. Trader:

Regression is the prediction of the next price. In contrast to classification, it predicts not the direction or type of trade, but the price itself, with all its decimal places.

There is linear regression and there is non-linear regression. ARIMA, GARCH, and even a neural network with the right configuration (e.g. one output without activation) are all regression too.

What was that all about? I wrote about logit regression: in the vast majority of cases you can confine yourself to it for classification and not bother your brain with an NS. Fast and accurate, with no overfitting.
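A minimal version of such a logit-regression baseline in Python/scikit-learn (the data names and labels are illustrative assumptions):

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Logit regression as a fast classification baseline: trains in seconds
# on ~10,000 rows x 15 inputs and has far fewer knobs than an NS.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
# model.fit(X_train, y_train)                 # y_train: e.g. 0 = sell, 1 = buy
# p_up = model.predict_proba(X_test)[:, 1]    # probability of the "buy" class
```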
 

I had a talk with San Sanych. We settled on me preparing a file for training a regression model, and then we'll see what's what. So, brothers, I'm doing the data export now, and I've come up with something that many simply wouldn't have the nerve to try. How about an adaptive target for regression???? Huh?

I don't know if it's any good, but I think this variant is worth checking. :-)
