Machine learning in trading: theory, models, practice and algo-trading - page 359

 
Yuriy Asaulenko:
Mark my words.)) I want, if not a Bentley, then at least a Peugeot with an automatic).

A beat-up Logan :)
 
SanSanych Fomenko:

Then the question of retraining in all its glory

Overtraining has become a bugbear in this thread.

There is nothing new, scary, or even unusual about overtraining. It is inherent in absolutely all models, including deterministic ones. Even Y = Ax + B can be overfitted.

It is the typical consequence of a wrong choice of inputs, of the model itself, or of a required accuracy that does not match the process. Say, an attempt to over-refine a fit of a nonlinear process with a linear model will cause the model to fall apart. A crude model will give good but insufficient results - and it always seems, "come on, kitty, just a little more" (c).
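To illustrate the point that even Y = Ax + B can be overfitted, here is a minimal sketch (my own illustration, not code from this thread): a straight line fitted to too few noisy samples of a nonlinear process interpolates its training points perfectly but fails out of sample.

```python
import numpy as np

rng = np.random.default_rng(0)

# The true process is nonlinear: y = sin(x) + noise.
x = np.linspace(0, 3, 30)
y = np.sin(x) + rng.normal(0, 0.1, x.size)

# "Overtraining" a linear model: fit y = A*x + B on just the first
# two points. The train error is essentially zero -- the line has
# memorized noise -- but the model fails on the remaining data.
A, B = np.polyfit(x[:2], y[:2], deg=1)
train_err = np.mean((A * x[:2] + B - y[:2]) ** 2)
test_err = np.mean((A * x[2:] + B - y[2:]) ** 2)

print(train_err, test_err)  # train error near zero, test error larger
```

The same effect appears whenever the effective model capacity (here, two free parameters against two data points) matches or exceeds the information in the training set.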

 
Maxim Dmitrievsky:

I don't even know if I need anything better: 20,000% in 2.5 months at opening prices on the 5-minute chart, if I'm lucky... you throw in $1k and pre-order a Bentley. If you're not lucky, it's no big loss.)


Did they do it on Reshetov's neuron (RNN) again?
 
elibrarius:
Did they do that on Reshetov's neuron (RNN) again?


Yes, but there's not much left of it anymore.)

Soon I will compare it to the MLP from ALGLIB, though it won't be quite a fair comparison, true.

And then there's nothing left but to switch completely to R

 
Maxim Dmitrievsky:

I don't even know if I need anything better: 20,000% in 2.5 months at opening prices on the 5-minute chart, if I'm lucky... you throw in $1k and pre-order a Bentley. If you're not lucky, it's no big loss.)



Maxim, there is a contest scheduled for July, four weeks long - the entry price is $10. Not a bad way to test it in real life, in something close to full combat mode.
 
geratdc:

Maxim, there is a contest scheduled for July, four weeks long - the entry price is $10. Not a bad way to test it in real life, in something close to full combat mode.

Maybe, if there is a final version by then; so far I have a new version every day.
 
Maxim Dmitrievsky:


Yeah, but there's not much left of it.)

Soon I'll compare it to the MLP from ALGLIB, though it won't be quite a fair comparison, true.

And then there will be nothing left but to switch completely to R

I have almost finished my version in ALGLIB. It will be interesting to compare the performance. We just need to give the networks exactly the same task (indicators, indicator periods, training period, training matrix, etc.) and see which one does better. It would be interesting if some of the R-based EA owners joined us, because both Reshetov's and ALGLIB's networks are too simple.
 
SanSanych Fomenko:

Then the question of retraining in all its glory

Nowadays, many methods have been developed in deep neural networks to significantly reduce the probability of overfitting.

In general, the translation "overfitting -> overtraining" is inaccurate in meaning: it refers to "fitting" rather than "learning".

You don't have to be afraid of it; you just have to control the moment at which you stop training.

Good luck
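"Controlling the moment when you stop training" is commonly done with early stopping on a validation set. A minimal generic sketch (my own illustration; `step`, `val_loss`, and `patience` are assumed names, not anything from this thread):

```python
def train_with_early_stopping(step, val_loss, max_epochs=1000, patience=10):
    """Generic early-stopping loop: `step` runs one training epoch,
    `val_loss` returns the current validation loss. Training stops when
    the validation loss has not improved for `patience` epochs."""
    best, best_epoch = float("inf"), 0
    for epoch in range(max_epochs):
        step()
        loss = val_loss()
        if loss < best - 1e-9:          # meaningful improvement
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break                        # stalled: stop training here
    return best, epoch

# Demo with a synthetic validation-loss curve that bottoms out at epoch 20:
curve = [(e - 20) ** 2 / 100 + 0.1 for e in range(1000)]
state = {"e": -1}
def step():
    state["e"] += 1
def val_loss():
    return curve[state["e"]]

best, stopped = train_with_early_stopping(step, val_loss, patience=10)
print(best, stopped)  # → 0.1 30
```

The network keeps the weights from the best validation epoch (here, epoch 20), not from the epoch where training actually halted.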

 
geratdc:

Maxim, there is a contest scheduled for July, four weeks long - the entry price is $10. It is a good way to test it in real combat mode.
Maxim Dmitrievsky:

Maybe, if there is a final version by then; so far I have a new version every day.


It's time! It's time already!

 
Interesting!!! But the problem is a bit different. Let's say your TS is down 20%. The question is: will it climb out of the drawdown and go on to profit, or will it keep losing???? How do you determine that the TS needs to be re-optimized?
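One possible answer to "when does the TS need re-optimization" is a drawdown-based trigger: flag the system when its live drawdown exceeds the worst drawdown seen in backtesting by a safety margin. This is a minimal sketch of my own assumption, not a method anyone in this thread proposed; `factor` and the equity-curve inputs are hypothetical.

```python
def max_drawdown(equity):
    """Largest peak-to-trough decline of an equity curve, as a fraction."""
    peak, worst = equity[0], 0.0
    for v in equity:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def needs_reoptimization(backtest_equity, live_equity, factor=1.5):
    """True when the live drawdown exceeds `factor` times the backtest one."""
    return max_drawdown(live_equity) > factor * max_drawdown(backtest_equity)

backtest = [100, 110, 99, 120]        # worst backtest drawdown: 10%
print(needs_reoptimization(backtest, [100, 100, 80]))   # → True  (20% > 15%)
print(needs_reoptimization(backtest, [100, 95, 105]))   # → False (5% < 15%)
```

The weakness, of course, is exactly the one raised above: a drawdown inside the backtest's historical range cannot by itself distinguish a temporary dip from a broken model, so such a trigger only bounds the loss before you re-examine the system.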