Machine learning in trading: theory, models, practice and algo-trading - page 493

 
Maxim Dmitrievsky:

Walk-forward is necessary; you can't optimize like that, the forward will always be bad (or random) in this case, depending on which phase of the market you land in. I already have a bunch of versions of such systems that are billionaires on the backtest but work like a coin flip on the forward) this is called overfitting

Is there an algorithm for selecting system parameters in walk-forward?
I've run a dozen optimizations with a one-month shift; in each month the best input parameters differ from those of the other months. Which of them should I choose to trade with?
 
elibrarius:
Is there an algorithm for selecting system parameters in walk-forward?
I ran a dozen optimizations with a one-month offset; in each month the best input parameters differ from those of the other months. Which one should I choose?

I didn't express myself correctly, I meant "something like it", i.e. a self-optimizing system with some optimization criterion; the same NS can be used as the optimizer
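One simple way to make that choice automatic (a minimal sketch, not anything from this thread; backtest and param_grid are hypothetical placeholders): re-optimize on a trailing window every month and always trade the next, unseen month with that window's best parameters, so no single parameter set ever has to be picked for all months.

# Walk-forward re-optimization sketch: the "chosen" parameters are simply
# whatever was best on the trailing window, re-chosen every month.
from itertools import product

def walk_forward(monthly_data, param_grid, backtest, train_window=6):
    # monthly_data: list of per-month data slices, oldest first
    # param_grid:   dict of parameter name -> list of candidate values
    # backtest:     hypothetical function(data_slice, params) -> fitness (e.g. profit)
    names = list(param_grid)
    forward_results = []
    for i in range(train_window, len(monthly_data)):
        train = monthly_data[i - train_window:i]           # optimization window only
        best_params, best_fit = None, float("-inf")
        for combo in product(*(param_grid[n] for n in names)):
            params = dict(zip(names, combo))
            fit = sum(backtest(m, params) for m in train)
            if fit > best_fit:
                best_fit, best_params = fit, params
        # the next month is traded with parameters it has never seen
        forward_results.append(backtest(monthly_data[i], best_params))
    return forward_results

Stitching the out-of-sample pieces together is what shows whether the system survives outside its optimization window.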

 
elibrarius:
Is there an algorithm for selecting system parameters in walk-forward?
I ran a dozen optimizations with a one-month offset, and in each month the best input parameters differ from those of the other months. Which one should I choose for trading?
Speaking of optimization and training. It takes me 23 hours, not counting intermediate manipulations. After each pass (that's several epochs) I change the training sample. No, I don't shuffle it, I replace it, i.e. I don't show the same pictures. There are no repeated samples in the learning process.
 
Yuriy Asaulenko:
Speaking of optimization and training. It takes me 23 hours, not counting intermediate manipulations. After each pass (which is several epochs) I change the training sample. No, I don't shuffle it, I replace it, i.e. I don't show the same pictures. There are no repeated samples in the learning process.

And what exactly is the optimization algorithm? Look for the same thing but with the L-BFGS algorithm - it will be many times faster,

and your NS will train, well, 100 times faster, for example: not 23 hours but 10 minutes (like all normal people :))), if what you have now is simple gradient descent with a fixed step


here is a comparison:

http://docplayer.ru/42578435-Issledovanie-algoritmov-obucheniya-iskusstvennoy-neyronnoy-seti-dlya-zadach-klassifikacii.html

"Study of artificial neural network learning algorithms for classification problems" - course paper by M. A. Korystov, St. Petersburg State University, Department of System Programming, 2014 (docplayer.ru)
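For anyone who wants to check that speed claim on their own data, here is a rough, self-contained comparison (my sketch, not from the post; the toy dataset is just a placeholder) using scikit-learn's MLPClassifier, which lets you swap plain SGD for L-BFGS with a single argument:

# Same small network, two solvers: L-BFGS vs plain stochastic gradient descent.
import time
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

for solver in ("lbfgs", "sgd"):
    net = MLPClassifier(hidden_layer_sizes=(32,), solver=solver,
                        learning_rate="constant",     # "fixed step" (only used by sgd)
                        max_iter=500, random_state=0)
    t0 = time.time()
    net.fit(X, y)
    print(f"{solver:6s} {time.time() - t0:6.2f}s  train accuracy {net.score(X, y):.3f}")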
 
Maxim Dmitrievsky:

And what exactly is the optimization algorithm? Look for the same thing but with the L-BFGS algorithm - it will be many times faster,

and your NS will train, well, 100 times faster, for example: not 23 hours but 10 minutes (like all normal people :))), if what you have now is simple gradient descent with a fixed step

here is a comparison:

http://docplayer.ru/42578435-Issledovanie-algoritmov-obucheniya-iskusstvennoy-neyronnoy-seti-dlya-zadach-klassifikacii.html

Thanks, I'll read it.

It's more like learning than optimizing. And not simple. I already wrote - standard BP (backpropagation) with simulated annealing done manually.

Perhaps some algorithms are better, but I only use what is in the development environment. External ones are problematic.

In general, speed is not critical - if I train once every 3 months, even 23 hours doesn't matter. And over a 3-month test no deterioration was noticed; it probably keeps working even longer.

 
Yuriy Asaulenko:

It's more like learning than optimizing. And not simple. I already wrote - standard BP with simulated annealing done manually.

Perhaps some algorithms are better, but I only use what is available in the development environment. External ones are problematic.


Whatever you call it, training is optimization of the target function

Right, you wrote about annealing - I'm not familiar with it, I'll read up on it

 
Maxim Dmitrievsky:

Whatever you call it, training is optimization of the target function

I have no target function in the training, i.e. there is no initial classification in the training sequence. It's like learning the multiplication table with a teacher who doesn't know it himself. The NS itself has to find a way to go I-don't-know-where. So faster learning is unlikely to work here.
 
Maxim Dmitrievsky:

Whatever you call it, training is optimization of the target function

Right, you wrote about annealing - I'm not familiar with it, I'll read up on it

Yes, the annealing is imitated manually by changing the training parameters after every N epochs. In addition, the training sequence is completely replaced (not mixed, but replaced).
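My rough reading of that scheme as code (a sketch, not Yuriy's actual implementation; draw_fresh_sample, the network size and the cooling factor are all made up): keep training the same network, cool the step size every pass, and hand it a completely new training set each time so nothing is ever repeated.

# "Manual annealing" sketch: lower the training parameters every N epochs
# and fully replace (not reshuffle) the training sequence between passes.
import numpy as np
from sklearn.neural_network import MLPRegressor

def draw_fresh_sample(seed, n=10_000):
    # Hypothetical placeholder for a brand-new training sequence each pass;
    # in the thread this would be fresh market samples never shown before.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, 10))
    y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=n)
    return X, y

lr = 0.05
net = MLPRegressor(hidden_layer_sizes=(64, 32), solver="sgd",
                   learning_rate_init=lr, max_iter=5,    # 5 epochs per pass
                   warm_start=True, random_state=0)

for pass_no in range(20):
    X, y = draw_fresh_sample(pass_no)       # replace the sample, don't shuffle it
    net.fit(X, y)                           # warm_start=True -> weights carry over
    lr *= 0.8                               # crude "cooling" of the step size
    net.set_params(learning_rate_init=lr)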
 
Yuriy Asaulenko:
Yes, the annealing is imitated manually by changing the learning parameters after every N epochs. Besides, the learning sequence is completely replaced (not mixed, exactly replaced).

That's cool - where can I read more about this kind of NS? I.e. it's like training without a teacher, but you still feed something to the output?

 
Maxim Dmitrievsky:

Where can I read more about this kind of NS? i.e., it's like without a teacher, but you still feed something to the output?

Read the theory - Haykin's Neural Networks, and Bishop in English (there is no translation, but one seems to be in preparation).

It's very simple. Your input is random trades, and your output is the result. This is called the Monte Carlo method, and it's not very fast. And the systematization is the NS's own task.
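One way that description could look in code (my sketch under those assumptions, not the author's implementation; the feature choice, holding time and all names are hypothetical): throw random entries at the price series, score each trade by its outcome, and let those scored trades become the examples the NS has to systematize.

# Monte Carlo sketch: random trades on the input side, trade outcome as the label.
import numpy as np

def random_trades(prices, n_trades=10_000, max_hold=50, lookback=10, seed=0):
    # prices: 1-D array of close prices.
    # Returns (X, y): X describes the market just before each random entry,
    # y says whether that random trade ended up profitable.
    rng = np.random.default_rng(seed)
    X, y = [], []
    for _ in range(n_trades):
        entry = rng.integers(lookback, len(prices) - max_hold - 1)
        hold = rng.integers(1, max_hold + 1)
        direction = rng.choice([-1, 1])                      # random long or short
        pnl = direction * (prices[entry + hold] - prices[entry])
        feats = np.diff(prices[entry - lookback:entry + 1])  # recent returns as features
        X.append(np.append(feats, direction))
        y.append(1 if pnl > 0 else 0)                        # the result is the label
    return np.array(X), np.array(y)

# X, y = random_trades(prices)  # any classifier can then try to systematize these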
