Machine learning in trading: theory, models, practice and algo-trading - page 610

 
Vladimir Perervenko:
The darch() function has a seed = NULL parameter by default. Set it to some fixed value, for example seed = 12345.

That is a very small learnRate value. Start with learnRate = 0.7 and numEpochs = 10 for both the RBM pretraining and the NN. But these are only ballpark figures; you need to optimize them for your specific data set.
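The reproducibility point above is general: fixing the random seed makes repeated training runs identical. darch is an R package, so the snippet below is only a hypothetical Python illustration of the idea, not the darch API; the train_run() function is invented for the example.

```python
import random

def train_run(seed):
    # Stand-in for a training run whose result depends on random
    # initialization; with a fixed seed the "initial weights" repeat exactly.
    rng = random.Random(seed)
    return [round(rng.uniform(-1, 1), 6) for _ in range(3)]

a = train_run(12345)   # same seed ...
b = train_run(12345)   # ... same result
c = train_run(54321)   # different seed -> different initialization
print(a == b, a == c)  # -> True False
```

The same principle applies to any framework: a fixed seed pins down the stochastic parts of training so a result can be reproduced and debugged.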

Good luck

If you want to build an ensemble, is it better to remove set.seed()? So that the networks turn out different. Or use set.seed(network number), for reproducibility of the whole ensemble.
 
elibrarius:
If you want to build an ensemble, is it better to remove set.seed()? So that the networks turn out different. Or use set.seed(network number), for reproducibility of the whole ensemble.

Yes, that's right. An ensemble of such complex models (I mean darch) should contain no more than 3-5 members, and they should be very different, i.e. have different parameter values (number of layers, neurons, activation functions, etc.), or... (I won't describe the many other options now). An ensemble of the same trained structure with different initializations is possible, but it is a weak option. At the very least, vary the type of initialization.
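The advice above (a small ensemble of 3-5 deliberately different models, each with its own seed, combined by averaging) is language-agnostic. Since darch is an R package, the sketch below is only a hypothetical Python illustration of the per-member seeding and averaging step, not the darch API; make_model() and its toy "probability" are invented for the example.

```python
import random

def make_model(seed):
    # Stand-in for one differently configured ensemble member. In practice
    # each member would be a separately trained network with its own layer
    # count, neuron counts, activation functions and initialization.
    rng = random.Random(seed)          # per-member seed => reproducible ensemble
    bias = rng.uniform(-0.1, 0.1)      # toy stand-in for learned parameters
    def predict(x):
        # toy class-1 "probability": squash input plus bias into (0, 1)
        return 1.0 / (1.0 + pow(2.718281828, -(x + bias)))
    return predict

ensemble = [make_model(seed) for seed in (1, 2, 3, 4, 5)]  # seed = member number

def ensemble_predict(x, threshold=0.5):
    # Average the member probabilities, then threshold to a class label.
    p = sum(m(x) for m in ensemble) / len(ensemble)
    return 1 if p >= threshold else 0

print(ensemble_predict(2.0))   # clearly positive input -> class 1
print(ensemble_predict(-2.0))  # clearly negative input -> class 0
```

Seeding each member with its own number (here 1..5) is what makes the whole ensemble reproducible while still letting the members differ.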

Good luck

 
Vladimir Perervenko:

Yes, that's right. An ensemble of such complex models (I mean darch) should contain no more than 3-5 members, and they should be very different, i.e. have different parameter values (number of layers, neurons, activation functions, etc.), or... (I won't describe the many other options now). An ensemble of the same trained structure with different initializations is possible, but it is a weak option. At the very least, vary the type of initialization.

Good luck

If the processor is going to be kept busy with a series of training runs on the same data anyway, in order to find the best structure, it makes sense to put the results into an ensemble. If you take the simplest route, a grid search with a step of 5 neurons, or a percentage step (even with that step the models will differ noticeably), then take the 3-5 or 10 best results and average them. The models will be built and computed anyway, so why let the effort go to waste? ))
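The "take the 3-5 best grid-search results and average them" idea above is generic and easy to sketch. The snippet below is only a hypothetical Python illustration, not the darch API; the scores and per-sample predictions are invented for the example.

```python
# Toy grid-search results: (validation_score, per-sample predictions).
# In practice each entry would come from training one network per grid
# point (e.g. neuron counts stepping by 5) on the same data.
results = [
    (0.61, [0.9, 0.2, 0.7]),
    (0.58, [0.8, 0.3, 0.6]),
    (0.52, [0.4, 0.6, 0.5]),
    (0.60, [0.7, 0.1, 0.8]),
    (0.49, [0.5, 0.5, 0.5]),
]

def top_k_average(results, k=3):
    # Keep the k best models by validation score, then average their
    # predictions element-wise into a single ensemble prediction.
    best = sorted(results, key=lambda r: r[0], reverse=True)[:k]
    preds = [r[1] for r in best]
    return [sum(col) / len(col) for col in zip(*preds)]

avg = top_k_average(results, k=3)
print([round(p, 3) for p in avg])  # -> [0.8, 0.2, 0.7]
```

The grid run thus does double duty: the same trained models that located the best structure become the ensemble members for free.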

 
elibrarius:

If the processor is going to be kept busy with a series of training runs on the same data anyway, in order to find the best structure, it makes sense to put the results into an ensemble. If you take the simplest route, a grid search with a step of 5 neurons, or a percentage step (even with that step the models will differ noticeably), then take the 3-5 or 10 best results and average them. The models will be built and computed anyway, so why let the effort go to waste? ))


How are you doing with these models in general? :) The discussion goes on, but no one announces any results.

Maybe some benchmarks against a perceptron or GBM. On forex, of course.

 
Maxim Dmitrievsky:

:) The discussion goes on, but no one announces any results.

If a profit chart is taken as the result, then there are no results. And most people, even in this thread, need nothing but a profit chart. It is the only proof; other kinds we simply do not understand.
 
Maxim Dmitrievsky:

How are you doing with these models in general? :) The discussion goes on, but no one announces any results.

On the test segment the error is around the 50% mark, as it is for most people. But at least it computes tens of times faster than ALGLIB: here a model takes 40-100 minutes, while in ALGLIB I waited more than a day for the same structure, then gave up and cancelled the calculation.
Although, if I now have to select models in a loop, it will again take a long time... and I still have to program that.
All in all it drags on, unless you set yourself time limits on the ML work.

It's interesting, so I keep digging )

 
Yuriy Asaulenko:
If a profit chart is taken as the result, then there are no results. And most people, even in this thread, need nothing but a profit chart. It is the only proof; other kinds we simply do not understand.

It's not like you were involved in the backwoods.

 
elibrarius:
On the test segment the error is around the 50% mark, as it is for most people. But at least it computes tens of times faster than ALGLIB: here a model takes 40-100 minutes, while in ALGLIB I waited more than a day for the same structure, then gave up and cancelled the calculation.
Although, if I now have to select models in a loop, it will again take a long time... and I still have to program that.

So feature selection is still the main problem :) But at least it trains faster, and that's good.

 
Maxim Dmitrievsky:

So feature selection is still the main problem :)

Both the features and the model structure, as it turns out.

 
Vladimir Perervenko:

1. What optimization are you talking about? What plateau? What model? If you mean a neural network, it would be strange not to train the DNN (optimize its parameters) before using it.

2. Which model parameters(?) should be stable?

I don't follow your reasoning.

I was talking about the optimization of the DNN's hyperparameters, which absolutely must be done, and not in the tester.

What optimization are you talking about?

Model performance, as the optimization criterion for everything else.

What kind of plateau?

A plateau in performance.

What model?

Any model.

If you mean a neural network, it would be strange not to train the DNN (optimize its parameters) before using it.

And this is the main question I once asked you: how do the results of training (parameter optimization) depend on the non-stationarity of the input predictors? Your answer was: they don't. That is not clear to me, since NNs constantly have to be retrained, which means they react to non-stationarity, which means the model parameters are random variables, which means there is a problem of parameter stationarity. This is exactly what is discussed for GARCH, yet for some reason it is not discussed for classification.

Types of Optimization - Algorithmic Trading, Trading Robots - MetaTrader 5
  • www.metatrader5.com
In this mode, every possible combination of the input-variable values selected for optimization on the corresponding tab is tried exhaustively. Fast (genetic algorithm): this type of optimization is based on a genetic algorithm that selects the best input-parameter values and is considerably faster than the complete search...