Machine learning in trading: theory, models, practice and algo-trading - page 614

 

Is it possible to start trading just for the sake of learning? After all, the risk that I will lose my account, which I do not doubt at all, is very high.

 

But Micha promised to conquer Java, so soon he will shine.

I wonder, why not Python? Or maybe R? I don't get it.
 
Dr. Trader:

In Expert Advisors from the Market I have also often seen that the parameters that are good for trading form a plateau in the optimization function. If there is, for example, an MA or RSI period, or some coefficient, then changing the parameter by a small amount does not affect the final result.

But there it is logical: most of the parameters are used in the formula for calculating the indicator, so a small change will only slightly affect the result, which is still calculated on the same prices.

In machine learning, by contrast, parameters can have an avalanche-like effect on the whole course of learning, and even a small change leads to a completely different result. Take, for example, the number of neurons in a hidden layer: as their number grows, the number of weights used grows too, and the weight-initialization function, drawing from a pseudo-random number generator, will assign their values in a slightly different order, which leads to a different result.
Changing some parameters will also trace out a plateau in the optimization function. You can study, for each parameter, whether it influences the final score of the model smoothly or stochastically, and for the smoothly influencing parameters additionally use a derivative-based optimizer (the functions optim(method="L-BFGS-B") and optimize() in R).
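
As a minimal sketch of that last idea: model_error() below is a hypothetical function standing in for your model's validation error at a given parameter value (here just a toy smooth objective). optimize() handles a single smooth parameter, and optim(method="L-BFGS-B") handles several with box constraints:

```r
# model_error() is a hypothetical stand-in for a validation error
# that depends smoothly on one parameter -- substitute your own
model_error <- function(p) {
  (p - 0.3)^2 + 0.05   # toy smooth objective
}

# one smooth parameter: optimize() does a 1-D search on an interval
best1 <- optimize(model_error, interval = c(0, 1))
best1$minimum        # parameter value at the minimum

# several smooth parameters: L-BFGS-B uses numerical gradients
# and supports lower/upper bounds on each parameter
model_error2 <- function(p) (p[1] - 0.3)^2 + (p[2] - 0.7)^2 + 0.05
best2 <- optim(par = c(0.5, 0.5), fn = model_error2,
               method = "L-BFGS-B",
               lower = c(0, 0), upper = c(1, 1))
best2$par
```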

That is not the market, not the market at all; it is statistics, which has a number of tests for the stability of coefficients. The best known is CUSUM.

So maybe the monstrous dependence of the result on the network parameters described above indicates their fundamental unsuitability for financial markets?

Or maybe we first need to construct something fit for purpose (a tractor, as is fashionable here, or a rocket), and only then argue about the stability of what we get?

But either way: either we have evidence of the model's stability, or the model is not needed at all. Model error and model stability are two sides of the same coin.
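
A minimal sketch of the CUSUM stability check mentioned above, assuming the strucchange package (my choice of implementation; the post names only the test itself):

```r
library(strucchange)

set.seed(1)
x <- rnorm(200)
# regression whose coefficient shifts halfway through the sample
y <- c(1.0 * x[1:100], 2.5 * x[101:200]) + rnorm(200, sd = 0.3)

# empirical fluctuation process of OLS residuals (CUSUM-type)
fp <- efp(y ~ x, type = "OLS-CUSUM")
plot(fp)     # the path leaving the bands signals unstable coefficients
sctest(fp)   # formal test: small p-value -> coefficients not stable
```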

 
Yuriy Asaulenko:
Why not Python? Or maybe R? I don't get it.

Indeed. If you have such a craving for knowledge, take the ratings and, starting from the top, study whatever seems to fit.

 
Dr. Trader:

Reshetov's model is not a benchmark. For example, it searches for a set of predictors by trying different variants: the model takes a random set of predictors, is trained, and the result is remembered. This is repeated in a loop a huge number of times, and eventually the best result is used as the final model. This process can be noticeably accelerated if you first select the predictors with a dedicated algorithm and then train the Reshetov model just once on that particular set. You would get Reshetov-model quality at a speed comparable to AWS. The "cost" of such a model would decrease noticeably, while the quality stayed at the same level.
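
A rough sketch of the search loop described above, with glm() standing in for Reshetov's model (whose code is not shown here) and synthetic data in place of real predictors:

```r
set.seed(1)
n <- 500
X <- as.data.frame(matrix(rnorm(n * 10), n, 10))
names(X) <- paste0("p", 1:10)
y <- as.integer(X$p1 + X$p2 > 0)   # only p1, p2 actually matter

best_err <- Inf
best_set <- NULL
for (i in 1:200) {                 # the "huge number of times" loop
  subset <- sample(names(X), sample(2:5, 1))  # random predictor set
  fit <- glm(y ~ ., data = cbind(X[subset], y = y),
             family = binomial)
  # in-sample error for brevity; in practice use validation error
  err <- mean((predict(fit, type = "response") > 0.5) != y)
  if (err < best_err) { best_err <- err; best_set <- subset }
}
best_set; best_err
```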


What kind of algorithm is this? In general I agree with the statement: the price of a model and its quality are somewhat different things, and you can get a cheap but high-quality model. Reshetov's real problem is that it takes too long to compute, because of the repeated random splitting of the sample, and so on.

Again, what kind of algorithms are these that can immediately tell which predictor is not relevant? He has some way of implementing this, but I haven't really looked into it yet... In fact, he defines it through an invariant, which is logically quite sound, but I think there are errors all the same :-( Rather, not errors, but things left unfinished...

 

If we do use 2 hidden layers, then obviously the 2nd layer should have far fewer neurons than the 1st.

What is the minimum number of neurons in a layer? It seems to me that it makes no sense to go below 3-5.

Or can a layer with 1-2 neurons also make a significant contribution to the model?

 
elibrarius:

If we do use 2 hidden layers, then obviously the 2nd layer should have far fewer neurons than the 1st.

What is the minimum number of neurons in a layer? It seems to me that it makes no sense to go below 3-5.

Or can a layer with 1-2 neurons also make a significant contribution to the model?


From practice: 1 neuron with 3 inputs can produce normal signals for 1-1.5 months on the 15-minute timeframe, somewhere around 200 trades. If I take a larger sample, the quality of the fit drops dramatically, and so does the number of trades; there are not enough combinations. That is, assuming the system stayed stationary and the signals repeated, even 1 neuron would be enough.

It is roughly the same with fuzzy logic on 3 inputs, and with optimization of 4 membership functions.
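
For scale, a "1 neuron with 3 inputs" model is essentially a single logistic unit. A minimal sketch with the nnet package and placeholder data (substitute your own three features and trade labels):

```r
library(nnet)

set.seed(1)
X <- matrix(rnorm(200 * 3), 200, 3)    # ~200 trades, 3 inputs
y <- as.integer(X[, 1] - X[, 3] > 0)   # placeholder labels

# size = 1 -> a single hidden neuron
net <- nnet(X, y, size = 1, maxit = 500, trace = FALSE)
mean((net$fitted.values > 0.5) == y)   # in-sample hit rate
```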
 
Mihail Marchukajtes:

Again, what kind of algorithms are these that can immediately tell which of the predictors is not relevant?

There are many such algorithms, even more than we need. For example:

Article by Vladimir: https://www.mql5.com/ru/articles/2029

Article by Alexey: https://habrahabr.ru/company/aligntechnology/blog/303750/
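
For illustration, a minimal sketch with the Boruta package, one popular R tool for flagging irrelevant predictors (not necessarily what the linked articles use), on synthetic data:

```r
library(Boruta)

set.seed(1)
n <- 300
X <- as.data.frame(matrix(rnorm(n * 8), n, 8))
names(X) <- paste0("p", 1:8)
y <- factor(as.integer(X$p1 + 0.5 * X$p2 > 0))  # only p1, p2 matter

res <- Boruta(X, y)
print(res)                  # Confirmed / Tentative / Rejected per predictor
getSelectedAttributes(res)  # predictors worth keeping
```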

 

For those who like to load the processor with all sorts of stuff when modeling, here is a way to drastically reduce the time.


Are parallel simulations in the cloud worth it? Benchmarking my MBP vs my Workstation vs Amazon EC2
  • Kristoffer Magnusson
  • www.r-bloggers.com
If you tend to do lots of large Monte Carlo simulations, you've probably already discovered the benefits of multi-core CPUs and parallel computation. A simulation that takes 4 weeks without parallelization can easily be done in 1 week on a quad-core laptop with parallelization. However, for even larger simulations, reducing the computation time...
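
A minimal sketch of the kind of parallelization benchmarked in the link, using the base parallel package; one_run() is a placeholder simulation to replace with your own model fit or backtest:

```r
library(parallel)

one_run <- function(i) {
  # placeholder simulation: substitute your model fit / backtest
  mean(rnorm(1e6))
}

cl <- makeCluster(max(1, detectCores() - 1))  # leave one core free
res <- parLapply(cl, 1:100, one_run)          # 100 runs across cores
stopCluster(cl)

length(res)   # list of 100 simulation results
```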
 
SanSanych Fomenko:

For those who like to load the processor with all sorts of stuff when modeling, here is a way to drastically reduce the time.

Are parallel simulations in the cloud worth it? Benchmarking my MBP vs my Workstation vs Amazon EC2

With a DLL and the need to install MT5, R and the right packages, it is probably unrealistic to get in there.
