Machine learning in trading: theory, models, practice and algo-trading - page 979

 

I've been playing with NNs and k-fold, and also with ensembles of NNs from alglib for classification (softmax). First impressions:

1. LBFGS trains much faster than LM (waiting for the latter is hopeless), even for a single NN without an ensemble.

2. k-fold keeps showing a large training error; I couldn't get the strategy into profit even with 2 folds, let alone 10. Maybe it's just not well suited to my problem, but I'll try it again (see the sketch after this list).

3. An ensemble of NNs based on bagging and the LBFGS algorithm is of the same quality as a random forest, but trains more slowly.

4. An ensemble based on early stopping with a validation set (LM is used there, I believe): training never finishes at all.

Tuning the parameters (step size, number of restarts) gives no visible improvement. Increasing the number of neurons by a factor of 2-3 gives a slight improvement.

5. Overall, I got the impression that NNs overfit just as much as the forest does; an ensemble of NNs with bagging overfits a little less than the random forest committee.
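For reference, here is a minimal sketch of the kind of experiment described above, using Python with scikit-learn as a stand-in for the alglib routines (the dataset, layer size, ensemble size, and fold counts are illustrative assumptions, not the actual setup):

    # Illustrative sketch only: scikit-learn instead of the alglib port
    from sklearn.datasets import make_classification
    from sklearn.ensemble import BaggingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    # Toy data standing in for real trading features and class labels
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    # A single NN trained with LBFGS, producing softmax-style class probabilities
    base_nn = MLPClassifier(hidden_layer_sizes=(30,), solver="lbfgs", max_iter=500)

    # Bagging: each NN is fit on a bootstrap sample, predictions are averaged
    ensemble = BaggingClassifier(base_nn, n_estimators=10, random_state=0)

    # k-fold cross-validation with 2 and 10 folds, as in point 2
    for k in (2, 10):
        scores = cross_val_score(ensemble, X, y, cv=k)
        print(f"{k}-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")

Note that scikit-learn's MLP offers LBFGS but has no LM trainer, so the speed comparison in point 1 has no direct equivalent here.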

All this was done in order to compare with xgboost, for example, which I'll investigate later.
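A similarly hedged sketch of what that comparison might look like with the xgboost Python package, reusing the toy X and y from the previous sketch (all hyperparameters are illustrative):

    # Gradient-boosted trees evaluated under the same k-fold protocol
    from sklearn.model_selection import cross_val_score
    from xgboost import XGBClassifier

    booster = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
    scores = cross_val_score(booster, X, y, cv=10)
    print(f"xgboost 10-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")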

++ Stacking models didn't give me any extra stability; they train in much the same way. Sometimes you can improve results on the training set simply because the model learns more.
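And a sketch of the stacking idea mentioned above, again with scikit-learn rather than anything alglib provides; the base NNs and meta-model are illustrative assumptions:

    # Two NN base learners; a logistic-regression meta-model is fit on their
    # out-of-fold predictions (cv=5) to limit leakage into the meta-level
    from sklearn.ensemble import StackingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    stack = StackingClassifier(
        estimators=[
            ("nn_small", MLPClassifier(hidden_layer_sizes=(30,), solver="lbfgs", max_iter=500)),
            ("nn_large", MLPClassifier(hidden_layer_sizes=(90,), solver="lbfgs", max_iter=500)),
        ],
        final_estimator=LogisticRegression(),
        cv=5,
    )
    stack.fit(X, y)  # X, y from the first sketch
    # Train-set score can flatter the model, as noted above
    print(f"stacked train accuracy: {stack.score(X, y):.3f}")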

 
Maxim Dmitrievsky:

Hi, a bit off-topic. I once found a conversation of yours with another member on this forum. You were discussing which program is used to make Microsoft-style help files (the kind that pops up when you press F1), so that you could collect your code variants in a more convenient, searchable form. Please tell me its name again. (I hope I'm not mistaken and it was you.)))

 
Evgeny Raspaev:

Hi, a bit off-topic. I once found a conversation of yours with another member on this forum. You were discussing which program is used to make Microsoft-style help files (the kind that pops up when you press F1), so that you could collect your code variants in a more convenient, searchable form. Please tell me its name again. (I hope I'm not mistaken and it was you.)))

I don't think it was me) I don't remember that.

 
Maxim Dmitrievsky:

I don't think it was me) I don't remember that.

Too bad, sorry)))

 
Evgeny Raspaev:

Too bad, sorry)))

Try Help & Manual.
 
Dmitriy Skub:
Try Help & Manual.

Oh, that's just what I need.

 

Good afternoon))

While you've all been searching here, we have already created the RPE, the Russian breakthrough element.

It is the "fifth element": the grail, the philosopher's stone, cinnabar, the qi system, the crowning achievement of our scientific specialists in algorithms.

From now on, any economic and financial project will be optimized through deep neural analysis on the RPE.

That is, in the future 1 ruble will be worth 1 dollar, thanks to economic breakthroughs.

We are all on our way to a brighter future!))

 
Alexander Ivanov:

Good afternoon ))

....

And you must be one of that team of geniuses? Leaking insider information? Hold on, though, ordinary immature minds might not understand.)

 
Maxim Dmitrievsky:

I've been playing with NNs and k-fold, and also with ensembles of NNs from alglib for classification (softmax). First impressions:

1. LBFGS trains much faster than LM (waiting for the latter is hopeless), even for a single NN without an ensemble.

2. k-fold keeps showing a large training error; I couldn't get the strategy into profit even with 2 folds, let alone 10. Maybe it's just not well suited to my problem, but I'll try it again.

3. An ensemble of NNs based on bagging and the LBFGS algorithm is of the same quality as a random forest, but trains more slowly.

4. An ensemble based on early stopping with a validation set (LM is used there, I believe): training never finishes at all.

Tuning the parameters (step size, number of restarts) gives no visible improvement. Increasing the number of neurons by a factor of 2-3 gives a slight improvement.

5. Overall, I got the impression that NNs overfit just as much as the forest does; an ensemble of NNs with bagging overfits a little less than the random forest committee.

All this was done in order to compare with xgboost, for example, which I'll investigate later.

++ Stacking models didn't give me any extra stability; they train in much the same way. Sometimes you can improve results on the training set simply because the model learns more.

I'd also like to know the name of the toy you're playing with.

 
Vladimir Perervenko:

I'd also like to know the name of the toy you're playing with.

It says right there: the alglib numerical analysis library, ported to MT5. I've used it inside and out; no problems overall, the library is good. But it has no visualization and no newer models. It seems the library is no longer being developed; their website has gone silent.
