Machine learning in trading: theory, models, practice and algo-trading - page 354

 
Vladimir Perervenko:

This is solved more correctly and elegantly in CORElearn::calibrate.

And has been for quite some time.

Good luck


The funny thing is, I tried calibration without much success and gave it up. Calibration just moved the boundary between the classes, but it didn't occur to me to leave a GAP between them.
 
SanSanych Fomenko:

The funny thing is, I tried calibration without much success and gave it up. Calibration just moved the boundary between the classes, but it didn't occur to me to leave a GAP between them.
Calibration makes a "hard" classifier "soft" (it can say "I don't know"). Gaps disappear.
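The thread refers to CORElearn::calibrate in R. As a substitute illustration of the same idea in Python with scikit-learn (not the package being discussed), a "hard" classifier can be calibrated to output probabilities, and predictions near 0.5 can be treated as "I don't know" - the uncertain band plays the role of the gap between classes. The 0.4/0.6 thresholds below are arbitrary choices for the sketch:

```python
# Illustrative sketch (scikit-learn, not the CORElearn::calibrate from the thread):
# probability calibration plus a "don't know" band standing in for the class gap.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.calibration import CalibratedClassifierCV

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# LinearSVC is a "hard" classifier (no predict_proba); the wrapper
# calibrates its decision scores into probabilities (Platt scaling).
clf = CalibratedClassifierCV(LinearSVC(), method="sigmoid", cv=5)
clf.fit(X_tr, y_tr)

proba = clf.predict_proba(X_te)[:, 1]
# The "soft" answer: abstain when the probability is near 0.5.
decision = np.where(proba > 0.6, 1, np.where(proba < 0.4, 0, -1))  # -1 = "I don't know"
print("abstention rate:", (decision == -1).mean())
```

The abstention thresholds control the trade-off: a wider band means fewer trades but more confident ones.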
 

Here I got to the point of preparing training data for my version of the grid...
I look at the examples and wonder: why should I include bars with no trade signal in the training set at all?

If the training examples are built from a zigzag, then only the zigzag reversal points need to be fed into the NN.

Or maybe not trading is also a decision? ))) And it has to be learned too? Although, logically, if there is no buy or sell signal, then the decision not to trade has already been made.

 

It is assumed that the trained model will make a prediction on every bar. For example, its prediction could be interpreted as "hold a long position" / "hold a short position" / "do not trade", and according to that prediction the Expert Advisor would perform the appropriate trading operation - reverse, close, or open a long or short position. So the model (the network) must learn to recognize all three of these situations, and the training data are prepared in advance accordingly, showing where each kind of forecast is expected.
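The three-class labeling described above can be sketched as follows. This is a hypothetical illustration: the function name, the `(bar_index, direction)` pivot convention, and the rule "hold the position along each zigzag leg, 0 elsewhere" are all assumptions, not code from the thread:

```python
# Hypothetical sketch: turn zigzag pivot points into per-bar labels
# 1 = hold long, -1 = hold short, 0 = do not trade.
import numpy as np

def labels_from_zigzag(n_bars, pivots):
    """pivots: sorted list of (bar_index, direction), where direction is
    +1 at a bottom (an up-leg starts) and -1 at a top (a down-leg starts)."""
    labels = np.zeros(n_bars, dtype=int)          # default: do not trade
    for (start, direction), (end, _) in zip(pivots, pivots[1:]):
        labels[start:end] = direction             # hold the position along the leg
    return labels

# Toy example: bottom at bar 2, top at bar 6, bottom at bar 9
pivots = [(2, +1), (6, -1), (9, +1)]
labels = labels_from_zigzag(12, pivots)
print(labels)  # [ 0  0  1  1  1  1 -1 -1 -1  0  0  0]
```

Training on every bar uses all twelve labels; training only on reversals would keep just the bars at indices 2, 6 and 9, which is exactly the trade-off discussed above.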

 
Dr. Trader:

It is assumed that the trained model will make a prediction on every bar. For example, its prediction could be interpreted as "hold a long position" / "hold a short position" / "do not trade", and according to that prediction the Expert Advisor would perform the appropriate trading operation - reverse, close, or open a long or short position. So the model (the network) must learn to recognize all three of these situations, and the training data are prepared in advance accordingly, showing where each kind of forecast is expected.

Still, it seems to me that doing nothing does not need to be learned. Everybody is good at it already ).

Besides, if we aren't scalping and trade decisions are made once every 100 - 10,000 bars, the NN would have to process all those unnecessary bars... Obviously, the training time would differ by the same factor of up to 10,000. Even if we thin the data to, say, one bar in 10, a 10-fold increase in computation time is still significant.

In any case, practice is the criterion of truth; I will try both variants and compare them.

 
Vladimir Perervenko:
Calibration makes a "hard" classifier "soft" (it can say "I don't know"). Gaps disappear.

A question about R: how do you make versions compatible?

package 'MXNet' is not available (for R version 3.4.0)

For example: https://www.r-bloggers.com/recurrent-models-and-examples-with-mxnetr/

And wouldn't you like to write an article about recurrent nets? :)

 
Maxim Dmitrievsky:

A question about R: how do you make versions compatible?
Get into the code of the package, and fix it.
 
Yuriy Asaulenko:
Get into the code of the package, and fix it.

I'm clumsy at this, I don't know where to start. )
 
Maxim Dmitrievsky:

I'm clumsy at this, I don't know where to start. )

The source code of the package. Download it, fix it, compile it. Sometimes this works, sometimes it doesn't. Maybe there are only two lines to correct, maybe a lot ).

The easiest option is to download the previous version of R.
