Machine learning in trading: theory, models, practice and algo-trading - page 1342

 
Vladimir Perervenko:

Congratulations.

Terminal: Added an API for requesting data from the MetaTrader 5 terminal from applications using the R language.

We have prepared a special package, MetaTraderR. It contains the DLL for interaction between R and the MetaTrader 5 terminal, documentation, and auxiliary R files. The package is now being registered in the CRAN repository and will soon be available for download and installation.

We'll wait for the sequel.

Good luck

Or maybe even condolences, because now it will be easy to compare its performance with native MQL on big data and see what rubbish this R is...)

 
Aleksey Vyazmikin:

Here is another way of representing the models' behavior on the sample - this time by color:

TP - correct classification of "1" - green

FP - incorrect classification of "1" - red

FN - incorrect classification of "0" (actually a missed "1") - blue

The screenshot is large - it is more interesting to view it by clicking to enlarge.

And the GIF switches between the two options on click, for clarity.

You can see that my models dip into the market very little, because there is a lot of blue - I need to look into the causes of this inactivity. Perhaps I should look for other criteria for stopping training, not just accuracy. I would set thresholds on both recall (completeness) and precision, but for some reason the developers do not provide this variant of stopping, which is a pity.
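A stopping rule of the kind described above - requiring both precision and recall (completeness) to clear chosen thresholds - can be sketched in a few lines. The counts and threshold values below are hypothetical, for illustration only:

```python
# Minimal sketch of a dual-threshold stopping criterion built from the
# TP/FP/FN counts defined in the post. Thresholds are invented examples.

def precision_recall(tp: int, fp: int, fn: int):
    """Precision = TP/(TP+FP); recall (completeness) = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

def should_stop(tp: int, fp: int, fn: int,
                min_precision: float = 0.6,
                min_recall: float = 0.4) -> bool:
    """Stop training only when BOTH metrics clear their thresholds."""
    p, r = precision_recall(tp, fp, fn)
    return p >= min_precision and r >= min_recall

# Example: 70 correct "1"s, 30 false "1"s, 50 missed "1"s
print(precision_recall(70, 30, 50))  # (0.7, 0.583...)
print(should_stop(70, 30, 50))       # True
```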

Skips happen because features go outside their known value ranges

have you found any interesting predictors?

 
Maxim Dmitrievsky:

skips happen because features go outside their known value ranges

have you found any interesting predictors?

So you think that such values had not been encountered in the history before, and that is why the model produces skips during inactivity - i.e., the training sample was not complete enough?

Well, my predictors are all interesting - I have been inventing them for years :) Which ones work better, I do not know yet; I am working on a script that I hope will help me understand this.

 
Vladimir Perervenko:

Congratulations.

Terminal: Added an API for requesting data from the MetaTrader 5 terminal from applications using the R language.

We have prepared a special package, MetaTraderR. It contains the DLL for interaction between R and the MetaTrader 5 terminal, documentation, and auxiliary R files. The package is now being registered in the CRAN repository and will soon be available for download and installation.

We'll wait for the sequel.

Good luck

Very interesting - we'll be waiting.

 

What the trees are rustling about...

In the graph, the Y axis is the leaf number (of the binary trees) and the X axis is the sample row (here, from the test sample). The color ranges in the legend are absolute values showing the leaf's response. The model uses 7 trees, so for each row one value arrives from each tree's activated leaf - 7 in total; they are summed and the logistic function is applied, so, for example, a sum of 0 gives 0.5.
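The summation-plus-logistic step described above can be sketched as follows; the leaf response values are invented for illustration:

```python
import math

def sigmoid(x: float) -> float:
    """Logistic function mapping a raw summed score to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical leaf responses for one sample row: one activated leaf per
# tree, 7 trees in total (values made up for this example).
leaf_values = [0.8, -0.3, 0.1, -0.2, 0.5, -0.4, 0.0]

raw_score = sum(leaf_values)   # sum of the 7 tree outputs
prob = sigmoid(raw_score)      # probability of class "1"

print(sigmoid(0.0))  # 0.5 -- a summed score of 0 gives probability 0.5
```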

From the graph you can conclude that some leaves were never activated over the test-sample period, i.e. situations from the training sample did not recur in the test sample. You can also see a large concentration of leaves with a low response (red), which barely affect the result - most likely indicating noise, or logically similar conditions split across leaves.

Graph of the model in the market

Accuracy balance graph

The model was taken simply as an example containing a small number of leaves (trees).

 

I tried to train a neural network in Python. The package is scikit-learn; the network itself is sklearn.neural_network.MLPRegressor. Over 100 neurons, 7 hidden layers, 19 inputs, 1 output. The task is to predict a random process.

The task is artificial, built on a noise generator, and constructed so that the noise is theoretically predictable. I tried predicting several samples ahead.

The result of comparing the forecast with the real values at 5,000 randomly chosen points:

X is the forecast, Y is the real value. All points lie very close to the 45-degree line. That is, the prediction is almost perfect (on the artificial sample).

Training is very fast: 24 epochs, about 10 seconds.

I must say, I am very surprised. I tried very hard to hide the signal in the data, and the network still found it. In general, close to mysticism).

Conclusions: sklearn.neural_network.MLPRegressor is quite usable. I haven't tried the classifier yet.

I have already tried it on the market; no results so far. It finds nothing - it says there is nothing there - although the task is of the same class as the artificially generated one.
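The experiment above might look roughly like the sketch below, with invented data: a white-noise series whose target is a hidden deterministic function of the 19 lagged inputs, so it is theoretically predictable. The layer sizes follow the post's description, but the data-generating rule and all other parameters are assumptions, not the author's actual setup:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
noise = rng.standard_normal(6000)          # the "random" source series

LAGS = 19                                  # 19 inputs, as in the post
X = np.column_stack([noise[i:i - LAGS] for i in range(LAGS)])
y = np.tanh((X @ rng.standard_normal(LAGS)) * 0.3)  # hidden deterministic rule

# 7 hidden layers of 100 neurons, 1 output -- matching the described topology
model = MLPRegressor(hidden_layer_sizes=(100,) * 7,
                     max_iter=300, random_state=0)
model.fit(X[:5000], y[:5000])

# On held-out rows, predictions should hug the 45-degree line (R^2 near 1)
print(round(model.score(X[5000:], y[5000:]), 3))
```

Because the target is a smooth function of the inputs rather than true noise, the network can recover it; on real market series no such hidden deterministic rule is guaranteed to exist.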

 
Yuriy Asaulenko:

I tried to train a neural network in Python. The package is scikit-learn; the network itself is sklearn.neural_network.MLPRegressor. Over 100 neurons, 7 hidden layers, 19 inputs, 1 output. The task is to predict a random process.

The task is artificial, built on a noise generator, and constructed so that the noise is theoretically predictable. I tried predicting several samples ahead.

The result of comparing the forecast with the real values at 5,000 randomly chosen points:

X is the forecast, Y is the real value. All points lie very close to the 45-degree line. That is, the prediction is almost perfect (on the artificial sample).

Training is very fast: 24 epochs, about 10 seconds.

I must say, I am very surprised. I tried very hard to hide the signal in the data, and the network still found it. In general, close to mysticism).

Conclusions: sklearn.neural_network.MLPRegressor is quite usable. I haven't tried the classifier yet.

I have already tried it on the market; no results so far. It finds nothing - it says there is nothing there - although the task is of the same class as the artificially generated one.

This is a task NOT of the same class.

The market is NOT a noise generator.
 
Oleg avtomat:

This is a task NOT of the same class.

The market is NOT a noise generator.

The question is highly debatable.) Share your model, and if it can be fed into the NS, we will check at the same time whether this tractor works).

 
Yuriy Asaulenko:

The task is to predict a random process.

The task is artificial, built on a noise generator, and constructed so that the noise is theoretically predictable. I tried predicting several samples ahead.

The result of comparing the forecast with the real values at 5,000 randomly chosen points:


I.e., the prediction is almost perfect (on the artificial sample).

So the data is not random - otherwise, how do you explain it?

 
Yuriy Asaulenko:

The question is highly debatable.) Share your model, and if it can be fed into the NS, we will check at the same time whether this tractor works).

The question is not debatable at all. And that is exactly what your NS is telling you: it works quite well on a noise generator, but it does not work on the market time series - hence your result, "It finds nothing, says there is nothing there".

Reason: