Machine learning in trading: theory, models, practice and algo-trading - page 407

 

The model finally finished training, and I added it to the existing one. Profitability has grown dramatically: on the same out-of-sample section the profitability is 12.65.

True, the deposit load is not small, but it is bearable, so there it is. I started optimizing the example from the first post, but I will not push the machine if it does not compute quickly. There are few columns there, but a lot of rows, so we'll see...

Here is the report. The ratio of trades is quite interesting, but the drawdown is rather large... A matter of taste, as they say...


 
Mihail Marchukajtes:

The model finally finished training, and I added it to the existing one. Profitability has grown dramatically: on the same out-of-sample section the profitability is 12.65.

True, the deposit load is not small, but it is bearable, so there it is. I started optimizing the example from the first post, but I will not push the machine if it does not compute quickly. It has few columns but a great many rows, so we'll see how it goes...



Try to get more trades so the result can be estimated. The more trades you have, the faster you can tell in real trading when the model needs retraining. For example, I estimate it this way: on the test results the longest losing streak is 2 trades, so if real trading shows 4 losses in a row, retraining is needed. On average that is 400/60 = 6-7 deals per day, i.e. within a single day you can tell whether retraining is worthwhile.
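This retraining trigger can be sketched in a few lines of Python. This is only an illustration of the rule described above (the function names and the 2x factor are my own framing, not code from the poster):

```python
def max_loss_streak(pnl):
    """Longest run of consecutive losing trades in a P&L sequence."""
    longest = current = 0
    for p in pnl:
        current = current + 1 if p < 0 else 0
        longest = max(longest, current)
    return longest

def needs_retraining(live_pnl, test_max_streak, factor=2):
    """Flag retraining when the live losing streak reaches `factor` times
    the worst streak seen on the test set (e.g. 2 on test -> 4 live)."""
    return max_loss_streak(live_pnl) >= factor * test_max_streak
```

With 6-7 deals per day, a 4-loss streak can show up within a single trading day, which is what makes the rule usable as a same-day health check.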

So far I have 400 trades over 3 months on the 15-minute timeframe. The training sample is 1 month (in the middle), with a month out of sample on either side. I enlarged my lot especially for looks. The initial balance is $1000 :) I have not yet automated retraining for a single run over the whole history; for that I would need to port J-predictor or use another neural network, because the weights are currently selected through the optimizer.
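The split described above (training month in the middle, an out-of-sample month on each side) can be sketched like this. A minimal illustration with hypothetical names, assuming rows are chronologically ordered and timestamps compare as ISO date strings:

```python
def middle_train_split(rows, train_start, train_end, key=lambda r: r[0]):
    """Split chronologically ordered rows into left OOS / train / right OOS,
    with the training window sitting in the middle of the history.
    `key` extracts the timestamp from a row."""
    left  = [r for r in rows if key(r) < train_start]
    train = [r for r in rows if train_start <= key(r) <= train_end]
    right = [r for r in rows if key(r) > train_end]
    return left, train, right
```

Having out-of-sample data on both sides of the training window checks the model against both older and newer market conditions, not just the future.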

And yes, profitable trades are 90%, but the average losing trade is larger, because the average stop loss is larger than the average take profit. The longest winning streak is 33 profits in a row, against only 2 losses in a row, yet the total profit of those 33 trades is only 4 times the total loss of the 2 trades (the stop loss needs shortening). With these settings it no longer earns in February.
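The point that a high win rate alone does not guarantee profit is easy to check numerically. A small sketch (my own helper, not the poster's code) that computes the usual summary statistics from a P&L list:

```python
def trade_stats(pnl):
    """Win rate, profit factor and expectancy for a list of trade P&Ls."""
    wins = [p for p in pnl if p > 0]
    losses = [-p for p in pnl if p < 0]
    win_rate = len(wins) / len(pnl)
    # Profit factor: gross profit divided by gross loss.
    profit_factor = sum(wins) / sum(losses) if losses else float("inf")
    # Expectancy: average P&L per trade; must be positive to earn long-term.
    expectancy = sum(pnl) / len(pnl)
    return win_rate, profit_factor, expectancy
```

For example, nine wins of 1 and a single loss of 5 give a 90% win rate but a profit factor of only 1.8, which is exactly the situation described: a few large stop-loss hits eat most of a long winning streak.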


 
If the example from the first post has not finished computing by tonight, I'll kill it. Especially since the task itself is pointless and of no practical interest. And I don't like racing the computer for days just for fun. Resources, after all...
 
Mihail Marchukajtes:
If the example from the first post has not finished computing by the evening, I'll kill it. Moreover, the problem itself is pointless and of no practical interest. And I don't want to race the computer for days just for fun. Resources, after all...

Will you check it on the validation or the test section? If you are training on the full file, you can check against the validation file from the post https://www.mql5.com/ru/forum/86386/page4#comment_2530392.
I have experimented a bit with RNN, and it seems it simply memorizes the training examples (important in combination with noise predictors), so on new data the noise predictors spoil the result. That is, RNN is prone to overfitting, at least on logical problems with 0s and 1s.

But it may well interpolate intermediate values between 0 and 1 quite well.

Machine learning: theory and practice (trading and not only)
  • 2016.05.28
  • www.mql5.com
Good day to all. I know there are machine learning and statistics enthusiasts on this forum...
 
elibrarius:

Will you check it on the validation or the test section? If you are training on the full file, you can check against the validation file from the post https://www.mql5.com/ru/forum/86386/page4#comment_2530392
I have experimented a bit with RNN, and it seems it simply memorizes the training examples (important in combination with noise predictors), so on new data the noise predictors spoil the result. That is, RNN is prone to overfitting, at least on logical problems with 0s and 1s.

But it may well interpolate intermediate values between 0 and 1 quite well.


I've run the whole file; we'll see the training result, then I'll upload the model here and you can check it on validation... That's how it is...
 
But the most interesting part is that a new contract starts now, and it will be interesting to see how a model trained on the previous contract performs. So we'll see...
 
elibrarius:

Will you check it on the validation or the test section? If you are training on the full file, you can check against the validation file from the post https://www.mql5.com/ru/forum/86386/page4#comment_2530392
I have experimented a bit with RNN, and it seems it simply memorizes the training examples (important in combination with noise predictors), so on new data the noise predictors spoil the result. That is, RNN is prone to overfitting, at least on logical problems with 0s and 1s.

But it may well interpolate intermediate values between 0 and 1 quite well.


What do you call overtraining? And how do you determine which predictors are noise and which are not? Why do you think the noise predictors spoil the results, rather than the important ones having stopped working? You can hardly find important predictors in the market that will work forever.

RNN should be handled in a specific way: construct a stationary series and take signals at the extrema, hoping for a reversal.
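One common way to get a roughly stationary series from price and trade its extrema is a rolling z-score. A minimal mean-reversion sketch under that assumption (the window, threshold, and function name are my own illustration, not the poster's method):

```python
import statistics

def zscore_signal(prices, window=20, threshold=2.0):
    """Detrend price with a rolling mean/std to get a roughly stationary
    series, then signal a reversal trade at extremes:
    +1 = buy after an extreme low, -1 = sell after an extreme high, 0 = flat."""
    signals = []
    for i in range(len(prices)):
        if i < window:
            signals.append(0)  # not enough history yet
            continue
        hist = prices[i - window:i]
        mu = statistics.fmean(hist)
        sd = statistics.pstdev(hist)
        z = (prices[i] - mu) / sd if sd else 0.0
        if z > threshold:
            signals.append(-1)   # extreme high: bet on reversion down
        elif z < -threshold:
            signals.append(1)    # extreme low: bet on reversion up
        else:
            signals.append(0)
    return signals
```

The z-score series is what you would feed to (or compare against) the model; the extreme-and-revert logic is the "hoping for a reversal" part of the idea above.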

After all, any training is essentially curve fitting, albeit with some non-linear sense to it...

 
Maxim Dmitrievsky:


Did you compare the performance of different ML models? Why stop at decision trees? I got the smallest error on them, as I wrote above.

Trees, like other ML methods, have their advantages and disadvantages. I settled on this method for the problem from the first post on the principle of reasonable sufficiency: it is accurate and fast, both in the resulting code and in recursive generation.

Although, to avoid trolling in this thread, it seems I should have wrapped the forest in some bootstrap, or endlessly improved the trees with some boosting, and described every step in detail...)

 
Vasily Perepelkin:
I am trying to talk some sense into you and others who have gone astray.
Decisions are made by a man, not by trees, so quit fooling around.
 
Vasily Perepelkin:
Decisions are made by a man, not trees, so quit messing around.
I agree with you, but not completely.
A man has to assess the situation and understand the environment in which he and his family live.
The world is developing very fast now, and the information environment is a big field in which you can find anything.
By rejecting it you simply cut off part of your strategic overview and lose sight of what is going on, which puts you, as a male protector, at unnecessary risk.