Machine learning in trading: theory, models, practice and algo-trading - page 1545

 
mytarmailS:

Try to predict not the sign of the increment but, for example, the price of the next knee of the zigzag, or something better, say the sequence of the next 30 candles. In other words, use regression rather than classification, and regression not one step ahead but aimed at finding the extremum. I think you will be pleasantly surprised.

Alas, miracles never happen: aggregate variables such as the price itself (an extremum, etc.) cannot be predicted at all, that is, no better than blind guessing (in other words, not at all), and relative ones (the price deviation from an extremum) are just as bad as the increments.
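A minimal sketch of the quoted idea, to make the disagreement concrete: build a zigzag, take the price of the next knee as a regression target, and fit a model. Everything here is an illustrative assumption - the 0.5% zigzag threshold, the log-return features, the random-forest regressor, and the synthetic price series are placeholders for whatever the reader actually uses:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def zigzag_knees(close, pct=0.005):
    """Indices of confirmed zigzag reversal points ("knees")."""
    knees, ext_i, direction = [], 0, 0
    for i in range(1, len(close)):
        if direction >= 0 and close[i] > close[ext_i]:
            ext_i, direction = i, +1          # new running high
        elif direction <= 0 and close[i] < close[ext_i]:
            ext_i, direction = i, -1          # new running low
        elif direction == +1 and close[i] < close[ext_i] * (1 - pct):
            knees.append(ext_i); ext_i, direction = i, -1   # top confirmed
        elif direction == -1 and close[i] > close[ext_i] * (1 + pct):
            knees.append(ext_i); ext_i, direction = i, +1   # bottom confirmed
    return np.array(knees)

def make_dataset(close, knees, window=30):
    """Features: last `window` log-returns; target: price of the next knee."""
    rets = np.diff(np.log(close))
    nxt = np.searchsorted(knees, np.arange(len(close)), side="right")
    X, y = [], []
    for t in range(window, len(close) - 1):
        if nxt[t] < len(knees):               # a future knee must exist
            X.append(rets[t - window:t])
            y.append(close[knees[nxt[t]]])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 5000)))  # placeholder series
X, y = make_dataset(close, zigzag_knees(close))
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:-500], y[:-500])                 # time-ordered split, no shuffling
print("out-of-sample R^2:", model.score(X[-500:], y[-500:]))
```

An out-of-sample R^2 near zero here would be exactly the objection raised above; running this on real quotes is the quickest way to settle who is right.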

 
The grail:

Alas, miracles never happen: aggregate variables such as the price itself (an extremum, etc.) cannot be predicted at all, that is, no better than blind guessing (in other words, not at all), and relative ones (the price deviation from an extremum) are just as bad as the increments.

I'll tell you this: the extremum that will be the most significant over, say, the next hour is easier to predict than the value of the next candle, or the candle's color, or the direction of a zigzag, or...

At least that's how it is for me, and there is a reasonable explanation for it.

 
Maxim Dmitrievsky:

A tester for Python - as a library there are plenty of different ones.

As for everything else: I'm now running it with different parameters and the enthusiasm is gone - the same overfitting as with the forest;

it's easy to see where the training set ends and the test set begins. So, in fact, nothing has changed: CatBoost gave no advantage.

I'll try LSTM later.


If you think it is overfitting, then stop generating trees earlier; but judging by the number of trades there, it is more likely underfitting...

What Precision and Recall do you get on the different samples?

Do you have the sample in a file? It would be interesting for me to run it and compare the training dynamics with my own data; if a good model comes out, I'll send it to you.
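For reference, per-sample Precision and Recall take only a few lines with scikit-learn; the data, the split, and the model below are placeholders, not anyone's actual setup:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score

# placeholder data and model - substitute your own splits and classifier
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + rng.normal(0, 1, 2000) > 0).astype(int)
X_train, X_test = X[:1500], X[1500:]
y_train, y_test = y[:1500], y[1500:]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
for name, Xs, ys in [("train", X_train, y_train), ("test", X_test, y_test)]:
    pred = model.predict(Xs)
    print(f"{name}: precision={precision_score(ys, pred):.3f} "
          f"recall={recall_score(ys, pred):.3f}")
```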


By the way, I've decided to try pulling individual leaves out of CatBoost. I don't know whether I'll find good ones among them, or whether the very ideology of boosting rules that out. What do you think?

 
mytarmailS:

I'll tell you this: the extremum that will be the most significant over, say, the next hour is easier to predict than the value of the next candle, or the candle's color, or the direction of a zigzag, or...

At least that's how it is for me, and there is a reasonable explanation for it.

What does "the most significant extremum" mean - how do you check later if it is significant or not?

And the reasonable explanation for all this would be quite interesting to hear.
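The thread never pins the term down, so here is one possible (assumed) operationalization: the most significant extremum of the next hour is the bar whose price deviates most from the current close within the next 60 bars (an M1 chart is assumed). "Checking afterwards" then simply means recomputing this on the realized data:

```python
import numpy as np

def most_significant_extremum(close, t, horizon=60):
    """Index and price of the largest absolute excursion from close[t]
    within the next `horizon` bars."""
    window = close[t + 1 : t + 1 + horizon]
    j = int(np.argmax(np.abs(window - close[t])))
    return t + 1 + j, window[j]

# demo on a synthetic placeholder series
rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 2000)))
idx, price = most_significant_extremum(close, t=1000)
print(f"most significant extremum of the next hour: bar {idx}, price {price:.2f}")
```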

 
Aleksey Vyazmikin:

If you think it is overfitting, then stop generating trees earlier; but judging by the number of trades there, it is more likely underfitting...

What Precision and Recall do you get on the different samples?

Do you have the sample in a file? It would be interesting for me to run it and compare the training dynamics with my own data; if a good model comes out, I'll send it to you.


By the way, I've decided to try pulling individual leaves out of CatBoost. I don't know whether I'll find good ones among them, or whether the very ideology of boosting rules that out. What do you think?

Overfitting in the sense of weak generalization. I have already written above how the problem can be worked around, but I'm sure there are more elegant approaches.

There is no problem with the quality of training on train + validation.
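For the record, "stop generating trees earlier" maps directly onto CatBoost's built-in early stopping against a validation set; a minimal sketch with synthetic placeholder data:

```python
import numpy as np
from catboost import CatBoostClassifier

# placeholder data - substitute real features and labels
rng = np.random.default_rng(0)
X = rng.normal(size=(3000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 3000) > 0).astype(int)
X_train, y_train = X[:2000], y[:2000]
X_val, y_val = X[2000:], y[2000:]

model = CatBoostClassifier(
    iterations=2000,            # upper bound; early stopping trims it
    learning_rate=0.03,
    eval_metric="Logloss",
    verbose=False,
)
model.fit(
    X_train, y_train,
    eval_set=(X_val, y_val),
    early_stopping_rounds=100,  # stop after 100 rounds without improvement
    use_best_model=True,        # roll back to the best validation iteration
)
print("trees kept:", model.tree_count_)
```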

 

I see everyone is trying to train networks in the supervised way, with a "teacher".

Has anyone tried training directly on an objective function, such as the recovery factor?
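The recovery factor (net profit divided by maximum drawdown) is not differentiable, so one way to "train on it" is to treat it as a black-box objective and search strategy parameters directly. A toy sketch: an assumed moving-average-crossover strategy and plain random search stand in for a real strategy and optimizer:

```python
import numpy as np

def recovery_factor(equity):
    """Net profit divided by the maximum drawdown of an equity curve."""
    profit = equity[-1] - equity[0]
    drawdown = np.max(np.maximum.accumulate(equity) - equity)
    return profit / max(drawdown, 1e-9)

def backtest(close, fast, slow):
    """Equity of a toy long/short MA-crossover strategy, no costs."""
    f = np.convolve(close, np.ones(fast) / fast, mode="valid")
    s = np.convolve(close, np.ones(slow) / slow, mode="valid")
    n = min(len(f), len(s))
    pos = np.sign(f[-n:] - s[-n:])          # +1 long, -1 short, 0 flat
    pnl = pos[:-1] * np.diff(close[-n:])    # position times next price change
    return np.concatenate([[0.0], np.cumsum(pnl)])

rng = np.random.default_rng(0)
close = 100 * np.exp(np.cumsum(rng.normal(0, 0.001, 5000)))  # placeholder series
best, best_rf = None, -np.inf
for _ in range(200):                        # random search over the parameters
    fast = int(rng.integers(2, 50))
    slow = int(rng.integers(51, 200))
    rf = recovery_factor(backtest(close, fast, slow))
    if rf > best_rf:
        best, best_rf = (fast, slow), rf
print("best (fast, slow):", best, "recovery factor:", round(best_rf, 2))
```

The usual caveat applies: on a random-walk series the "best" parameters are pure overfitting, so any such search needs out-of-sample validation.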

 
Aleksey Vyazmikin:
pull the leaves out of CatBoost - I don't know whether I'll find good ones among them, or whether the very ideology of boosting rules that out, what do you think?

It does rule that out.

In XGBoost, the first tree is a rough model; the others correct it with a microscopic coefficient. Nothing there works individually - the trees only give a good result as a whole ensemble.
In CatBoost the basic principle is apparently the same, with its own peculiarities.
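That claim is easy to verify numerically. The sketch below uses scikit-learn's gradient boosting as a stand-in for the same shrinkage principle CatBoost and XGBoost follow: each tree's individual contribution is scaled down by the learning rate, and only the sum of all of them reproduces the model's prediction:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = X[:, 0] ** 2 + X[:, 1] + rng.normal(0, 0.1, 1000)

gb = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1).fit(X, y)

# per-tree contribution, already scaled by the learning rate
contrib = np.array([gb.learning_rate * t[0].predict(X) for t in gb.estimators_])
print(f"mean |contribution| of tree 1:   {np.abs(contrib[0]).mean():.4f}")
print(f"mean |contribution| of tree 100: {np.abs(contrib[-1]).mean():.4f}")
# the full model is the baseline prediction plus the sum of all scaled trees
print("ensemble = init + sum of trees:",
      np.allclose(gb.predict(X),
                  gb.init_.predict(X).ravel() + contrib.sum(axis=0)))
```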
 
Maxim Dmitrievsky:

As for everything else: I'm now running it with different parameters and the enthusiasm is gone - the same overfitting as with the forest;

it's easy to see where the training set ends and the test set begins. So, in fact, nothing has changed: CatBoost gave no advantage.

Apparently there is no point in complicating the system with an MQL + Python + CatBoost bundle. I'll look for regularities with the ALGLIB forest instead.

If there is a regularity, let the forest learn it to 90% rather than to 99% like CatBoost. The main thing is to find it first, and only then chase percentages. Right now both are at about 50%.
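That preference sketched in code: cap the forest's capacity so it cannot memorize noise, then compare train and test accuracy. With pure-noise placeholder labels, as below, the capped forest cannot push training accuracy far above chance, and the test score sits around 50% - exactly the "no regularity found yet" situation described above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# placeholder features and pure-noise labels - substitute real data
rng = np.random.default_rng(1)
X = rng.normal(size=(4000, 20))
y = rng.integers(0, 2, 4000)
X_train, y_train = X[:3000], y[:3000]
X_test, y_test = X[3000:], y[3000:]

forest = RandomForestClassifier(
    n_estimators=100,
    max_depth=4,            # shallow trees: weaker memorization
    min_samples_leaf=50,    # each leaf must average many samples
    random_state=0,
).fit(X_train, y_train)

for name, Xs, ys in [("train", X_train, y_train), ("test", X_test, y_test)]:
    print(name, "accuracy:", round(accuracy_score(ys, forest.predict(Xs)), 3))
```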

 

It seems that with these forests everyone has wandered deep into the thicket,

and can't get out of there without help ))))

 
elibrarius:

Apparently there is no point in complicating the system with an MQL + Python + CatBoost bundle. I'll look for regularities with the ALGLIB forest instead.

If there is a regularity, let the forest learn it to 90% rather than to 99% like CatBoost. The main thing is to find it first, and only then chase percentages. Right now both are at about 50%.

It's interesting to get a feel for everything, both the one and the other... If there is nothing to compare against, it's impossible to understand anything.
