Machine learning in trading: theory, models, practice and algo-trading - page 3483

 
Yuriy Asaulenko #:

TensorFlow? There will be no stopping - it will run for however many epochs are assigned.

CatBoost.

 
Yuriy Asaulenko #:

Why do I need it? I haven't been here for a few years and I don't plan to be here any more. Here, I'm looking at Masha's training thread - it's still there.

Really?

 
Yuriy Asaulenko #:

Why do I need it? I haven't been here for a few years and I don't plan to be here any more. Here, I'm looking at Masha's training thread - it's still there.

Well, this is from the category of "I chased you for three hours to tell you how much I don't care about you".

Why stay away for years and then come back just to delete the account))))
 
Now all the flooders should turn on the other flooders and admit to them that they are flooders.
 
Aleksey Vyazmikin #:

You can imagine that we applied a transformation through some function - say, a sine. The function is the same in all samples. That is what changed the scale and the order of placement on that scale.

The easiest way to see it: you move blocks/quanta of a column to another place (to other rows), but you do not change the teacher (target) column. In the new location the moved quanta no longer correspond to their teacher rows (those stay in the old location), i.e. the teacher's answers become random for the moved quanta/rows.
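Forester's point can be sketched in plain NumPy (an illustration, not anyone's actual pipeline - the rule `x > 0` and the sample size are made up): shuffle a feature column to other rows while leaving the teacher (target) column in place, and the labels become effectively random for the moved values.

```python
import numpy as np

rng = np.random.default_rng(0)

# A feature perfectly tied to the target: y = 1 exactly when x > 0
x = rng.normal(size=1000)
y = (x > 0).astype(int)

# Accuracy of the obvious rule before moving anything
acc_before = np.mean((x > 0).astype(int) == y)  # 1.0 by construction

# "Move quanta to other rows": permute the feature column only,
# leaving the teacher column untouched
x_moved = rng.permutation(x)
acc_after = np.mean((x_moved > 0).astype(int) == y)

print(acc_before)          # 1.0
print(round(acc_after, 2)) # near 0.5: the teacher is now random for these rows
```

This is the same mechanism permutation-importance tests rely on: breaking the row correspondence destroys whatever signal the column carried.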
 
Forester #:
The easiest way to see it: you move blocks/quanta of a column to another place (to other rows), but you do not change the teacher (target) column. In the new location the moved quanta no longer correspond to their teacher rows (those stay in the old location), i.e. the teacher's answers become random for the moved quanta/rows.

Nah, that's not it. Imagine we have ranges in the predictor:

1. 0 to 10 - 55% probability.

2. 10 to 15 - 45% probability.

3. 15 to 25 - 50% probability.

Here I've simply renumbered the ranges after ordering them by probability; relative to the old numbering, new 1 = old 2, new 2 = old 3, new 3 = old 1. In effect I changed only their labels.

Accordingly, the tree-building algorithm gets a smoother transition of the probability of class "1" across the range.

Of course, I may have overlooked something and am wrong.
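The renumbering described above can be sketched like this (a toy illustration using only the three ranges and probabilities from the post; NumPy only): order the bin labels by their class-"1" probability so that the new scale is monotone in probability.

```python
import numpy as np

# Three quant ranges of a predictor and their class-"1" probabilities
bins = np.array([1, 2, 3])           # old bin labels
prob = np.array([0.55, 0.45, 0.50])  # P("1") per bin

# Renumber bins by ascending probability
order = np.argsort(prob)             # bin indices sorted by probability
new_label = np.empty_like(bins)
new_label[order] = np.arange(1, len(bins) + 1)

# Mapping old label -> new label
print(dict(zip(bins.tolist(), new_label.tolist())))  # {1: 3, 2: 1, 3: 2}

# Along the new axis the probability is monotone: 0.45, 0.50, 0.55,
# so a tree sees a smoother transition of P("1") across the scale
print(prob[order])
```

Only the labels change - the rows, the teacher column, and the per-bin probabilities are untouched, which is the point of contention in the exchange above.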

 
Aleksey Vyazmikin #:

Nah, that's not it. Imagine we have ranges in the predictor:

1. 0 to 10 - 55% probability.

2. 10 to 15 - 45% probability.

3. 15 to 25 - 50% probability.

Here I've simply renumbered the ranges after ordering them by probability; relative to the old numbering, new 1 = old 2, new 2 = old 3, new 3 = old 1. In effect I changed only their labels.

Accordingly, the tree-building algorithm gets a smoother transition of the probability of class "1" across the range.

Of course, I may have overlooked something and am wrong.

As far as I understand, it is strange that something changed at all. Perhaps the model is unstable and different implementations give different results.

 
Yuriy Asaulenko #:

As far as I understand, it is strange that something changed at all. Perhaps the model is unstable and different implementations give different results.

The tree maths works like this: splits are searched going from larger ranges to smaller ones, and after the ordering this becomes easier to do, which leads to faster learning. Which is not actually a good thing, as on new data the probabilities will then shift more than 50% of the time.

 
Aleksey Vyazmikin:

CatBoost change?

It should change, of course - the ACF of the features will change.
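A rough illustration of why reordering a feature's values changes its ACF while leaving the value distribution intact (a sketch with a made-up sine series; lag-1 autocorrelation stands in for the full ACF):

```python
import numpy as np

def acf1(x):
    """Lag-1 autocorrelation of a series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

rng = np.random.default_rng(1)

# A smooth, highly autocorrelated feature
t = np.linspace(0, 4 * np.pi, 500)
feat = np.sin(t)

# Reorder its values: same set of numbers, different sequence
feat_shuffled = rng.permutation(feat)

print(round(acf1(feat), 3))           # close to 1: smooth series
print(round(acf1(feat_shuffled), 3))  # close to 0: ordering destroyed
```

The sorted values of the two series are identical, so any statistic that ignores row order (histograms, quantile borders) is unchanged; only order-dependent statistics like the ACF move.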

 
Maxim Dmitrievsky #:

It should change, of course - the ACF of the features will change.

Is this used in the learning algorithm? I can't figure out where.
