Machine learning in trading: theory, models, practice and algo-trading - page 3483

TensorFlow? There is no early stopping: it will run for exactly as many epochs as you assign.
CatBoost.
Why do I need it? I haven't been here for a few years and I don't plan to be here any more. Here, I'm looking at Masha's training thread - it's still there.
Really?
Imagine we applied a transformation through some function, say a notional sine wave. The function is the same across all samples; what changed is the scale and the order in which values are arranged on that scale.
The easiest way to understand it: you move blocks/quants from a column to another place (to other rows), but you leave the teacher (target) column unchanged. In the new place these moved quants no longer correspond to the teacher's rows (those stay in the old place). That is, the teacher's answers at the new location become random with respect to the moved quants/rows.
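A minimal sketch of that idea (an assumed illustration, not code from the thread): shuffling a feature column while leaving the target column in place makes the target effectively random with respect to the moved values, which is the same mechanism permutation-importance tests rely on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: the feature is perfectly informative about the target.
x = rng.normal(size=1000)
y = (x > 0).astype(int)  # the "teacher" column

# Move the feature's blocks/quants to other rows: shuffle x, keep y in place.
x_shuffled = rng.permutation(x)

# Agreement between a simple rule on the feature and the teacher:
print(np.mean((x > 0).astype(int) == y))           # 1.0 on the original rows
print(np.mean((x_shuffled > 0).astype(int) == y))  # close to 0.5: looks random
```

After the shuffle the same decision rule scores near chance level, because the teacher's answers stayed in their old rows.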
No, it doesn't. Imagine we have ranges in the predictor:
1. 0 to 10: 55% probability.
2. 10 to 15: 45% probability.
3. 15 to 25: 50% probability.
Here I only changed the numbering after ordering by probability; the new numbering relative to the old is 1=2, 2=3, 3=1. In effect, I changed only their labels.
Accordingly, the tree-building algorithm got a smoother transition of the probability of detecting "1" along the predictor's range.
Of course, I may have overlooked something and am wrong.
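The relabelling described above can be sketched as follows (a hypothetical illustration using the three ranges from the example, not code from the thread):

```python
import numpy as np

# Hypothetical quantized predictor: three ranges (bins) with the
# class-"1" probabilities from the example above.
bins = {1: 0.55, 2: 0.45, 3: 0.50}  # old label -> P(target == 1)

# Renumber the bins after ordering by probability (ascending):
# new 1 = old 2, new 2 = old 3, new 3 = old 1, as in the post.
order = sorted(bins, key=bins.get)
relabel = {old: new for new, old in enumerate(order, start=1)}
print(relabel)  # {2: 1, 3: 2, 1: 3}

# After relabelling, P("1") increases monotonically along the new axis,
# so a single threshold split can separate low- and high-probability bins.
probs_in_new_order = [bins[old] for old in order]
print(probs_in_new_order)  # [0.45, 0.5, 0.55]
```

Only the labels change, but the split search now sees a monotone probability profile instead of a zig-zag one.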
As far as I understand, it is strange that something changed at all. Perhaps the model is unstable and different implementations give different results.
The tree maths works like this: splits are tried going from larger values to smaller, and after ordering this becomes easier, which leads to faster learning. But that is deceptive, as the probabilities will then shift on new data more than 50% of the time.
It should change, of course; the ACF of the features will change.
Is this used in the learning algorithm? I can't figure out where.