Machine learning in trading: theory, models, practice and algo-trading - page 3146
We keep going round in circles because you play obtuse, supposedly not understanding what I write.
There's no code. Figure it out yourselves.
Too bad))))))
There used to be a Yuri here — Asaulenko, if I remember the surname right. He also kept writing something all the time, without any details.
When people asked him to explain why he was writing it, he would tell them to figure it out for themselves.
Learn to distinguish the blabbermouths from the practitioners...
It's that simple...
2-3 features, that's all...
you can build a classifier on a single decision tree and it will work better than what you gullible folk have...
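For what it's worth, the claim above — a classifier built from a single decision tree on just 2-3 features — can be sketched in a few lines. Everything here (the feature matrix, the label rule, the depth cap) is a synthetic placeholder, not anything from the thread:

```python
# Minimal sketch: one shallow decision tree on three features.
# The data is synthetic; the label is deliberately driven by feature 0
# so the tree has something real to find.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Three hypothetical features per bar (e.g. returns over 3 lags).
X = rng.normal(size=(1000, 3))
# Synthetic label: next-bar direction, mostly determined by feature 0.
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# A single tree; depth is capped to limit overfitting.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X[:800], y[:800])

accuracy = tree.score(X[800:], y[800:])
print(f"out-of-sample accuracy: {accuracy:.2f}")
```

On real market features the out-of-sample number would of course be far less flattering; the point is only that the model itself is trivial to build.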
The window is the number of predictor values fed to the model's input. Mine is 1500 bars on H1.
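For illustration, a "window" in this sense — the last N values stacked as one input row — can be built like this; 1500 is the figure from the post, the price series is synthetic:

```python
# Sketch of a sliding input window: each model input row is the last
# `window` consecutive values of the series. Data is synthetic.
import numpy as np

def windowed(series: np.ndarray, window: int) -> np.ndarray:
    """Stack the last `window` values as one input row per position."""
    n = len(series) - window + 1
    return np.stack([series[i:i + window] for i in range(n)])

# A stand-in for 2000 H1 closes (random walk).
prices = np.cumsum(np.random.default_rng(1).normal(size=2000))
X = windowed(prices, 1500)   # each row = 1500 consecutive bars
print(X.shape)               # (501, 1500)
```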
Sanych, a 1500-bar window on H1 always gives the same thing: a tangent at the middle of the window.
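The "tangent at the middle of the window" quip can be made concrete: over a window that wide, what the input mostly encodes is the local linear trend. A sketch with synthetic data (the 1500-bar window is from the posts above; everything else is made up):

```python
# Fit a straight line over a 1500-bar window; its slope is, in effect,
# the "tangent at the middle of the window" the post refers to.
import numpy as np

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=1500))  # synthetic 1500-bar H1 window

t = np.arange(len(prices))
slope, intercept = np.polyfit(t, prices, 1)
midpoint_value = slope * (len(prices) / 2) + intercept
print(f"trend slope: {slope:.4f}, value at mid-window: {midpoint_value:.2f}")
```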
It is said above - going round and round.
That's right!
The gullible ones are those who write articles and advertise themselves here on the forum, not traders ).
Oh yes! And what epic walls of text he and Renat Akhtyamov used to write. Epic in volume and in meaninglessness.)
No code. And judging by the fact that even the set of features changes from step to step, this is ordinary overfitting to some "predictive criterion" you have invented. There can be no question of any stability if the pattern jumps around so much from step to step.
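The complaint above — the selected feature set jumping from step to step — can be checked mechanically, e.g. by measuring the Jaccard overlap of selected features between consecutive steps. A hypothetical sketch on pure noise, where selections should be unstable (`select_features` is a stand-in for whatever selection the poster actually uses):

```python
# Stability check: if the selected feature sets barely overlap between
# consecutive walk-forward steps, the "pattern" is likely an overfit.
# Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(3)

def select_features(X: np.ndarray, y: np.ndarray, k: int = 3) -> set:
    """Hypothetical selector: pick the k features most correlated with y."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return set(np.argsort(corr)[-k:])

# Pure-noise data and labels: any selected "pattern" is spurious.
X = rng.normal(size=(1200, 20))
y = rng.integers(0, 2, size=1200).astype(float)

# Three consecutive 300-sample steps, then overlap between neighbours.
steps = [select_features(X[i:i + 300], y[i:i + 300]) for i in range(0, 900, 300)]
overlaps = [len(a & b) / len(a | b) for a, b in zip(steps, steps[1:])]
print("Jaccard overlap between consecutive steps:", overlaps)
```

An overlap near zero step after step is exactly the instability the post is pointing at.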
It's just a matter of feeding predictors into a neural net without thinking....
If a neural net trained once doesn't keep working, it will never be profitable!
Oh! Yes, fitting, i.e. ML, can be done even on every tick. ))