Machine learning in trading: theory, models, practice and algo-trading - page 89

 
Vizard_:
Cross-validation ("sliding control") is for amateurs)))), and the idea that the model is suitable only in that case is a delusion.
But it's all nonsense; everyone has their own quirks and needs at different stages, you can add it too.
I meant the % output for both the training and the test (test = OOS: applying the formula to new data).
The advantage of windowed applications is that they are quick to use. A proper shell could be made;
Reshetov is an experienced coder, so it should be done properly. That's all. All imho, of course.
I will not try Yuri's software, but not because I think he made crap; he is an experienced programmer, after all. It's just that everything is already implemented; it even turns out there is walk-forward in the package I use. And there are 150-200 models per sample, from SVM to a linear model with regularization to XGBoost.
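For reference, the walk-forward evaluation mentioned here can be sketched as a rolling-origin split. The function name and parameters below are illustrative, not from any particular package:

```python
# A minimal sketch of walk-forward (rolling-origin) splitting: the training
# window advances through time and each test window lies strictly after it.

def walk_forward_splits(n, train_size, test_size, step):
    """Yield (train_indices, test_indices) pairs that roll forward in time."""
    start = 0
    while start + train_size + test_size <= n:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += step

# Example: 10 observations, train on 4, test on 2, advance by 2.
for train, test in walk_forward_splits(10, 4, 2, 2):
    print(train, test)  # first split: [0, 1, 2, 3] [4, 5]
```

Each model is refit on the training window and scored on the unseen test window, which mimics how the model would actually be retrained in live use.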
 
Alexey Burnakov:


When selecting a good model in-sample also gives good results out of sample, it means that the model is suitable for the chosen problem.

Once again for the especially gifted: if a model passes selection, this does not mean that it is potentially suitable for the chosen problem, especially in non-stationary environments. This only means that it somehow passed the selection.

If an applicant has passed selection through the university entrance exams, it does not mean that she will defend her diploma, much less that she will subsequently work in her chosen specialty.

Selection only reduces the probability of unsuitability, and not always to zero. And don't forget that selection results can also be false positives and false negatives. That is, there is a non-zero probability that in the process of selection the baby was thrown out with the bathwater.

 
Yury Reshetov:

Once again for the especially gifted: If a model passes the selection, this does not mean that it is potentially suitable for the chosen problem, especially in non-stationary environments. It just means that it somehow passed the selection.

If an applicant has passed the selection through the entrance exams to the university, it does not mean that he will defend his diploma, much less that he will subsequently work in his chosen specialty.

Selection only reduces the probability of unsuitability, and not always to zero. And don't forget that selection results can also be false positives and false negatives. That is, there is a non-zero probability that during the selection process the baby was thrown out with the bathwater.

Let me explain once again for the very fussy.

All results are certainly probabilistic!

There are model results in training; there are results in cross-validation or testing (used to pick model parameters and to make an early stop of learning). And there are out-of-sample model results: the final estimate.

If the results in testing correlate well with the out-of-sample results, then the quality of the dependency modeling carries over into the out-of-sample period. In this case we can take the model that is best in testing (not out of sample). Subsequently, we can retrain the model on all the new data with the known parameters and take the best one, since its correlation with future results has practically been established.

If the results in testing correlate poorly with the out-of-sample results, it makes no sense to take the best model in training, and taking the best model out of sample is a fit. There is only one way out: reject the model-creation method itself, or change the parameter ranges significantly.

 
And I, for example, use deep learning to raise the generalization ability obtained with Reshetov's optimizer, and with the new possibility to assess the quality of the predictor it is simply a beauty. With deep learning, generalization can be raised from 50% to 80-90%, and then the model starts working wonderfully... So keep that in mind, guys. And "let's live in peace" (c) Leopold, and all the rest...
 
.

"You could suggest that they create a thread for a week, for example -- a week for people to express their opinions -- then let them rub it in."

I don't know what hobgoblin put the above quote in my post, but it's not my post.

If the moderators don't like something in my posts, that's their right. Just don't paste someone else's text into my posts, and behave more civilly: point out what exactly you didn't like, and under your own name. And so as not to irritate the moderators, I am leaving this site for my own: the link is in my profile.

Goodbye, everybody!

 
Yury Reshetov:
.

"You could suggest that they create a thread for a week, for example -- a week for people to express their opinions -- then let them rub it in."

I don't know what hobgoblin put the above quote in my post, but it's not my post.

If the moderators don't like something in my posts, that's their right. Just don't paste someone else's text into my posts, and behave more civilly: point out what exactly you didn't like, and under your own name. And so as not to irritate the moderators, I am leaving this site for my own: the link is in my profile.

Bye to all!

WOW! And here I thought you actually wrote that... A smart move... so that's how it is...
 
Yury Reshetov:
.

"You could suggest that they create a thread for a week, for example -- a week for people to express their opinions -- then let them rub it in."

I don't know what hobgoblin put the above quote in my post, but it's not my post.

If the moderators don't like something in my posts, that's their right. Just don't paste someone else's text into my posts, and behave more civilly: point out what exactly you didn't like, and under your own name. And so as not to irritate the moderators, I am leaving this site for my own: the link is in my profile.

Bye to all!

Come back.
 
Vizard_:

And I'll take a look later out of interest, although I think I made crap))))
Yes, the plus of R is that probably everything possible is already implemented in it. I haven't used ML for a long time, just models without retraining. One of the last things I did with ML:
you feed the model events so that they always hit the target. Combine them and you usually get 93-96%. The rest you train on. In other words:
if a child can already walk a little, there is no need to tell him the same thing every day; you only tell him (retrain) when to jump over a puddle
(where the target is not predicted). I've let a little slip; the target, of course, is not the color of the candle)))
Could you somehow rephrase what you wrote in a more structured way? It seems very interesting, but it's so chaotic that no one understands it.
 
Vizard_:

I've let a little slip; the target, of course, is not the color of the candle)))
It's just not the color of the candle )
 
Nested cross validation for model selection
  • stats.stackexchange.com
How can one use nested cross-validation for model selection? From what I read online, nested CV works as follows: there is the inner CV loop, where we may conduct a grid search (e.g. running K-fold for every available model, i.e. combination of hyperparameters/features), and there is the outer CV loop, where we measure the performance of the model...
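As a stdlib-only sketch of the nested CV described in that excerpt (the inner loop searches hyperparameters, the outer loop measures performance on untouched folds), using a toy threshold classifier and invented data:

```python
# Nested cross-validation skeleton. The "model" is a trivial threshold rule,
# so the hyperparameter grid is just a list of thresholds; data is invented.

def kfold(n, k):
    """Yield (train, test) index lists for k contiguous folds over range(n)."""
    size = n // k
    for i in range(k):
        test = list(range(i * size, (i + 1) * size))
        train = [j for j in range(n) if j not in test]
        yield train, test

xs = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6]              # toy feature
ys = [False, True, False, True, False, True, False, True]  # toy labels
grid = [0.25, 0.5, 0.75]                                   # hyperparameters

def accuracy(threshold, idx):
    """Score the threshold classifier on the rows given by idx."""
    return sum((xs[i] > threshold) == ys[i] for i in idx) / len(idx)

outer_scores = []
for outer_train, outer_test in kfold(len(xs), 4):          # outer CV loop
    def inner_score(t):
        # inner CV loop: evaluate t by 2-fold CV on outer_train rows only
        scores = [accuracy(t, [outer_train[j] for j in te])
                  for _, te in kfold(len(outer_train), 2)]
        return sum(scores) / len(scores)
    best_t = max(grid, key=inner_score)                    # grid search
    outer_scores.append(accuracy(best_t, outer_test))      # untouched fold

print(sum(outer_scores) / len(outer_scores))               # final estimate
```

The point of the nesting is that the outer test folds never influence hyperparameter choice, so the averaged outer score is a less biased performance estimate than picking hyperparameters and reporting on the same folds.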