Machine learning in trading: theory, models, practice and algo-trading - page 2388

 

you can get a 1-2% improvement by retraining the same model several times.)

but not 512 times ))

 
Maxim Dmitrievsky:

It doesn't work that way.

remove the low-importance features from the model and you break it, and then you end up comparing apples to oranges (different features), and so on

I also tried deleting them one at a time. It does delete them. When there is a lot of trash, the model hardly notices losing one feature )
 
I used to work only with bars, though. Naturally, the neighboring bars stand in for the removed one.
Now I will have to deal with a lot of features based on moving averages and the like.
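The one-at-a-time deletion being discussed can be sketched roughly as follows. This is an illustrative outline only, on synthetic data with scikit-learn as a stand-in trainer, not the poster's actual setup:

```python
# Sketch: drop features one at a time and measure the accuracy change.
# With many "trash" features, most drops come out near zero.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def fit_score(cols):
    """Train on the given feature columns, return test accuracy."""
    model = RandomForestClassifier(n_estimators=50, random_state=0)
    model.fit(X_tr[:, cols], y_tr)
    return accuracy_score(y_te, model.predict(X_te[:, cols]))

base = fit_score(list(range(X.shape[1])))
drops = []
for i in range(X.shape[1]):
    cols = [c for c in range(X.shape[1]) if c != i]
    drops.append(base - fit_score(cols))  # importance of feature i
```

Features whose removal barely moves the score are the candidates for elimination.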
 
elibrarius:
I also tried deleting them one at a time. It does. When there is a lot of trash, the model hardly notices losing one feature )

This is all tuning, not a way to find some kind of pattern, so the tail should not wag the dog

 
elibrarius:
After selecting the first best feature, the second one will be chosen with the best interaction with the first one, and so on, when you reach 10, the next one will be chosen with the best interaction with any of the 10 previously selected, but most likely with all of them.

I'm not a fan of immediately discarding all possible options, perhaps this approach can provide an interesting option.

The thing is, I just can't automatically run many training stop/start cycles while sifting out features at each iteration.

I can prepare data for one iteration, then I need to prepare it again - that's why I need Python.

By the way, I wouldn't mind trying my predictors with your method as well, if there is a ready-made automated script for this.
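The greedy forward selection quoted above (each new feature chosen for its interaction with the already-selected set) could look like this in outline. Data and trainer are illustrative, not the poster's:

```python
# Sketch of greedy forward feature selection: at each step, add the
# candidate whose combination with the already-selected features
# gives the best cross-validated score.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=1)

selected, remaining = [], list(range(X.shape[1]))
for _ in range(5):  # pick the 5 best features
    scores = {f: cross_val_score(LogisticRegression(max_iter=1000),
                                 X[:, selected + [f]], y, cv=3).mean()
              for f in remaining}
    best = max(scores, key=scores.get)
    selected.append(best)
    remaining.remove(best)
```

Note that each step re-evaluates every remaining candidate together with the current set, which is exactly why the total number of trainings grows so fast.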

 
Aleksey Vyazmikin:

I'm not a fan of immediately discarding all possible options, perhaps this approach can provide an interesting option.

The thing is, I just can't automatically run many training stop/start cycles while eliminating features at each iteration.

I can prepare data for one iteration, then I need to prepare it again - that's why I need Python.

By the way, I wouldn't mind trying my predictors with your method as well, if there is a ready-made automated script for this.

If you go all the way to the end, there will be almost 1,000,000 models trained for 1000 features.
The automated script is simple: two nested loops. Your problem is the automatic start of training. Solve that, and everything else will be trivial.
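The two nested loops imply one training per remaining candidate per selection step, so the count grows quadratically in the number of features. A small sanity check (whether the total lands at half a million or closer to a million depends on whether a symmetric elimination pass is counted as well):

```python
# Trainings for full greedy selection over n features: the outer loop
# runs n steps, the inner loop tries each remaining candidate,
# giving n + (n-1) + ... + 1 = n * (n + 1) / 2 trainings.
def trainings(n_features: int) -> int:
    return n_features * (n_features + 1) // 2

print(trainings(1000))  # 500500 — hundreds of thousands of models
```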

 
elibrarius:

If you go all the way to the end, there will be almost 1000000 models trained for 1000 features.

That's a lot - right now 1000 models take about a day to train.

It may be faster on a random forest if you parallelize it.
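A random forest does parallelize naturally, since the trees are independent. In scikit-learn, for example, this is one parameter (a sketch on synthetic data, not the poster's setup):

```python
# Train a random forest using all available CPU cores.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# n_jobs=-1 fits the independent trees on all cores in parallel.
clf = RandomForestClassifier(n_estimators=200, n_jobs=-1, random_state=0)
clf.fit(X, y)
```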

elibrarius:

The automated script is simple: two nested loops. You have a problem with the automatic start of training. Solve it, everything else will be trivial.

That's the problem, I can't automate the process.

 
Maxim Dmitrievsky:

It's all tuning, not a way to find some kind of pattern, so the tail shouldn't wag the dog.

You just didn't get the point I was trying to make: the best model in terms of classification statistics is not necessarily the best in terms of profitability. The two coincide only with fixed SL and TP.

I'm looking for a method to influence the income and expense curves - the green and red curve.

This is what the probability distribution of the model response to the sample looks like when trained:

This is how it looks when the independent sample is fed:

As you can see, the curves have almost merged, while the patterns have not deteriorated that much - the aqua curve is the zeros and the magenta curve is the ones; they are still acceptably separated and the patterns are more or less preserved globally, but the value of those patterns has not been weighted in terms of income/expense.

 
Aleksey Vyazmikin:

That's the problem, I can't automate the process.

Two or three days of studying Python and you can make something simple, like launching CatBoost. Besides, there are examples in Maxim's articles.
 
Maxim Dmitrievsky:

In the next article, please add stop loss and take profit to the Python code.
