Machine learning in trading: theory, models, practice and algo-trading - page 2584

 
Aleksey Nikolayev #:

I think it would be worthwhile to better study the issue of customizing the loss function for our trader's needs.

As an example, here is an article on the subject.

What if I wanted to expand my horizons?

First justify the point from a practical standpoint: for example, if you do this, you will get that, and that will lead to this... etc.

Otherwise you can type any word on the subject into Google and flood the thread with links beyond all measure in the blink of an eye.

 
Aleksey Nikolayev #:

I think it would be worthwhile to better study the issue of customizing the loss function for our trader's needs.

As an example, here is an article on the subject.

I agree.

Standard classification and regression are not very well suited to time series.

 
elibrarius #:

I agree.

Standard classification and regression are not very well suited to time series.

First I want to learn how to build arbitrary, correctly specified loss functions - closer, for example, to profit maximization - and to make the training algorithms work properly with such functions. Apparently I will have to go down to the very basics, even in the case of the simplest linear regression.
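For the "very basics of linear regression" part, a minimal sketch of the idea (Python with numpy/scipy is assumed, and the "profit-like" loss below is purely hypothetical): instead of least squares, the coefficients of a linear model are found by handing an arbitrary criterion to a general-purpose optimizer. The article linked in this thread takes a similar route for MAPE.

```python
import numpy as np
from scipy.optimize import minimize

# Toy data (made up): X - features, y - next-bar returns.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([0.5, -0.3, 0.1]) + rng.normal(scale=0.5, size=500)

def neg_captured_return(beta):
    """Hypothetical 'profit-like' loss: trade in the direction of the linear
    prediction; return minus the sum of captured returns, so minimizing it
    maximizes the captured return."""
    pred = X @ beta
    return -np.sum(np.sign(pred) * y)

# Ordinary least squares as a starting point for the optimizer.
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Nelder-Mead copes with the non-smooth sign() inside the loss.
res = minimize(neg_captured_return, beta_ols, method="Nelder-Mead")
print("OLS coefficients:        ", beta_ols)
print("custom-loss coefficients:", res.x)
```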

 
Aleksey Nikolayev #:

I would like to start by learning how to build arbitrary, correctly specified loss functions - closer, for example, to profit maximization - and to make the learning algorithms work properly with such functions.

What's wrong with the maximization itself?
 
Custom metrics are used to select models, but training itself still runs on standard metrics (logloss for classification, for example), because your metric has nothing to do with the feature/target relationship, while the standard ones do. And it is not quite clear here whether to then select models by Sharpe ratio or R2, or to stop training as soon as they are maximized. Probably both are possible.
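A rough sketch of the second option - stopping when the custom metric stops improving - assuming Python with LightGBM; the "Sharpe-like" metric and the data are made up. Training still minimizes logloss, while the custom metric is only computed on the validation set and drives early stopping:

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
ret = rng.normal(scale=0.01, size=2000)      # hypothetical per-bar returns
y = (ret > 0).astype(int)                    # direction of the move as the target

X_tr, X_va, y_tr, y_va, ret_va = X[:1500], X[1500:], y[:1500], y[1500:], ret[1500:]

def sharpe_like(preds, eval_data):
    """Custom eval metric: 'Sharpe ratio' of returns taken in the direction of
    the predicted probability. Used only for monitoring/early stopping;
    the trees are still grown to minimize binary logloss."""
    pos = np.where(preds > 0.5, 1.0, -1.0)
    pnl = pos * ret_va
    return "sharpe_like", pnl.mean() / (pnl.std() + 1e-12), True   # higher is better

train_set = lgb.Dataset(X_tr, label=y_tr)
valid_set = lgb.Dataset(X_va, label=y_va, reference=train_set)

model = lgb.train(
    {"objective": "binary", "metric": "None", "verbosity": -1},  # disable built-in metrics
    train_set,
    num_boost_round=500,
    valid_sets=[valid_set],
    feval=sharpe_like,
    callbacks=[lgb.early_stopping(50)],      # stop when sharpe_like stops improving
)
```

Exact parameter and callback names vary a little between LightGBM versions, but the split - standard loss for fitting, custom metric for stopping and selection - is the point.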
 
mytarmailS #:
What's wrong with the maximization itself?

There may be a problem with poor conditioning, which depends on the metric used. There may also be a problem with computing the gradient and Hessian for boosting.
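To illustrate the gradient/Hessian point (a sketch under assumptions: Python with LightGBM, and a made-up return-weighted logloss): a boosting package accepts a custom objective only as a function returning the analytic first and second derivatives of the loss with respect to the raw score.

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 5))
ret = rng.normal(scale=0.01, size=1000)      # hypothetical per-bar returns
y = (ret > 0).astype(int)
w = np.abs(ret) / np.abs(ret).mean()         # weight mistakes by the size of the move

def weighted_logloss_obj(preds, train_data):
    """Custom objective: logloss with per-sample weights w.
    Boosting needs grad and hess of the loss w.r.t. the raw score."""
    label = train_data.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))         # raw score -> probability
    grad = w * (p - label)                   # first derivative
    hess = w * p * (1.0 - p)                 # second derivative
    return grad, hess

train_set = lgb.Dataset(X, label=y)
# Recent LightGBM versions take the callable via params["objective"];
# older ones used the fobj argument of lgb.train.
model = lgb.train({"objective": weighted_logloss_obj, "verbosity": -1},
                  train_set, num_boost_round=200)
```

A non-smooth criterion like raw profit has no usable Hessian at all, which is one concrete form of the problem mentioned above.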

 
Aleksey Nikolayev #:

There may be a problem with poor conditioning, which depends on the metric used. There may also be a problem with computing the gradient and Hessian for boosting.

In the case of a large feature space (dozens of features), how can you determine in advance which metric will be better conditioned?
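One crude way to look at this in advance (a sketch, not a recipe) is the condition number of X'X, which covers the quadratic, least-squares-like part of the problem; for a custom loss the curvature of the loss itself matters as well.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 30))                       # dozens of features
X[:, 1] = X[:, 0] + 1e-6 * rng.normal(size=1000)      # a nearly collinear pair

# Condition number of X'X: how badly a least-squares-style loss is conditioned.
print(f"cond(X'X) = {np.linalg.cond(X.T @ X):.3e}")   # huge value -> ill-conditioned
```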
 
Maxim Dmitrievsky #:
Custom metrics are used to select models, but training itself still runs on standard metrics (logloss for classification, for example), because your metric has nothing to do with the feature/target relationship, while the standard ones do. And it is not quite clear here whether to then select models by Sharpe ratio or R2, or to stop training as soon as they are maximized. Probably both are possible.

Anyway, it would be interesting to experiment with abandoning the standard metrics altogether and replacing them with ones like those used in MetaTrader optimization) Most likely I will have to go down to a lower level and work directly with optimization packages - something like that.

I'm not ready to promise that this is a grail), but I think I'll try to figure it out sometime.

Fitting Linear Models with Custom Loss Functions in Python
  • alex.miller.im
As part of a predictive model competition I participated in earlier this month, I found myself trying to accomplish a peculiar task. The challenge organizers were going to use “mean absolute percentage error” (MAPE) as their criterion for model evaluation. Since this is not a standard loss function built into most software, I decided to write...
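A rough illustration of that "lower level" (a hedged sketch; Python with scipy is assumed, and the profit-factor criterion over made-up data is only a stand-in for a MetaTrader-style optimization criterion): hand the criterion directly to a derivative-free optimizer instead of a gradient-based fit.

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(4)
X = rng.normal(size=(1500, 4))
ret = rng.normal(scale=0.01, size=1500)          # hypothetical per-bar returns

def neg_profit_factor(beta):
    """MetaTrader-like criterion: profit factor of trades taken in the
    direction of the linear signal X @ beta (negated for minimization)."""
    pnl = np.sign(X @ beta) * ret
    gains = pnl[pnl > 0].sum()
    losses = -pnl[pnl < 0].sum()
    return -gains / (losses + 1e-12)

bounds = [(-1.0, 1.0)] * X.shape[1]
res = differential_evolution(neg_profit_factor, bounds, seed=0, maxiter=50)
print("coefficients:", res.x, " profit factor:", -res.fun)
```

This is roughly what the terminal's genetic optimizer does over EA inputs, only applied to model coefficients, with all the usual overfitting caveats.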
 
Aleksey Nikolayev #:

Anyway, it would be interesting to experiment with abandoning the standard metrics altogether and replacing them with ones like those used in MetaTrader optimization) Most likely I will have to go down to a lower level and work directly with optimization packages - something like that.

I'm not ready to promise that this is a grail), but I think I'll try to figure it out sometime.

This is interesting, but I do not know where to start. Probably the loss should be based on some notion of market regularities - for example, it could be corrected for volatility.
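One possible reading of the volatility correction (my assumption, not a spec): measure each residual in units of the local volatility, so that calm and turbulent stretches contribute comparably to the loss.

```python
import numpy as np

def vol_adjusted_mse(y_true, y_pred, vol, eps=1e-12):
    """Squared error with each residual scaled by a local volatility estimate."""
    z = (y_true - y_pred) / (vol + eps)
    return np.mean(z ** 2)

# Usage with a trailing 20-bar volatility estimate (made-up returns):
rng = np.random.default_rng(5)
r = rng.normal(scale=0.01, size=500)
window = 20
vol = np.full(len(r), r[:window].std())
for t in range(window, len(r)):
    vol[t] = r[t - window:t].std()
print(vol_adjusted_mse(r, np.zeros_like(r), vol))   # trivial zero predictor
```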
 
Maxim Dmitrievsky #:
In the case of a large feature space (dozens of features), how can you determine in advance which metric will be better conditioned?

Surely the conditioning is always better for the standard metrics - otherwise they wouldn't be the standard).
