Machine learning in trading: theory, models, practice and algo-trading - page 2585
Interesting, but it's not clear where to start. The loss must be based on some idea of market behavior. For example, one could correct for volatility.
I'd start quite simply: classification by logistic regression into two classes (enter/exit), with a small number of features. Just to see why the topic is not very popular.
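A minimal sketch of that two-class setup, assuming synthetic placeholder features and labels (nothing here is real market data; the enter/exit labeling rule is invented for illustration):

```python
# Toy enter/exit classification with logistic regression.
# Features and labels are synthetic placeholders, not market data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                       # a small number of features
signal = X[:, 0] + 0.5 * X[:, 1]                     # hypothetical "true" signal
y = (signal + rng.normal(scale=1.0, size=1000) > 0).astype(int)  # 1 = enter, 0 = exit

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print("out-of-sample accuracy:", clf.score(X_te, y_te))
```

On data this simple the model separates the classes easily; on real returns the accuracy edge is usually tiny, which is presumably what the "see why the topic is not very popular" remark is about.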
Custom metrics are used to select models, but training still runs on standard losses (logloss for classification, for example), because your custom metrics aren't tied to the feature/target relation the way the standard ones are. And here it's unclear whether to select models afterwards by Sharpe ratio or R², or to stop training as soon as they are maximized. I guess you can do it both ways.
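The "both ways" point can be sketched in a few lines: train on logloss as usual, but pick the stopping epoch by a custom metric on a validation set. The data and the Sharpe-like metric below are toy assumptions, not anyone's actual setup:

```python
# Train by gradient descent on logloss; select the stopping point by a
# custom (Sharpe-like) validation metric. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 4))
y = (X @ np.array([1.0, -0.5, 0.3, 0.0]) + rng.normal(scale=0.8, size=600) > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:400], y[:400], X[400:], y[400:]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def custom_metric(p, y):
    # toy "PnL Sharpe": p > 0.5 is a long signal scored against label direction
    pnl = np.where(p > 0.5, 1.0, -1.0) * (2 * y - 1)
    return pnl.mean() / (pnl.std() + 1e-9)

w = np.zeros(4)
best_w, best_score = w.copy(), -np.inf
for epoch in range(200):
    p = sigmoid(X_tr @ w)
    w -= 0.1 * X_tr.T @ (p - y_tr) / len(y_tr)   # gradient step on the logloss
    score = custom_metric(sigmoid(X_va @ w), y_va)
    if score > best_score:                        # selection by the custom metric
        best_score, best_w = score, w.copy()
print("best validation score:", best_score)
```

The same idea is built into the boosting libraries (e.g. a custom evaluation function used only for early stopping while the trees are still fit on logloss).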
Total fallacy, I'm kind of embarrassed for you.
Still, it would be interesting to experiment with abandoning the standard metrics entirely and replacing them with ones similar to those used in MetaTrader optimization.) Most likely I'll have to go down a level and work directly with optimization packages.
That's what I've been telling you for half a year: it's better to train the ML model through a FITNESS FUNCTION!
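One reading of "training through a fitness function" can be sketched with a general-purpose optimizer: tune model weights directly on a trading-style objective instead of a statistical loss. The lagged-return features and the Sharpe-style fitness below are assumptions made up for the sketch:

```python
# Optimize a linear signal's weights directly on a fitness function
# (Sharpe-like objective) with a derivative-free optimizer. Toy data only;
# np.roll wraps around at the edges, which we ignore here.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
returns = rng.normal(scale=0.01, size=500)                 # toy asset returns
features = np.column_stack([np.roll(returns, k) for k in (1, 2, 3)])  # lagged returns

def fitness(w):
    position = np.tanh(features @ w)                       # position in [-1, 1]
    pnl = position * returns
    return -(pnl.mean() / (pnl.std() + 1e-9))              # negative Sharpe: minimize

res = minimize(fitness, x0=np.zeros(3), method="Nelder-Mead")
print("optimized in-sample fitness (Sharpe-like):", -res.fun)
```

Note the catch the thread keeps circling: a derivative-free optimizer will happily maximize such a fitness in-sample even on pure noise, so the out-of-sample check matters even more than with standard losses.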
Total delusion, I'm kind of embarrassed for you.
You've just broadcast secondhand embarrassment to the whole world community.) There is a large set of custom metrics, but the main training is on minimizing the logloss; stopping the training is done by the custom ones. Aligning the base loss with the custom metrics works great, no point in twisting yourself into knots. Study, student.
I'm a lifelong student.
Then study, lifelong student.
Logloss reflects the amount of mutual information between the features and the target, as far as I understand it. It's the most objective function in that sense, with no assumptions about the type of dependence. The model is trained to minimize the loss of that information; boosting, in particular, works this way.
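The information reading of logloss can be checked numerically: with no features at all, the best constant prediction is the base rate, and its expected logloss is exactly the entropy of the target; any informative feature can only push logloss below that. A small synthetic check:

```python
# Logloss of the best constant predictor equals the entropy of the target.
# Synthetic binary target with base rate 0.3.
import numpy as np

rng = np.random.default_rng(3)
y = (rng.random(100_000) < 0.3).astype(float)

def logloss(p, y):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).mean()

p0 = y.mean()                                         # base rate
entropy = -(p0 * np.log(p0) + (1 - p0) * np.log(1 - p0))
print(logloss(np.full_like(y, p0), y), entropy)       # the two values coincide
```

Whatever a model saves relative to that entropy baseline is, in this view, the information the features carry about the target.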
Example.
There is a feature dataframe "X"
There is a model "M"
There are 5 time series "tc5".
The task:
Model "M" takes "X" as input (everything as usual)
At the output, "M" produces two vectors that should be
1) maximally stationary
2) as uncorrelated as possible with all five "tc5" series
There is no target in the usual sense, but there are requirements on the outputs of the model.
We don't predict price, zz (zigzag), returns, etc.; that's a different song altogether.
How will you solve this with your boosting out of the box?
By enumerating targets; the task is simply the inverse one.
1) So you train the model millions of times and see what comes out?
Yes, well, the target is taken out of thin air, or from some function. You're doing the training backwards, as far as I understood from the description. What its advantage is over the classical approach is the question that still needs answering.
))))
I'm out of the loop.)