Machine learning in trading: theory, models, practice and algo-trading - page 3019

 
Igor Makanu #:

Yandex wrote something similar: https://academy.yandex.ru/handbook/ml/article/optimizaciya-v-ml

A nice tutorial by Yandex, not badly written. Another section of it is closer to what my thoughts revolve around. It describes the general form of loss function used in tree construction. The point there is optimising the average cost of an error, while maximising profit is equivalent to optimising the total cost of errors.

Translated into profit terms, it is the difference between total profit and average profit per trade. Since I am solving a binary classification problem (enter/do not enter), maximising the average profit per trade will simply lead to entering one or two trades and discarding the rest.
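A toy numeric illustration of that degeneracy (my own made-up numbers, not from the tutorial):

```python
from itertools import combinations

# Made-up per-trade profits of four candidate entries.
profits = [5.0, 3.0, 1.0, -1.0]

# Every non-empty subset of trades we could choose to enter.
subsets = [c for r in range(1, len(profits) + 1)
           for c in combinations(profits, r)]

best_by_mean = max(subsets, key=lambda s: sum(s) / len(s))
best_by_sum = max(subsets, key=sum)

print(best_by_mean)  # (5.0,)           -> average profit picks a single trade
print(best_by_sum)   # (5.0, 3.0, 1.0)  -> total profit keeps all positive trades
```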

I am trying to understand whether this boundary between optimisation and ML is insurmountable or not.

 
Aleksey Nikolayev #:

A nice tutorial by Yandex, not badly written. Another section of it is closer to what my thoughts revolve around. It describes the general form of loss function used in tree construction. The point there is optimising the average cost of an error, while maximising profit is equivalent to optimising the total cost of errors.

Translated into profit terms, it is the difference between total profit and average profit per trade. Since I am solving a binary classification problem (enter/do not enter), maximising the average profit per trade will simply lead to entering one or two trades and discarding the rest.

I am trying to understand whether this boundary between optimisation and ML is insurmountable or not.

What prevents you from writing your own loss function?
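For reference, the popular boosting libraries do accept user-defined objectives. A minimal sketch using XGBoost's `obj` callback, with a cost-weighted logistic loss as a crude stand-in for a profit-aware loss (the weights, data, and parameters below are my assumptions, not anyone's working setup):

```python
import numpy as np
import xgboost as xgb

def profit_weighted_logloss(preds, dtrain):
    """Cost-weighted logistic loss: gradient/hessian pair for xgb.train(obj=...).

    The per-sample weights stand in for the money at stake on each trade;
    here they are purely illustrative.
    """
    y = dtrain.get_label()
    w = dtrain.get_weight()            # e.g. |potential profit| of each trade
    if w.size == 0:                    # no weights set -> fall back to 1.0
        w = np.ones_like(y)
    p = 1.0 / (1.0 + np.exp(-preds))   # sigmoid of the raw margin
    grad = w * (p - y)                 # d(loss)/d(margin)
    hess = w * p * (1.0 - p)           # d2(loss)/d(margin)^2
    return grad, hess

# Usage sketch (X, y, trade_value are assumed to exist):
# dtrain = xgb.DMatrix(X, label=y, weight=np.abs(trade_value))
# model = xgb.train({"max_depth": 4}, dtrain, num_boost_round=200,
#                   obj=profit_weighted_logloss)
```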

 
Maxim Dmitrievsky #:
That was me just summarising about the tree ) Google works, I use it myself. The DeepMind people usually describe things very close to how I perceive reality myself.


Thanks for the advice!

 
Aleksey Vyazmikin #:

Thanks for the advice!

It's complicated out there; I did a search on the topic last night. The same trees are used to pull rules out of the weights and layers of neural networks. Rules are even being pulled out of convolutional networks. I'll post as I get more insight. From this angle of rule search, a tree in exploratory analysis now looks remarkably strong; with a properly prepared dataset it probably beats genetic optimisation on speed.
I haven't tried it myself; maybe there are pitfalls.
 
mytarmailS #:
You should sort out your own topics yourself, not have someone else do it.....
Once it gets into your head, it becomes a process...

Think about it.

I solve problems in MQL5, and we were talking about R.

A fact is a fact: you say something without thinking, and then back out of it.

 
mytarmailS #:

What's stopping you from writing your own loss function?

Well, so far I can't figure out how to implement profit maximisation in, say, boosting.

I am working on something, of course, but I would like to hear other informed opinions on the topic.

 
Aleksey Nikolayev #:

Well, so far I can't figure out how to implement profit maximisation in, say, boosting.

I am working on something, of course, but I would like to hear other informed opinions on the topic.

Accuracy works fine with balanced classes. I've tried all the standard metrics; there is almost no difference in results. Profit maximisation is implemented through markup with the most profitable trades, isn't it? )
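Not Maxim's actual markup, but a minimal sketch of the idea under an assumed labelling rule of my own (forward return over a fixed horizon must beat round-trip costs; `horizon` and `cost` are arbitrary):

```python
import numpy as np

def markup_most_profitable(close, horizon=10, cost=0.0002):
    """Label a bar 1 (enter long) if the forward return over `horizon`
    bars exceeds round-trip costs, else 0. A crude stand-in for 'markup
    with the most profitable trades'; horizon and cost are made up."""
    close = np.asarray(close, dtype=float)
    fwd = np.empty_like(close)
    fwd[:-horizon] = close[horizon:] / close[:-horizon] - 1.0
    fwd[-horizon:] = np.nan                  # no full horizon at the end
    labels = (fwd > cost).astype(float)
    labels[np.isnan(fwd)] = np.nan           # drop the unmarked tail later
    return labels
```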
 
СанСаныч Фоменко #:

Your whole idea of separating the "good" rules from the "bad" rules is a complete dead end, methodologically.

You somehow think that "good" rules (trees) are really "good".

And it is not just the uncertainty of their future; the point is that there are no rules at all that can be selected by some criterion. There are rules whose "goodness" VARIES as the window moves, and it is quite possible that a rule will go from "good" to "bad" as the window moves. This variability is determined by the value that divides the prediction probability into classes.

In ML algorithms the division into classes is standardly done by splitting the class prediction probability in half, but this is completely incorrect. I compute the class-division value myself: it is never 0.5; it varies and depends on the particular predictor.
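Whether or not one agrees, the per-model cutoff is easy to tune empirically. A minimal sketch of picking the class-division value by validation profit instead of a fixed 0.5 (the probability and profit arrays are assumed inputs; the grid of candidate cutoffs is arbitrary):

```python
import numpy as np

def best_threshold(proba, trade_profit, grid=np.linspace(0.05, 0.95, 91)):
    """Pick the probability cutoff that maximises validation profit.

    proba        -- predicted P(class=1) on a validation window
    trade_profit -- profit each trade would have made if entered
    """
    profits = [trade_profit[proba > t].sum() for t in grid]
    return grid[int(np.argmax(profits))]

# threshold = best_threshold(model.predict_proba(X_val)[:, 1], val_profit)
```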

Now back to your "good" trees.

You have selected trees whose "goodness" lies close to a threshold that moves. This is why I argued above that your selected "good" trees can easily become bad ones.


It's a dead end.

You make up hypotheses about what you think I think and then refute them. Try asking questions first.

Dead end or not, I have shown real results. Show the same with a forest where 50% of the leaves classify 3 classes profitably two years after training. As far as I remember, your concept is regular retraining of models, almost weekly.

You don't need to explain drift to me: I have created a separate thread on the forum where attempts to solve the problem are under way. If you want to share ideas, join in.

So the method is promising, but there is something to improve and develop.

 
Aleksey Nikolayev #:

Well, so far I can't figure out how to implement profit maximisation in, say, boosting.

I am working on something, of course, but I would like to hear other informed opinions on the topic.

Well, I showed you how to train a forest on profit maximisation.

It is simple gradient-free learning through a fitness function, essentially RL.

I posted the code here, but this method is not very efficient for large tasks.
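The code itself is not reproduced in this excerpt; a minimal sketch of what such gradient-free, fitness-driven training can look like (a random label search around scikit-learn's RandomForestClassifier; X, trade_profit, and every constant are my assumptions, not Maxim's code):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fitness(labels, X, trade_profit):
    """Train a forest on candidate labels and score the resulting strategy
    by the total profit of the trades it takes (in-sample, for brevity)."""
    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)
    take = model.predict(X) == 1
    return trade_profit[take].sum()

def random_search(X, trade_profit, iters=30, seed=0):
    """Gradient-free search over label assignments: propose random
    relabelings, keep the best by fitness."""
    rng = np.random.default_rng(seed)
    best_labels = (trade_profit > 0).astype(int)   # naive starting markup
    best_fit = fitness(best_labels, X, trade_profit)
    for _ in range(iters):
        cand = best_labels.copy()
        flip = rng.random(cand.size) < 0.05        # mutate 5% of labels
        cand[flip] ^= 1
        f = fitness(cand, X, trade_profit)
        if f > best_fit:
            best_labels, best_fit = cand, f
    return best_labels, best_fit
```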


For large tasks you need to convert gradient-free learning into gradient-based learning, i.e. into regular, typical deep RL.

Watch the first half of this video; it explains how to train it.

The example there uses a neural network, but it does not matter whether it is boosting or something else.

Deep Learning на пальцах 13 - Reinforcement Learning — www.youtube.com, 2019.05.15 (course: http://dlcourse.ai)
 
Maxim Dmitrievsky #:
Accuracy works fine with balanced classes. I've tried all the standard metrics; there is almost no difference in results. Profit maximisation is implemented through markup with the most profitable trades, isn't it? )


1) Classification does not take trading costs into account: the class label may say you should sell, while economically it may be more profitable to keep holding the buy;

profit maximisation takes this into account (a sketch follows this list).

2) The same goes for volatility.

3) It is not clear how to implement the three states (buy, sell, do nothing) directly in terms of trading rather than as three classes.

4) It is not clear how to manage stops/takes via ML through classification.
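On point 1, a minimal sketch of a cost-aware three-state decision built on top of a class probability; every threshold here is an illustrative assumption, not a recommendation:

```python
import numpy as np

def decide(p_up, position, cost=0.0003, edge=0.0005):
    """Cost-aware three-state decision from a class probability.

    Flip the position only when the expected edge of the other side exceeds
    the switching cost; otherwise hold the current position or stay flat.
    """
    expected = (2.0 * p_up - 1.0) * edge        # crude signed expected move
    if position == +1 and expected < -cost:     # long, but short edge beats cost
        return -1
    if position == -1 and expected > +cost:     # short, but long edge beats cost
        return +1
    if position == 0 and abs(expected) > cost:  # flat: enter only past costs
        return int(np.sign(expected))
    return position                             # do nothing
```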

