Machine learning in trading: theory, models, practice and algo-trading - page 3020

 
Maxim Dmitrievsky #:
Accuracy works fine with balanced classes. I tried all the standard metrics; there is almost no difference in the results.

Still, these are different quantities. For simplicity, assume take profit = stop loss = 1 and spread = 0. For each trade we either enter or not; for simplicity the system is long-only (let shorts have a separate model).

Accuracy = (True positives + True negatives) / (True positives + True negatives + False positives + False negatives)

Profit_total = True positives - False positives

Accuracy seems to fit the requirements of the tree's split criterion, but profit does not.
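A small sketch of this point, under the stated assumptions (take profit = stop loss = 1, spread = 0, long-only, so total profit is simply TP − FP): two models with identical accuracy can earn very different profit. The confusion-matrix numbers here are made up purely for illustration.

```python
# Assumptions from the post above: every entered trade wins +1 (true
# positive) or loses -1 (false positive); skipped trades cost nothing.

def accuracy(tp, tn, fp, fn):
    return (tp + tn) / (tp + tn + fp + fn)

def profit(tp, fp):
    # Profit_total = True positives - False positives
    return tp - fp

# Model A enters often, model B enters rarely; accuracy is the same.
a = dict(tp=40, tn=40, fp=10, fn=10)
b = dict(tp=10, tn=70, fp=5, fn=15)

print(accuracy(**a), profit(a["tp"], a["fp"]))  # 0.8 30
print(accuracy(**b), profit(b["tp"], b["fp"]))  # 0.8 5
```

Same accuracy (0.8), six times the profit, so a split criterion that optimises accuracy is indeed not optimising profit.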

Maxim Dmitrievsky #:
Profit maximisation is implemented through markup with maximally profitable trades, isn't it? :)

For simplicity, all trades give the same profit or loss (1 or -1)

 
mytarmailS #:


1) classification does not take trade costs into account: the class label may say to exit, while economically it may be more profitable to keep holding the buy;

profit maximisation takes this into account.


2) the same with volatility


3) it is not clear how to implement the three states - buy, sell, do nothing - not as three abstract classes, but concretely in trading terms


4) it is not clear how to manage stops/takes via ML through classification

.....

..

..

You put the costs into the markup and that's it. Profit maximisation means maximising the number of points earned minus the costs. The markup is done in one pass :)

3. I implemented it in the last article.
Stops and take profits are usually tuned afterwards, in the optimiser, once the trading system is ready.

...
..,
The main thing is to start :)
 
Aleksey Nikolayev #:

Still, these are different quantities. For simplicity, assume take profit = stop loss = 1 and spread = 0. For each trade we either enter or not; for simplicity the system is long-only (let shorts have a separate model).

Accuracy = (True positives + True negatives) / (True positives + True negatives + False positives + False negatives)

Profit_total = True positives - False positives

Accuracy seems to fit the requirements of the tree's split criterion, but profit does not.

For simplicity, all trades give the same profit or loss (1 or -1)

Too subtle, I don't get it 😁 A supervised algorithm tries to approximate whatever it is taught in the training dataset, regardless of the stopping criterion. All these metrics are purely auxiliary, give or take. That's what I'm basing this on. It seems to me they make minimal difference, which was confirmed when I went through them. So they are secondary to the teacher (the labels).
 
Aleksey Vyazmikin #:

Have you tried this approach? (look for the Model Interpretation section about halfway down the page)

 
The order of such markup is roughly as follows: you take profitable trades with a minimal step, in different directions depending on the fluctuations. Then you go through them, merging unidirectional ones into one and counting the pips net of costs. If the result is more than two, you merge them into one; otherwise you leave the short ones.

In one pass over the chart, Carl.
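One way to read this one-pass markup is sketched below. The cost and threshold values, the function name, and the exact merge rule are my assumptions for illustration - the post leaves those details open.

```python
# Hedged sketch of the markup described above: start from elementary
# trades with a minimal step, merge consecutive trades of the same
# direction, and keep a merged trade only if its pips net of costs
# clear a threshold. COST and MIN_NET are assumed, not from the post.

COST = 1.0     # assumed round-trip cost in pips per merged trade
MIN_NET = 2.0  # "if it is more than two" - keep the merged trade

def merge_markup(trades):
    """trades: list of (direction, pips); direction is +1 buy / -1 sell.
    Returns merged trades and labels (+1/-1 to trade, 0 to skip)."""
    merged = []
    i = 0
    while i < len(trades):
        d = trades[i][0]
        pips = 0.0
        # single pass: absorb the run of same-direction trades
        while i < len(trades) and trades[i][0] == d:
            pips += trades[i][1]
            i += 1
        merged.append((d, pips))
    labels = [d if pips - COST > MIN_NET else 0 for d, pips in merged]
    return merged, labels
```

For example, `merge_markup([(1, 1.5), (1, 2.0), (-1, 1.0), (1, 0.5)])` merges the first two buys into one 3.5-pip trade that survives the cost filter, while the two short runs are labelled 0 (skip).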
 
Maxim Dmitrievsky #:
The order of such markup is approximately as follows: you take profitable trades with a minimum step, in different directions, depending on fluctuations. You go through them, combining unidirectional ones into one and counting the number of pips taking into account the costs. If it is more than two, you combine them into one, otherwise you leave the short ones.

In one pass on the chart, Carl.

1) Even if it works, it turns out that for every task you need to invent some crutch algorithm to implement it as a ready-made target?

Wouldn't it be easier to write a fitness function (FF) and simply tell the ML algorithm (AMO) good/bad? That would work for any task - a universal solution...


2) a good target != an AMO trained well on that target.

The target may be good while the algorithm cannot be trained on it, so it is not the target that should be evaluated, but the trained AMO.

You understood this when I was talking about the FF, but I see you have already forgotten.

 
mytarmailS #:

1) Even if it works, it turns out that for every task you need to invent some crutch algorithm to implement it as a ready-made target?

Wouldn't it be easier to write a fitness function (FF) and simply tell the ML algorithm (AMO) good/bad? That would work for any task - a universal solution...


2) a good target != an AMO trained well on that target.

The target may be good while the algorithm cannot be trained on it, so it is not the target that should be evaluated, but the trained AMO.

You understood this when I was talking about the FF, but I see you have already forgotten.

I see you don't understand that the FF is built into the dataset. You're mixing up apples and oranges and doing unnecessary work.

It will learn anything like a child, memorising every line.

You can set other objectives - for example, which trades to give more weight to. All of this is done at the markup stage, of course.

A lot of things can't be done through an FF; you end up with three-storey formulas.

You're like Susanin in ML 😀 always leading everyone into the swamp.
I mostly sit still and come up with everything on the way home or to the shop. Sometimes I forget why I came to the shop, but those are the costs.
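The point about folding objectives into the markup (e.g. "which trades to give more weight to") can be illustrated like this. The data is synthetic, and using scikit-learn's `sample_weight` is my choice of mechanism, not necessarily the author's - it is just one standard way to weight trades at training time rather than inventing a custom fitness function.

```python
# Illustrative sketch: instead of a custom FF, encode the objective in
# the dataset by weighting each markup trade by its (absolute) profit,
# so the classifier pays more attention to the big trades.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))        # features for 200 markup trades
profit_pips = rng.normal(size=200)   # signed outcome of each trade
y = (profit_pips > 0).astype(int)    # label: trade was profitable
w = np.abs(profit_pips)              # bigger trades weigh more

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y, sample_weight=w)       # objective shaped at markup time
preds = clf.predict(X)
```

Any learner that accepts per-sample weights would do here; the objective lives in `y` and `w`, not in a hand-written formula.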
 
Maxim Dmitrievsky #:
I see you don't understand that the FF is built into the dataset. You're mixing up apples and oranges and doing unnecessary work.

It will learn anything like a child, memorising every line.

You can set other objectives - for example, which trades to give more weight to. All of this is done at the markup stage, of course.

A lot of things can't be done through an FF; you end up with three-storey formulas.

You're like Susanin in ML 😀 always leading everyone into the swamp.

If everything were as you say, there would be no RL...


And in general, it's good that everyone does it their own way - more opinions, a richer search space...

I don't do it much anymore, I'm past that stage...

 
mytarmailS #:

If it were as you say, there'd be no RL in the first place.

It exists nowhere but on paper.
RL is for interacting with an unknown environment and exploring it: what happens if I go this way or that way? Here you have the chart right in front of you.
 
Maxim Dmitrievsky #:
And it exists nowhere but on paper.
RL is for interacting with an unknown environment and exploring it: what happens if I go this way or that way? Here you have the chart right in front of you.
And in general, it's good that everyone does it their own way - more opinions, a richer search space...