Machine learning in trading: theory, models, practice and algo-trading - page 3074

 
mytarmailS #:
If you have traded on real Forex, don't they widen the spread there at night, around 1 a.m.? Just like at ordinary bucket-shop brokers.


 
Comments not relevant to this thread have been moved to "Off Topic".
 
Valeriy Yastremskiy #:

Yes, the M5)

I'll repeat it now) Good thing I have AnyDesk and access to my work computer))))

$270, and the best SL is 11500; overall, an SL of 5200 with a TP of 350 gives about the same result.

Maybe the spread is large. I get more profit, and overall the curve is the same. The curve has been a bit rough over the last few years, yes, but the earlier years look better.

 
Andrey Dik #:
What it would all look like if the world wasn't smooth.....
Interesting, but many questions arise right away. Yes, all "beings" are blind with respect to the next decision, but if a better decision is already known (seen), then the point of making a decision disappears - does free will disappear with it? A strange logical puzzle.
There are many search strategies, discovered or not yet, but it is known that orienting toward the optimum already known to society does not always lead to a better solution (the collective-thinking-trap paradox again).

In the market this optimum is constantly changing - like the planet's terrain after an earthquake. So our task is to predict when that will happen (or detect it just afterwards), but most importantly the moment when a search for a new optimum becomes necessary.

 
A paper comparing different methods, in addition to the videos and the book. It has many references to other papers.
It's not in the statistics sections or in the ML books. It seems like it should be in econometrics, but it's not there either; somehow it stands alone.
One of the researchers (the one who came up with double machine learning, in 2018 I think) is the émigré Chernozhukov. His work is also available in English. Prado apparently adopted his meta-models from there too, but very superficially.

 
Maxim Dmitrievsky #:
A paper comparing different methods, in addition to the videos and the book. It has many references to other papers.
It's not in the statistics sections or in the ML books. It seems like it should be in econometrics, but it's not there either; somehow it stands alone.
One of the researchers (the one who came up with double machine learning, in 2018 I think) is the émigré Chernozhukov. His work is also available in English. Prado apparently adopted his meta-models from there too, but very superficially.

https://arxiv.org/pdf/2201.12692.pdf


From this picture.

For ML, sequential sampling is not acceptable - only random sampling, and not just any random sampling at that.
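A minimal numpy-only sketch of the difference (the synthetic data and split sizes are my own illustration, not from the paper): with a drifting series, a sequential split puts train and test in different regimes, while a shuffled split draws them from the same distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
X = np.arange(n, dtype=float)          # feature that drifts over "time"
y = X * 0.01 + rng.normal(0, 1, n)     # target drifts along with it

def split_sequential(X, y, test_frac=0.3):
    # Hold out the last test_frac of observations: train and test
    # come from different regimes when the series drifts.
    cut = int(len(X) * (1 - test_frac))
    return X[:cut], y[:cut], X[cut:], y[cut:]

def split_random(X, y, test_frac=0.3, seed=42):
    # Shuffle indices so train and test cover the whole sample.
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(len(X) * (1 - test_frac))
    tr, te = idx[:cut], idx[cut:]
    return X[tr], y[tr], X[te], y[te]

Xtr_s, ytr_s, Xte_s, yte_s = split_sequential(X, y)
Xtr_r, ytr_r, Xte_r, yte_r = split_random(X, y)

# With a drifting target, the sequential test mean sits far from the
# train mean, while the random split keeps the two close.
gap_seq = abs(yte_s.mean() - ytr_s.mean())
gap_rnd = abs(yte_r.mean() - ytr_r.mean())
print(gap_seq > gap_rnd)
```

The gap under the sequential split is an order of magnitude larger here, which is exactly the regime mismatch the post is warning about.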

 
СанСаныч Фоменко #:

From this picture.

For ML, sequential sampling is not acceptable - only random sampling, and not just any random sampling at that.

Maxim Dmitrievsky #:
A paper comparing different methods, in addition to the videos and the book. It has many references to other papers.
It's not in the statistics sections or in the ML books. It seems like it should be in econometrics, but it's not there either; somehow it stands alone.
One of the researchers (the one who came up with double machine learning, in 2018 I think) is the émigré Chernozhukov. His work is also available in English. Prado apparently adopted his meta-models from there too, but very superficially.

https://arxiv.org/pdf/2201.12692.pdf


Wonderful article!

As I understood from the appendices, the classification result depends not only on the quality of the original data, but also on how we form the training and evaluation sets. And on something else I haven't understood yet.

 
СанСаныч Фоменко #:

Wonderful article!

As I understood from the appendices, the classification result depends not only on the quality of the original data, but also on how we form the training and evaluation sets. And on something else I haven't understood yet.

Hehe. Watch more of the videos before this one; it might clear up the picture. The point is to find samples in the data, say X with a vector of feature values W, that respond as well as possible to the treatment (training the model, in our case) and assign them to the class "trade", while the rest are better left alone, "don't trade", because they respond poorly to training (the model errs on new data when they are included in the treatment group). In marketing these are samples of users: an ad campaign will affect one sample of users, while spending the campaign budget on the others isn't worth it.

That's how I understand it in the context of a trading system.
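A toy sketch of that idea (the data, the crude two-arm model, and the 0.5 threshold are all my own illustration, not from the thread): estimate each sample's response to the "treatment" and put only the responsive region into the "trade" class.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=n)                 # one feature per sample
t = rng.integers(0, 2, n)              # treatment flag (trained on / not)
# By construction, only samples with X > 0 "react well" to treatment.
uplift_true = np.where(X > 0, 1.0, 0.0)
y = 0.5 * X + t * uplift_true + rng.normal(0, 0.1, n)

def fit_mean_by_sign(Xa, ya):
    # Crude piecewise-constant "model": mean outcome for X<=0 and X>0.
    return ya[Xa <= 0].mean(), ya[Xa > 0].mean()

m0 = fit_mean_by_sign(X[t == 0], y[t == 0])   # control response µ(x, 0)
m1 = fit_mean_by_sign(X[t == 1], y[t == 1])   # treated response µ(x, 1)

def predict(m, Xa):
    return np.where(Xa <= 0, m[0], m[1])

uplift_hat = predict(m1, X) - predict(m0, X)  # estimated treatment effect
label = np.where(uplift_hat > 0.5, "trade", "don't trade")

# Samples that react well to "training" land in the "trade" class.
print((label[X > 0] == "trade").mean())
```

This is the marketing analogy one-to-one: spend the "budget" (trading) only on the sub-population with a positive estimated treatment effect.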

 
Maxim Dmitrievsky #:

Hehe. Watch more of the videos before this one; it might clear up the picture. The point is to find samples in the data, say X with a vector of feature values W, that respond as well as possible to the treatment (training the model, in our case) and assign them to the class "trade", while the rest are better left alone, "don't trade", because they respond poorly to training (the model errs on new data when they are included in the treatment group). In marketing these are samples of users: an ad campaign will affect one sample of users, while spending the campaign budget on the others isn't worth it.

That's how I understand it in the context of a trading system.

Your understanding has a persistent whiff of determinism, while the article is the apotheosis of randomness, and even on unbalanced data. There is no sample selection - quite the opposite. They recommend the X-learner, which

first estimates the two response functions µ(x, 1) and µ(x, 0). It then uses these estimates to impute the unobserved individual treatment effects for the treated, ξ̃¹ᵢ, and the control, ξ̃⁰ᵢ. The imputed effects are in turn used as pseudo-outcomes to estimate the treatment effects in the treated sample, τ(x, 1), and the control sample, τ(x, 0), respectively. The final CATE estimate τ(x) is then a weighted average of these treatment-effect estimates, weighted by the propensity score e(x). Thus the X-learner additionally uses information from the treated to learn about the controls and vice versa, in a cross-regression style - hence the "X" in its name.

Nothing like a "good" selection.

 
The propensity score is estimated for each object conditionally, and from these scores an overall CATE estimate is formed.
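The four X-learner steps quoted above can be sketched in a few lines. Here ordinary least squares stands in for the base learners and the propensity score is constant and known by construction - both are simplifying assumptions of mine, not choices made by the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 4000
x = rng.normal(size=n)
e_true = 0.5                             # constant propensity, for simplicity
t = (rng.random(n) < e_true).astype(int)
tau_true = 1.0 + 0.5 * x                 # heterogeneous treatment effect
y = x + t * tau_true + rng.normal(0, 0.1, n)

def ols_fit(xa, ya):
    # Fit y ≈ a + b*x and return (a, b).
    A = np.c_[np.ones_like(xa), xa]
    coef, *_ = np.linalg.lstsq(A, ya, rcond=None)
    return coef

def ols_pred(coef, xa):
    return coef[0] + coef[1] * xa

# Step 1: response functions µ(x, 1) and µ(x, 0).
mu1 = ols_fit(x[t == 1], y[t == 1])
mu0 = ols_fit(x[t == 0], y[t == 0])

# Step 2: imputed individual effects ξ̃ for each arm.
xi1 = y[t == 1] - ols_pred(mu0, x[t == 1])   # treated: observed - µ(x, 0)
xi0 = ols_pred(mu1, x[t == 0]) - y[t == 0]   # control: µ(x, 1) - observed

# Step 3: regress the pseudo-outcomes to get τ(x, 1) and τ(x, 0).
tau1 = ols_fit(x[t == 1], xi1)
tau0 = ols_fit(x[t == 0], xi0)

# Step 4: combine with the propensity score e(x).
e = np.full(n, e_true)
cate = e * ols_pred(tau0, x) + (1 - e) * ols_pred(tau1, x)

# The estimate should track the true effect 1 + 0.5*x closely.
print(np.abs(cate - tau_true).mean())
```

Note how step 2 crosses the arms - the treated are compared against the control model and vice versa - which is exactly the "X" the quoted passage refers to.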