Machine learning in trading: theory, models, practice and algo-trading - page 3074

If you have traded on real Forex, don't they widen the spread there at night, around 1 a.m.? Just like in ordinary bucket shops.
Yes, on M5)
I'll repeat it now) Good thing I have AnyDesk and access to my work computer))))
270 dollars, and the best SL is 11500; overall, SL 5200 with TP 350 gives about the same result.
Maybe the spread is large. I get more profit; overall the curve is the same. The curve is a bit rough over the last few years, yes, but the earlier ones are better.
In the market this optimum is constantly shifting, like the planet's terrain after an earthquake. So our task is to predict when that shift will happen, or detect it just after, but above all to recognize the moment when a new optimum needs to be found.
A paper comparing the different methods, in addition to the videos and the book. It has many references to other papers.
From this picture.
For ML, sequential sampling is not acceptable; only random sampling, and not just any random sampling at that.
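The point about random versus sequential sampling can be sketched in a few lines. A minimal illustration (the data here is just a stand-in index list, not anything from the article):

```python
# Sketch: sequential vs. random train/test splits for ML, as the post recommends.
import random

data = list(range(100))  # stand-in for 100 time-ordered samples

# Sequential split: first 80 samples train, last 20 test.
# The split then inherits whatever ordering/regime structure the data has.
seq_train, seq_test = data[:80], data[80:]

# Random split: shuffle indices before splitting, so both sets
# are drawn from the whole sample rather than from one contiguous stretch.
random.seed(42)
idx = data.copy()
random.shuffle(idx)
rand_train, rand_test = idx[:80], idx[80:]
```

The sizes are the same in both cases; only the membership of each set changes.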
Wonderful article!
As I understood from the appendices, the classification result depends not only on the quality of the original data, but also on how we form the training and evaluation sets. And on something else I haven't figured out yet.
Hehe. Watch a few more of the videos before this one; it might clear up the picture. The point is to find samples in the data, say X with a feature-value vector W, that respond as well as possible to the treatment (training the model, in our case) and assign them to the class "trade", while the rest are better left untouched, "don't trade", because they respond badly to training (on new data the model makes mistakes when they are included in the treatment group). In marketing these are user samples: one group of users will respond to an ad campaign, while on the others the campaign budget is not worth spending.
That's how I understand it in the context of a TS (trading system).
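The "trade" / "don't trade" split described above amounts to thresholding a per-sample estimate of how well each sample responds to the treatment. A toy sketch, where both the uplift scores and the 0.5 threshold are invented for illustration:

```python
# Sketch: labelling samples by an assumed per-sample "uplift" score
# (estimated benefit of the treatment, i.e. of training on that sample).
import random

random.seed(1)
# hypothetical estimated treatment effects for 10 samples
uplift = [random.gauss(0.0, 1.0) for _ in range(10)]

# samples that respond well enough to training -> "trade"; the rest -> "don't trade"
THRESHOLD = 0.5  # arbitrary cut-off for the illustration
labels = ["trade" if u > THRESHOLD else "don't trade" for u in uplift]
```

In practice the uplift scores would come from a meta-learner of the kind discussed below, not from random numbers.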
Your understanding has a persistent whiff of determinism, while the article is the apotheosis of randomness, even on unbalanced data. No sample selection; quite the opposite. It recommends the X-learner, which
first estimates the two response functions μ(x, 1) and μ(x, 0). It then uses these estimates to impute the unobserved individual treatment effects for the treated, ξ̃_i^1, and the control, ξ̃_i^0. The imputed effects are in turn used as pseudo-outcomes to estimate the treatment effects in the treated sample, τ(x, 1), and the control sample, τ(x, 0), respectively. The final CATE estimate τ(x) is then a weighted average of these treatment-effect estimates, weighted by the propensity score e(x). Thus the X-learner additionally uses information from the treated to learn about the controls and vice versa, in a cross-regression style, hence the "X" in its name.
Nothing like a "good" selection.
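The quoted X-learner recipe can be sketched end to end with plain least-squares regressions. Everything below is synthetic, a minimal illustration of the four steps, not the paper's implementation; the data-generating process and model choice (linear fits) are assumptions:

```python
# Sketch of the X-learner steps from the quote, on made-up linear data.
import numpy as np

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 1))
T = rng.integers(0, 2, size=n)            # random treatment assignment
true_tau = 1.0 + 0.5 * X[:, 0]            # heterogeneous treatment effect
Y = X[:, 0] + T * true_tau + rng.normal(scale=0.1, size=n)

def fit_predict(Xa, ya, Xb):
    # least-squares linear model with intercept, fit on (Xa, ya), predict on Xb
    A = np.c_[np.ones(len(Xa)), Xa]
    w, *_ = np.linalg.lstsq(A, ya, rcond=None)
    return np.c_[np.ones(len(Xb)), Xb] @ w

# Step 1: response functions mu(x, 1) and mu(x, 0)
mu1 = lambda Xq: fit_predict(X[T == 1], Y[T == 1], Xq)
mu0 = lambda Xq: fit_predict(X[T == 0], Y[T == 0], Xq)

# Step 2: impute individual treatment effects
d1 = Y[T == 1] - mu0(X[T == 1])   # xi~_i^1 for the treated
d0 = mu1(X[T == 0]) - Y[T == 0]   # xi~_i^0 for the controls

# Step 3: pseudo-outcomes regressed on X give tau(x, 1) and tau(x, 0)
tau1 = lambda Xq: fit_predict(X[T == 1], d1, Xq)
tau0 = lambda Xq: fit_predict(X[T == 0], d0, Xq)

# Step 4: combine with the propensity score e(x); constant here since T is random
e = T.mean()
tau_hat = e * tau0(X) + (1 - e) * tau1(X)
```

With this linear setup the estimate recovers the built-in effect 1.0 + 0.5x closely; the cross use of each arm's model on the other arm's samples is what the "X" refers to.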