Machine learning in trading: theory, models, practice and algo-trading - page 3124

 
Maxim Dmitrievsky #:
The model is biased, so we need to force it to learn without that bias. But first we need to find the bias coefficients, say a slope or a free term (intercept), as in regression. What if we train it in such a way that this term does not vary between the train set and the OOS? Basically quoting books on causal inference.

In CatBoost and other models you can assign weights to the labels during training. For example, the bias is estimated, then converted into weights, and the model is trained on the train set with those correction factors already applied. That is one of the approaches.
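The sample-weighting mechanism described above can be sketched as follows. CatBoost's `fit` accepts a `sample_weight` argument; scikit-learn's gradient boosting has the same mechanism, so this sketch uses it for self-containedness. The data and the weight values are purely illustrative assumptions, not the poster's actual correction factors.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Toy data: a noisy threshold rule on the first feature (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 2))
y = (X[:, 0] + 0.3 * rng.normal(size=400) > 0).astype(int)

# Hypothetical correction factors: suppose a bias estimate says class-1
# examples are over-represented by the global trend, so we down-weight them.
w = np.where(y == 1, 0.7, 1.0)

# Same mechanism as CatBoost's model.fit(X, y, sample_weight=w).
model = GradientBoostingClassifier(n_estimators=50, max_depth=2, random_state=0)
model.fit(X, y, sample_weight=w)
print(round(model.score(X, y), 2))
```

The weights change the loss each sample contributes during boosting, so systematically over-represented labels pull the trees less.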

Suppose there has been a global upward trend for 3 months. The price has grown by 7%. At the same time there are swings of up to 2% in both directions within a day.
What weight should be given to the H1 returns of the 1st bar, 2nd bar, ..., 100th bar? And to the rest of the features? I doubt there are any scientifically (or at least experimentally) justified formulas.
Handing out hundreds of weights will make the search for a suitable model even harder. There are already plenty of hyperparameters.

 
Forester #:

Let's say there has been a global upward trend for 3 months. The price has grown by 7%. At the same time there are swings of up to 2% in both directions within a day.
What weight should be given to the H1 returns of the 1st bar, 2nd bar, ..., 100th bar? And to the rest of the features? I doubt there are any scientifically (or at least experimentally) justified formulas.
Handing out hundreds of weights will make the search for a suitable model even harder. There is already a sea of hyperparameters.

When there is no clear certainty about cause and effect, only randomised experiments remain. Not super reliable, but there is no other way.

There is a scientifically valid Frisch-Waugh-Lovell theorem. Apparently you haven't read that book.

Of course, you can keep reasoning in terms of: it bounced off this level, slipped under that curve, and the news knocked everything out again... but nobody has proved the usefulness of such a formula. If we are going to play with randomness, let's play with taste.
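The Frisch-Waugh-Lovell theorem mentioned above can be checked numerically: the coefficient on a regressor in a joint OLS fit equals the coefficient obtained by first partialling the other regressors out of both sides and then regressing residuals on residuals. A minimal sketch with synthetic data (all variable names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # regressors are correlated
y = 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)

# Full regression: y on [1, x1, x2].
X = np.column_stack([np.ones(n), x1, x2])
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]

# FWL: partial the intercept and x1 out of both y and x2,
# then regress residuals on residuals.
Z = np.column_stack([np.ones(n), x1])

def resid(v):
    # Residuals of v after OLS projection on Z.
    return v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]

beta_fwl = np.linalg.lstsq(resid(x2).reshape(-1, 1), resid(y), rcond=None)[0][0]

print(np.isclose(beta_full[2], beta_fwl))     # the two coefficients coincide
```

This is what "removing the influence of a trend term before estimating the rest" amounts to in the linear case.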
 
Maxim Dmitrievsky #:
When there is no clear certainty of cause and effect,

Several ticks already predict the future direction of price.

You won't see that on hourly or daily bars, of course.

Here's a suggestion.

And in principle, you shouldn't.

 
Uladzimir Izerski #:

And in principle, you shouldn't.

+
 
Forester #:
The sell model starts to sag when the global (only 1-1.5 year) trend is up. It finds opportunities to earn on the train set, but on the OOS it goes into drawdown.
Perhaps the first variant, with buy/sell selection by a single model, would be better. But if it adjusts to the global trend, it will lose money at the moments of trend change. And it will probably trade one way for years.

The main sign of model overtraining is a divergence between the train set and the OOS. If there is such a discrepancy, then everything should be thrown out: it is all empty, the whole approach is false.

 
СанСаныч Фоменко #:

The main sign of model overtraining is a divergence between the train set and the OOS. If there is such a discrepancy, then everything should be discarded: it is all empty, the whole approach is false.

Outdated information.

Better tell us what you do with Mahalanobis, we'll give it a spin.
 
Maxim Dmitrievsky #:

Outdated information.

Tell me what you are doing with Mahalanobis, we'll give it a spin.

Outdated information (the main sign of model overtraining is a divergence between the train set and OOS).

Of course it's outdated. I suspect that if it were applied, everything you do would have to be thrown out, along with all your R-squared values and the mythical balance.


Tell me better what you do with Mahalanobis, we'll give it a spin.

I don't.

In R, fastmatrix::Mahalanobis(x, center, cov, inverted = FALSE) computes the Mahalanobis distance between vectors.

Why do we need this?

We need predictive power from the predictor, i.e. the ability to predict different classes, and into the future, so that fluctuations in predictive power stay minimal, at least within 10%. That's why I use a different approach; I posted the results of the calculations once.
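For reference, the distance that the R call above computes can be sketched in a few lines of Python (numpy only). The function name and signature here loosely mirror the R one but are otherwise an illustrative assumption; note that with an identity covariance the Mahalanobis distance reduces to the plain Euclidean distance.

```python
import numpy as np

def mahalanobis(x, center, cov):
    """Mahalanobis distance of vector x from `center` under covariance `cov`.
    (R's stats/fastmatrix versions report the squared distance; here we
    return the distance itself.)"""
    d = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

x = np.array([2.0, 0.0])
mu = np.array([0.0, 0.0])

# With an identity covariance it coincides with the Euclidean distance.
print(mahalanobis(x, mu, np.eye(2)))                                      # 2.0
# With a non-trivial covariance the axes are rescaled.
print(round(mahalanobis(x, mu, np.array([[4.0, 0.0], [0.0, 1.0]])), 2))   # 1.0
```

So the distance measures "how many standard deviations away" a point is, which is why it behaves differently from raw Euclidean distance when features have very different scales.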

 
СанСаныч Фоменко #:

Outdated information (the main sign of model overtraining is a discrepancy between the train set and OOS).

Of course it's outdated. I suspect that if it were applied, everything you do would have to be thrown out, along with all your R-squared values and the mythical balance.


Tell us what you're doing with Mahalanobis, we'll give it a spin.

I don't.

In R, fastmatrix::Mahalanobis(x, center, cov, inverted = FALSE) computes the Mahalanobis distance between vectors.

Why do we need this?

We need predictive power from the predictor, i.e. the ability to predict different classes, and into the future, so that fluctuations in predictive power stay minimal, at least within 10%. That's why I use a different approach; I posted the results of the calculations once.

And why do we need your unknowns? What is the point of writing about them?
 
Maxim Dmitrievsky #:
mahalanobis

You asked about Mahalanobis and I answered; not just answered, but gave the reason why I don't use it.

 
СанСаныч Фоменко #:

You asked about Mahalanobis and I answered; not just answered, but gave the reason why I don't use it.

Will the dialogue ever become concrete, or will we keep reading your guesses and assumptions about the "ability to predict different classes"?

Judging by my experience of dialogue with you, concreteness is not your speciality, rather the opposite.
