Machine learning in trading: theory, models, practice and algo-trading - page 3363

 
fxsaber #:

I knew it was simple.

That answers the question.


Let's take a moving average as an example; it has a parameter: the period.

This parameter can be a constant, or it can be changed according to a formula.

Do I understand correctly that your parameters are constants?
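To make the distinction concrete, here is a minimal Python sketch (the function names and the volatility-based formula are purely illustrative assumptions, not anyone's actual TS): one moving average with a constant period, and one whose period is recomputed by a formula at every step.

```python
import numpy as np

def sma(prices, period):
    """Moving average with a constant period (a fixed optimiser output)."""
    return np.convolve(prices, np.ones(period) / period, mode="valid")

def adaptive_sma(prices, base_period=20, vol_window=20):
    """Moving average whose period is recomputed by a formula:
    it shrinks when recent volatility rises. The scaling rule here
    is purely illustrative."""
    ref = np.std(prices[:vol_window]) + 1e-12  # baseline volatility
    out = []
    for i in range(vol_window, len(prices)):
        vol = np.std(prices[i - vol_window:i])
        period = max(2, int(base_period * ref / (ref + vol)))
        out.append(prices[max(0, i - period):i].mean())
    return np.array(out)
```

In the first case the optimiser fixes `period` once; in the second only the formula's coefficients (`base_period`, `vol_window`) would be optimised.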

 
mytarmailS #:

Let's take a moving average as an example; it has a parameter: the period.

This parameter can be a constant, or it can be changed according to some formula…

Do I understand correctly that your parameters are constants?

I'm not familiar with that terminology. Five parameters optimised in the MT5 tester.

 
fxsaber #:

You can explain anything that way. Obviously, there's nothing concrete to say. I myself suspect it's curve-fitting, because the start of the "drift" to the left coincides very closely with the Sample start point. In such a situation the OOS can, of course, be explained this way.


This is also EURUSD. The OOS on the right is the last four months of 2023; the other OOS is the rest of 2023.

Any other explanations? 😀 You can't tell anything specific from the chart, right?

You can estimate the odds that re-optimisation works via some walk-forward: count how many times re-optimisation gave a profit on the forward month, and how many times a loss. That will give some financial confidence and bravado.
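A toy sketch of that counting scheme (the in-sample "optimisation" here is reduced to picking a trade direction, and the window lengths are arbitrary placeholders, not the actual tester setup):

```python
import numpy as np

def walk_forward_score(returns, opt_len=120, fwd_len=21, step=21):
    """Slide an optimisation window over a return series, 're-optimise'
    on each window, and count profitable vs losing forward months.
    The 'optimisation' is a toy stand-in: pick the trade direction
    that was profitable in-sample and apply it to the forward window."""
    wins = losses = 0
    i = 0
    while i + opt_len + fwd_len <= len(returns):
        insample = returns[i:i + opt_len]
        direction = 1.0 if insample.sum() > 0 else -1.0  # the "optimised" parameter
        forward = returns[i + opt_len:i + opt_len + fwd_len]
        if direction * forward.sum() > 0:
            wins += 1
        else:
            losses += 1
        i += step
    return wins, losses
```

The ratio wins / (wins + losses) over many steps is the "financial confidence" figure being described.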
 
Maxim Dmitrievsky #:
You can estimate the odds that re-optimisation works via some walk-forward: count how many times re-optimisation gave a profit on the forward month, and how many times a loss. That will give some financial confidence and bravado.

Perhaps such a method could confirm/refute the hypothesis that the market changed at the Sample boundary, and that the good OOS on the right is therefore not a fluke. Thanks, I'll give it some thought.

 
mytarmailS #:


Let's take a moving average as an example; it has a parameter: the period.

This parameter can be a constant, or it can be changed according to some formula…

Do I understand correctly that your parameters are constants?

Constants: they don't change after optimisation.
 
fxsaber #:

I'm not familiar with that terminology. Five parameters optimised in the MT5 tester.

Maybe it makes sense to search for a parameter and a formula from which to calculate your optimised parameters, based on the optimisation results. That is complicated, of course.
 
Valeriy Yastremskiy #:
Maybe it makes sense to search for a parameter and a formula from which to calculate your optimised parameters, based on the optimisation results. That is complicated, of course.
+
That's what I was trying to say, but I wanted him to understand it himself.
People only value their own guesses.
 
fxsaber #:

Perhaps such a method would work to confirm/refute the hypothesis that the market has changed at Sample, and therefore the good OOS on the right is not a fluke. Thanks, I'll give it some thought.

Yes, if you move the sample window backwards, all the OOS curves will change, roughly the way a polynomial regression's prediction jumps around wildly when you move the window. The more optimised parameters, or the higher the polynomial degree, the wigglier this thing gets. Ideally you would have optimisation fast enough to drag the window with the mouse and see the result immediately. I think you did something like that with best interval.
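The window-moving effect is easy to reproduce; a small sketch on synthetic random-walk data (the window length and degrees are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=300))  # synthetic random-walk "price"

def one_step_forecasts(series, window, degree):
    """Fit a polynomial on each sliding window and extrapolate one step ahead."""
    t = np.linspace(-1.0, 1.0, window)  # normalised time axis (avoids ill-conditioning)
    step = t[1] - t[0]
    preds = []
    for start in range(len(series) - window):
        coefs = np.polyfit(t, series[start:start + window], degree)
        preds.append(np.polyval(coefs, 1.0 + step))  # one step past the window
    return np.array(preds)

low = one_step_forecasts(y, window=60, degree=1)   # few parameters: forecasts move smoothly
high = one_step_forecasts(y, window=60, degree=8)  # many parameters: forecasts jump around
```

Comparing `np.std(np.diff(high))` with `np.std(np.diff(low))` shows how much more the high-degree forecasts jump as the window slides.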

In my last article I suggested a way to make training more stable for ML, that is, with less overfitting. But profitability suffers.

This is the bias-variance trade-off: increasing the number of TS parameters leads to drift on new data, while decreasing it leads to greater variance of the predictions. Local optimisers can't capture this.
 
Maxim Dmitrievsky #:
Yes, if you move the sample window backwards, all the OOS curves will change, roughly the way a polynomial regression's prediction jumps around wildly when you move the window. The more optimised parameters, or the higher the polynomial degree, the wigglier this thing gets. Ideally you would have optimisation fast enough to drag the window with the mouse and see the result immediately. I think you did something like that with best interval.

In my last article I suggested a way to make training more stable for ML, that is, with less overfitting. But profitability suffers.

This is the bias-variance trade-off: increasing the number of TS parameters leads to drift on new data, while decreasing it leads to greater variance of the predictions. Local optimisers can't capture this.

Everything is much simpler.

They fitted something to some section of a non-stationary random process, without realising that no section of a non-stationary process has anything to do with any other section. Therefore the results on other segments are arbitrary: they may be good or they may be bad, but in practice the sandwich ALWAYS lands butter-side down.

By the way, the concept of "variance" applies to a stationary random process.
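That point can be shown numerically; a small sketch on synthetic data, comparing per-segment statistics of a stationary series with those of a random walk built from the same increments:

```python
import numpy as np

rng = np.random.default_rng(42)
increments = rng.normal(size=10_000)
noise = increments            # stationary: white noise
walk = np.cumsum(increments)  # non-stationary: random walk on the same increments

def segment_stats(x, n_segments=10):
    """Mean and variance of each disjoint segment of a series."""
    segments = np.array_split(x, n_segments)
    return (np.array([s.mean() for s in segments]),
            np.array([s.var() for s in segments]))

noise_means, noise_vars = segment_stats(noise)
walk_means, walk_vars = segment_stats(walk)
# Segment statistics of the stationary series agree closely; those of
# the random walk scatter wildly, so one segment says almost nothing
# about another.
```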

 
СанСаныч Фоменко #:
no section of a non-stationary process has ANYTHING to do with any other section of a non-stationary process.

The market, in the sense of the prices of various assets over time, is too multifactor a process to regulate or predict today; the last factor in that ranking is apparently the psyche of individuals, which is also hard to model. But it is definitely not a non-stationary random walk)))) That is today's assumption, for as long as there isn't enough computing power. Apparently.)))

Maxim Dmitrievsky #:
increasing the TS parameters leads to drift on new data, and decreasing them leads to greater prediction variance.

The usual dilemma of accuracy versus complexity, or a lack of computing power.
