Machine learning in trading: theory, models, practice and algo-trading - page 2560

 
Aleksey Nikolayev #:

I think I understand what I need - the ability to set a custom fitness function (f.f.). But HMMFit() does not support this, because it implements Baum-Welch with the log-likelihood (LLH) hard-wired into it. You can only set some of the Baum-Welch parameters.

You need another package where you can set a user-defined f.f.

The funny thing is that I haven't come across any ML packages where you can use your own f.f...

You either set X, Y (data, target) or just X (data).

But you can always get into the "guts" of the ML algorithm, tweak things there, and see what happens in terms of the f.f.

I train neural networks this way, I have also trained a random forest that way, and now I want to do the same with an HMM.
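Neither the thread nor HMMFit() shows such a hook, so below is only a minimal sketch of the "guts" approach, assuming Python's hmmlearn as a stand-in: Baum-Welch still maximizes the log-likelihood internally, and a user-defined fitness (the custom_fitness() here is purely hypothetical) is only used to choose among random restarts.

```python
# Hypothetical sketch: the HMM is still fitted by Baum-Welch (LLH only),
# but the final model is selected by a user-defined fitness on decoded states.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def custom_fitness(states, fwd_returns):
    # Example f.f.: spread between the best and worst state's mean forward
    # return (a wider spread means the states separate the market better).
    means = [fwd_returns[states == s].mean() for s in np.unique(states)]
    return max(means) - min(means)

def fit_hmm_by_custom_ff(X, fwd_returns, n_states=3, n_restarts=20):
    best_model, best_ff = None, -np.inf
    for seed in range(n_restarts):
        m = GaussianHMM(n_components=n_states, n_iter=100, random_state=seed)
        m.fit(X)                               # EM maximizes LLH internally
        ff = custom_fitness(m.predict(X), fwd_returns)
        if ff > best_ff:
            best_model, best_ff = m, ff
    return best_model, best_ff
```

This does not change the objective inside Baum-Welch; the custom f.f. is only used for model selection on top of it.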

 
mytarmailS #:

The funny thing is that I haven't come across any ML packages where you can use your own f.f...

You either set X, Y (data, target) or just X (data).

But you can always get into the "guts" of the ML algorithm, tweak things there, and see what happens in terms of the f.f.

I train neural networks this way, I have also trained a random forest that way, and now I want to do the same with an HMM.

In LightGBM you can set your own, but in most packages there is no such option.
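A minimal sketch of how a custom objective can be passed through LightGBM's sklearn wrapper; the asymmetric squared loss below is only an illustration, not anything anyone in the thread actually uses. The custom objective must return the gradient and the hessian, while a custom eval metric only returns a score:

```python
import numpy as np
import lightgbm as lgb

ALPHA = 3.0  # arbitrary choice: penalize under-prediction ALPHA times harder

def asymmetric_mse(y_true, y_pred):
    # Custom objective: must return first and second derivatives w.r.t. y_pred.
    r = y_pred - y_true
    w = np.where(r < 0, ALPHA, 1.0)
    return 2.0 * w * r, 2.0 * w

def asymmetric_mse_metric(y_true, y_pred):
    # Matching eval metric: (name, value, is_higher_better); used only for monitoring.
    r = y_pred - y_true
    w = np.where(r < 0, ALPHA, 1.0)
    return "asym_mse", float(np.mean(w * r ** 2)), False

X = np.random.randn(500, 5)
y = X[:, 0] + 0.1 * np.random.randn(500)
model = lgb.LGBMRegressor(objective=asymmetric_mse, n_estimators=100)
model.fit(X, y, eval_set=[(X, y)], eval_metric=asymmetric_mse_metric)
```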

 
Aleksey Nikolayev #:

In LightGBM you can set your own, but in most packages there is no such option.

Are you sure you're not confusing the f.f. with a custom metric?
 

Do you want me to tell you again what metrics I use and by what criteria I select models?


After all, this is the most important thing in ML, the fundamental question :-)

 
Maxim Dmitrievsky #:
We should probably return to simple, generally accepted definitions.
Decomposition into trend, seasonality, cycles, and noise. You can try to predict any of these, with varying degrees of success.
Stationarity is constancy of the mean and the variance; it is not observed in the market.
Regularity is the presence of repeatability, some analogue of 2D cycles or persistence, a certain level of signal. It is, in a sense, an opportunity for speculation. This is what distinguishes it, for the better, from a random walk (SB).
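A small aside to make the quoted definitions concrete: a synthetic example, assuming statsmodels is available, of splitting a series into trend, seasonality and residual, and of checking "constancy of the mean and variance" with rolling statistics.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Synthetic series: linear trend + seasonal cycle + noise.
n = 500
t = np.arange(n)
series = pd.Series(0.01 * t + np.sin(2 * np.pi * t / 50) + 0.3 * np.random.randn(n))

parts = seasonal_decompose(series, model="additive", period=50)
trend, seasonal, resid = parts.trend, parts.seasonal, parts.resid

# "Constancy of mean and variance": the rolling mean of the raw series drifts,
# while the rolling mean of the residual stays roughly flat.
print(series.rolling(100).mean().dropna().std(),
      resid.rolling(100).mean().dropna().std())
```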

Regarding the definition of stationarity: it is clearly an abstraction, because either it is a single point without fluctuations, in which case the measurement window does not matter, or there are still fluctuations, measured over some minimal window or over a range of windows.

Regularity, on the other hand, can itself generate stationarity, since it is the state of a single point rather than of a measurement window over points.

Accordingly, stationarity directly affects predictability, and hence learning, if that stationarity carries information about the target.

As I wrote earlier, right now I use an approach of selecting predictors by estimating their stationarity over a given measurement window.
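The exact procedure is not shown in the thread; a purely hypothetical sketch of "selecting predictors by their stationarity over a given measurement window" could be a windowed ADF test, keeping the predictors that look stationary in most windows:

```python
# Hypothetical sketch only: score each predictor by the fraction of windows
# in which the ADF test rejects a unit root, then keep the high scorers.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def stationarity_score(x, window=250, step=250, alpha=0.05):
    rejected, total = 0, 0
    for start in range(0, len(x) - window + 1, step):
        p_value = adfuller(x[start:start + window])[1]
        rejected += p_value < alpha
        total += 1
    return rejected / max(total, 1)

def select_predictors(X, names, window=250, min_score=0.8):
    # X: array (n_samples, n_predictors); keep columns stationary in >=80% of windows.
    return [n for j, n in enumerate(names)
            if stationarity_score(X[:, j], window) >= min_score]
```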

 
Aleksey Nikolayev #:

In LightGBM you can set your own, but in most packages there is no such option.

Xgboost can too, but it is quite hard to write your own function there: you have to derive the formulas (the gradient and the hessian) yourself.

http://biostat-r.blogspot.com/2016/08/xgboost.html - 6th paragraph.
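The linked post covers R; here is the same idea in Python, to show what "derive the formulas" means in practice. With the log-cosh loss (my choice, not one used in the thread), L(r) = log(cosh(r)) with r = y_pred - y_true, the derivatives are grad = tanh(r) and hess = 1 - tanh(r)^2, and those are exactly what xgboost asks a custom objective to return:

```python
import numpy as np
import xgboost as xgb

def logcosh_obj(preds, dtrain):
    # Hand-derived derivatives of log-cosh w.r.t. the prediction.
    r = preds - dtrain.get_label()
    grad = np.tanh(r)
    hess = 1.0 - np.tanh(r) ** 2
    return grad, hess

X = np.random.randn(500, 5)
y = X[:, 0] - X[:, 1] + 0.1 * np.random.randn(500)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 3, "eta": 0.1}, dtrain,
                    num_boost_round=100, obj=logcosh_obj)
```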

 
Aleksey Vyazmikin #:

Regarding the definition of stationarity: it is clearly an abstraction, because either it is a single point without fluctuations, in which case the measurement window does not matter, or there are still fluctuations, measured over some minimal window or over a range of windows.

Regularity, on the other hand, can itself generate stationarity, since it is the state of a single point rather than of a measurement window over points.

Accordingly, stationarity directly affects predictability, and hence learning, if that stationarity carries information about the target.

As I wrote earlier, right now I use an approach of selecting predictors by estimating their stationarity over a given measurement window.

I don't understand anything at all

it's the noise that should be stationary after the model is built; stationarity is not needed anywhere else
 
Maxim Dmitrievsky #:

I don't understand anything at all.

Do you want to understand?

 
mytarmailS #:
Are you sure you're not confusing the f.f. with a custom metric?

I don't think so - the example is in Python.

 
Maxim Dmitrievsky #:
it's the noise that should be stationary after the model is built; stationarity is not needed anywhere else

Right, that's exactly the connection between the predictor and the target I'm talking about.

So far I am not aware of a model-building method that, while splitting or otherwise combining predictors, estimates their "stationarity" over different sample intervals. All models fit to segments of the sample, evaluating only the overall quantitative improvement, whereas it should be evaluated across intervals; then the model might be more robust.
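That in-training mechanism is not shown in the thread either; as a purely illustrative, after-the-fact check, one can at least score a fitted model interval by interval and look at the spread of the metric, a rough proxy for the "noise should be stationary" criterion quoted above:

```python
# Illustrative post-hoc check: error per consecutive time interval and its
# relative spread; a roughly constant error across intervals suggests the
# residual behaves more like stationary noise.
import numpy as np

def interval_scores(y_true, y_pred, n_intervals=10):
    errors = np.abs(np.asarray(y_true) - np.asarray(y_pred))
    chunks = np.array_split(errors, n_intervals)
    return np.array([c.mean() for c in chunks])

def robustness(y_true, y_pred, n_intervals=10):
    scores = interval_scores(y_true, y_pred, n_intervals)
    return scores.std() / scores.mean()   # lower is better
```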
