Machine learning in trading: theory, models, practice and algo-trading - page 1494

 

I am currently trying the ldhmm package. In contrast to depmixS4, which has almost no tuning parameters, here, depending on the distribution type (normal or lambda distribution), several pairs of parameters mu, sigma and lambda are set initially. These parameters are needed to compute the mixture distribution P(x; mu, sigma, lambda) and the transition probability matrix. The model is built by optimizing (MLE) these values according to the AIC, BIC and MLLK criteria. Here is the first result of running the strategy (BUY/SELL) in the tester on USDJPY H1 from 2018.08.20 to 2019.05.21. This variant obtains the forecast without decoding the states via Viterbi; I will do the next run with Viterbi.
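For reference, a minimal sketch of this setup in R, following the ldhmm manual (the constructor ldhmm() and the fitter ldhmm.mle()); the starting values for mu, sigma and lambda and the observation series below are placeholders rather than the ones used in the test above, and argument names may differ slightly between package versions.

library(ldhmm)

# placeholder observations: in the test above this would be ~11000 hourly
# log returns of USDJPY opens
x <- rnorm(11000, mean = 0, sd = 0.002)

# one (mu, sigma, lambda) row per hidden state; lambda = 1 is the normal case,
# the numbers are only starting points that the MLE will refine
param0 <- matrix(c( 0.0005, 0.001, 1.0,
                   -0.0010, 0.003, 1.3),
                 nrow = 2, ncol = 3, byrow = TRUE)
gamma0 <- matrix(c(0.9, 0.1,
                   0.1, 0.9), nrow = 2, byrow = TRUE)   # initial transition matrix

h   <- ldhmm(m = 2, param = param0, gamma = gamma0)
fit <- ldhmm.mle(h, x)    # maximum likelihood fit; AIC/BIC/MLLK come from here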

 
Ilya Antipin:

I am currently trying the ldhmm package. In contrast to depmixS4, which has almost no tuning parameters, here, depending on the distribution type (normal or lambda distribution), several pairs of parameters mu, sigma and lambda are set initially. These parameters are needed to compute the mixture distribution P(x; mu, sigma, lambda) and the transition probability matrix. The model is built by optimizing (MLE) these values according to the AIC, BIC and MLLK criteria. Here is the first result of running the strategy (BUY/SELL) in the tester on USDJPY H1 from 2018.08.20 to 2019.05.21. This variant obtains the forecast without decoding the states via Viterbi; I will do the next run with Viterbi.

What kind of graph do you get there? I.e. 2 hidden states and n observed states? Can it be visualized somehow?

 
Maxim Dmitrievsky:

What kind of graph do you get there? I.e. 2 hidden states and n observed states? Can it be visualized somehow?

I use 2 hidden states with a time series length of 11000 bars. As the observation series I use logarithmic returns: series[i] = MathLog(iOpen(NULL, 0, i) / iOpen(NULL, 0, i+1)).
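In R terms the same observation series is just diff() of the log opens taken in chronological order (the MQL loop above walks the bar index in reverse, so the two are equivalent), and the two decoding modes mentioned here map to ldhmm.decoding() and ldhmm.viterbi(); the calls below assume the package API matches its manual, and open_prices is a placeholder.

# open_prices: H1 opens in chronological order (placeholder values here)
open_prices <- 110 * cumprod(1 + rnorm(11001, 0, 0.002))
x <- diff(log(open_prices))          # same as MathLog(Open[i] / Open[i+1])

# 'fit' is the MLE-fitted ldhmm object from the sketch above
post   <- ldhmm.decoding(fit, x)     # local decoding: per-bar state probabilities
states <- ldhmm.viterbi(fit, x)      # global decoding: most likely state path, 1..2
table(states)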


 
The archive contains the MQ4 version of the RHMM indicator from the earlier post.
Files:
RHMM.zip  122 kb
 
Maxim Dmitrievsky:

It's strange that you already get some result on log returns, since they carry little information

Try fractional differentiation (you can set the degree from 0.1 to 0.9), it should work even better (there is an indicator for it). If you set it to 1.0 you get the same returns with unit lag; if you lower it towards 0.1 the returns keep more information but remain stationary

Let's try.
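Along the lines of the fractional differentiation suggested above, a small self-contained sketch (not the indicator Maxim refers to): the weights of (1 - B)^d follow the usual recursion w_k = -w_{k-1} * (d - k + 1) / k and are truncated once they become negligible; with d = 1.0 this collapses to ordinary unit-lag differences, and smaller d keeps more of the series' memory.

# fractional differencing of a series x with degree d in (0, 1]
frac_diff <- function(x, d, thresh = 1e-4) {
  w <- 1                                    # w_0 = 1
  k <- 1
  repeat {                                  # w_k = -w_{k-1} * (d - k + 1) / k
    w_next <- -w[k] * (d - k + 1) / k
    if (abs(w_next) < thresh) break
    w <- c(w, w_next)
    k <- k + 1
  }
  n   <- length(w)
  out <- rep(NA_real_, length(x))
  for (i in n:length(x))
    out[i] <- sum(w * x[i:(i - n + 1)])     # newest bar gets weight w_0
  out
}

# sanity check: d = 1.0 reproduces the ordinary unit-lag differences
p <- log(100 + cumsum(abs(rnorm(500))))     # placeholder log-price series
all.equal(frac_diff(p, 1.0)[-1], diff(p))   # TRUE
fd <- frac_diff(p, 0.4)                     # a degree from the 0.1..0.9 range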
 
Grail:

On the training set you can turn any MA into a grail. And in general it has been said many times that the choice of classification/regression method has little effect, just like the choice of "indicators", which, by the way, can hardly be called ML either (even with optimization).

Optimizing indicators in the tester can be considered a kind of ML, but the TS needs to have many "degrees of freedom" - in the end we get a fit to history, just like when training a NN - I have already been through that


Does anyone have an example of logit regression on alglib? I have some ideas I want to test
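Not an alglib example, but as a reference for the model itself here is the same logistic regression in base R's glm(); the trade features, labels and their relationship are purely synthetic placeholders, just to show the mechanics of fitting and getting per-trade probabilities.

# synthetic trade features and win/loss labels, only to show the mechanics
set.seed(1)
n      <- 500
trades <- data.frame(rsi = runif(n),                      # placeholder features
                     atr = runif(n),
                     mom = rnorm(n))
trades$win <- rbinom(n, 1, plogis(2 * trades$mom - 0.5))  # synthetic outcomes

fit_logit <- glm(win ~ rsi + atr + mom, data = trades, family = binomial)
prob      <- predict(fit_logit, type = "response")        # P(win) per trade
head(prob)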

 
Igor Makanu:

Optimizing indicators in the tester can be considered a kind of ML, but the TS needs to have many "degrees of freedom" - in the end we get a fit to history, just like when training a NN - I have already been through that


Does anyone have an example of logit regression on alglib? I have some ideas I want to test.

I sent you the banditos, look for it in the PM

Or I'll write an article with logit soon - I actually decided to finish it today )
 
Maxim Dmitrievsky:

I sent you the banditos, look for it in the PM

I'll write an article with logit soon - I actually decided to finish it today )

Yeah, thanks! I'm just a couple of years behind you - I read a lot and watch YouTube, but there's a wagonload of material

In general, what I want to do and check: simply running ML on the CD - the success will be about the same as with a regular TS; alas, that is the kind of market it is

But I think I can try to attach a "market context" to some robust TS in the form of a logit regression - i.e. after testing, evaluate all trades as probabilities

All in all, there is something to work on now

 
Igor Makanu:

Yeah, thanks! I'm just a couple of years behind you - I read a lot and watch YouTube, but there's a wagonload of material

In general, what I want to do and check: simply running ML on the CD - the success will be about the same as with a regular TS; alas, that is the kind of market it is

But I think I can try to attach a "market context" to some robust TS in the form of a logit regression - i.e. after testing, evaluate all trades as probabilities

All in all, there is something to work on now

Well, yes, there is. I'm looking more towards an optimizer within the optimizer. I.e. the hyperparameters are optimized by genetics in the tester (e.g. the window size and the parameters of the internal optimizer), while the internal optimizer runs inside the bot all the time. Logit is suitable because it's fast, though primitive.

I wish I could also dig up those stinking mql chains stashed away somewhere :) cool stuff
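A rough sketch of that nested scheme on the same kind of placeholder data: the outer level (genetics in the tester) only proposes the window size, while the inner level refits a logit model on every rolling window and is scored out of sample; the names and the scoring rule are illustrative assumptions, not Maxim's actual code.

# inner optimizer: refit a logit on each rolling window, score the next bar
walk_forward_score <- function(features, y, window) {
  n    <- nrow(features)
  hits <- 0
  for (t in seq(window + 1, n)) {
    idx <- (t - window):(t - 1)                              # training window
    fit <- suppressWarnings(glm(y[idx] ~ ., family = binomial,
                                data = features[idx, , drop = FALSE]))
    p   <- predict(fit, newdata = features[t, , drop = FALSE], type = "response")
    hits <- hits + as.numeric((p > 0.5) == (y[t] == 1))
  }
  hits / (n - window)                                        # out-of-window hit rate
}

# outer optimizer stand-in: try a few window sizes and keep the best
set.seed(2)
feats <- data.frame(a = rnorm(300), b = rnorm(300))
lab   <- rbinom(300, 1, plogis(feats$a))
sapply(c(50, 100, 200), function(w) walk_forward_score(feats, lab, w))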
 
Maxim Dmitrievsky:

Logit is suitable because it is fast, albeit primitive.

Yes, accuracy always makes the TS worse - logit regression gives some kind of curve, and that is enough to estimate the probability

I still need to do some reading, but I think the task may be very simple: treat the logit regression output as 50/50 - below 0.5 all losses, above 0.5 takes (take-profits), and the higher the probability, the bigger the take

Maybe I'll try to visualize it - maybe I'll try to color the channel indicator that way, what if it works! )))
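A tiny illustration of that 0.5 rule (trade_size is a hypothetical helper; prob is the per-trade P(win) from a fitted logit model such as the glm sketch earlier): trades below 0.5 are skipped, and above it the position size grows with the probability.

trade_size <- function(prob, base_lot = 0.1) {
  # 0 below p = 0.5, then scale linearly up to base_lot at p = 1
  ifelse(prob < 0.5, 0, base_lot * (prob - 0.5) / 0.5)
}
trade_size(c(0.30, 0.55, 0.90))   # 0.00 0.01 0.08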
