Machine learning in trading: theory, models, practice and algo-trading - page 1206

 
mytarmailS:

And why should I approximate it? It is already divided into 10 states by the Viterbi algorithm; each state is essentially a cluster

I think the price should be approximated before taking returns, or should returns not be taken at all?

Well, it depends on the model; it's hard to say what should and should not be approximated) I usually just take the returns
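A minimal base-R sketch of the two options being discussed: taking returns of the raw price versus smoothing the price first and then taking returns. The synthetic price series and the 20-bar window are placeholders, not anything from the thread.

# Returns of the raw price vs. returns of a smoothed price (base R only)
set.seed(1)
price <- 100 + cumsum(rnorm(1000, sd = 0.1))                  # placeholder price path

ret_raw <- diff(log(price))                                   # plain log returns

sma20      <- stats::filter(price, rep(1/20, 20), sides = 1)  # one-sided moving average
ret_smooth <- diff(log(as.numeric(sma20)))                    # returns of the smoothed price
ret_smooth <- ret_smooth[!is.na(ret_smooth)]                  # drop the NA warm-up window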

 
mytarmailS:

By the way, if anyone wants to play around with HMMs (hidden Markov models), here is an article with code and examples in R

http://gekkoquant.com/2014/09/07/hidden-markov-models-examples-in-r-part-3-of-4/

By the way, the HMM states in the article are quite interpretable.

 
mytarmailS:

And the dependence, in fact, is there...

I trained "SMM" (hidden Markov model) on returnees, divided it into 10 states and taught it without a teacher, so it itself divided different distributions
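The linked article works in R, so a minimal sketch of this step could use the depmixS4 package (a common choice for HMMs in R; it is an assumption that this matches what was actually used): fit a 10-state Gaussian HMM on returns with no labels and read off the Viterbi state path. The synthetic returns are only a placeholder.

# Hedged sketch: unsupervised 10-state Gaussian HMM on returns (depmixS4 assumed)
library(depmixS4)

set.seed(2)
ret <- rnorm(2000, sd = 0.001)                 # placeholder returns; use real ones in practice
df  <- data.frame(ret = ret)

mod <- depmix(ret ~ 1, data = df, nstates = 10, family = gaussian())
fm  <- fit(mod, verbose = FALSE)               # EM estimation, no labels needed

states <- posterior(fm)$state                  # Viterbi-decoded state for every observation
table(states)                                  # observation count per state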


[Image: state distributions]


And here I have grouped the returns by state, i.e. each row is a separate market state

Some states (1, 4, 6, 8, 9) have too few observations, so they can simply be dropped

And now I will try to reconstruct the series, that is, take the cumulative sum; if a tendency shows up in some of the states, that would be a regularity in direction

I did a cumulative summation.

States 5 and 7 have a stable structure: 5 for buying and 7 for selling
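A sketch of that grouping and cumulative-summation step, continuing from the HMM sketch above (ret and states come from there); the minimum-observation cut-off and the plotting layout are arbitrary choices, and the state numbers will of course differ from run to run.

# Group returns by decoded state, drop sparse states, cumulatively sum the rest
counts      <- table(states)                               # observations per state
keep_states <- names(counts[counts >= 30])                 # 30 is an arbitrary cut-off

ret_by_state <- split(ret, states)                         # list: returns per market state
cum_by_state <- lapply(ret_by_state[keep_states], cumsum)  # cumulative sum within each state

# A persistent drift up or down in one of these curves hints at a directional regularity
op <- par(mfrow = c(2, ceiling(length(cum_by_state) / 2)))
for (s in names(cum_by_state)) {
  plot(cum_by_state[[s]], type = "l",
       main = paste("state", s), xlab = "obs", ylab = "cumulative return")
}
par(op)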

Very interesting distributions and curves; almost all of them are asymmetric. Thanks, I'll take another look and admire them.

 
Aleksey Nikolayev:

By the way, the HMM states in the article are quite interpretable.

Well, no one is arguing, I just wrote about it to Maxim

 
mytarmailS:

Well, no one is arguing, I just wrote about it to Maxim

The grail will be published soon, just wait a little... You can send letters of thanks with cash distributions later

gobble on :)

https://www.mql5.com/ru/articles/4777

Applying the Monte Carlo method in reinforcement learning
  • www.mql5.com
In the previous article we got acquainted with the Random Decision Forest algorithm and wrote a simple self-learning Expert Advisor based on Reinforcement learning. The main advantage of that approach was noted: the simplicity of writing a trading algorithm and the high speed of "learning". Reinforcement learning (hereafter simply RL...)
 
Maxim Dmitrievsky:

The grail will be published soon, just wait a little... You can send letters of thanks with cash distributions later

gobble on :)

https://www.mql5.com/ru/articles/4777

Cool, it feels like a peek into a magic lab. The value chosen for the order magic number only confirms this feeling)

 
Aleksey Nikolayev:

Cool, it feels like a peek into a magic lab. The value chosen for the order magic number only confirms this feeling)

I have some more material on PCA, working with predictors and other things; I think I'll write one more article later, before moving on to machine learning in Python

 
Maxim Dmitrievsky:

I have some more material on PCA, working with predictors and other things; I think I'll write one more article later, before moving on to machine learning in Python

Yes, that won't be out of place.

 
FxTrader562:

Thanks for the article.

So finally you have combined "Monte Carlo" with RDF:)))

The article seems interesting... I will see how effective it is in live testing and what improvements can be made, and I will update you...

If you have any key concerns to address in this version to improve the forward testing results, then you can let me know.

Instead of "random sampling" with shift_prob (shifted probability in code) I want to make samples from different distributions, which depends from current market states... you can think about

can try different distributions for it
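shift_prob refers to a parameter in the article's MQL5 code; as an illustration only, a state-dependent sampling scheme could look like the following R sketch, where the Beta parameters per state and the decoded-state vector are entirely made up.

# Hedged R sketch: draw the shift probability from a distribution chosen by market state
set.seed(3)
state_params <- data.frame(
  state  = 1:10,
  shape1 = seq(1, 5.5, by = 0.5),    # made-up Beta parameters, one pair per state
  shape2 = seq(5.5, 1, by = -0.5)
)

sample_shift_prob <- function(state) {
  p <- state_params[state_params$state == state, ]
  rbeta(1, p$shape1, p$shape2)       # state-dependent shift probability
}

states      <- sample(1:10, 500, replace = TRUE)             # placeholder decoded states
shift_probs <- vapply(states, sample_shift_prob, numeric(1)) # one draw per bar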

 

Got interested here; came across this:

Fundamentals of Bayesian Data Analysis in R!
