Discussing the article: "Integrating Hidden Markov Models in MetaTrader 5"

 

Check out the new article: Integrating Hidden Markov Models in MetaTrader 5.

In this article we demonstrate how Hidden Markov Models trained using Python can be integrated into MetaTrader 5 applications. Hidden Markov Models are a powerful statistical tool used for modeling time series data, where the system being modeled is characterized by unobservable (hidden) states. A fundamental premise of HMMs is that the probability of being in a given state at a particular time depends on the process's state at the previous time slot.

This dependence represents the memory of an HMM.

In the context of financial time series, the states could represent whether a series is trending upwards, trending downwards, or oscillating within a specific range. Anyone who has used any financial indicator is familiar with the whipsaw effect caused by the noise inherent in financial time series. An HMM can be employed to filter out these false signals, providing a clearer understanding of the underlying trends.

To build an HMM, we need observations that capture the full range of behaviour defining the process. This sample of data is used to learn the parameters of an appropriate HMM. The dataset is made up of various features of the process being modeled. For example, if we were studying the close prices of a financial asset, we could also include other aspects related to the close price, such as various indicators that, ideally, help define the hidden states we are interested in.
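A hypothetical sketch of such a feature set, stacking log returns and a rolling volatility estimate into the two-dimensional array (samples by features) that HMM training libraries such as hmmlearn typically expect (the specific features and window length here are illustrative, not prescribed by the article):

```python
import numpy as np

# Synthetic close prices standing in for real market data.
close = np.cumsum(np.random.default_rng(1).normal(0, 0.5, 300)) + 100

log_ret = np.diff(np.log(close))                  # log returns
vol = np.array([log_ret[max(0, i - 20):i + 1].std()  # rolling 20-bar volatility
                for i in range(len(log_ret))])

# Training libraries generally expect shape (n_samples, n_features).
X = np.column_stack([log_ret, vol])
```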

The process of learning the model parameters is carried out under the assumption that the series being modeled will always be in one of two or more states. The states are simply labeled 0 to S-1. For these states, we must assign a set of probabilities that capture the likelihood of the process switching from one state to another. These probabilities are usually referred to as the transition matrix. The first observation has a special set of initial probabilities for being in each possible state. If an observation is in a particular state, it is expected to follow a specific distribution associated with that state.

An HMM is therefore fully defined by four properties:

  • The number of states to assume
  • The initial probabilities for the first observation being in any one of the states
  • The transition matrix of probabilities
  • The probability density functions for each state


Author: Francis Dube

 

"At least one two-dimensional array is expected as input data." - what should go in this array? The usual predictor values?

I don't understand: does training perform automatic selection of predictors, or not?

And what if the predictors have different distributions?

Is there a setting for the number of predictor splits (quantisation)?