Machine learning in trading: theory, models, practice and algo-trading - page 3703

 
Maxim Dmitrievsky #:
Figured out how to investigate such structures by means of ML. I have sketched several approaches, but there are still options. As a replacement for the old efficient market hypothesis, the multifractal hypothesis looks more interesting. There are dedicated packages for testing it. I am re-reading Mandelbrot just for that, as a refresher.

Crossing Mandelbrot with ML is a very interesting approach. I remember he had price fractal generation using a three-part generator.

I tried to build some indicators and price representations (up to formal stochastic grammars) on top of it. I did not find anything interesting - everything turns out complicated and non-stationary. I also often ended up with an implicit, hard-to-spot look-ahead (as with a zigzag), which constantly produced "grails" and subsequent disappointments.
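For readers who have not seen it, the three-part generator mentioned above refers to Mandelbrot's "cartoon" construction: each segment of a seed line is recursively replaced by a scaled copy of a three-piece generator. A minimal Python sketch under my own assumptions (the breakpoints (4/9, 2/3) and (5/9, 1/3) are one generator Mandelbrot used for multifractal cartoons; any increasing-in-time generator works):

```python
def refine(points, gen):
    """Replace every segment of the polyline with an affine copy of the generator."""
    out = [points[0]]
    for (t0, p0), (t1, p1) in zip(points, points[1:]):
        dt, dp = t1 - t0, p1 - p0
        for gt, gp in gen[1:]:
            # map generator point (gt, gp) from the unit square into this segment
            out.append((t0 + gt * dt, p0 + gp * dp))
    return out

# three-piece generator: (0,0) -> (4/9, 2/3) -> (5/9, 1/3) -> (1,1)
gen = [(0.0, 0.0), (4/9, 2/3), (5/9, 1/3), (1.0, 1.0)]

pts = gen
for _ in range(3):          # each pass triples the number of segments
    pts = refine(pts, gen)  # after 3 passes: 3**4 = 81 segments, 82 points
```

Plotting `pts` gives the familiar jagged "price-like" path; changing the generator's breakpoints changes the multifractal spectrum of the cartoon.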

 
Aleksey Nikolayev #:

Crossing Mandelbrot with ML is a very interesting approach. I remember he had price fractal generation using a three-part generator.

I tried to build some indicators and price representations (up to formal stochastic grammars) on top of it. I did not find anything interesting - everything turns out complicated and non-stationary. I also often ended up with an implicit, hard-to-spot look-ahead (as with a zigzag), which constantly produced "grails" and subsequent disappointments.

I'll try to show what I've come up with soon :)
 
Maxim Dmitrievsky #:
I'll try to show what I've come up with soon :)
I hope for an article - we'll read it and be inspired to new feats on the grail front )
 

For your information, new machine learning features in MQL5:

  • MQL5: Added the @ matrix multiplication operator. It works according to the rules of linear algebra and allows you to multiply matrices and vectors, as well as compute the scalar (dot) product of vectors.

    Matrix multiplication (matrix × matrix)
    matrix A(2, 3);
    matrix B(3, 2);
    matrix C = A @ B; // Result: matrix C of size [2,2]

    Matrix multiplication (matrix × vector)
    matrix M(2, 3);
    vector V(3);
    vector R = M @ V; // Result: vector R of 2 elements

    Matrix multiplication (vector × matrix)
    matrix M(2, 3);
    vector V(2);
    vector R = V @ M; // Result: vector R of 3 elements

    Scalar product (vector × vector)
    vector V1(3), V2(3);
    double r = V1 @ V2; // Result: scalar
  • MQL5: Added the ddof parameter to the Std, Var and Cov methods. It sets the delta degrees of freedom: the divisor used in the calculation is N - ddof. The parameter defaults to 0 for Std and Var, and to 1 for Cov.

    What ddof affects:

    • By default, ddof=0, and the standard deviation is computed over the entire population (population standard deviation).
    • With ddof=1, the sample standard deviation is computed, which corrects the estimate for a finite sample (Bessel's correction, used in statistics when working with a subset of the data).

  • MQL5: Added new OpenBLAS methods:

    Calculation of eigenvalues and eigenvectors

    • EigenTridiagonalDC - computes eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the divide-and-conquer algorithm (LAPACK function STEVD).
    • EigenTridiagonalQR - computes eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the QR algorithm (LAPACK function STEV).
    • EigenTridiagonalRobust - computes eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the MRRR (Multiple Relatively Robust Representations) algorithm (LAPACK function STEVR).
    • EigenTridiagonalBisect - computes eigenvalues and eigenvectors of a symmetric tridiagonal matrix using the bisection algorithm (LAPACK function STEVX).
    • ReduceToBidiagonal - reduces a general real or complex m×n matrix to an upper or lower bidiagonal form B using an orthogonal transformation: Q**T * A * P = B. If m≥n, B is upper bidiagonal, otherwise it is lower bidiagonal (LAPACK function GEBRD).
    • ReflectBidiagonalToQP - generates the orthogonal matrices Q and P**T (or P**H for complex types) determined by the ReduceToBidiagonal method when reducing a real or complex matrix A to bidiagonal form: A = Q * B * P**T. Q and P**T are products of the elementary reflectors H(i) and G(i), respectively (LAPACK functions ORGBR, UNGBR).
    • ReduceSymmetricToTridiagonal - reduces a real symmetric or complex Hermitian matrix A to tridiagonal form B using an orthogonal similarity transformation: Q**T * A * Q = B (LAPACK functions SYTRD, HETRD).
    • ReflectTridiagonalToQ - generates the orthogonal matrix Q, which is the product of the n-1 elementary reflectors of order n returned by the ReduceSymmetricToTridiagonal method.

    • LinearEquationsSolution - solves a system of linear equations with a square matrix of coefficients A and multiple right-hand sides.
    • LinearEquationsSolutionTriangular - solves a system of linear equations with a square triangular matrix of coefficients A and multiple right-hand sides.
    • LinearEquationsSolutionSy - solves a system of linear equations with a symmetric or Hermitian matrix A and multiple right-hand sides.
    • LinearEquationsSolutionComplexSy - solves a system of linear equations with a complex symmetric matrix A and multiple right-hand sides.
    • LinearEquationsSolutionGeTrid - solves a system of linear equations with a general (non-symmetric) tridiagonal matrix of coefficients A and multiple right-hand sides.
    • LinearEquationsSolutionSyPD - solves a system of linear equations with a symmetric or Hermitian positive definite matrix A and multiple right-hand sides.
    • LinearEquationsSolutionSyTridPD - solves a system of linear equations with a symmetric tridiagonal positive definite matrix A and multiple right-hand sides.
    • FactorizationQR - computes the QR decomposition of a general m-by-n matrix: A = Q * R (LAPACK function GEQRF).
    • FactorisationQRNonNeg - computes the QR decomposition of a general m-by-n matrix: A = Q * R, where R is an upper triangular matrix with non-negative elements on the diagonal (LAPACK function GEQRFP).
    • FactorisationQRPivot - computes the QR decomposition of a general m-by-n matrix with column pivoting: A * P = Q * R (LAPACK function GEQP3).
    • FactorisationLQ - computes the LQ decomposition of a general m-by-n matrix: A = L * Q (LAPACK function GELQF).
    • FactorisationQL - computes the QL decomposition of a general m-by-n matrix: A = Q * L (LAPACK function GEQLF).
    • FactorisationRQ - computes the RQ decomposition of a general m-by-n matrix: A = R * Q (LAPACK function GERQF).
    • FactorizationPLU - computes the LU decomposition of a general m-by-n matrix A using partial pivoting with row interchanges (LAPACK function GETRF).
    • FactorisationPLUGeTrid - computes the LU decomposition of a general (non-symmetric) n-by-n tridiagonal matrix A using partial pivoting with row interchanges (LAPACK function GTTRF).
    • FactorisationLDL - computes the decomposition of a real symmetric or complex Hermitian matrix A using diagonal pivoting by the Bunch-Kaufman method (LAPACK functions SYTRF and HETRF).
    • FactorisationLDLSyTridPD - computes the decomposition of a symmetric positive definite (for real data) or Hermitian positive definite (for complex data) tridiagonal matrix A (LAPACK function PTTRF).
    • FactorisationCholesky - computes the Cholesky decomposition of a real symmetric or complex Hermitian positive definite matrix A (LAPACK function POTRF).
    • FactorisationCholeskySyPS - computes the Cholesky decomposition with complete pivoting of a real symmetric (or complex Hermitian) positive semi-definite n-by-n matrix A (LAPACK function PSTRF).

  • MQL5: Added the static Random method for filling vectors and matrices with random values. The values are generated uniformly within the specified range.
    static vector vector::Random(
      const ulong   size,       // vector length
      const double  min=0.0,    // minimum value
      const double  max=1.0     // maximum value
       );

    static matrix matrix::Random(
      const ulong   rows,       // number of rows
      const ulong   cols,       // number of columns
      const double  min=0.0,    // minimum value
      const double  max=1.0     // maximum value
       );
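The semantics of the new @ operator and of ddof can be checked against their textbook definitions. A Python sketch using only the standard library (it mirrors, rather than calls, the MQL5 API; the data values are illustrative):

```python
import statistics

def matmul(A, B):
    # (m×k) @ (k×n) -> (m×n): the linear-algebra rule the @ operator follows
    m, k, n = len(A), len(B), len(B[0])
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(n)]
            for i in range(m)]

A = [[1, 2, 3], [4, 5, 6]]      # 2x3, like matrix A(2, 3)
B = [[1, 0], [0, 1], [1, 1]]    # 3x2, like matrix B(3, 2)
C = matmul(A, B)                # 2x2, like C = A @ B

# matrix × vector: (2x3) @ (3,) -> (2,)
R1 = [sum(a * v for a, v in zip(row, [1, 1, 1])) for row in A]
# vector × matrix: (2,) @ (2x3) -> (3,)
R2 = [sum(v * A[i][j] for i, v in enumerate([1, 1])) for j in range(3)]

# ddof semantics: the divisor is N - ddof
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
std0 = statistics.pstdev(data)  # ddof=0: population std (Std/Var default)
std1 = statistics.stdev(data)   # ddof=1: sample std (Bessel's correction)
```

The shape rules match the MQL5 examples above: a 2×3 matrix times a 3×2 matrix yields 2×2, and the two std values differ only in the N vs N-1 divisor.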
  • MQL5 documentation: Language Basics / Operators / Matrix Product Operator
    • www.mql5.com
    The @ operator in the MQL5 language implements matrix multiplication according to the rules of linear algebra. It allows you to compute products of matrices and vectors, as well as...
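As a sanity check on what FactorisationCholesky and the SyPD solver do conceptually, here is the textbook Cholesky factorisation and solve in plain Python (an illustration of the algorithm only, not the LAPACK implementation; helper names are mine):

```python
import math

def cholesky(A):
    # lower-triangular L with A = L * L^T, for a symmetric positive definite A
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

def solve_spd(A, b):
    # solve A x = b: forward-substitute L y = b, then back-substitute L^T x = y
    n = len(A)
    L = cholesky(A)
    y = [0.0] * n
    for i in range(n):
        y[i] = (b[i] - sum(L[i][k] * y[k] for k in range(i))) / L[i][i]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (y[i] - sum(L[k][i] * x[k] for k in range(i + 1, n))) / L[i][i]
    return x
```

For example, solve_spd([[4, 2], [2, 3]], [2, 1]) returns the solution of that 2×2 SPD system; LAPACK's POTRF/POTRS do the same factor-then-substitute split, just blocked and in-place.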
     
    Renat Fatkhullin #:
    For your information, new machine learning features in MQL5:

    Thanks!

    Are there plans to add matrix/vector sorting someday?

     
    Aleksey Nikolayev #:
    I hope for an article - we'll read it and be inspired to new feats on the grail front ).
    There is something like generators with a varying number of breakpoints, found through the analysis of symmetric (mirror-inverted) structures. That is, a generator has a centre (attractor) and branches into the past and the future, which are often symmetric. Such structures are searched for via correlation/entropy. There are many of them, so ML is used for generalisation. The first results were somewhat encouraging, but the code and the approach still need refining.
    At first I had a look-ahead problem; once I got rid of it, the result was still better than classical time-series classification approaches, where a series is read from left to right and correlations are searched for in it.
    The main point of this theory is that the time series has long memory, and older observations can influence future behaviour more strongly than the most recent ones.
     
    Chaos & Fractals in Financial Markets | PDF | Chaos Theory | Attractor
    • www.scribd.com
    Chaos and Fractals in Financial Markets, index to articles, by J. Orlin Grabbe. Problems and answers. Hazardous world. Coin flips and Brownian motion. Coastlines and Koch curves. Jam session. Gamblers, zero-sets, and fractal mountains. Futures trading and the gambler's ruin problem.
     
    Seems like suitable maths for scalping. It looks possible to add ML there too. I have not yet come across a popular exposition of the approach.
    Hawkes Models And Their Applications
    • arxiv.org
    This largest section examines the newer variants of Hawkes which were not originally covered in our book (Laub et al., 2022) . We will define each process and describe the key innovations that they bring, and (space-permitting) say some words (and mainly supply references) for each processes’ inference, simulation, theoretical results, and so...
     

    Such self-similar structures are sought out and traded only where there is a high correlation between the left branch and the mirror-inverted right branch. Many such patterns are found, with different periods.

    Then there is the rather tricky question of how to train ML on this; I have not formalised the best approach yet.

    A chicken-and-egg analogue. You can look for different symmetries, not necessarily mirror ones, but mirror symmetries seem the strongest so far. Entropy should be low in these areas (not checked yet), so predictions made while such a pattern exists should be more accurate (the trading-system idea). That is, if you label the direction n bars ahead of such a completed pattern as 0 or 1, it should be predicted better than where no symmetry is observed.


     
    ML simply generalises the signals across sets of different patterns. There may also be similar patterns with contradictory signals; these have to be handled separately :).
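For concreteness, the symmetry search described above can be sketched as follows. This is my own simplified reading (compare the left branch with the time-reversed right branch around a candidate centre); all function names and the threshold are illustrative:

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length sequences (0.0 if degenerate)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    if sx == 0.0 or sy == 0.0:
        return 0.0
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

def mirror_score(series, center, half):
    # correlation of the left branch with the time-reversed right branch:
    # +1 ~ mirror-symmetric pattern, -1 ~ mirror-inverted (anti-symmetric)
    left = series[center - half:center]
    right = series[center + 1:center + 1 + half][::-1]
    return pearson(left, right)

def find_centers(series, half, threshold=0.9):
    # candidate attractor centres where the pattern is strongly (anti)symmetric;
    # note a pure trend also scores |1|, so a real filter needs extra checks
    return [c for c in range(half, len(series) - half)
            if abs(mirror_score(series, c, half)) >= threshold]
```

On a perfectly symmetric tent shape like [0,1,2,3,4,3,2,1,0] the score at the apex is 1.0; the centres found this way could then feed the labelling scheme described above (direction n bars after the completed pattern as a 0/1 label) for an ML classifier.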