Machine learning in trading: theory, models, practice and algo-trading - page 3627

 
Maxim Kuznetsov #:

ALGLIB is also undocumented :-)

It is not in the documentation or the help. The entire description is just a link to the CodeBase.

Officially, it does not exist.

There is already a series of articles on Alglib:


The OpenBLAS documentation is updated almost daily, but it is being published gradually:

  • SingularValueDecompositionDC: Singular Value Decomposition, "divide and conquer" algorithm. Considered the fastest of the SVD algorithms (LAPACK function GESDD).

  • SingularValueDecompositionQR: Singular Value Decomposition, QR algorithm. Considered the classical SVD algorithm (LAPACK function GESVD).

  • SingularValueDecompositionQRPivot: Singular Value Decomposition, QR algorithm with pivoting (LAPACK function GESVDQ).

  • SingularValueDecompositionBisect: Singular Value Decomposition, bisection algorithm (LAPACK function GESVDX).

  • SingularValueDecompositionJacobiHigh: Singular Value Decomposition, high-level Jacobi algorithm (LAPACK function GEJSV).

  • SingularValueDecompositionJacobiLow: Singular Value Decomposition, low-level Jacobi algorithm (LAPACK function GESVJ). In some cases it computes small singular values and their singular vectors much more accurately than the other SVD routines.

  • SingularValueDecompositionBidiagDC: Singular Value Decomposition of a bidiagonal matrix, "divide and conquer" algorithm (LAPACK function BDSDC).

  • SingularValueDecompositionBidiagBisect: Singular Value Decomposition of a bidiagonal matrix, bisection algorithm (LAPACK function BDSVDX).

  • EigenSolver: Computation of eigenvalues and eigenvectors of a general square matrix, classical algorithm (LAPACK function GEEV).

  • EigenSolver2: Computation of generalised eigenvalues and eigenvectors for a pair of general square matrices (LAPACK function GGEV).

  • EigenSolverX: Computation of eigenvalues and eigenvectors of a general square matrix in expert mode, i.e. with the ability to influence the algorithm and to obtain related computation data (LAPACK function GEEVX).

  • EigenSolverShur: Computation of eigenvalues, the upper triangular matrix in Schur form and the matrix of Schur vectors (LAPACK function GEES). See Schur decomposition.

  • EigenSymmetricDC: Computation of eigenvalues and eigenvectors of a symmetric or Hermitian (complex conjugate) matrix, "divide and conquer" algorithm (LAPACK functions SYEVD, HEEVD).

  • EigenSymmetricQR: Computation of eigenvalues and eigenvectors of a symmetric or Hermitian (complex conjugate) matrix, classical QR algorithm (LAPACK functions SYEV, HEEV).

  • EigenSymmetricRobust: Computation of eigenvalues and eigenvectors of a symmetric or Hermitian (complex conjugate) matrix, Multiple Relatively Robust Representations (MRRR) algorithm (LAPACK functions SYEVR, HEEVR).

  • EigenSymmetricBisect: Computation of eigenvalues and eigenvectors of a symmetric or Hermitian (complex conjugate) matrix, bisection algorithm (LAPACK functions SYEVX, HEEVX).

  • SingularSpectrumAnalysisSpectrum: Computes the relative contributions of the spectral components from the eigenvalues.

  • SingularSpectrumAnalysisForecast: Computes the reconstructed and forecast data using the spectral components of the input time series.

  • SingularSpectrumAnalysisReconstructComponents: Computes the reconstructed components of the input time series and their contributions.

  • SingularSpectrumAnalysisReconstructSeries: Computes the reconstructed time series using the first component_count components.
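
The names above map directly onto LAPACK driver routines. As a rough illustration of what those drivers do (using SciPy's wrappers rather than the MQL5 methods themselves, so the calls below are SciPy's API, not MQL5's):

```python
# A minimal sketch: the same LAPACK drivers listed above are exposed in SciPy,
# which makes it easy to see what the method names refer to.
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))

# SingularValueDecompositionDC ~ GESDD (divide and conquer, usually the fastest)
U, s_dc, Vt = linalg.svd(A, lapack_driver="gesdd")

# SingularValueDecompositionQR ~ GESVD (classical QR-based algorithm)
_, s_qr, _ = linalg.svd(A, lapack_driver="gesvd")

print(np.max(np.abs(s_dc - s_qr)))   # the two drivers agree to rounding error

# EigenSymmetricDC ~ SYEVD, EigenSymmetricRobust ~ SYEVR (symmetric matrices)
S = A.T @ A
w_dc,  _ = linalg.eigh(S, driver="evd")
w_rrr, _ = linalg.eigh(S, driver="evr")

# EigenSolverShur ~ GEES (Schur form of a general square matrix)
T, Z = linalg.schur(S)
```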


OpenBLAS is a high-performance open source linear algebra library that implements BLAS (Basic Linear Algebra Subprograms) and some LAPACK functions. OpenBLAS aims to improve computational performance, especially in matrix operations and vector computations, which are important in scientific and engineering tasks such as machine learning, numerical methods and simulations.

Key features of OpenBLAS:

  • Multi-threading support: OpenBLAS can efficiently utilise multiple processor cores for parallel computing, which significantly speeds up operations on multiprocessor systems.
  • Optimisation for processor architectures: OpenBLAS includes optimised builds for various processors such as Intel, AMD, ARM and others. The library automatically detects processor characteristics and selects the most appropriate function implementations.
  • Support for a wide range of BLAS operations: OpenBLAS implements basic BLAS functions such as operations with vectors (e.g., vector addition and scalar product), matrices (multiplication), and vector-matrix operations.
  • LAPACK compatibility: The library supports a number of LAPACK (Linear Algebra PACKage) functions that are needed for more complex linear algebra operations, such as solving systems of linear equations, calculating eigenvalues of matrices, and others.
  • High performance: Compared to other BLAS libraries, OpenBLAS often performs better due to manual optimisation for specific processor architectures.
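
As a rough aside on the multi-threading point: when NumPy happens to be built against OpenBLAS (which depends on the particular build), the thread count can be capped with the standard OPENBLAS_NUM_THREADS environment variable. A minimal sketch:

```python
# A rough sketch: limit OpenBLAS to 4 threads and time a matrix product.
# Whether NumPy is actually backed by OpenBLAS depends on the build;
# np.show_config() reports which BLAS/LAPACK it was linked against.
import os
os.environ["OPENBLAS_NUM_THREADS"] = "4"   # must be set before importing numpy

import time
import numpy as np

np.show_config()                           # prints the BLAS/LAPACK backend

a = np.random.default_rng(1).standard_normal((2000, 2000))
t0 = time.perf_counter()
b = a @ a                                  # GEMM, dispatched to the BLAS backend
print(f"2000x2000 matmul: {time.perf_counter() - t0:.3f} s")
```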


There will be articles on OpenBLAS too - the topic is huge. There are a lot of materials on the net.


 

Proper use of gradient optimisers.

And I'll duplicate the notebook link.

Discussion of the article "Optimisation Methods of the Alglib Library (Part II)"
  • 2024.11.01
  • Maxim Dmitrievsky
  • www.mql5.com
The article "Optimisation Methods of the Alglib Library (Part II)" has been published. Author: Andrey Dik...
 
Maxim Kuznetsov #:

ALGLIB is also undocumented :-)

It is not in the documentation or the help. The entire description is just a link to the CodeBase.

Officially, it does not exist.

--

now there's gonna be another undocumented thing.

Maxim, use it, look what an attractive feature!!! :

https://www.mql5.com/ru/docs/matrix/openblas/ssa/ssa_for

// somewhere above I expressed my desire for vectors to build something like this.

// but it's very nice that it's already done!

// and by the way, there is a great mathematician in the MQL team.
MQL5 documentation: Matrix and Vector Methods / OpenBLAS / Singular Spectrum Analysis / SingularSpectrumAnalysisForecast
  • www.mql5.com
The method computes the reconstructed and forecast data using the spectral components of the input time series. The computations for...
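
The exact arguments of SingularSpectrumAnalysisForecast are in the linked documentation. As a rough sketch of the idea behind SSA reconstruction (embedding into a trajectory matrix, SVD, keeping the leading components, diagonal averaging), here is a plain NumPy version; the window length and component count are illustrative parameters, not the MQL5 arguments, and the forecasting step (a linear recurrence on the reconstructed series) is omitted:

```python
import numpy as np

def ssa_reconstruct(x, window, n_components):
    """Basic SSA: embed, decompose, keep the leading components, diagonally average."""
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-r approximation from the first n_components elementary matrices.
    Xr = (U[:, :n_components] * s[:n_components]) @ Vt[:n_components]
    # Diagonal (Hankel) averaging back to a one-dimensional series.
    rec = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        rec[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return rec / counts

# Toy example: extract the smooth component of a noisy sine.
t = np.arange(300)
x = np.sin(2 * np.pi * t / 50) + 0.3 * np.random.default_rng(2).standard_normal(t.size)
trend = ssa_reconstruct(x, window=60, n_components=2)
```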
 
Renat Akhtyamov #:

Maxim, use it, look what an attractive feature!!! :

https://www.mql5.com/ru/docs/matrix/openblas/ssa/ssa_for

// somewhere above I expressed my desire for vectors to build something like this.

// but it's very nice that it's already done!

// and by the way, the MQL team now has a great mathematician.
Is that the "caterpillar" (SSA)? I don't do digital signal processing, so I don't know how to use it. But it's probably a necessary thing.
 
Renat Akhtyamov #:


// and by the way, there is a very good mathematician in the MQL team.

And not just one, and for a long time now.

Look at how many years we have been persistently developing mathematics, machine learning, neural networks, the compiler itself and the algo-trading community. We can already talk about decades, as this month we celebrate 24 years of the company.

 
Maxim Dmitrievsky #:

Proper use of gradient optimisers.

And I'll duplicate the notebook link.

You may be using them correctly, but you are testing them incorrectly. You pass off isolated successful results on low-dimensional (two-dimensional) functions as typical behaviour of the methods at any dimensionality. Besides, you don't give the number of calls to the target function - and this is the key point: achieving the best result with the smallest number of FF calls.

Proper testing of gradient methods using MQL5.
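
The point about counting objective-function calls is easy to make concrete. A minimal sketch (in Python with SciPy rather than MQL5; CountedFunction is a hypothetical helper, and the shifted Rastrigin function is just an illustrative test function):

```python
# A minimal sketch of counting objective-function (FF) calls during optimisation.
import numpy as np
from scipy.optimize import minimize

class CountedFunction:
    """Hypothetical wrapper that counts every evaluation of the wrapped function."""
    def __init__(self, f):
        self.f = f
        self.calls = 0
    def __call__(self, x):
        self.calls += 1
        return self.f(x)

def rastrigin(x):
    """Shifted Rastrigin (minimisation form); the dimension is set by len(x)."""
    x = np.asarray(x) - 0.5          # shift the optimum away from the origin
    return 10 * x.size + np.sum(x * x - 10 * np.cos(2 * np.pi * x))

for dim in (2, 10, 50):              # test several dimensionalities, not just 2D
    ff = CountedFunction(rastrigin)
    x0 = np.random.default_rng(3).uniform(-5, 5, dim)
    res = minimize(ff, x0, method="L-BFGS-B")   # SciPy also reports res.nfev
    print(f"dim={dim:3d}  best f={res.fun:10.3f}  FF calls={ff.calls}")
```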

 
Andrey Dik #:

You may be using them correctly, but you are testing them incorrectly. You pass off isolated successful results on low-dimensional (two-dimensional) functions as typical behaviour of the methods at any dimensionality. Besides, you don't give the number of calls to the target function - and this is the key point: achieving the best result with the smallest number of FF calls.

Proper testing of gradient methods using MQL5.

The testing has been performed correctly. It is incorrect in principle to compare different types of algorithms for different tasks by the number of FF calls or anything else. Everything else is fantasy. I don't get the joke, in short.
 
Maxim Dmitrievsky #:
The testing has been carried out correctly. It is incorrect in principle to compare different types of algorithms for different tasks by the number of FF calls or anything else. Everything else is fantasy.

Well, to each his own. Someone uses ready-made things in Python without understanding how it works, where it works and under what conditions it doesn't work. And some people need understanding, transparency and availability of source codes.

Both approaches have their place and there is no reason to argue. I hope I have made my point clearly.

 
Andrey Dik #:

Well, to each his own. Someone uses ready-made python without understanding how it works, where it works and under what conditions it does not work. And someone needs understanding, transparency, availability of source codes.

Both approaches have their place and there is no reason to argue. I hope I have made my point clearly.

Beside the point. The same algorithms work the same way regardless of whether the source code is available.

In the article I showed how to find the maximum of an arbitrary function. You can replace it with any other one.
 
Maxim Dmitrievsky #:

In the article I showed how to find the maximum of an arbitrary function.

Unfortunately, that does not follow the algorithm-testing methodology used in the articles: the number of FF calls is not reported, the ranges are set incorrectly without shifting the function, tests at different dimensionalities are not performed, and the results are not summarised.

All sources are provided in the articles and comments, anyone can look at the code and reproduce the results. I don't see the point in continuing the topic.