Machine learning in trading: theory, models, practice and algo-trading - page 3627
ALGLIB is also undocumented :-)
It is not in the documentation or help; the entire description is just a link to the Code Base.
Officially it does not exist.
There is already a series of articles on Alglib:
The OpenBLAS documentation is updated almost daily, but it is being published gradually:
SingularValueDecompositionDC: Singular Value Decomposition, "divide and conquer" algorithm; generally the fastest of the SVD algorithms (LAPACK function GESDD).
SingularValueDecompositionQR: Singular Value Decomposition, QR algorithm; the classical SVD algorithm (LAPACK function GESVD).
SingularValueDecompositionQRPivot: Singular Value Decomposition, QR algorithm with pivoting (LAPACK function GESVDQ).
SingularValueDecompositionBisect: Singular Value Decomposition, bisection algorithm (LAPACK function GESVDX).
SingularValueDecompositionJacobiHigh: Singular Value Decomposition, high-level Jacobi algorithm (LAPACK function GEJSV).
SingularValueDecompositionJacobiLow: Singular Value Decomposition, low-level Jacobi algorithm (LAPACK function GESVJ); in some cases it computes small singular values and their singular vectors much more accurately than the other SVD routines.
SingularValueDecompositionBidiagDC: Singular Value Decomposition, divide-and-conquer algorithm for a bidiagonal matrix (LAPACK function BDSDC).
SingularValueDecompositionBidiagBisect: Singular Value Decomposition, bisection algorithm for a bidiagonal matrix (LAPACK function BDSVDX).
EigenSolver: eigenvalues and eigenvectors of a general square matrix, classical algorithm (LAPACK function GEEV).
EigenSolver2: generalised eigenvalues and eigenvectors of a pair of general square matrices (LAPACK function GGEV).
EigenSolverX: eigenvalues and eigenvectors of a general square matrix in expert mode, i.e. with control over the algorithm and access to auxiliary computation data (LAPACK function GEEVX).
EigenSolverShur: eigenvalues, the upper-triangular matrix in Schur form and the matrix of Schur vectors (LAPACK function GEES); see Schur decomposition.
EigenSymmetricDC: eigenvalues and eigenvectors of a symmetric or Hermitian matrix, "divide and conquer" algorithm (LAPACK functions SYEVD, HEEVD).
EigenSymmetricQR: eigenvalues and eigenvectors of a symmetric or Hermitian matrix, classical QR algorithm (LAPACK functions SYEV, HEEV).
EigenSymmetricRobust: eigenvalues and eigenvectors of a symmetric or Hermitian matrix, Multiple Relatively Robust Representations (MRRR) algorithm (LAPACK functions SYEVR, HEEVR).
EigenSymmetricBisect: eigenvalues and eigenvectors of a symmetric or Hermitian matrix, bisection algorithm (LAPACK functions SYEVX, HEEVX).
SingularSpectrumAnalysisSpectrum: relative contributions of the spectral components, computed from the eigenvalues.
SingularSpectrumAnalysisForecast: reconstructed and forecast data computed from the spectral components of the input time series.
SingularSpectrumAnalysisReconstructComponents: reconstructed components of the input time series and their contributions.
SingularSpectrumAnalysisReconstructSeries: the time series reconstructed from the first component_count components.
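The SSA methods above can be illustrated with a minimal Python sketch (the helper `ssa_reconstruct` is my own illustrative name, not part of the MQL5 or OpenBLAS API): embed the series into a trajectory (Hankel) matrix, take its SVD, keep the first component_count terms, and map back by diagonal averaging, which is roughly what SingularSpectrumAnalysisReconstructSeries does.

```python
import numpy as np

def ssa_reconstruct(series, window, component_count):
    """Reconstruct a 1-D series from its leading SSA components:
    embed -> SVD -> truncate to component_count -> diagonal averaging."""
    n = len(series)
    k = n - window + 1
    # Trajectory (Hankel) matrix: columns are lagged windows of the series.
    X = np.column_stack([series[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Rank-component_count approximation of the trajectory matrix.
    Xr = (U[:, :component_count] * s[:component_count]) @ Vt[:component_count]
    # Diagonal averaging (Hankelization) back to a 1-D series.
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += Xr[:, j]
        counts[j:j + window] += 1
    return out / counts

# A noisy sine is mostly recovered by its first two components.
t = np.linspace(0, 6 * np.pi, 200)
noisy = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=200)
smooth = ssa_reconstruct(noisy, window=40, component_count=2)
```

The window length and the number of retained components are the same tuning knobs the MQL5 methods expose; a pure sinusoid lives in the first two components, so everything beyond them here is noise.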
OpenBLAS is a high-performance open source linear algebra library that implements BLAS (Basic Linear Algebra Subprograms) and some LAPACK functions. OpenBLAS aims to improve computational performance, especially in matrix operations and vector computations, which are important in scientific and engineering tasks such as machine learning, numerical methods and simulations.
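The GESDD/GESVD and SYEVD/SYEVR distinctions in the table map directly onto SciPy's driver selection, which makes a quick cross-check possible (a sketch assuming NumPy/SciPy built against a LAPACK implementation such as OpenBLAS):

```python
import numpy as np
from scipy.linalg import svd, eigh

rng = np.random.default_rng(1)
A = rng.normal(size=(120, 40))

# SVD: 'gesdd' is the divide-and-conquer driver (usually the fastest),
# 'gesvd' is the classical QR-based driver.
s_dc = svd(A, compute_uv=False, lapack_driver='gesdd')
s_qr = svd(A, compute_uv=False, lapack_driver='gesvd')

# Symmetric eigenproblem drivers mirror the table above:
# 'evd' ~ SYEVD (divide and conquer), 'ev' ~ SYEV (classical QR),
# 'evr' ~ SYEVR (MRRR), 'evx' ~ SYEVX (bisection).
S = A.T @ A  # symmetric positive semi-definite 40x40 matrix
w_evd = eigh(S, eigvals_only=True, driver='evd')
w_evr = eigh(S, eigvals_only=True, driver='evr')

# Different drivers must agree on the spectrum (up to rounding).
assert np.allclose(s_dc, s_qr)
assert np.allclose(w_evd, w_evr)
```

Agreement across drivers is a useful sanity check; the differences show up in speed and in accuracy for tiny singular values, as the table notes for GESVJ.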
Key features of OpenBLAS: kernels hand-optimised for specific CPU microarchitectures, multithreaded execution, and drop-in compatibility with the reference BLAS/LAPACK interfaces.
There will be articles on OpenBLAS too - the topic is huge. There are a lot of materials on the net.
ALGLIB is also undocumented :-)
It is not in the documentation or help; the entire description is just a link to the Code Base.
Officially it does not exist.
--
now there's gonna be another undocumented thing.
Maxim, use it, look what an attractive feature!!! :
https://www.mql5.com/ru/docs/matrix/openblas/ssa/ssa_for
// somewhere above I expressed a wish to build something like this for vectors.
// but it's very nice that it's already done!
// and by the way, the MQL team now has a great mathematician.
And not just one, and for a long time.
Look at how many years we have been persistently developing mathematics, machine learning, neural networks, the compiler itself and the algo-trading community. We can already talk about decades, as this month we celebrate 24 years of the company.
Proper use of gradient optimisers.
And I'll duplicate the notebook link.
You may be using them correctly, but you are testing them incorrectly. You pass off isolated successful results on low-dimensional (two-dimensional) functions as the typical behaviour of the methods at any dimensionality. Besides, you don't report the number of calls to the target function, and that is the key point: achieving the best result with the fewest calls to the FF.
Proper testing of gradient methods using MQL5.
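The point about counting target-function calls can be made concrete with a small wrapper (a Python sketch; `counted` and `shifted_sphere` are my own illustrative names, not taken from the articles):

```python
import numpy as np
from scipy.optimize import minimize

def counted(f):
    """Wrap an objective so that every evaluation is counted - the metric
    the post insists on: best result for the fewest FF calls."""
    def wrapper(x):
        wrapper.calls += 1
        return f(x)
    wrapper.calls = 0
    return wrapper

# Shifted sphere: the optimum is moved away from the origin so an
# optimiser cannot score well just by starting at a symmetric point.
shift = np.array([1.7, -2.3, 0.9, 3.1, -1.1])
def shifted_sphere(x):
    return float(np.sum((x - shift) ** 2))

ff = counted(shifted_sphere)
res = minimize(ff, x0=np.zeros(5), method='Nelder-Mead')
print(res.fun, ff.calls)  # report solution quality AND the number of FF calls
```

Running the same wrapper over several dimensionalities and several shifted test functions, and tabulating (quality, FF calls) for each, is the kind of summary the criticism asks for.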
The testing has been carried out correctly. It is incorrect in principle to compare algorithms of different types, intended for different tasks, by the number of calls to the FF or by anything else. Everything beyond that is fantasy.
Well, to each his own. Some use ready-made things in Python without understanding how they work, where they work and under what conditions they fail. Others need understanding, transparency and access to the source code.
Both approaches have their place, and there is no reason to argue. I hope I have made my point clearly.
Unfortunately, that does not match the testing methodology used in the articles. The number of calls to the FF is not reported, the ranges are set incorrectly, without shifting the test functions, and no tests are run at different dimensionalities. The results are not summarised.
All the sources are provided in the articles and comments; anyone can look at the code and reproduce the results. I don't see the point in continuing this topic.