Machine learning in trading: theory, models, practice and algo-trading - page 2441
We have already said that we are moving towards implementing machine learning in MQL5.
Soon we will release native support for complex numbers (already done), fast vectors and matrices. This is native functionality of the language, not a library.
Next, we will add a large set of ML mechanics and provide functionality similar to TensorFlow. This will allow writing native robots at a completely different level.
That's interesting, but we need interpreters for models such as CatBoost, with support for categorical predictors, different tree-building variants, and multiclass classification. My premise is that we first need functionality to use modern achievements, and then functionality to recreate, improve, and rework them.
Built-in methods for clustering, classification, transformation, dimensionality reduction, and predictor selection could all be useful, with some adaptation for trading.
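To make the request concrete, here is a minimal pure-Python sketch (hypothetical splits and leaf values, no CatBoost dependency) of what such an interpreter has to evaluate. CatBoost builds "oblivious" trees: every level of the tree shares one (feature, threshold) split, so a leaf index is assembled from one comparison bit per level, and for multiclass models each leaf stores a vector of class scores.

```python
# Sketch of evaluating one oblivious (symmetric) decision tree,
# the tree form CatBoost uses. All numbers below are illustrative.

def eval_oblivious_tree(x, splits, leaf_values):
    """x: feature vector; splits: one (feature_idx, threshold) per level;
    leaf_values: 2**depth score vectors (one entry per class)."""
    idx = 0
    for bit, (f, thr) in enumerate(splits):
        if x[f] > thr:          # each level contributes one bit
            idx |= 1 << bit
    return leaf_values[idx]

# depth-2 tree over 2 features, 3 classes
splits = [(0, 0.5), (1, 1.5)]
leaf_values = [
    [0.9, 0.05, 0.05],  # leaf 00
    [0.1, 0.8, 0.1],    # leaf 01
    [0.2, 0.2, 0.6],    # leaf 10
    [0.3, 0.4, 0.3],    # leaf 11
]
scores = eval_oblivious_tree([0.7, 2.0], splits, leaf_values)
```

A full interpreter would sum such score vectors over hundreds of trees and map categorical features to numeric statistics first, but the per-tree evaluation really is this simple, which is why porting it natively is plausible.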
Everything step by step.
We've already implemented things (complex numbers, vectors and matrices in alpha builds) that conventional languages lack, including Python, which doesn't even have native arrays of simple types.
In reality, the filters and engines inside TensorFlow are not super complex. They can be creatively ported to MQL5 without dragging in compatibility with everything in the original project.
In the past we ported about 500 functions from R and published them as MQL5 source code; moreover, the MQL5 versions are 3 to 50 times faster.
Will you use WinML, DirectML, or your own solution?
Will there be support for ONNX?
Renat Fatkhullin:
Soon we'll release native support for complex numbers (ready), fast vectors and matrices.
We really need the ability to work with arrays without loops, as in MATLAB and NumPy (multiplication by a number, element-wise multiplication, slicing).
This already exists at the language level.
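For readers unfamiliar with the MATLAB/NumPy style being requested, here is a short NumPy sketch of the loop-free operations in question. The announced MQL5 API may differ; this only illustrates the semantics being asked for.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0, 4.0])
m = np.array([[1.0, 2.0],
              [3.0, 4.0]])

a = v * 2.0   # multiplication by a number -> [2, 4, 6, 8]
b = v * v     # element-wise multiplication -> [1, 4, 9, 16]
c = v[1:3]    # slice -> [2, 3]
d = m @ m     # matrix product -> [[7, 10], [15, 22]]
```

No explicit loops appear anywhere; the whole-array operations are what the poster wants MQL5 vectors and matrices to provide natively.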
First, we are making native support for new data types and operations on them directly in the language.
The acceleration of operations through OpenCL/multithreading will be hidden and transparent for developers.
We will think about WinML/ONNX later.
We plan to automatically and transparently apply OpenCL to matrix and ML operations.
In fact, we intend to squeeze out the maximum without resorting to tons of monstrously configurable CUDA and TensorFlow libraries.
Won't OpenCL be applied to vectors automatically?
That is, if we work with several vectors, would it be more reasonable to use a matrix?
Or will vectors be supported in OpenCL as well?
Added.
Will the hardware resource on the CPU or GPU be selected automatically from what is available?
Or will it be possible to determine which resource to use?
There is little point in using high-overhead OpenCL calls for single vectors.
We will use it wherever we find a measurable benefit; OpenCL is not an end in itself.
Expect the beta versions of matrix operations without OpenCL at first. Once the basic functionality has been debugged, we will move on to acceleration.
Everything will certainly be covered by stress tests and benchmarks.
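The point about single vectors can be illustrated with a toy cost model. The constants below are illustrative assumptions, not measurements: an OpenCL dispatch carries a fixed launch-and-transfer overhead, so offloading only pays once the workload is large enough to amortize it.

```python
# Toy break-even model for GPU offload (all constants are assumptions):
# a kernel launch plus host<->device transfer costs a fixed overhead,
# while per-element work is much cheaper on the GPU than on the CPU.

def offload_wins(n_elements, cpu_ns_per_elem=1.0, gpu_ns_per_elem=0.05,
                 fixed_overhead_ns=50_000.0):
    """Return True if the modeled GPU time beats the modeled CPU time."""
    cpu_time = n_elements * cpu_ns_per_elem
    gpu_time = fixed_overhead_ns + n_elements * gpu_ns_per_elem
    return gpu_time < cpu_time

small = offload_wins(1_000)        # single small vector: overhead dominates
large = offload_wins(1_000_000)    # matrix-sized workload: offload amortized
```

Under these assumed numbers the small vector stays on the CPU and the large workload goes to the GPU, which matches the stated plan of applying OpenCL only where a real effect is found.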