Machine learning in trading: theory, models, practice and algo-trading - page 2441

 
Renat Fatkhullin:

We have already said that we are moving towards implementing machine learning in MQL5.

Soon we will release native support for complex numbers (already done), fast vectors, and matrices. This is native language functionality, not a library.

Next, we will add a large set of ML mechanics and provide functionality similar to TensorFlow's. This will make it possible to write native trading robots at a completely different level.

That's interesting, but we need interpreters for models such as CatBoost, with support for categorical predictors, different tree-construction variants, and multiclass classification. My starting point is that we first need the functionality to use modern achievements, and only then the functionality to recreate, improve, and rework them.

Built-in methods for clustering, classification, transformation, dimensionality reduction, and predictor selection could be useful, with some adaptation for trading.
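For context, a model "interpreter" of the kind requested here is conceptually simple. CatBoost builds oblivious (symmetric) trees, where every level of a tree applies the same split, so a leaf index is just a bit pattern; categorical predictors are handled upstream by converting them to numeric statistics. Below is a minimal Python sketch of evaluating one such tree, with hypothetical model data (not a real CatBoost dump):

```python
# Conceptual sketch of evaluating one "oblivious" tree (the kind CatBoost
# builds). Every level tests the same split, so the leaf index is a bit
# pattern. The splits and leaf values here are made up for illustration.
def eval_oblivious_tree(x, splits, leaf_values):
    """x: feature vector; splits: list of (feature_idx, threshold);
    leaf_values: 2**len(splits) leaf predictions."""
    idx = 0
    for bit, (f, thr) in enumerate(splits):
        if x[f] > thr:
            idx |= 1 << bit
    return leaf_values[idx]

splits = [(0, 0.5), (1, 2.0)]       # a depth-2 tree
leaves = [0.1, 0.4, -0.2, 0.7]      # 2**2 = 4 leaves
print(eval_oblivious_tree([0.9, 3.0], splits, leaves))  # both splits fire -> leaves[3] = 0.7
```

A full interpreter sums such trees over the ensemble; multiclass models simply keep one leaf value per class.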

 
Aleksey Vyazmikin:

That's interesting, but we need interpreters for models such as CatBoost, with support for categorical predictors, different tree-construction variants, and multiclass classification. My starting point is that we first need the functionality to use modern achievements, and only then the functionality to recreate, improve, and rework them.

Built-in methods for clustering, classification, transformation, dimensionality reduction, and predictor selection could be useful, with some adaptation for trading.

Step by step.

We have already done things (complex numbers, vectors, and matrices in alpha builds) that conventional languages do not have, including Python, which does not even have native arrays of simple types.

In reality, the filters and engines inside TensorFlow are not overly complex. They can be creatively ported to MQL5 without maintaining compatibility with everything in the original project.

In the past we ported about 500 functions from R and published them as MQL5 source code. Moreover, the MQL5 versions are 3 to 50 times faster.

Statistical distributions in MQL5 - taking the best from R and making it faster
  • www.mql5.com
The article covers functions for working with the main statistical distributions implemented in the R language. These are the Cauchy, Weibull, normal, lognormal, logistic, exponential, and uniform distributions, the gamma distribution, the central and noncentral beta, chi-squared, Fisher's F and Student's t distributions, as well as the discrete binomial and negative binomial, geometric, hypergeometric, and Poisson distributions. There are also functions for calculating the theoretical moments of the distributions, which make it possible to assess how closely a real distribution matches the model one.
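For comparison, Python's standard library covers a small slice of the same ground. The sketch below shows the R-style d/p/q triple for the normal distribution via `statistics.NormalDist`, plus theoretical moments of a gamma distribution computed from their closed-form expressions (illustrative only; this is not the MQL5 API):

```python
from statistics import NormalDist
import math

nd = NormalDist(mu=0.0, sigma=1.0)
d = nd.pdf(0.0)          # density at 0 = 1/sqrt(2*pi), R's dnorm(0)
p = nd.cdf(1.96)         # R's pnorm(1.96), about 0.975
q = nd.inv_cdf(0.975)    # R's qnorm(0.975), about 1.96

# Theoretical moments of Gamma(shape=k, scale=theta) in closed form;
# comparing these with sample moments gauges goodness of fit.
k, theta = 2.0, 3.0
gamma_mean = k * theta          # 6.0
gamma_var = k * theta ** 2      # 18.0
print(d, p, q, gamma_mean, gamma_var)
```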
 
Renat Fatkhullin:

We have already said that we are moving towards implementing machine learning in MQL5.

Soon we will release native support for complex numbers (already done), fast vectors, and matrices. This is native language functionality, not a library.

Next, we will add a large set of ML mechanics and provide functionality similar to TensorFlow's. This will make it possible to write native trading robots at a completely different level.

Will you use WinML or DirectML, or your own solution?

Will there be support for ONNX?

 
Renat Fatkhullin:

We have already said that we are moving towards implementing machine learning in MQL5.

Soon we will release native support for complex numbers (already done), fast vectors, and matrices. This is native language functionality, not a library.

Next, we will add a large set of ML mechanics and provide functionality similar to TensorFlow's. This will make it possible to write native trading robots at a completely different level.

Renat, this is really interesting. I hope there will be comprehensive documentation for this new direction. Thank you!
 

Renat Fatkhullin:

Soon we will release native support for complex numbers (already done), fast vectors, and matrices.

We really need the ability to work with arrays without loops, as in MATLAB and NumPy (multiplication by a scalar, element-wise multiplication, slicing).

 
Rorschach:

I really need the ability to work with arrays without loops, as in MATLAB and NumPy (multiplication by a scalar, element-wise multiplication, slicing).

This already exists at the language level.
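What "at the language level" means here can be sketched in a few lines of Python: overload the operators once, and array code loses its loops. This is only an illustration of the NumPy/MATLAB style being discussed; MQL5's native vector and matrix types expose analogous operators directly.

```python
# Minimal sketch of loop-free vector syntax (illustrative only).
class Vec(list):
    def __mul__(self, other):
        if isinstance(other, Vec):                  # element-wise product
            return Vec(a * b for a, b in zip(self, other))
        return Vec(a * other for a in self)         # multiplication by a scalar
    __rmul__ = __mul__

    def __getitem__(self, idx):                     # slices stay Vec
        result = super().__getitem__(idx)
        return Vec(result) if isinstance(idx, slice) else result

v = Vec([1.0, 2.0, 3.0, 4.0])
print(2 * v)       # [2.0, 4.0, 6.0, 8.0]
print(v * v)       # [1.0, 4.0, 9.0, 16.0]
print(v[1:3])      # [2.0, 3.0]
```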

 
Koldun Zloy:

Are you going to use WinML or DirectML, or your own solution?

Will there be support for ONNX?

First, we are making native support for new data types and operations on them directly in the language.

Acceleration of operations via OpenCL/multithreading will be hidden and transparent to developers.

We will think about WinML/ONNX later.

 
Offtopic deleted.
 
Renat Fatkhullin:

We plan to automatically and transparently apply OpenCL to matrix and ML operations.

In fact, we are going to squeeze out the maximum performance without relying on tons of monstrously configurable CUDA and TensorFlow libraries.

Will OpenCL be applied to vectors automatically as well?
That is, if we work with several vectors, would it be more sensible to use a matrix?
Or will vectors also be supported in OpenCL?

Added.
Will the hardware resource (CPU or GPU) be selected automatically from what is available?
Or will it be possible to specify which resource to use?

 
Roman:

Will OpenCL be applied to vectors automatically as well?
That is, if we work with several vectors, would it be more sensible to use a matrix?
Or will vectors also be supported in OpenCL?

Added.
Will the hardware resource (CPU or GPU) be selected automatically from what is available?
Or will it be possible to specify which resource to use?

There is little point in using high-overhead OpenCL for individual vectors.

We will use it wherever it yields a benefit. OpenCL is not an end in itself.

Expect beta versions of matrix operations without OpenCL at first. Once the basic functionality has been debugged, we will move on to acceleration.

Everything will certainly be covered by stress tests and benchmarks.
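The trade-off described above can be sketched with a back-of-the-envelope cost model. All rates below are illustrative assumptions, not measurements: for a single small vector, the kernel launch and bus transfer overhead dwarfs the computation, while a large matrix product amortizes it easily.

```python
# Back-of-the-envelope cost model: when does offloading an operation to a
# GPU via OpenCL pay off? All rates are illustrative assumptions.
B = 8                    # bytes per double
PCIE = 12e9              # host<->device transfer bandwidth, bytes/s
LAUNCH = 10e-6           # kernel launch + driver overhead, s
CPU_FLOPS = 50e9         # assumed effective CPU throughput
GPU_FLOPS = 5e12         # assumed effective GPU throughput

def cpu_add(n):          # element-wise add of two length-n vectors
    return n / CPU_FLOPS

def gpu_add(n):          # same op offloaded: copy two in, one out, launch
    return LAUNCH + 3 * n * B / PCIE + n / GPU_FLOPS

def cpu_matmul(n):       # n x n matrix product, ~2*n^3 flops
    return 2 * n**3 / CPU_FLOPS

def gpu_matmul(n):       # three n x n matrices cross the bus once each
    return LAUNCH + 3 * n * n * B / PCIE + 2 * n**3 / GPU_FLOPS

print(f"vector add, n=1000: CPU {cpu_add(1000):.1e}s  GPU {gpu_add(1000):.1e}s")
print(f"matmul,     n=2000: CPU {cpu_matmul(2000):.1e}s  GPU {gpu_matmul(2000):.1e}s")
```

Under these assumptions the CPU wins the small vector add by orders of magnitude, and the GPU wins the large matrix product, which is why offload decisions are made per operation rather than blanket-applied.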
