Machine learning in trading: theory, models, practice and algo-trading - page 3236

 

If I understood correctly, a simplified scheme of working with ONNX looks like this:

MQL5_Input_Data -> ONNX -> MQL5_TS, where:

MQL5_Input_Data - data acquisition

ONNX - model (neural network with weight coefficients)

MQL5_TS - handler and trading system.


What is not clear is how the ONNX model is executed: directly by MT5, or does Python have to be used for this?

 
Andrey Dik #:

If I understood correctly, a simplified scheme of working with ONNX looks like this:

MQL5_Input_Data -> ONNX -> MQL5_TS, where:

MQL5_Input_Data - receiving data

ONNX - model (neural network with weight coefficients)

MQL5_TS - handler and trading system.


What is not clear is how the ONNX model is executed: directly by MT5, or does Python have to be used for this?

MT5 executes it. The compiled output is just the bot's executable; the model is attached to it as a resource.
 
Andrey Dik #:

If I understood correctly, a simplified scheme of working with ONNX looks like this:

MQL5_Input_Data -> ONNX -> MQL5_TS, where:

MQL5_Input_Data - receiving data

ONNX - model (neural network with weight coefficients)

MQL5_TS - handler and trading system.


What is not clear is how the ONNX model is executed: directly by MT5, or does Python have to be used for this?

My understanding is as follows: the ONNX model only produces a signal (roughly speaking, a number or a set of numbers), while the MT5 terminal itself does the trading, through the OrderSend() function.
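The division of labour described above (MQL5 gathers the input data, the ONNX model maps it to a number, MQL5 trades on that number) can be sketched in MQL5 roughly as follows. This is only an illustration, not the contest template: the file name "model.onnx", the input size of 10, and the 0.5 threshold are all made-up assumptions, and a real model would also need its input filled from prices or indicators.

```mql5
// Sketch: an EA that embeds an ONNX model and trades on its output.
// Assumes a model with one float input vector of size 10 and one float output.
#resource "model.onnx" as uchar ExtModel[]   // model baked into the EA's .ex5 as a resource

long g_onnx = INVALID_HANDLE;

int OnInit()
  {
   // MT5 itself creates the inference session; no Python at run time
   g_onnx = OnnxCreateFromBuffer(ExtModel, ONNX_DEFAULT);
   if(g_onnx == INVALID_HANDLE)
      return(INIT_FAILED);
   return(INIT_SUCCEEDED);
  }

void OnTick()
  {
   float input[10];    // MQL5_Input_Data: fill with features (prices, indicators, ...)
   float output[1];    // the "signal": just a number
   if(!OnnxRun(g_onnx, ONNX_NO_CONVERSION, input, output))
      return;
   // MQL5_TS: the trading logic reacts to the signal, e.g. via OrderSend()
   if(output[0] > 0.5)
     {
      // build an MqlTradeRequest and call OrderSend() to open/hold a long
     }
  }

void OnDeinit(const int reason)
  {
   OnnxRelease(g_onnx);
  }
```

If the exported model has dynamic input dimensions, the shapes would additionally have to be pinned with OnnxSetInputShape()/OnnxSetOutputShape() before the first OnnxRun() call.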
 
Andrey Dik #:

It is not clear how the ONNX model is executed, whether it is executed directly by MT5 or whether Python needs to be involved for this.

Apparently, it is executed via onnxruntime from Microsoft, which is bundled into MT5. At some point it was briefly necessary to add some DLLs to the terminal's root folder to run a model.

GitHub - microsoft/onnxruntime: ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
ONNX Runtime is a cross-platform inference and training machine-learning accelerator. ONNX Runtime inference can enable faster customer experiences and lower costs, supporting models from deep learning frameworks such as PyTorch and TensorFlow/Keras as well as classical machine learning libraries such as scikit-learn, LightGBM, XGBoost, etc.
 
Maxim Dmitrievsky #:
MT5 does it. How it gets built into the body of the bot afterwards - I think Renat wrote about it, I don't remember. The output is just the bot's executable.

If an EA with ONNX is just an executable, how will the organisers know that ONNX is used at all?

Or, "don't worry, they will know!"?)))

 
Andrey Dik #:

If an EA with ONNX is just an executable, how will the organisers know that ONNX is being used at all?

Or, "don't worry, they will know!"))))

No, the model is added as a resource. You can also attach it as a separate file - everything is described in the manual.
 
Andrey Dik #:

If an EA with ONNX is just an executable, how will the organisers know that ONNX is being used at all?

Or, "don't worry, they will know!"?)))

🤷‍♂️
 
Aleksey Nikolayev #:
No, the model is added as a resource.

Well, that's what I'm saying: the model is embedded as a resource in the EA's executable, and the executable is sent to the organiser. Anything could be inside the executable, up to there being no ONNX model in it at all))))

 
Andrey Dik #:

Well, that's what I'm saying: the model is embedded as a resource in the EA's executable, and the executable is sent to the organiser. Anything could be inside the executable, up to there being no ONNX model in it at all)))

))

No, there are log messages when ONNX is launched. Besides, they will accept models, not executables, and then run the models in a template that is the same for everyone.

 
But my namesake, apparently, intends to cram some hand-made logic of his own into the model) Understandably so: 15 thousand USD isn't just found lying in the road).