Discussion of article "How to use ONNX models in MQL5"

 

New article How to use ONNX models in MQL5 has been published:

ONNX (Open Neural Network Exchange) is an open format built to represent machine learning models. In this article, we will consider how to create a CNN-LSTM model to forecast financial timeseries. We will also show how to use the created ONNX model in an MQL5 Expert Advisor.

There are two ways to create a model: you can use OnnxCreate to create a model from an .onnx file, or OnnxCreateFromBuffer to create it from a data array.
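A minimal MQL5 sketch of both paths (the file and resource names here are illustrative, not from the article):

```mql5
// Model embedded into the EA as a resource (name is illustrative)
#resource "model.onnx" as uchar ExtModel[]

long LoadModel()
  {
   // 1) from an .onnx file located in <terminal_data>\MQL5\Files
   long handle=OnnxCreate("model.onnx",ONNX_DEFAULT);
   if(handle==INVALID_HANDLE)
      // 2) from a byte array, e.g. the embedded resource above
      handle=OnnxCreateFromBuffer(ExtModel,ONNX_DEFAULT);
   return(handle);
  }
```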

If an ONNX model is used as a resource in an EA, you will need to recompile the EA every time you change the model.


Not all models have fully defined input and/or output tensor sizes. Normally it is the first dimension that is left undefined, as it is responsible for the batch size. Before running a model, you must explicitly specify the sizes using the OnnxSetInputShape and OnnxSetOutputShape functions. The model's input data should be prepared in the same way as it was done when training the model.
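For example, fixing the variable batch dimension before calling OnnxRun might look like this (the shapes are illustrative — 120 time steps, 1 feature, 1 predicted value — adjust them to your model):

```mql5
// Shapes are illustrative; match them to your trained model
const long ExtInputShape[] ={1,120,1};   // batch, timesteps, features
const long ExtOutputShape[]={1,1};       // batch, predicted value

bool PrepareModel(const long handle)
  {
   if(!OnnxSetInputShape(handle,0,ExtInputShape))
     { Print("OnnxSetInputShape error ",GetLastError()); return(false); }
   if(!OnnxSetOutputShape(handle,0,ExtOutputShape))
     { Print("OnnxSetOutputShape error ",GetLastError()); return(false); }
   return(true);
  }
```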

Author: MetaQuotes

MetaQuotes
  • 2023.04.04
  • www.mql5.com
 
To use this? I would rather use a simple engulfing pattern 😀 It's complicated for no good reason.
 
It's great. Thanks to this article I know how to get data from MT5 to Python, train a model in Python, and then bring the model back from Python to generate an EA in MT5. I don't know if this model is better than traditional models, but I'm sure it will make for different models. It is the future.
 
Thank you very much. Really appreciate it. I wish for many more articles about it in the near future.
 

there is a problem with onyx lib any idea ?

while installing onyx got an error

ERROR: Could not find a version that satisfies the requirement onyx (from versions: 0.0.5, 0.0.17, 0.0.19, 0.0.20, 0.0.21, 0.1, 0.1.1, 0.1.3, 0.1.4, 0.1.5, 0.2, 0.2.1, 0.3, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.3.12, 0.4, 0.4.1, 0.4.2, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.5, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.10, 0.7.11, 0.7.12, 0.7.13, 0.8.5, 0.8.7, 0.8.10, 0.8.11)

ERROR: No matching distribution found for onyx

then when running tf2onnx

 import onyx

ModuleNotFoundError: No module named 'onyx'



 
donbar upbar #:

there is a problem with onyx lib any idea ?

while installing onyx got an error

ERROR: Could not find a version that satisfies the requirement onyx (from versions: 0.0.5, 0.0.17, 0.0.19, 0.0.20, 0.0.21, 0.1, 0.1.1, 0.1.3, 0.1.4, 0.1.5, 0.2, 0.2.1, 0.3, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.3.12, 0.4, 0.4.1, 0.4.2, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.5, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.10, 0.7.11, 0.7.12, 0.7.13, 0.8.5, 0.8.7, 0.8.10, 0.8.11)

ERROR: No matching distribution found for onyx

then when running tf2onnx

 import onyx

ModuleNotFoundError: No module named 'onyx'



Hi Donbar, it looks like you're trying to install the wrong package. It should be onnx, not onyx.
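For reference, a hedged fix (assuming the standard PyPI packages used in the article's TensorFlow-to-ONNX workflow):

```
# The correct package name is "onnx"; "onyx" on PyPI is an unrelated project
pip uninstall -y onyx
pip install onnx tf2onnx onnxruntime
python -c "import onnx; print(onnx.__version__)"
```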
 

Hi, thanks for the article. A great walkthrough on how to build an ML model and incorporate it into an EA!

I have tried to reproduce your results, but am having some issues. I was hoping you might be able to help me understand why.

I followed the article carefully, but ended up with radically different results in the strategy tester. I understand there are some random characteristics to these algorithms, although I'm still surprised by the difference. I was also careful to utilise the same time periods so that at least my training and test data was the same for model building purposes, and my MT5 backtest was over the same period. I got very different outcomes.

I've tried to identify possible causes, and I think the most interesting difference starts during model building. My loss functions suggest that you achieved a far better generalization when looking at performance over the test/validation data. I've included them at the end of this message.

Can you suggest possible causes of this? Is the model just so fragile that this isn't unexpected?

My most recent effort to reproduce involved simply copy-pasting your final Python code, inserting some Matplotlib calls to produce the loss graphs, but I had basically the same results. Can you suggest how I might better reproduce your results?

Thanks

Files:
LOSS.png  99 kb
RMSE.png  120 kb
copypaste.py  5 kb
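One common cause of run-to-run divergence in Keras training is unseeded randomness in weight initialization and data shuffling. This is only an assumption about the cause here, but a minimal sketch of pinning the seeds looks like this (the TensorFlow calls are commented out so the snippet stands alone):

```python
# Sketch: pinning random seeds for repeatable training runs.
# Exact reproducibility also depends on the TF version, GPU nondeterminism,
# and the data window used for training.
import os
import random

import numpy as np

SEED = 42
os.environ["PYTHONHASHSEED"] = str(SEED)
random.seed(SEED)
np.random.seed(SEED)
# With TensorFlow installed you would additionally call:
# import tensorflow as tf
# tf.random.set_seed(SEED)
# tf.config.experimental.enable_op_determinism()  # TF >= 2.8
```

Even with all seeds pinned, GPU kernels can still introduce small nondeterminism, so some residual difference between runs is expected.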
 
Bill M #:

Hi, thanks for the article. A great walkthrough on how to build an ML model and incorporate it into an EA!

I have tried to reproduce your results, but am having some issues. I was hoping you might be able to help me understand why.

I followed the article carefully, but ended up with radically different results in the strategy tester. I understand there are some random characteristics to these algorithms, although I'm still surprised by the difference. I was also careful to utilise the same time periods so that at least my training and test data was the same for model building purposes, and my MT5 backtest was over the same period. I got very different outcomes.

I've tried to identify possible causes, and I think the most interesting difference starts during model building. My loss functions suggest that you achieved a far better generalization when looking at performance over the test/validation data. I've included them at the end of this message.

Can you suggest possible causes of this? Is the model just so fragile that this isn't unexpected?

My most recent effort to reproduce involved simply copy-pasting your final Python code, inserting some Matplotlib calls to produce the loss graphs, but I had basically the same results. Can you suggest how I might better reproduce your results?

Thanks

Facing the same issue here too.

Can someone help please?

 
Joseph #:

Facing the same issue here too.

Can someone help please?

Continuing my investigation of the issue I am facing (likely others are too); here are updates on my findings.

First of all, thank you very much MetaQuotes (the author) for sharing this detailed article. I am learning a great deal in my ML trading quest.

Running the original onnx files from the article on my MetaQuotes-Demo account, I managed to reproduce the same results. However, after retraining the onnx model with the attached ONNX.eurusd.H1.120.Training.py:

data start date = 2022-09-03 00:00:00
data end date = 2023-01-01 00:00:00

the model (onnx attached: ) scores:

RMSE         : 0.005212606864326095
MSE          : 2.7171270322019527e-05
R2 score     : -3.478924709873314

and the 1Jan2023-26Mar2023 backtest results attached: "backtest results.png"

 

I retrained the attached ONNX.eurusd.H1.120.Training.py with the following:

data start date = 2022-11-28 12:28:00
data end date = 2023-03-28 12:28:00

the model (onnx attached:) scores:

RMSE         : 0.0014680559413400179
MSE          : 2.155188246903726e-06
R2 score     : 0.9699715149559284

and the 1Jan2023-26Mar2023 backtest results attached: "bacttest result2.png"

So, from the above exercises, I guess the model used to produce the final result in the article was likely not trained with the following dates?

data start date = 2022-09-03 00:00:00
data end date = 2023-01-01 00:00:00
Would appreciate it if someone could comment on these.
 
Nice article. The Prediction Graph on Testing Data is disappointing, though. You might as well skip all that DNN modeling/training and simply use a prediction of the next price equal to the last known price. I bet the prediction accuracy of such a trivial model will be higher than that of your DNN model. I suggest comparing those two accuracies and showing them here. In general, using DNNs to predict prices is a bad idea; they are better suited for classifying price patterns (e.g., buy, sell, hold). Also, the number of weights in your DNN is astronomical. It must be overfitting.
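The suggested comparison can be sketched as follows, using a synthetic random-walk series in place of real quotes (all names and data here are illustrative):

```python
# Sketch: RMSE of the naive "next price = last known price" baseline,
# i.e. a random-walk forecast, on a synthetic price series.
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 0.001, 500)) + 1.10  # toy EURUSD-like series

y_true = prices[1:]      # actual next prices
y_naive = prices[:-1]    # naive prediction: last known price
rmse_naive = np.sqrt(np.mean((y_true - y_naive) ** 2))
print(f"naive RMSE: {rmse_naive:.6f}")
# Compare this against the DNN model's RMSE on the same series:
# if the DNN's RMSE is not clearly lower, the model adds no predictive value.
```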