There is a problem with the onnx lib, any ideas?
While installing it I got this error:
ERROR: Could not find a version that satisfies the requirement onyx (from versions: 0.0.5, 0.0.17, 0.0.19, 0.0.20, 0.0.21, 0.1, 0.1.1, 0.1.3, 0.1.4, 0.1.5, 0.2, 0.2.1, 0.3, 0.3.2, 0.3.3, 0.3.4, 0.3.5, 0.3.6, 0.3.7, 0.3.8, 0.3.9, 0.3.10, 0.3.11, 0.3.12, 0.4, 0.4.1, 0.4.2, 0.4.4, 0.4.5, 0.4.6, 0.4.7, 0.5, 0.6.1, 0.6.2, 0.6.3, 0.6.4, 0.7.3, 0.7.4, 0.7.5, 0.7.6, 0.7.7, 0.7.8, 0.7.10, 0.7.11, 0.7.12, 0.7.13, 0.8.5, 0.8.7, 0.8.10, 0.8.11)
ERROR: No matching distribution found for onyx
Then, when running t2fonnx, it fails on:
import onyx
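A likely cause, judging from the pip output above: the package name appears to be misspelled. The library used in the article is onnx (with tf2onnx for converting models), while onyx is a different, unrelated PyPI package. A minimal check, assuming that is indeed the issue:

# Assumed fix for the error above: the article's library is "onnx", not "onyx".
# Install it first, e.g.:  pip install onnx tf2onnx
import onnx

print(onnx.__version__)  # should print the installed onnx version

If t2fonnx refers to the article's tf2onnx conversion step, the same spelling fix applies to the import line it stops at.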
Hi, thanks for the article. A great walkthrough on how to build an ML model and incorporate it into an EA!
I have tried to reproduce your results, but am having some issues. I was hoping you might be able to help me understand why.
I followed the article carefully, but ended up with radically different results in the strategy tester. I understand there are some random characteristics to these algorithms, although I'm still surprised by the difference. I was also careful to utilise the same time periods so that at least my training and test data was the same for model building purposes, and my MT5 backtest was over the same period. I got very different outcomes.
I've tried to identify possible causes, and I think the most interesting difference starts during model building. My loss functions suggest that you achieved a far better generalization when looking at performance over the test/validation data. I've included them at the end of this message.
Can you suggest possible causes of this? Is the model just so fragile that this isn't unexpected?
My most recent effort to reproduce involved simply copy-pasting your final Python code, inserting some Matplotlib calls to produce the loss graphs, but I had basically the same results. Can you suggest how I might better reproduce your results?
Thanks
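For what it is worth, one way to narrow run-to-run variance before comparing results is to pin every random seed and then plot the loss curves. A minimal sketch, assuming a TensorFlow/Keras model like the article's; the data, model and seed value below are dummy placeholders, not the article's code:

# Minimal sketch: pin seeds, train a placeholder Keras model, and plot the
# train/validation loss curves the comment above refers to.
import random
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

SEED = 42  # placeholder seed
random.seed(SEED)
np.random.seed(SEED)
tf.random.set_seed(SEED)  # covers Keras weight initialisation and shuffling

X = np.random.rand(256, 8).astype(np.float32)  # dummy features
y = np.random.rand(256, 1).astype(np.float32)  # dummy targets
model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
history = model.fit(X, y, validation_split=0.2, epochs=20, verbose=0)

plt.plot(history.history["loss"], label="train")
plt.plot(history.history["val_loss"], label="validation")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()

Even with seeds pinned, GPU training can still differ slightly between machines, so identical curves are not guaranteed.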
Using ONNX models in MQL5 solves the implementation issue. However, not for all models, and it is not exactly easy.
Training and optimisation of models is handled as a separate process in Python.
But of all the stages above, the first one, data preprocessing, is the most time-consuming, the most creative and the most important. And it cannot be implemented in MQL5; we do not count primitive scaling as preprocessing. As the folk wisdom goes: "Garbage in, garbage out." Too much would have to be additionally developed and implemented in MQL5 to do ML entirely in MQL5. You cannot embrace the immense, especially since the field is constantly expanding.
Therefore, to run the preprocessing, either do it in another language (whoever has mastered R/Python/Julia, etc.) or try to convert it to ONNX.
So far, the benefit of implementing ONNX lies only in learning how to convert, create, simplify and optimise ONNX models. It may be useful in the future.
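As an illustration of the "convert it to ONNX" route: scikit-learn preprocessing steps can often be exported with skl2onnx, so the same transform the model was trained with runs alongside it in MQL5. A minimal sketch; the scaler, shapes and file name are assumptions, not from the article:

# Minimal sketch: export a fitted scikit-learn preprocessing step to ONNX
# with skl2onnx, so it can be run next to the model in MQL5.
import numpy as np
from sklearn.preprocessing import StandardScaler
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X = np.random.rand(100, 4).astype(np.float32)  # dummy feature matrix
scaler = StandardScaler().fit(X)

onnx_scaler = convert_sklearn(
    scaler,
    initial_types=[("input", FloatTensorType([None, 4]))],
)
with open("scaler.onnx", "wb") as f:
    f.write(onnx_scaler.SerializeToString())

Whether a given preprocessing step is convertible depends on skl2onnx operator support, so more exotic transforms may still need a hand-written equivalent.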
You couldn't have said it better; everything is precise and to the point.
Transferring the preprocessing to MT is not the problem; I really want to test the system in the MT tester, whereas in Python/R I would need to write my own tester, which is fraught with errors.
There are already plenty of testers in R and Python, and they are well tested.
But transferring the preprocessing is exactly the problem: the preprocessing here is not a trivial scaling or a stochastic calculation.
Facing the same issue here too.
Can someone help, please?
Continuing my investigation of the issue I am facing (likely others are too); here are updates on my findings.
First of all, thank you very much, MetaQuotes (the author), for sharing this detailed article. I am learning a great deal on my ML trading quest.
Running the original ONNX files from the article on my MetaQuotes-Demo account, I managed to reproduce the same results. However, after retraining the ONNX model with the attached ONNX.eurusd.H1.120.Training.py:
the model (onnx attached: ) scores:
and the 1Jan2023-26Mar2023 backtest results are attached: "backtest results.png"
I retrained the attached ONNX.eurusd.H1.120.Training.py with the following:
the model (onnx attached:) scores:
and the 1Jan2023-26Mar2023 backtest results are attached: "bacttest result2.png"
So, from the above exercises, I guess the model used to produce the final results in the article was likely not trained on these dates?
I would appreciate it if someone could comment on these findings.
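In case it helps anyone comparing retraining runs: the training window can be pinned explicitly when pulling the bars, so every run trains on exactly the same data. A minimal sketch with the MetaTrader5 Python package; the dates below are placeholders, not the article's actual split:

# Minimal sketch: fetch EURUSD H1 bars for an explicit date range with the
# MetaTrader5 package, so retraining runs are comparable like for like.
from datetime import datetime
import MetaTrader5 as mt5

if not mt5.initialize():
    raise RuntimeError(f"MT5 initialize() failed: {mt5.last_error()}")

rates = mt5.copy_rates_range(
    "EURUSD",
    mt5.TIMEFRAME_H1,
    datetime(2022, 1, 1),    # placeholder: start of training window
    datetime(2022, 12, 31),  # placeholder: end of training window
)
mt5.shutdown()
print(len(rates), "bars retrieved")

If both runs pull exactly the same bars and all seeds are pinned, any remaining differences should come from the training process itself or from the broker's history.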