Congratulations on such a great article after a long time!!!
How do I train and test other currency pairs?
The coding part is too complicated for me to edit or improve for testing purposes :)
Hi, I have not tried running a Python programme from MT5. Perhaps there are some peculiarities there.
Try running it from another Python editor. I use VSCode or Jupyter.
OK, I will try VSCode and see.
The EURUSD pair is working fine. A report for a 6-year backtest is attached.
But how can I tell whether it is curve fitting or not??? :))
So I want to train and test other currency pairs to confirm whether it works.
Well, this is a common problem for all trading systems.
You can try other pairs or even change the predictors.
This is the general approach described in the article.
Well, I am not an expert-level programmer like you who can edit it easily :)))
I am a basic-level programmer. I just installed VSCode and am trying to use it for the first time, to edit the code for the USDCAD currency pair and test it.
Can you please help with the errors? Screenshot attached.
Ahh, these are just pylint errors (pylint is a Python linter): it cannot find the definitions in the MT5 library distribution. You can change the language server in the Microsoft Python extension.
Go to Settings, type "jedi" in the search field and switch the language server, like here.
But these are not actually errors, just warnings; you can ignore them.
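For reference, the search-and-click above corresponds to one entry in VSCode's settings.json. The setting name below is from the Microsoft Python extension as I know it; check the exact name in your version:

```json
{
    // Use the Jedi language server instead of the default,
    // which silences the unresolved-import warnings for the MT5 module
    "python.languageServer": "Jedi"
}
```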
OK, thanks. The program seems to work :)))
Have I understood the approach correctly?
1) train the model on 1000 random examples
2) evaluate all remaining examples with the model
3) add the 1000 examples the model is least confident about to the first 1000 (in batches of 50, retraining after each addition)
4) train the model on the resulting 2000 examples, as in the previous article
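The steps above can be sketched roughly as follows. This is my reading of the procedure, not the article's actual code: the model, dataset, batch size and uncertainty measure are placeholders (here a toy logistic regression on synthetic data, with |p − 0.5| as confidence):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# toy labelled pool standing in for the price-feature dataset
X = rng.normal(size=(5000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=5000) > 0).astype(int)

# 1) train on 1000 random examples
idx = rng.permutation(len(X))
train_idx, pool_idx = list(idx[:1000]), list(idx[1000:])
model = LogisticRegression(max_iter=1000)
model.fit(X[train_idx], y[train_idx])

# 2)-3) repeatedly score the remaining pool and move the 50
#        least-confident examples into the training set,
#        retraining after each batch, until 1000 have been added
for _ in range(1000 // 50):
    proba = model.predict_proba(X[pool_idx])[:, 1]
    uncertainty = np.abs(proba - 0.5)        # 0 = least confident
    worst = np.argsort(uncertainty)[:50]     # 50 most "incomprehensible"
    chosen = [pool_idx[i] for i in worst]
    train_idx += chosen
    chosen_set = set(chosen)
    pool_idx = [i for i in pool_idx if i not in chosen_set]
    # 4) retrain on the grown training set
    model.fit(X[train_idx], y[train_idx])

print(len(train_idx))  # 2000
```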
Yes, but the rest of the examples are unlabelled
So the labelling applies only to the first 1000 and the added 1000?
The model is trained on a small labelled dataset, then labels a new large dataset; the points with the lowest confidence are selected from it, added, and the model is retrained. And so it goes round and round.
The sizes of the unlabelled and labelled datasets are not regulated in any way, nor is the choice of the right metrics. So it is an experimental approach - do as you wish ).
Actually, it is very similar to sampling examples from the estimated distribution, as in the GMM article, so I decided to check it out. But the first approach turned out to be more interesting.
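The "lowest confidence" selection mentioned above can be illustrated in a couple of lines (my sketch, not the article's code): the point picked is the one whose top class probability is smallest.

```python
import numpy as np

# per-example class probabilities from some already-trained model
proba = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain -> should be picked
    [0.80, 0.20],
])
# least confident = smallest maximum class probability
least_confident = int(np.argmin(proba.max(axis=1)))
print(least_confident)  # 1
```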