Machine learning in trading: theory, models, practice and algo-trading - page 2842
Maxim Vladimirovich, what do you think about quantum clustering?
https://github.com/enniogit/Quantum_K-means
The word "optimisation" has a bad reputation on our forum for obvious reasons. Therefore, it is quite understandable that we want to keep away from it somehow and not even use the word itself. Nevertheless, any training of an MO model is almost always an optimisation, so you can't take words out of a song.
I don't want to offend anyone, teach anyone about life or explain how to do business) I write only in the faint hope that MetaQuotes will take my remarks into account when implementing ML in MT5.
That is right for automatic control systems, but absolutely NOT right for models operating in financial markets with non-stationary processes. There is an evil, an absolute evil, called "overtraining" (overfitting). It is the main evil (after rubbish inputs) that makes absolutely any model inoperable. A good model should always be sub-optimal, a certain coarsening of reality. I think it is precisely the global optimum that makes a special contribution to overtraining the model.
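One common way to keep a model deliberately sub-optimal, sketched here under illustrative assumptions (synthetic data, a polynomial fit standing in for any model): stop increasing model complexity when error on a held-out validation set stops improving, even though the training error keeps falling towards the "global optimum".

# Illustrative sketch: choose a "coarser" model by validation error, not by the
# best possible fit on the training data.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 120)
y = np.sin(3 * x) + rng.normal(scale=0.3, size=x.size)   # noisy synthetic series

x_tr, y_tr = x[:80], y[:80]        # training set
x_va, y_va = x[80:], y[80:]        # held-out validation set

def mse(deg, x_eval, y_eval):
    coefs = np.polyfit(x_tr, y_tr, deg)          # fit on the training set only
    return np.mean((np.polyval(coefs, x_eval) - y_eval) ** 2)

for deg in range(1, 13):
    tr = mse(deg, x_tr, y_tr)      # keeps falling as the fit gets "better"
    va = mse(deg, x_va, y_va)      # starts rising once the model overtrains
    print(f"degree {deg:2d}  train MSE {tr:.3f}  validation MSE {va:.3f}")

# The degree with the lowest VALIDATION error is usually well below the one with the
# lowest TRAINING error -- the deliberately sub-optimal model argued for above.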
It seems the same concepts are being used in different contexts.
For example, a "plateau" is rather a wide range of settings for the external inputs feeding the model's logic: say, a wide range of moving-average periods over which the predictor built on that average remains effective.
Optimisation with ML algorithms, as discussed here, is about building the decision logic, while optimisation in the strategy tester is usually about tuning the inputs, with the decision logic already prescribed and at best parameterised.
The two types of optimisation are different: one changes the space, the other the relationships within it.
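A hedged illustration of the "plateau" idea (everything here is synthetic and illustrative): scan one input setting, a moving-average period, and prefer a value sitting inside a wide region of similar scores rather than an isolated sharp peak.

import numpy as np

rng = np.random.default_rng(2)
price = np.cumsum(rng.normal(size=2000))            # synthetic random-walk "price"

def score(period: int) -> float:
    """Toy score: sign of the MA slope as a next-bar direction 'predictor'."""
    ma = np.convolve(price, np.ones(period) / period, mode="valid")
    signal = np.sign(np.diff(ma))
    future = np.sign(np.diff(price[period - 1:]))
    hits = signal[:-1] == future[1:]                # does today's signal match tomorrow's move?
    return hits.mean()

periods = np.arange(5, 100, 5)
scores = np.array([score(p) for p in periods])

best = periods[scores.argmax()]                     # the sharp "global optimum"
# A plateau-style choice: the period whose neighbourhood of scores is highest on average.
window = np.convolve(scores, np.ones(3) / 3, mode="same")
robust = periods[window.argmax()]
print("peak period:", best, " plateau period:", robust)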
Now I am wondering what to tune first: the features/predictors, or a model, and only then look for optimal settings in the terminal optimiser. Although searching for settings is extremely difficult when there are many inputs.
Is it possible to change the space and the logic at once during training? Perhaps we should think about how to do that.
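One possible way to do both at once, sketched with scikit-learn purely as an assumption (the thread names no library): put the feature-selection step (the "space") and the classifier (the "logic") into one pipeline and cross-validate their settings jointly.

import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 20))                       # illustrative predictors
y = (X[:, 0] - X[:, 3] + rng.normal(scale=0.5, size=400) > 0).astype(int)

pipe = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),   # changes the feature space
    ("clf", LogisticRegression(max_iter=1000)),      # builds the decision logic
])
grid = GridSearchCV(
    pipe,
    {"select__k": [2, 5, 10, 20], "clf__C": [0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(X, y)                                       # tunes space and logic together
print(grid.best_params_, round(grid.best_score_, 3))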
SanSanych Fomenko, should we expect the sample?
I see. You have a superficial acquaintance with machine learning models.
The first element of the chain is preprocessing, which takes 50% to 70% of the labour. This is where future success is determined.
The second element of the chain is training the model on the training set.
The third element of the chain is running the trained model on the test set. If the model's performance on these two sets differs by a third or more, the model is overtrained. That happens every other time, if not more often. An overtrained model is a model that is too accurate. Sorry, these are the basics.
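A sketch of that check under illustrative assumptions (synthetic data, a decision tree that is easy to overtrain, accuracy as the metric): compare performance on the training and test sets and treat a relative gap of more than about a third as a sign of overtraining.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))                        # illustrative predictors
y = (X[:, 0] + rng.normal(scale=1.0, size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = DecisionTreeClassifier(random_state=0)        # deliberately easy to overtrain
model.fit(X_tr, y_tr)

acc_tr = model.score(X_tr, y_tr)
acc_te = model.score(X_te, y_te)
gap = (acc_tr - acc_te) / acc_tr                      # relative drop on the test set

print(f"train {acc_tr:.2f}  test {acc_te:.2f}  gap {gap:.0%}")
if gap > 1 / 3:
    print("performance differs by more than a third -> the model is overtrained")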
SanSanych Fomenko, should we expect the sample?
What is this about?