Machine learning in trading: theory, models, practice and algo-trading - page 2842

 

Maxim Vladimirovich, what do you think about quantum clustering?

https://github.com/enniogit/Quantum_K-means

 
Aleksey Nikolayev #:

The word "optimisation" has a bad reputation on our forum, for obvious reasons. So it is quite understandable that people want to keep away from it and avoid even using the word itself. Nevertheless, any training of an ML model is almost always an optimisation, so there is no throwing the word out of the song.

I don't want to hurt anyone, lecture anyone about life or explain how to do business) I write only in the faint hope that MetaQuotes will take my remarks into account when implementing ML in MT5.


You took it right off my tongue... There really is a negative attitude towards the concept of "optimisation".
I would only add that one should always remember that the model (the TS) is primary and optimisation is secondary: if the model does not work, optimisation will not add robustness.
In reality, there are TSs that work over a wide range of parameters. But even in such systems there are still optimal parameters that will ultimately give a better trading result. In other words, optimisation cannot, by definition, worsen the model.
When building a trading system, the model comes first, its evaluation criteria second, and only then the optimisation. The opposite order would be fundamentally wrong.
If someone says that optimisation is evil, it means they have got the sequence wrong.
Only by understanding the above can one come to see that, like it or not, ML is impossible without optimisation.
The tester and optimiser as a bundle have earned their notoriety precisely because users throw together some junk and think that after optimisation this junk will become profitable. No, it won't; that is exactly why it is junk. This is encouraged by the ease of creating Expert Advisors in MetaEditor and by the ready-made templates included in the delivery, while there are practically no tools for evaluating a strategy or for building clusters of working parameter sets. The good thing is that ML fills this gap in full.
To summarise the above: optimisation algorithms will make the successful even more successful (this applies to any sphere of human activity) and, unfortunately, the unsuccessful even more unsuccessful. The reason is simple: wrongly set priorities.
Even Formula 1 cars are carefully optimised. Why, if these cars are already good as they are?))) The answer is simple: they are tuned to the criteria of the driver. Although the general characteristics of the cars are the same, tuning lets one driver set the car up, say, for the acceleration curve, while another driver will prefer a higher top speed on the straight. None of the drivers thinks "optimisation is rubbish! What the hell do I need it for, I'll drive with the defaults!" - otherwise you lose, and you get hungry kids, an angry wife and all the other delights of failure.

Hence the sequence necessary for success: car (TS) - tuning criteria (evaluation criteria of the TS) - optimisation.
No other correct sequence is possible in principle.
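That sequence can be sketched in code. A minimal toy example (editorial, not from the thread; all data and names are made up): the model is a hypothetical MA-crossover TS, the criterion is plain total PnL, and the optimisation is a grid search over the two periods.

```python
import random

random.seed(1)
# Synthetic price series: a random walk (made-up data).
prices = [100.0]
for _ in range(500):
    prices.append(prices[-1] + random.gauss(0, 1))

def sma(series, period, i):
    """Simple moving average of `series` ending at index i."""
    return sum(series[i - period + 1:i + 1]) / period

def model_pnl(fast, slow):
    """1) The model: a toy MA-crossover TS; returns its total PnL."""
    pnl = 0.0
    for i in range(slow, len(prices) - 1):
        signal = 1 if sma(prices, fast, i) > sma(prices, slow, i) else -1
        pnl += signal * (prices[i + 1] - prices[i])
    return pnl

def criterion(fast, slow):
    """2) The evaluation criterion: here simply total PnL."""
    return model_pnl(fast, slow)

# 3) The optimisation: a plain grid search over the parameter space.
best = max(((f, s) for f in range(2, 10) for s in range(10, 40, 5)),
           key=lambda p: criterion(*p))
print("best (fast, slow):", best, "criterion:", round(criterion(*best), 2))
```

Swapping step 2 for a different criterion (drawdown, Sharpe-like ratios, etc.) changes what "optimal" means without touching steps 1 or 3 - which is exactly why the order matters.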
 
I would also like to add that optimisation algorithms are first of all search algorithms; they are not only for fitting MA parameters, as many people think.
You can do much more complex and non-trivial things with them.
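As an illustration (a made-up editorial sketch, not from the thread), the same generic search machinery can drive feature selection rather than indicator tuning - the search space is subsets of features instead of numeric parameters:

```python
import random

random.seed(7)
n_features = 8
# Pretend features 1, 4 and 6 are informative; the rest are noise.
informative = {1, 4, 6}

def fitness(subset):
    """Toy criterion: reward informative features, penalise noise ones."""
    hits = len(subset & informative)
    noise = len(subset - informative)
    return hits - 0.5 * noise

def random_search(iterations=200):
    """Generic random search over subsets of the feature set."""
    best_subset, best_score = set(), float("-inf")
    for _ in range(iterations):
        subset = {f for f in range(n_features) if random.random() < 0.5}
        score = fitness(subset)
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score

subset, score = random_search()
print("selected features:", sorted(subset), "score:", score)
```

The search algorithm never "knows" it is selecting features; only the encoding of the search space and the criterion changed.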
 
Andrey Dik #:

In other words, optimisation cannot, by definition, worsen the model.

This is correct for automatic control systems, but absolutely NOT correct for models operating in financial markets with non-stationary processes. There is such an evil, an absolute evil, called "overtraining" (overfitting). It is the main evil (after rubbish on the input) that renders absolutely any model inoperable. A good model should always be sub-optimal, some coarsening of reality. I think it is precisely the global optimum that makes a special contribution to model overtraining.
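The point can be illustrated with a synthetic toy (an editorial sketch, not the poster's code): a 1-nearest-neighbour model reaches the global optimum on the training set - zero error - by memorising it, yet loses out of sample to a deliberately coarser linear fit.

```python
import random

random.seed(42)

def make_set(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2 * x + random.gauss(0, 1) for x in xs]  # true line + noise
    return xs, ys

train_x, train_y = make_set(200)
test_x, test_y = make_set(200)

def knn_predict(x):
    """Memorising model: return the y of the nearest training point."""
    i = min(range(len(train_x)), key=lambda j: abs(train_x[j] - x))
    return train_y[i]

# Coarse model: ordinary least-squares line (closed form).
mx = sum(train_x) / len(train_x)
my = sum(train_y) / len(train_y)
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx
line = lambda x: slope * x + intercept

def mse(pred, xs, ys):
    return sum((pred(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

print("1-NN train/test MSE:", mse(knn_predict, train_x, train_y),
      mse(knn_predict, test_x, test_y))
print("line train/test MSE:", mse(line, train_x, train_y),
      mse(line, test_x, test_y))
```

The memoriser's training error is exactly zero (the global optimum of the training criterion), while the "sub-optimal" line, which coarsens reality to a slope and an intercept, generalises better.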

 
Another important point.
In the chain
model - criterion - optimisation,
the fewer parameters the model has, the better: every extra parameter adds degrees of freedom, and that is bad. Increasing the number of criteria, or of the criterion's parameters, on the contrary reduces the model's degrees of freedom, acting as a kind of "boundary".
As for the number of parameters of the optimisation algorithm (AO) itself, I count a large number of tuning options among the disadvantages, as it complicates practical application. Although, in the skilful hands of a researcher who understands what he is doing, it allows additional gains in the quality and speed of optimisation, indirectly reducing the variability of the model even further when the model has too many parameters. This is often the case with neural networks.
 
SanSanych Fomenko #:

This is correct for automatic control systems, but absolutely NOT correct for models operating in financial markets with non-stationary processes. There is such an evil, an absolute evil, called "overtraining" (overfitting). It is the main evil (after rubbish on the input) that renders absolutely any model inoperable. A good model should always be sub-optimal, some coarsening of reality. I think it is precisely the global optimum that makes a special contribution to model overtraining.


Overtraining is not a consequence of the misuse of optimisation but of a wrong choice of the model evaluation criterion. The mistake was made BEFORE the optimisation. And it is quite possible that the problem is already in the first element of the chain - the model itself is rubbish.
Saying that a model should be slightly undertrained is as wrong as wanting a slightly undertrained sapper or surgeon. One should blame the sapper, the surgeon or their teachers, not the very possibility of learning (improving, optimising).
Blaming non-stationarity, and dragging optimisation into it, is also wrong. It simply means the researcher does not have a good model for a non-stationary series.
 
I apologise if I offended anyone by plunging the reader into harsh reality.
 

It seems that the same concepts are being used in different contexts.

For example, a "plateau" is rather a wide range of settings of the method used to obtain the external factors feeding the model's logic - for example, a wide range of effective periods of the moving average (mashka) on which a predictor is built.

The optimisation by ML algorithms discussed here is concerned with building the decision logic, while optimisation in the strategy tester usually tunes the input data, with the decision logic already prescribed and, at best, having some variability.

The two types of optimisation are different: one changes the space, the other the relationships in it.
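A hypothetical sketch of that distinction (all names and data invented for illustration): the MA period reshapes the space the model sees, while training picks the decision rule inside a fixed space.

```python
def build_features(prices, ma_period):
    """Terminal-style optimisation target: the MA period reshapes the
    feature space itself (here, distance of price from its MA)."""
    out = []
    for i in range(ma_period - 1, len(prices)):
        ma = sum(prices[i - ma_period + 1:i + 1]) / ma_period
        out.append(prices[i] - ma)
    return out

def train_threshold(features, labels):
    """ML-style optimisation target: pick the decision rule (here a
    single threshold) inside a *fixed* feature space."""
    def accuracy(t):
        return sum((f > t) == bool(l)
                   for f, l in zip(features, labels)) / len(labels)
    return max(sorted(set(features)), key=accuracy)

prices = [1, 2, 3, 2, 1, 2, 3, 4, 5, 4, 3, 4, 5, 6]
feats = build_features(prices, ma_period=3)   # changes the space
labels = [1 if f > 0 else 0 for f in feats]   # toy labels
print("threshold:", train_threshold(feats, labels))  # changes the rule
```

Sweeping `ma_period` in the outer loop and re-running `train_threshold` inside it is one (crude) way to "change space and logic at once" during training.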

Now I wonder what to tune first: the features/predictors, or first find a model and then look for optimal settings in the terminal optimiser. Then again, searching for settings is extremely difficult when there are a lot of inputs.

Is it possible to change the space and the logic at once during training? Maybe we should think about how to do that.

SanSanych Fomenko, should we expect the sample?

 
Andrey Dik #:

Overtraining is not a consequence of the misuse of optimisation but of a wrong choice of the model evaluation criterion. The mistake was made BEFORE the optimisation. And it is quite possible that the problem is already in the first element of the chain - the model itself is rubbish.
Saying that a model should be slightly undertrained is as wrong as wanting a slightly undertrained sapper or surgeon. One should blame the sapper, the surgeon or their teachers, not the very possibility of learning (improving, optimising).
Blaming non-stationarity, and dragging optimisation into it, is also wrong. It simply means the researcher does not have a good model for a non-stationary series.

I see - you have only a superficial acquaintance with machine learning models.

The first element of the chain is preprocessing, which takes 50-70% of the labour. This is where future success is determined.

The second element of the chain is training the model on the training set.

The third element of the chain is running the trained model on the test set. If the model's performance on these two sets differs by a third or more, the model is overtrained. And this happens all the time, if not more often. An overtrained model is a model that is too accurate. Sorry, but these are the basics.
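That check can be written down in a few lines (an illustrative sketch; the one-third threshold is the figure from the post above, the function name and metric are invented):

```python
def is_overtrained(train_score, test_score, max_gap=1 / 3):
    """True if the test score degrades from the train score by at least
    `max_gap` (relative), e.g. accuracy 0.90 on train vs 0.55 on test."""
    if train_score <= 0:
        raise ValueError("expected a positive train score")
    return (train_score - test_score) / train_score >= max_gap

print(is_overtrained(0.90, 0.55))  # large gap -> True
print(is_overtrained(0.90, 0.80))  # small gap -> False
```

The same comparison works with any score where higher is better; for loss-style metrics the inequality would have to be flipped.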

 
Aleksey Vyazmikin #:


SanSanych Fomenko, should we expect the sample?

What is this about?
