Machine learning in trading: theory, models, practice and algo-trading - page 2591

 
Maxim Dmitrievsky #:
Yes, it is possible to test and optimize it like an ordinary bot in MT5, trying the parameters externally. Testing on bars is quick, but ticks may be slow, because evaluating the trees themselves takes a long time.

I do not want to optimize anything further after adding ML. The wind of overfitting blows from that direction). Although if the speed is decent, it is certainly worth at least a try. If it is integrated and I can test at decent speed in the tester, more or less natively, it definitely opens new horizons.


And in general, better speed (relative to my solution; I expect the difference will be considerable) is always good, both when many robots are running and when timeframes are small and speed is more critical.
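To gauge the bar-vs-tick speed gap, here is a minimal timing sketch, assuming a hypothetical scikit-learn boosted-tree model (all sizes and features are made up): a tick-driven tester calls the model one row at a time, which is far slower than one vectorized call over a history of bars.

```python
# Timing sketch: per-tick (one row at a time) vs batched (per-bar)
# inference for a tree ensemble. Model, sizes and features are
# hypothetical; assumes scikit-learn is installed.
import time
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))          # 20 synthetic predictors
y = (X[:, 0] + rng.normal(size=10_000) > 0).astype(int)

model = GradientBoostingClassifier(n_estimators=200).fit(X, y)

X_test = rng.normal(size=(2_000, 20))

t0 = time.perf_counter()
model.predict(X_test)                      # one vectorized call ("bars")
batch = time.perf_counter() - t0

t0 = time.perf_counter()
for row in X_test:                         # one call per row ("ticks")
    model.predict(row.reshape(1, -1))
per_tick = time.perf_counter() - t0

print(f"batch: {batch:.3f}s  per-row: {per_tick:.3f}s  "
      f"ratio: {per_tick / batch:.0f}x")
```

The per-row loop pays the full call overhead on every tick, which is the "slows for ticks" effect described above.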

 
Aleksey Nikolayev #:

In the parameter space of the model? Its dimensionality is enormous. This is possible only for very simple models with a small number of predictors.

It is not very clear how you can build a surface in a space of such huge dimensionality. We simply have far too few points relative to that dimensionality. Unless via some dimensionality-reduction visualization method like PCA, etc., but even then the point is unclear.

Everything is jumbled together.

What model parameters are we discussing?

If a model is something from ML, that's one thing; if a model is an EA in the tester, that's another thing entirely.

Models optimized in the tester are usually about nothing. For example, we take a moving average (MA) and start picking its period, getting some set of results. If there are many such MAs, each with its own parameters, the resulting surfaces are NOT smooth, and we end up picking random peaks, which may coincide in the future only by chance. Why? To me the answer is obvious: the parameters of these MAs have NO relation to the performance of the model, they are just noise.


It is another matter if the model parameters in ML are a set of predictors; then the question can be posed meaningfully: is a predictor related to the RESULT of the simulation or not, and if so, how. The situation is similar when we choose between models: RF, neural networks or something else...
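As an illustration of the point above, a minimal sketch, assuming synthetic data (a pure random walk and a toy long/short MA rule; all sizes are hypothetical): whatever period "optimizes" the result is a random peak by construction, since the series has no structure to exploit.

```python
# Sketch: "optimize" an MA period on a pure random walk and watch
# the result-vs-period curve come out jagged. Any peak is noise by
# construction.
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(scale=0.001, size=20_000)   # synthetic log-returns
price = np.cumsum(returns)                       # random-walk "price"

def ma_profit(period: int) -> float:
    """Total return of a toy rule: long when price > MA, else short."""
    ma = np.convolve(price, np.ones(period) / period, mode="valid")
    signal = np.sign(price[period - 1:-1] - ma[:-1])  # decided on prior bar
    return float(np.sum(signal * returns[period:]))

profits = {p: ma_profit(p) for p in range(5, 200)}
best = max(profits, key=profits.get)
print(f"'best' period {best}: profit {profits[best]:.4f}  (pure noise)")
```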

 
SanSanych Fomenko #:

Everything is jumbled together.

What model parameters are we discussing?

If a model is something from ML, that's one thing; if a model is an EA in the tester, that's another thing entirely.

Models optimized in the tester are usually about nothing. For example, we take a moving average (MA) and start picking its period, getting some set of results. If there are many such MAs, each with its own parameters, the resulting surfaces are NOT smooth, and we end up picking random peaks, which may coincide in the future only by chance. Why? To me the answer is obvious: the parameters of these MAs have NO relation to the performance of the model, they are just noise.


It is another matter if the model parameters in ML are a set of predictors; then the question can be posed meaningfully: is a predictor related to the RESULT of the simulation or not, and if so, how. The situation is similar when we choose between models: RF, neural networks or something else...

Indeed, everything is jumbled together. Parameters are parameters, predictors are predictors. In your example with the MAs: the parameters are their periods, and the predictors are the values of those MAs. It is not difficult to build the required surface for one or two MAs, but for hundreds of MAs the exercise loses all meaning because of the growth in dimensionality of the predictor and parameter spaces.

I do not see a fundamental difference between models in the tester and in ML packages; the difference is only technical (the capabilities of the software used).
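To make the parameter/predictor distinction concrete, a minimal sketch on synthetic data (periods and sizes are hypothetical; assumes pandas and scikit-learn): the periods are the parameters, the MA values at each bar are the predictors, and a hundreds-of-MAs predictor space can only be eyeballed through a projection such as the PCA mentioned earlier in the thread.

```python
# Sketch: parameters vs predictors. MA periods are parameters; the MA
# values computed at each bar are the predictors fed to a model. With
# hundreds of MAs, the predictor space is only viewable via projection.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
price = pd.Series(np.cumsum(rng.normal(size=5_000)))

periods = range(2, 202)                       # 200 parameters (periods)
X = pd.DataFrame({f"ma_{p}": price.rolling(p).mean() for p in periods})
X = X.dropna()                                # predictors: one column per MA

proj = PCA(n_components=2).fit(X)             # 200-D -> 2-D for plotting
print("explained variance of 2 components:",
      proj.explained_variance_ratio_.round(3))
```

Since MAs with neighboring periods are almost collinear, the first couple of components typically capture nearly all the variance, which is itself a hint that hundreds of MAs carry little independent information.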

 
Aleksey Nikolayev #:

Indeed, everything is jumbled together. Parameters are parameters, predictors are predictors. In your example with the MAs: the parameters are their periods, and the predictors are the values of those MAs. It is not difficult to build the required surface for one or two MAs, but for hundreds of MAs the exercise loses all meaning because of the growth in dimensionality of the predictor and parameter spaces.

I do not see a fundamental difference between models in the tester and in ML packages; the difference is only technical (the capabilities of the software used).

I don't like to butt in, but just a remark about hundreds of MAs or the like: there is a limit on their reasonable number, and it is no more than 1.386*ln(N) (where N is the length of the whole observed history).
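For a sense of scale, a quick evaluation of that bound at a few history lengths (the constant 1.386 is taken from the remark above as-is; it is approximately ln 4):

```python
# Quick arithmetic for the quoted bound 1.386*ln(N) on the number of MAs.
# The constant 1.386 comes from the remark above as-is (~ln 4).
import math

for n in (10_000, 100_000, 1_000_000):
    print(f"N = {n:>9,}: limit ~ {1.386 * math.log(n):.1f} MAs")
```

Even a million bars of history justifies only about 19 MAs by this bound.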

 
Optimization surface analysis is also a double-edged sword. Reaching a plateau guarantees nothing, although it gives a temporary thrill until the moment you realize it is time to go work at the factory. Moreover, optimization/learning algorithms are to some extent tuned to escape local extrema, i.e. tuned to search for global ones.
 
Maxim Dmitrievsky #:
Optimization surface analysis is also a double-edged sword. Reaching a plateau guarantees nothing, although it gives a temporary thrill until the moment you realize it is time to go work at the factory.
Godlike criticism
 
mytarmailS #:
Godlike criticism
I tried :)
 
Maxim Dmitrievsky #:
I tried :)
I wish I understood the difference between a "plateau" and a global minimum
 
mytarmailS #:
I wish I understood the difference between a "plateau" and a global minimum
Depends on what you are searching for. What is meant is a plateau at the global extremum, as an ideal and a dream.
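One way to make the plateau-vs-spike distinction operational is sketched below (the profit curve is synthetic and hypothetical): instead of taking the raw argmax of the optimization results, score each parameter by its neighborhood average, so a broad plateau beats a taller but isolated peak.

```python
# Sketch: prefer a plateau to an isolated spike. Score each parameter
# by the mean of its neighborhood; a broad flat region then wins over
# a sharp lone peak. The profit curve is synthetic.
import numpy as np

params = np.arange(100)
profit = np.exp(-((params - 70) ** 2) / 200.0)   # broad plateau near 70
profit[20] = 1.5                                  # taller isolated spike

k = 5                                             # neighborhood half-width
smoothed = np.convolve(profit, np.ones(2 * k + 1) / (2 * k + 1), mode="same")

print("raw argmax:    ", params[np.argmax(profit)])    # 20 (the spike)
print("plateau argmax:", params[np.argmax(smoothed)])  # ~70 (the plateau)
```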
 
No one disputes that robustness is a good thing. The problem is that there are no simple and foolproof ways to achieve it.