Discussing the article: "Fast trading strategy tester in Python using Numba" - page 4

 
fxsaber #:

How is this parameter calculated?

This question seems to have been missed. How is volatility calculated? Max-min per "hour"? Normalisation?

 
The interval on which the model is trained cannot be of fixed length; it always floats within a certain range. It depends on the behaviour of the financial instrument, and it is necessary to track the moment when this behaviour changes. That is, we again come to the necessity of building a fault (change-point) indicator.

There are hundreds of machine learning models, but few people pay attention to the interval on which these models should be optimised. Obviously the question is a hard one: it is easier to build a mathematical model, even a very complex one, than to answer it. Imho
 
fxsaber #:

This question seems to have been missed. How is volatility calculated? Max-min per "hour"? Normalisation?

Std in a sliding window with various periods; the default period is 20. I'm on my phone, so I may be missing something, apologies.
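For reference, a minimal sketch of such a volatility estimate: the standard deviation in a sliding window with a default period of 20. The function name, the `ddof` choice and the synthetic data are my assumptions, not code from the article.

```python
import numpy as np

def rolling_std(x, period=20):
    """Volatility as the sample standard deviation in a sliding window.

    The first `period - 1` values are NaN because the window
    is not yet full there.
    """
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    for i in range(period - 1, len(x)):
        out[i] = x[i - period + 1 : i + 1].std(ddof=1)
    return out

# Synthetic returns just to show the call
rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, 500)
vol = rolling_std(returns, period=20)
```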
 
fxsaber #:
There have been so many discussions about "reasonableness". It wouldn't even occur to me to ask.
Sometimes it picks up the right words, as a consolation :)
 
Evgeniy Chernish #:
There are hundreds of machine learning models, but few people pay attention to the interval on which these models should be optimised. Obviously the question is a hard one: it is easier to build a mathematical model, even a very complex one, than to answer it. Imho
It's complex and unclear, so we have to invent :)
 
Evgeniy Chernish #:
we come to the necessity of building a fault indicator.
What is this beast?
 
fxsaber #:
What is this beast?
An indicator that tracks the probabilistic shape of the distribution of a series and signals a change in that shape. The Smirnov indicator, for example, is one attempt at this.
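One possible sketch of such a "Smirnov indicator", built on the two-sample Kolmogorov-Smirnov statistic: compare the recent window of a series with the preceding one and flag a large distance between their empirical distributions. The window size, threshold and function names here are my assumptions, not the poster's actual indicator.

```python
import numpy as np

def ks_stat(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.abs(cdf_a - cdf_b).max())

def smirnov_indicator(x, window=100, threshold=0.3):
    """Compare the last `window` observations with the preceding
    `window`; a large KS distance suggests the distribution's
    shape has changed."""
    recent = x[-window:]
    past = x[-2 * window : -window]
    d = ks_stat(past, recent)
    return d, d > threshold
```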
 
Filters are more often built on the moments of the distribution; for some reason the skewness filter works best. The std filter is also good.
 
Maxim Dmitrievsky #:
Filters are more often built on the moments of the distribution; for some reason the skewness filter works best. Std is also good.
Yes, it's even easier that way. It is not convenient to work with distributions.
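A rolling skewness filter of the kind discussed above might look like this. This is only a sketch: the window size and the plain population-`std` normalisation are my assumptions.

```python
import numpy as np

def rolling_skew(x, window=50):
    """Sliding-window sample skewness (third standardised moment).

    The first `window - 1` values are NaN because the window
    is not yet full there.
    """
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, np.nan)
    for i in range(window - 1, len(x)):
        w = x[i - window + 1 : i + 1]
        m, s = w.mean(), w.std()
        out[i] = ((w - m) ** 3).mean() / s**3 if s > 0 else 0.0
    return out
```

A filter would then, for example, only allow trading while the rolling skewness stays inside some band around zero.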
 
Maxim Dmitrievsky #:

If there are any well-versed statisticians here, the question is which is better:

  • Optimising a TS with n parameters on the chart
  • Building a deliberately overfitted base model (a kind of generalised base of trades), then searching for the intervals where it is robust
  • Both options are curve fitting

Imho, it would be worth rephrasing this opposition in ML terms. These are two models lying far apart on the bias-variance trade-off curve. The TS, with its small fixed number of parameters, leans towards higher bias (the standard ML example is linear regression), while the complex model, on the contrary, leans towards higher variance.

Obviously, if the simpler model captures the actual pattern, it is better. If neither model captures it, the simpler one is again better: its fallacy is easier to spot, while the complex model hides its errors behind better adaptation to noise) Unsurprisingly, complication only makes sense when it pays off. That is the obvious theoretical answer.
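The bias-variance point can be illustrated numerically: a flexible model always fits the training sample at least as well as a nested simple one, which is exactly why its errors are harder to see in-sample. A sketch under my own assumptions (the degrees, noise level and data are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

# The true relationship is linear; the observations are noisy.
x = np.linspace(-1.0, 1.0, 30)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)

def fit_mse(deg):
    """Least-squares polynomial fit of the given degree;
    returns the mean squared error on the training points."""
    coefs = np.polyfit(x, y, deg)
    resid = y - np.polyval(coefs, x)
    return float(np.mean(resid**2))

simple_mse = fit_mse(1)    # high-bias end of the trade-off
complex_mse = fit_mse(15)  # high-variance end

# The flexible model fits the training sample at least as well,
# which by itself says nothing about out-of-sample quality.
```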

A bit more practically, the second point essentially means stacking models (at least two): one model detects the breakdown (looks for discrepancies) and the other makes the trading decisions. There can also be a third model that switches the trading model on and off, and so on. Stacking, as is well known, has a reputation for "black magic" in ML) As a rule, it is used by winners of all kinds of competitions, but there is no theory or recipe for it. If you are lucky enough to find a working stack, good for you). Imho, stacking simpler models generally looks more logical than trying to cram everything into one more complex model.
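A toy sketch of such a two-model stack, with a regime model gating a trading model. All rules, thresholds and names here are invented purely for illustration, not taken from anyone's actual system.

```python
import numpy as np

def regime_filter(returns, window=100, max_vol=0.02):
    """First model: decides whether the market is tradable at all
    (here crudely, by capping recent volatility)."""
    return returns[-window:].std() < max_vol

def trading_signal(returns, fast=10, slow=50):
    """Second model: a toy momentum rule (fast mean vs slow mean)."""
    return 1 if returns[-fast:].mean() > returns[-slow:].mean() else -1

def stacked_decision(returns):
    """Stack the two: trade only when the regime model allows it."""
    if not regime_filter(returns):
        return 0  # stay flat: the regime looks unfavourable
    return trading_signal(returns)
```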

Yes, the decomposition problem needs to be solved, since our series are non-stationary. But I wouldn't over-emphasise it, because it will be solved anyway, either explicitly or implicitly)