Machine learning in trading: theory, models, practice and algo-trading - page 3718

 
Aleksey Nikolayev #:

GARCH? So it is a stationary model (with appropriate conditions on the coefficients). Only the conditional variance changes there, while the unconditional variance remains constant.

In fact, it is an attempt to drive the observed non-stationarity (in the sense of volatility fluctuations) into the framework of a stationary model.

Yes, I was referring to GARCH. In practical use, one tries to isolate something stationary with it. But from probabilistic ML you also hope to get a "separation of random fluctuations into stationary and non-stationary components".
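
A minimal sketch of that distinction (Python, assuming the third-party `arch` package; the returns are synthetic placeholders): in a fitted GARCH(1,1) the conditional variance wanders from bar to bar, while the implied unconditional variance omega / (1 - alpha - beta) is a single constant whenever alpha + beta < 1.

```python
# Sketch: conditional vs. unconditional variance in GARCH(1,1).
# Assumes the third-party `arch` package; returns are synthetic placeholders.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = rng.standard_t(df=5, size=2000)  # placeholder "percent" returns

res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
omega = res.params["omega"]
alpha = res.params["alpha[1]"]
beta = res.params["beta[1]"]

# The conditional variance changes over time...
print("conditional variance, first 5 bars:", res.conditional_volatility[:5] ** 2)
# ...while the unconditional variance is one constant (stationarity needs alpha + beta < 1)
print("unconditional variance:", omega / (1.0 - alpha - beta))
```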

 

sibirqk #:
from probabilistic ML you hope to get - "a separation of random fluctuations into stationary and non-stationary components"

I was too lazy to specify what was meant there, so I'll try to do it here)

When using a probabilistic ML algorithm at the inference stage, we should be able to check how well the real sample fits the predicted distribution. And when comparing the algorithm on samples from different time intervals, we should be able to check whether they fit equally well (or whether something changes over time - non-stationarity). Imho, probabilistic ML could potentially provide more information here.
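
A hedged sketch of that check (Python; `predictive_cdf` is a hypothetical stand-in for whatever predictive distribution the model actually outputs): the probability integral transform (PIT) maps observations through the predicted CDF, so a correct predictive distribution yields Uniform(0,1) values. Testing each interval's PIT sample for uniformity, and the two samples against each other, gives exactly the two checks described above.

```python
# Sketch: PIT-based check of how the real sample fits the predicted distribution,
# and whether the fit is the same across two time intervals.
import numpy as np
from scipy import stats

def predictive_cdf(x):
    # Hypothetical stand-in for the model's predictive distribution;
    # here simply a standard normal CDF.
    return stats.norm.cdf(x)

rng = np.random.default_rng(1)
early = rng.normal(0.0, 1.0, size=500)  # sample from an early interval
late = rng.normal(0.0, 1.3, size=500)   # later interval: variance has drifted

pit_early = predictive_cdf(early)
pit_late = predictive_cdf(late)

# Goodness of fit per interval: PIT should be Uniform(0,1) if the model is right.
print(stats.kstest(pit_early, "uniform"))
print(stats.kstest(pit_late, "uniform"))
# Do the two intervals fit equally well? A shift suggests non-stationarity.
print(stats.ks_2samp(pit_early, pit_late))
```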

 
Aleksey Nikolayev #:

I was too lazy to specify what was meant there, so I'll try to do it here)

When using a probabilistic ML algorithm at the inference stage, we should be able to check how well the real sample fits the predicted distribution. And when comparing the algorithm on samples from different time intervals, we should be able to check whether they fit equally well (or whether something changes over time - non-stationarity). Imho, probabilistic ML could potentially provide more information here.

Non-stationarity of what?

The initial quotes? - Not stationary.

The increments? - Also non-stationary, though less non-stationary than the original quotes.

You could take logarithms of the increments or something similar - even LESS non-stationary.

But what does "less" mean?

Let's go one step deeper.

We build a tree-based model, such as a random forest. Usually, beyond 50 trees the classification error changes very little. These trees are assumed to remain valid in the future, but that is not true, i.e. the trees are not stationary and will change in the future.

Will the predicted class probability distributions be stationary? If not, then this is again a conversation about "less" non-stationarity.
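
A sketch of one way to probe that question (Python, synthetic data standing in for real features): train a forest on an early window, then compare the distribution of its predicted class probabilities on two later windows; a significant two-sample KS statistic says the class-probability distribution has shifted, i.e. is not stationary.

```python
# Sketch: has the distribution of predicted class probabilities drifted?
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X_train = rng.normal(size=(1000, 5))
y_train = rng.integers(0, 2, size=1000)
X_win1 = rng.normal(size=(500, 5))           # first out-of-sample window
X_win2 = rng.normal(loc=0.3, size=(500, 5))  # later window with drifted features

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

p1 = model.predict_proba(X_win1)[:, 1]
p2 = model.predict_proba(X_win2)[:, 1]
# Small p-value -> the class-probability distribution is not stationary.
print(ks_2samp(p1, p2))
```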

And GARCH suggests not talking about more or less non-stationarity, but trying to describe this non-stationarity analytically:

ARFIMA - captures the residual trend in the increments.

The various ARCH variants - account for heavy tails, skew, etc....

So how about feeding the GARCH output to the ML input?
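
As a hedged sketch of the plumbing behind that question (Python, assuming the `arch` and scikit-learn packages; the increments are synthetic placeholders): fit GARCH on the increments and hand its conditional volatility to an ML model as a feature. This shows only the wiring, not that the combination is profitable.

```python
# Sketch: feed GARCH conditional volatility into an ML model as a feature.
import numpy as np
from arch import arch_model
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
increments = rng.standard_t(df=5, size=2000)  # placeholder "percent" increments

garch = arch_model(increments, vol="GARCH", p=1, q=1).fit(disp="off")
cond_vol = np.asarray(garch.conditional_volatility)

# Feature: today's conditional volatility; label: sign of tomorrow's increment.
X = cond_vol[:-1].reshape(-1, 1)
y = (increments[1:] > 0).astype(int)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print("in-sample accuracy:", clf.score(X, y))  # optimistic by construction
```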

 
SanSanych Fomenko #:
So how about feeding the GARCH output to the ML input?

Feed the MathRandom() output to the ML input - it will be more useful.

When (if) you get a result, there are firms that will gladly buy it

 
SanSanych Fomenko #:
Non-stationarity of what?

Roughly speaking, model errors.

SanSanych Fomenko #:
So how about feeding the GARCH output to the ML input?

GARCH is usually studied on daily data. Imho, on hourly data (in Forex) it will add little on top of the session periodicity. Although it would be nice if someone investigated this carefully.

 
Maxim Dmitrievsky #:
Welcome back!)
Aleksey Nikolayev #:
Welcome back!)

Thanks :) I see the valuable AI specialists are bored again - the same old songs about the main thing again :)

I came across an interesting paper on determining causality in time series; it seems to be better than Hurst, entropy, and Granger.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4841224

Causal Interactions Indicator Between Two Time Series Using Extreme Variations in First Eigenvalue of Lagged Correlation Matrices - papers.ssrn.com
This paper presents a method to identify causal interactions between two time-series.
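
A rough sketch of the general idea (Python; this is not the paper's exact procedure, and the window and lag choices here are arbitrary): over a rolling window, compute the largest eigenvalue of the correlation matrix formed from one series together with lagged copies of the other, and watch for extreme variations in that eigenvalue over time.

```python
# Sketch: largest eigenvalue of a lagged correlation matrix between two series,
# tracked over rolling windows. Loosely after the idea in the linked paper.
import numpy as np

def largest_eig_lagged_corr(x, y, max_lag):
    """Largest eigenvalue of corr([y_t, x_t, x_{t-1}, ..., x_{t-max_lag}])."""
    cols = [y[max_lag:]] + [x[max_lag - k : len(x) - k] for k in range(max_lag + 1)]
    corr = np.corrcoef(np.vstack(cols))  # rows are variables
    return np.linalg.eigvalsh(corr)[-1]  # eigvalsh sorts eigenvalues ascending

rng = np.random.default_rng(4)
x = rng.normal(size=2000)
# y depends on x with a 3-bar lag, plus noise
y = 0.8 * np.concatenate([np.zeros(3), x[:-3]]) + 0.6 * rng.normal(size=2000)

window, max_lag = 250, 5
eigs = [largest_eig_lagged_corr(x[t:t + window], y[t:t + window], max_lag)
        for t in range(0, len(x) - window, window)]
# Persistently large values here reflect the built-in lag-3 dependence;
# on real data one would look for extreme variations over time.
print(np.round(eigs, 3))
```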
 
Maxim Dmitrievsky #:

Thanks :) I see the valuable AI specialists are bored again - the same old songs about the main thing again :)

I came across an interesting paper on determining causality in time series; it seems to be better than Hurst, entropy, and Granger.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4841224

Kind of a classic -

the formulas are the same:

correlation, cointegration, etc.

I don't see the difference.

Maxim, can you point me in the right direction?

Or did he just multiply the tails by a parabola?
 
Maxim Dmitrievsky #:
I see the valuable AI specialists are bored again - the same old songs about the main thing again :)

Too valuable - the funds won't find that much money to hire such specialists as data scientists).

I am slowly diving into the topic of probabilistic ML. Maybe (if I'm not lazy) I'll even start a Telegram channel/blog on the topic - it's inconvenient to systematise things here.

Maxim Dmitrievsky #:

I came across an interesting paper on determining causality in time series; it seems to be better than Hurst, entropy, and Granger.

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4841224

Well, I don't know about analogies between the market and physical systems - econophysics has somehow been forgotten lately.

Aleksey Nikolayev #:

Too valuable - the funds won't find that much money to hire such specialists as data scientists)

I am slowly diving into the topic of probabilistic ML. Maybe (if I'm not lazy) I'll even start a Telegram channel/blog on the topic - it's inconvenient to systematise things here.

Well, I don't know about analogies between the market and physical systems - econophysics has somehow been forgotten lately.

In Telegram you can fully feel the sweet taste of admin rights and other moderation, i.e. be on the other side of good and evil :)

Yes, well, all such tools are originally from physics.