Machine learning in trading: theory, models, practice and algo-trading - page 557

 
SanSanych Fomenko:

During the exercises with GARCH I got an amazing pattern.


And what timeframe is it? I need to see what caused it: maybe the trading sessions, or a dependence on the days of the week... or they float and are not tied to trading time.

i.e., it turns out that ARIMA should work with such quotes, if the trend is subtracted... and the trend is determined separately by an MA)

 
Maxim Dmitrievsky:

And what timeframe is it? I need to see what caused it: maybe the trading sessions, or a dependence on the days of the week... or they float and are not tied to trading time.

i.e., it turns out that ARIMA should work with such quotes, if the trend is subtracted... and the trend is determined separately by an MA)


This is H1.

Here are just the increments. The gaps are the weekends; that is how xts draws them, and those values are not in the file.



Here are the absolute values of the increments, i.e., zoomed in, taken from the upper chart.



PS.

ARIMA will not work because:

  • the variance is clearly time-varying
  • there is a leverage effect
  • there is skewness


Accordingly, a test with H0 "no ARCH effect" will reject the null hypothesis.
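The ARCH-effect test mentioned here can be sketched as follows. This is a hedged illustration only: the ARCH(1) series is simulated, not the quotes from the post, and the parameters are made up; the test itself is Engle's LM test as provided by statsmodels.

```python
# Illustrative sketch of Engle's LM test for ARCH effects (H0: no ARCH
# effect). The ARCH(1) series below is simulated, not real quotes.
import numpy as np
from statsmodels.stats.diagnostic import het_arch

rng = np.random.default_rng(0)
n = 2000
eps = np.zeros(n)
for t in range(1, n):
    sigma2 = 0.1 + 0.8 * eps[t - 1] ** 2       # ARCH(1) variance recursion
    eps[t] = np.sqrt(sigma2) * rng.standard_normal()

lm_stat, lm_pvalue, f_stat, f_pvalue = het_arch(eps, nlags=5)
# A tiny p-value means H0 "no ARCH effect" is rejected, as the post says.
print(lm_pvalue)
```

On a series with genuine conditional heteroskedasticity like this one, the p-value is effectively zero and the null is rejected.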

 
Maxim Dmitrievsky:

it turned out that a simple NS (neural network) works very poorly beyond the boundaries of the training sample in the case of regression (its output flattens to a hyperbolic-tangent constant); i.e., not much better than RF

very illustrative article

https://habrahabr.ru/post/322438/
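The saturation effect described above (a tanh network going flat outside the training sample) can be seen directly. A minimal numpy sketch with hand-picked weights, no training involved; the weight values are illustrative, not from any fitted model:

```python
# Why a tanh network's output flattens to a constant far outside the
# training range: every hidden unit saturates to +-1, so the output
# stops changing with the input.
import numpy as np

W = np.array([0.7, 1.3, -0.9])   # hidden weights (illustrative values)
v = np.array([0.5, -0.4, 0.8])   # output weights (illustrative values)

def net(x):
    """One-hidden-layer tanh network; biases omitted for brevity."""
    return float(v @ np.tanh(W * x))

# Inside a typical training range the output varies...
inside = abs(net(1.0) - net(0.0))      # ~0.62
# ...but far outside it is effectively a constant: tanh has saturated.
outside = abs(net(100.0) - net(10.0))  # ~1e-6
```

Any finite tanh network is bounded, so in regression its extrapolation always degenerates to a constant; tree ensembles like RF extrapolate with a constant too, which is why neither looks good outside the sample.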



Especially for Maxim, I looked into the works of Richard Feynman.

This is what he wrote back in the 1960s:

He urged everyone, old and young, smart and not, in short absolutely everyone, to work with the probability functions of the price, not with the price itself. :)))

 
Alexander_K2:

Especially for Maxim, I looked into the works of Richard Feynman.

This is what he wrote back in the 1960s:

And he urged everyone, old and young, smart and not, in short absolutely everyone, to work with the probability functions of the price, not the price itself. :)))


It makes sense :) My current situation is like this: one NS is learning to predict the most probable event (there's no such thing as a 100% prediction), and the other is learning to trade on these probabilities.

The problem is probably in the number of trades... I would like more of them, but then the quality starts to suffer.
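The two-model setup described here (one NS predicts the most probable event, a second learns to trade on those probabilities) can be sketched roughly as below. Everything in this sketch is a stand-in: the data is synthetic, the sklearn models substitute for the NS, and all names are hypothetical, not the author's code.

```python
# Sketch of a two-stage pipeline: model A predicts class probabilities
# for the next move; model B learns a trade/no-trade decision from those
# probabilities. Synthetic data, illustrative only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 4))             # fake predictors (e.g. lagged returns)
up = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(500)) > 0
profitable = up & (np.abs(X[:, 0]) > 0.5)     # fake "trade was worth taking" label

# Model A: predict the most probable event (direction of the next move).
model_a = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model_a.fit(X, up)
proba = model_a.predict_proba(X)              # shape (500, 2), rows sum to 1

# Model B: decide whether to trade, given A's predicted probabilities.
model_b = LogisticRegression()
model_b.fit(proba, profitable)
trade = model_b.predict(proba)                # boolean trade/no-trade decisions
```

Raising model B's decision threshold trades frequency for quality, which is exactly the tension described in the post.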

 
Maxim Dmitrievsky:

It makes sense :) My current situation is like this: one NS is learning to predict the most probable event (there's no such thing as 100% probability), while the other is learning to trade on these probabilities.

The problem is probably in the number of trades... I want more, but the quality is starting to suffer.

Oh! This looks like the right direction!

I myself am suffering now from the lack of trades in my model - you could die of boredom.

But if you manage to combine quantity and quality of trades, I will be the first to subscribe to your signal, because working with probabilities is the right way. Good luck!

 
Alexander_K2:

Oh! Now that sounds like the right direction!

I myself am suffering now from the lack of trades in my model - you could die of boredom.

But if you manage to combine quantity and quality of trades, I will be the first to subscribe to your signal, because working with probabilities is the right way. Good luck!


Well, theoretically it seems impossible without some kind of insider information or a search for the specific market conditions (distributions?) that hold at the moment, as SanSanych showed.

But let's see, thanks :)

 
Maxim Dmitrievsky:


R. Feynman, in his calculations of the amplitudes of the probabilities of transitions from state A to state B, used the following quantity as input data:

S = (X(t) - X(t-1)) / deltaT,

where

X(t) is the current value,

X(t-1) is the previous value,

deltaT is the time between X(t) and X(t-1).

Maybe this data should be used in NS?

 
Alexander_K2:

R. Feynman, in his calculations of the amplitudes of the probabilities of transitions from state A to state B, used the following quantity as input data:

S = (X(t) - X(t-1)) / deltaT,

where

X(t) is the current value,

X(t-1) is the previous value,

deltaT is the time between X(t) and X(t-1).

Maybe this is the data that should be fed into the NS?


but you can try, usually log(x(t)/x(t-n)) is used

but I also have other predictors with different periods (lags)

you can take exponential time of course... as you said, but it takes a lot of history
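The two candidate inputs discussed here, the Feynman-style increment S = (X(t) - X(t-1)) / deltaT and the more common log return log(x(t)/x(t-n)), can be computed like this. The price and time values are toy numbers, for illustration only:

```python
# Computing the two inputs discussed in the thread: time-normalized
# increments (useful with uneven tick spacing) and n-period log returns.
import numpy as np

prices = np.array([1.0, 2.0, 4.0, 4.0])
times = np.array([0.0, 1.0, 3.0, 4.0])      # uneven spacing, e.g. tick times

# S = (X(t) - X(t-1)) / deltaT, one value per interval.
S = np.diff(prices) / np.diff(times)        # -> [1.0, 1.0, 0.0]

# Log return over n periods: log(x(t) / x(t-n)).
n = 1
log_ret = np.log(prices[n:] / prices[:-n])  # -> [ln 2, ln 2, 0.0]
```

Dividing by deltaT matters when observations are unevenly spaced (ticks, sessions with gaps); on a fixed timeframe like H1 the denominator is constant and S reduces to an ordinary increment up to scale.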

 
Maxim Dmitrievsky:

but you can try, usually log(x(t)/x(t-n)) is used

but I also have other predictors with different periods (lags)

you can take exponential time of course... as you said, but that takes a lot of history


Feynman worked with quanta and deltaT-->0. In our case this is the time between ticks.

Something got me interested in NS too... that's no good... I may start developing some theory again :))))

 
Alexander_K2:

Feynman worked with quanta and deltaT-->0. In our case it is the time between ticks.

I got interested in NS too... No good... I'm going to develop some theory again :))))


If there is something to teach it, then why not :)
