Machine learning in trading: theory, models, practice and algo-trading - page 837

 
SanSanych Fomenko:

GARCH, unlike machine learning, is considered mainstream in financial markets (along with cointegration and portfolio theory).

These models account for many statistical subtleties of price increments, including fat tails and long memory à la Hurst.

For example, there is a publication on selecting the parameters of GARCH models for ALL the stocks in the S&P 500 index!

There are many publications on applications to Forex. The toolkit is very well developed; for example, the rugarch package.

And here, on Wikipedia, it is about fractal volatility; it is not rugarch but some kind of analogue.
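A minimal sketch of the GARCH(1,1) recursion such packages fit, in base R. The parameter values are illustrative assumptions, not estimates from any data; the commented-out lines show rugarch's standard spec/fit workflow for the same model:

```r
# Minimal GARCH(1,1) simulator: sigma2[t] = omega + alpha*r[t-1]^2 + beta*sigma2[t-1]
# Parameter values below are made up for the demo, not estimates.
simulate_garch11 <- function(n, omega = 1e-6, alpha = 0.08, beta = 0.90) {
  r      <- numeric(n)
  sigma2 <- numeric(n)
  sigma2[1] <- omega / (1 - alpha - beta)   # unconditional variance
  r[1]      <- sqrt(sigma2[1]) * rnorm(1)
  for (t in 2:n) {
    sigma2[t] <- omega + alpha * r[t - 1]^2 + beta * sigma2[t - 1]
    r[t]      <- sqrt(sigma2[t]) * rnorm(1)
  }
  list(returns = r, sigma2 = sigma2)
}

set.seed(1)
sim <- simulate_garch11(2000)

# Volatility clustering shows up as autocorrelation in squared returns:
acf_r2 <- acf(sim$returns^2, lag.max = 1, plot = FALSE)$acf[2]

# With rugarch the same model is specified and fitted roughly like this (not run):
# spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1)),
#                    mean.model = list(armaOrder = c(0, 0)))
# fit  <- ugarchfit(spec, data = sim$returns)
```

The loop is the whole model; rugarch's value is in the estimation, diagnostics and forecasting around it.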

GARCH is too computationally heavy for my machine; I like it when everything computes fast.
 
Misha's furious drive is missed... The Grail has carried him away, and his yearning soul is lost...
 
I'll make a volatility forecast first, and then I'll figure out what to do with it :)
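One simple way to "forecast volatility first" is an EWMA (RiskMetrics-style) estimate; a base R sketch, with lambda = 0.94 used here only because it is the conventional RiskMetrics value, not something from this thread:

```r
# EWMA variance recursion: sigma2[t] = lambda * sigma2[t-1] + (1 - lambda) * r[t]^2
# The last value serves as the one-step-ahead volatility forecast.
ewma_vol_forecast <- function(r, lambda = 0.94) {
  sigma2 <- numeric(length(r))
  sigma2[1] <- r[1]^2
  for (t in 2:length(r)) {
    sigma2[t] <- lambda * sigma2[t - 1] + (1 - lambda) * r[t]^2
  }
  sqrt(tail(sigma2, 1))
}

set.seed(2)
r  <- rnorm(500, sd = 0.01)   # toy returns with 1% daily volatility
fc <- ewma_vol_forecast(r)    # should land in the vicinity of 0.01
```

This is the cheapest baseline to compare any fancier volatility model against.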
 
Maxim Dmitrievsky:

On the subject of predicting volatility: let's just say that predicting volatility is much easier than predicting the quotes themselves.

And there are even all sorts of models like https://en.wikipedia.org/wiki/Markov_switching_multifractal

What does it give, and how is it used correctly? Has anyone done anything with it?

Well, okay, volatility is predicted; what does that give us?

All the forex strategies I tested, I tried limiting by volatility. A threshold, higher or lower, gave me nothing.

There are many good trades at low volatility and many good trades at high volatility; I cannot separate them by volatility.

 
forexman77:

Well, okay, volatility is predicted; what does that give us?

All the forex strategies I tested, I tried limiting by volatility. A threshold, higher or lower, gave me nothing.

There are many good trades at low volatility and many good trades at high volatility; I cannot separate them by volatility.

I'm trying to follow in the footsteps of the luminaries )) they wrote that I should forecast volatility.

 
Maxim Dmitrievsky:

I'm trying to follow in the footsteps of the luminaries )) they wrote that it's necessary to predict volatility.

Do you think random forests are useful in some way? Maybe I'll try studying them in the future?

 
forexman77:

What do you think, are random forests useful in some way? Maybe I'll try studying them in the future?

Wait for the article on random forests; it will be out soon.

As for volatility, it can be used to switch the modes of a TS, depending on the volatility.

And if you can predict the volatility, the TS can switch to another mode earlier, without a lag.

That's how I understand it

 
Dr. Trader:

Here's an example with vtreat as well.

Generally it does data preprocessing, but you can use it to score each predictor against the target. I don't like that the package doesn't take predictor interactions into account; use this code only if scoring the predictors one at a time against the target is enough for you.

library(vtreat)


# designTreatmentsC is only suitable for two-class classification
treatmentsC <- designTreatmentsC(dframe = forexFeatures,
                                 varlist = colnames(forexFeatures)[-ncol(forexFeatures)], # predictor column names (here: all but the last column)
                                 outcomename = colnames(forexFeatures)[ncol(forexFeatures)], # target column name (here: the last column)
                                 outcometarget = "1") # text or number of one of the classes
# process and sort the result
treatmensC_scores <- treatmentsC$scoreFrame[order(treatmentsC$scoreFrame$sig),]
treatmensC_scores <- treatmensC_scores[!duplicated(treatmensC_scores$origName),]
treatmensC_scores <- treatmensC_scores[,c("origName","sig")] 
treatmensC_scores$is_good <- treatmensC_scores$sig <= 1/nrow(forexFeatures)
treatmensC_scores # print the results table; best predictors are at the top, and the smaller the sig score, the better. Ideally (is_good == TRUE) sig should be below 1/nrow(forexFeatures); anything larger is bad

I experimented with vtreat.

Here is the matrix of results:

      [,1]  [,2]
 [1,]    5  8.12444537234629e-196
 [2,]    1  1.98504271239423e-144
 [3,]    7  2.36022454522949e-109
 [4,]   11  5.68901830573741e-102
 [5,]    4  6.60631002751930e-96
 [6,]   10  2.95535252032342e-73
 [7,]    3  2.43324301115409e-71
 [8,]    9  4.51329770717951e-67
 [9,]    6  3.11264518399281e-37
[10,]    2  5.77058632985908e-13
[11,]   12  3.76158923428915e-12
[12,]    8  8.18815163303239e-01

The formula

 treatmensC_scores$sig <= 1/nrow(forexFeatures)

doesn't select very well. For example, after sorting it lets through the input that is 3rd from the bottom, with sig = 5.77e-13 against 1/nrow(df) = 2e-4, and that input is noisy and spoils the training.
I.e. the selection should be tightened by several orders of magnitude, and preferably automatically.
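A hedged sketch of "tightening by several orders of magnitude" automatically: instead of the 1/nrow cutoff (about 2e-4 here), apply a much stricter hard cap on sig. The cap 1e-20 below is an arbitrary assumption chosen for the demo; the sig values are the ones from the table above, rounded:

```r
# sig scores from the posted table (rounded), in table order.
sig <- c(8.12e-196, 1.99e-144, 2.36e-109, 5.69e-102, 6.61e-96,
         2.96e-73, 2.43e-71, 4.51e-67, 3.11e-37, 5.77e-13,
         3.76e-12, 8.19e-01)

# 1/nrow gave ~2e-4 and let the noisy 5.77e-13 input through; a cap several
# orders of magnitude stricter (1e-20 here, an assumption) drops it.
keep <- which(sig <= 1e-20)
length(keep)   # 9: the three weakest scores (5.77e-13, 3.76e-12, 0.82) are dropped
```

The right cap is data-dependent; the point is only that it must sit far below 1/nrow to reject inputs like the one described above.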

 
elibrarius:

I experimented with vtreat.

Here is the matrix of results:

      [,1]  [,2]
 [1,]    5  8.12444537234629e-196
 [2,]    1  1.98504271239423e-144
 [3,]    7  2.36022454522949e-109
 [4,]   11  5.68901830573741e-102
 [5,]    4  6.60631002751930e-96
 [6,]   10  2.95535252032342e-73
 [7,]    3  2.43324301115409e-71
 [8,]    9  4.51329770717951e-67
 [9,]    6  3.11264518399281e-37
[10,]    2  5.77058632985908e-13
[11,]   12  3.76158923428915e-12
[12,]    8  8.18815163303239e-01

It doesn't select very well. For example, after sorting it lets through the input that is 3rd from the bottom, with sig = 5.77e-13 against 1/nrow(df) = 2e-4, and that input is noisy and spoils the training.
I.e. the selection should be tightened by several orders of magnitude, and preferably automatically.

In general, I use this very package to select predictors. Clearly it has drawbacks, in particular that it ignores interactions among several predictors relative to the target. But on the whole it has been enough for my optimization so far... So if there are other data-preprocessing packages, I'd be glad to look at them...

 
Mihail Marchukajtes:

In general, I use this particular package to select predictors. Clearly it has drawbacks, in particular that it ignores interactions among several predictors relative to the target. But on the whole it has been enough for my optimization so far... So if there are other data-preprocessing packages, I'd be glad to look at them...

Well, Michael, now that you have recovered from your madness, will you soon start assessing your TS soberly and without fanaticism? :)