Machine learning in trading: theory, models, practice and algo-trading - page 1200

 
Igor Makanu:

the most sensible post of the last couple of months! I need to think about it, I haven't done it for a long time, I need to prepare my body, the holidays are coming soon!


Well, to get the probability you need research (experiments, in our case testing) - there's no other way, only data mining will help. But as for YouTube tutorials - whichever one I come back to, every single lesson it's Python (((

I'll take up Python after January, maybe in Q2 ))

 
mytarmailS:

I don't know if it helps, but you could try correlation. Imagine the ideal result you want to get as a desired curve; then, while searching for the best model, compare the current result to the ideal (your ideal curve) by calculating the correlation. The model whose result correlates most closely with the ideal will be the closest to the model you are aiming for.

Thanks for the idea, but it isn't feasible, because the ideal is very abstract - it's not clear what it should be. At first glance every iteration should give at least some incremental improvement, but this is a trivial idea, and then it's unclear why ML software developers haven't implemented it. Either training would take too long, making the product that creates the model uncompetitive, or the idea was actually tested and no benefit was found.
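For what it's worth, the mechanical part of the correlation idea is easy to sketch. This is a minimal illustration with invented data: the "ideal" is assumed to be a straight, steadily rising equity line, and the two candidate curves are synthetic.

```python
import numpy as np

def correlation_to_ideal(equity, ideal):
    """Pearson correlation between a candidate equity curve and the ideal curve."""
    return float(np.corrcoef(equity, ideal)[0, 1])

n = 100
ideal = np.linspace(0.0, 1.0, n)  # assumed "ideal": a straight, steadily rising equity line

rng = np.random.default_rng(0)
candidates = {
    "trend_model": ideal + rng.normal(0.0, 0.05, n),   # tracks the ideal with noise
    "noise_model": rng.normal(0.0, 0.05, n).cumsum(),  # a drifting random walk
}

# Pick the candidate whose equity curve correlates most with the ideal.
best = max(candidates, key=lambda k: correlation_to_ideal(candidates[k], ideal))
```

The hard part, as the reply notes, is not the correlation itself but deciding what the ideal curve should be.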

 
Maxim Dmitrievsky:

On searching for patterns through probability theory specifically, for some reason there is little decent information in articles

And there is almost no information on Bayes, and where there is, you can hardly make head or tail of it.

The main problem with the Bayesian approach is the choice of the correct prior distribution. In our case everything is complicated by non-stationarity - time dependence can appear.

It seems obvious to build the prior on a long history and the posterior on a short one. The problem is correctly separating these parts of the history under non-stationarity.
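The "prior from long history, posterior from a short recent window" idea can be sketched with a conjugate Beta-Binomial model of the win rate. Everything here is an assumption for illustration: the pseudo-trade strength of the prior and all the trade counts are invented numbers, not anything from the thread.

```python
def beta_prior_from_history(wins, total, strength=20.0):
    """Shrink the long-history win rate into a Beta(a, b) prior worth 'strength' pseudo-trades."""
    p = wins / total
    return strength * p, strength * (1.0 - p)

def update_posterior(a, b, recent_wins, recent_losses):
    """Conjugate Bayesian update: posterior is Beta(a + wins, b + losses)."""
    return a + recent_wins, b + recent_losses

# Hypothetical numbers: 550 winning trades out of 1000 on the long history,
# then 5 wins / 15 losses on a recent window.
a0, b0 = beta_prior_from_history(550, 1000)   # Beta(11, 9), mean 0.55
a1, b1 = update_posterior(a0, b0, 5, 15)      # Beta(16, 24)
posterior_mean = a1 / (a1 + b1)               # 0.4 - the recent losses pull the estimate down
```

The open question from the post remains open here too: under non-stationarity there is no principled rule for where the "long history" ends and the "recent window" begins, or how much weight (`strength`) the prior deserves.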

 
Maxim Dmitrievsky:

On searching for patterns through probability theory specifically, for some reason there is little decent information in articles


Why no information - there are tons of it, more than anyone could master. It's called GARCH. There the model consists of three parts:

  • the trend, modeled through ARIMA, or through FARIMA (fractional integration - mimicking Hurst);
  • then the form of the variance (dispersion);
  • then the distribution and so on, with testing, for example, on all 500 stocks of the index and publication of the corresponding results.

What more does a theorist need to be happy?
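The middle part of that decomposition - the conditional variance - is what GARCH itself contributes. A minimal sketch of the GARCH(1,1) recursion in plain NumPy (the parameter values are invented for illustration; in practice one would fit them, e.g. with a dedicated package such as `arch`):

```python
import numpy as np

def simulate_garch11(n, omega=1e-6, alpha=0.1, beta=0.85, seed=42):
    """Simulate returns whose conditional variance follows GARCH(1,1):
       sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]."""
    rng = np.random.default_rng(seed)
    sigma2 = np.empty(n)
    r = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)  # start at the unconditional variance
    for t in range(n):
        if t > 0:
            sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
    return r, sigma2

returns, cond_var = simulate_garch11(5000)
```

The simulated series shows the characteristic volatility clustering: quiet stretches and bursts, even though the shocks themselves are i.i.d. normal. The "more than 100 variants" mentioned below differ mainly in how this variance equation and the shock distribution are specified.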


Maybe that's why it's nowhere to be found in one place - everything is scattered across the various GARCH variants (I once posted a link - more than 100 different GARCHes)?

 
Aleksey Nikolayev:

The main problem with the Bayesian approach is choosing the correct prior distribution. In our case everything is complicated by non-stationarity - time dependence may appear.

It seems obvious to build the prior on a long history, and the posterior on a short one. The problem is correctly separating these parts of the history under non-stationarity.

Yes, this is obvious and, moreover, already done through ML (at my level of understanding). The second model corrects the signals of the first after each step. It turned out very easy, fast and adaptive... but more research is needed. I even fitted a theory to it (Bayesian, sort of, to sound clever).

 
SanSanych Fomenko:

Why no information - there are tons of it, more than anyone could master. It's called GARCH. There the model consists of three parts:

  • the trend, modeled through ARIMA, or through FARIMA (fractional integration - mimicking Hurst);
  • then the form of the variance (dispersion);
  • then the distribution and so on, with testing, for example, on all 500 stocks of the index and publication of the corresponding results.

What more does a theorist need to be happy?


Maybe that's why I can't find it anywhere in one place - everything is scattered across the various GARCH variants (I once posted links - more than 100 different GARCHes)?

Or maybe it's just hard to hold it all in your head... for example, can conditional probabilities, joint probabilities and so on be said to be defined through a GARCH?

That is, suppose I just want to set the search range, so to speak - find me patterns from afar, in different combinations of, say, increments, time intervals or whatever.
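One crude version of such a search - estimating conditional probabilities over sign patterns of increments - is simple to write down. This is a sketch, not the poster's method; the pattern length `k` and the use of random data for the demo are assumptions.

```python
import numpy as np
from collections import defaultdict

def sign_pattern_stats(increments, k=3):
    """Estimate P(next increment up | last k increment signs) for every observed pattern."""
    signs = tuple(1 if x > 0 else 0 for x in increments)
    counts = defaultdict(lambda: [0, 0])  # pattern -> [down count, up count]
    for t in range(k, len(signs)):
        counts[signs[t - k:t]][signs[t]] += 1
    return {p: ups / (downs + ups) for p, (downs, ups) in counts.items()}

# On random-walk increments every conditional probability should sit near 0.5;
# on real increments, patterns that deviate from 0.5 are candidate "patterns".
rng = np.random.default_rng(1)
stats = sign_pattern_stats(rng.standard_normal(100_000), k=3)
```

Widening the "search range" then just means sweeping `k`, the binning of increments (here a bare up/down sign), and the time intervals between them.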

I want to do something similar in Python (it will give me practice).

something like this: https://www.mql5.com/ru/articles/3264
Naive Bayesian classifier for signals of a set of indicators
  • www.mql5.com
Whether we like it or not, statistics plays a notable role in trading. From fundamental news teeming with figures to trading reports or testing reports, there is no getting away from statistical indicators. At the same time, the thesis about the applicability of statistics to trading decisions remains one of the most debated...
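The general shape of the linked article's approach - naive Bayes over discrete indicator signals - can be sketched in a few lines of Python. Everything below is a toy illustration: the signal encoding (-1/0/+1), the class labels, and the training examples are all invented, not taken from the article.

```python
import math
from collections import defaultdict

class NaiveBayesSignals:
    """Tiny naive Bayes over discrete indicator signals (-1/0/+1) -> class label."""

    def __init__(self, laplace=1.0):
        self.laplace = laplace
        self.class_counts = defaultdict(int)
        self.feature_counts = defaultdict(int)  # (class, indicator index, signal) -> count

    def fit(self, X, y):
        for signals, label in zip(X, y):
            self.class_counts[label] += 1
            for i, s in enumerate(signals):
                self.feature_counts[(label, i, s)] += 1
        return self

    def predict(self, signals):
        total = sum(self.class_counts.values())
        best, best_score = None, float("-inf")
        for label, cc in self.class_counts.items():
            # log P(class) + sum_i log P(signal_i | class), Laplace-smoothed over 3 signal values
            score = math.log(cc / total)
            for i, s in enumerate(signals):
                score += math.log(
                    (self.feature_counts[(label, i, s)] + self.laplace)
                    / (cc + 3 * self.laplace)
                )
            if score > best_score:
                best, best_score = label, score
        return best

# Hypothetical training data: two indicators' signals and the resulting trade labels.
X = [(1, 1), (1, 0), (-1, -1), (0, -1)]
y = ["buy", "buy", "sell", "sell"]
model = NaiveBayesSignals().fit(X, y)
```

The "naive" part is the conditional-independence assumption between indicators given the class, which is exactly what correlated indicators on the same price series will violate.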
 
Maxim Dmitrievsky:

Yes, this is obvious and, moreover, already done through ML (at my level of understanding). The second model corrects the signals of the first after each step. It turned out very easy, fast and adaptive... but more research is needed. I even fitted a theory to it (Bayesian, sort of, to sound clever).

There's another obvious way to build a prior distribution. If one assumes that prices "in the limit/on average" behave like a random walk (SB), then this distribution can also be built on random walks. In rare cases it can be done analytically, but usually via Monte Carlo. The method is more complicated and not necessarily better than the previous one.
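The Monte Carlo route is mechanical: simulate many random walks, compute the statistic of interest on each, and use the resulting empirical distribution as the reference (prior/null). A minimal sketch - the choice of statistic here (scaled terminal displacement) is an arbitrary example:

```python
import numpy as np

def mc_null_distribution(stat, n_steps=500, n_sims=2000, seed=7):
    """Monte Carlo distribution of a statistic computed on simulated random walks."""
    rng = np.random.default_rng(seed)
    return np.array([stat(rng.standard_normal(n_steps).cumsum())
                     for _ in range(n_sims)])

# Example statistic: terminal displacement of the walk, scaled by sqrt(n);
# under the random-walk assumption this is approximately N(0, 1).
null = mc_null_distribution(lambda path: path[-1] / np.sqrt(len(path)))

# An observed value from real prices would then be compared against `null`, e.g.:
# p_value = np.mean(np.abs(null) >= abs(observed))
```

The same machinery works for any statistic (drawdown, run lengths, pattern frequencies), which is what makes it more flexible - and more expensive - than the analytic cases.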

 
Aleksey Nikolayev:

There is another obvious way to construct a prior distribution. If we assume that prices "in the limit/on average" behave like a random walk (SB), then this distribution can also be built on random walks. In rare cases it can be done analytically, but usually via Monte Carlo. The method is more complicated and not necessarily better than the previous one.

Sounds right - you know your stuff :) Or, as another Aleksey showed, curves of the model's signal distributions on the training sample - a normal basis for a prior.

These are all robust things.
 
Maxim Dmitrievsky:

Sounds right - you know your stuff :) Or, as another Aleksey showed, curves of the model's signal distributions on the training sample - a normal basis for a prior.

These are all robust things.

Everything is spoiled by non-stationarity, which can be both abrupt and creeping.

 
Maxim Dmitrievsky:

Sounds right - you know your stuff :) Or, as another Aleksey showed, curves of the model's signal distributions on the training sample - a normal basis for a prior.

These are all robust things.

If you mean me: I showed curves on the test sample and the exam sample - I don't even look at the training sample...
