Machine learning in trading: theory, models, practice and algo-trading - page 1231

 
Maxim Dmitrievsky:

In theory, I can already run thousands of simulations on samples with given statistical characteristics... I just need to decide on the window, although that could also be searched over.

I always use a sliding window of 3600 values (defined as 6x6x10x10, where 6 is the quantile covering almost any unimodal distribution per the Vysochanskij-Petunin inequality). One could play with it and see.
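For reference, the Vysochanskij-Petunin inequality bounds the tails of any unimodal distribution: P(|X - mu| >= k*sigma) <= 4/(9k^2) for k > sqrt(8/3). A minimal sketch checking it empirically on a 3600-value window, as in the post; the function names and the Gaussian sample are mine, not from the thread:

```python
import math
import random
import statistics

def vp_bound(k):
    """Vysochanskij-Petunin tail bound for any unimodal distribution:
    P(|X - mu| >= k*sigma) <= 4 / (9 * k**2), valid for k > sqrt(8/3)."""
    assert k > math.sqrt(8.0 / 3.0)
    return 4.0 / (9.0 * k * k)

def empirical_tail(sample, k):
    """Fraction of the sample farther than k sigmas from the sample mean."""
    mu = statistics.fmean(sample)
    sigma = statistics.pstdev(sample)
    return sum(abs(x - mu) >= k * sigma for x in sample) / len(sample)

random.seed(1)
# A 3600-value window drawn from a unimodal (here Gaussian) distribution.
window = [random.gauss(0.0, 1.0) for _ in range(3600)]
tail = empirical_tail(window, 3.0)   # should not exceed vp_bound(3.0)
```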

But that does not change the essence of the matter: we must by all means find out whether neural networks work on series with a fixed probability distribution of returns or not. I would like to see a table with the results of our studies. And then we will continue.

 
Alexander_K2:

P.S. You shouldn't listen to Asaulenko - he knows a lot and knows nothing. Amen.

Hi, A_K. I see you've already recovered from the failed premiere and are wagging your tail again). Go back to your thread, people are waiting for you there, and there isn't much time left before the New Year.

And change the concept. Maybe you'll get lucky.

 
Maxim Dmitrievsky:

I.e. the statistical characteristics should be taken not from the price but from the returns, am I right?

We are interested only in the distribution of returns and nothing else.
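A quick sketch of what "take the characteristics from the returns, not the price" means in code; log-returns are one common convention (the helper name is hypothetical):

```python
import math

def log_returns(prices):
    """r_t = ln(P_t / P_{t-1}): the series whose distribution matters here."""
    return [math.log(b / a) for a, b in zip(prices, prices[1:])]

prices = [100.0, 101.0, 100.5, 102.0]
rets = log_returns(prices)   # three returns from four prices
```

A handy property of log-returns is that they telescope: their sum over a window equals the log of the price ratio over that window.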

 
Yuriy Asaulenko:

Hi, A_K. I see you've already recovered from the failed premiere and are wagging your tail again). Go back to your thread, people are waiting for you there, and there isn't much time left before the New Year.

And change the concept. Maybe you'll get lucky.

Hi!

No, I'll only be back there with the results after the New Year, or maybe not at all. I didn't want to give out any grails in that thread, and more than necessary has been written there as it is.

 
...toxic:

I don't think Kesha and Misha should be judged, otherwise the plot will change and there won't be anything left to laugh at.

Seconded.

Kesha is obviously one of the investors chasing after Alexei. Having despaired of finding him, he comes to this thread to advertise his grandfather SanSanych's long-winded posts. Enticing, as it were.

 
and then..:


Yes, deep learning should be cracked thoroughly. I've long wanted to do NLP/NLU, but unfortunately there is no time yet. If one could analyze social networks even a little better than random, oh what a jackpot could be made...

Something is unclear. It seems the time series is converted into an image, and then...

Pivot Billions and Deep Learning enhanced trading models achieve 100% net profit
  • pivotteam
  • www.r-bloggers.com
Deep Learning has revolutionized the fields of image classification, personal assistance, competitive board game play, and many more. However, the financial currency markets have been surprisingly stagnant. In our efforts to create a profitable and accurate trading model, we came upon the question: what if financial currency data could be represented as an image? [...]
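The excerpt doesn't say which encoding the article uses; one common way to turn a time series into an image for a convolutional net is the Gramian Angular Field. A stdlib-only sketch, purely illustrative:

```python
import math

def gramian_angular_field(series):
    """Encode a 1-D series as a 2-D 'image': rescale to [-1, 1], take
    phi_i = arccos(x_i), and set G[i][j] = cos(phi_i + phi_j)."""
    lo, hi = min(series), max(series)
    scaled = [2.0 * (x - lo) / (hi - lo) - 1.0 for x in series]
    phi = [math.acos(max(-1.0, min(1.0, x))) for x in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi]

img = gramian_angular_field([1.0, 2.0, 3.0, 2.0, 1.0])  # 5x5 "image"
```

The resulting matrix is symmetric and preserves temporal ordering along its diagonal, which is what makes it digestible for image-classification architectures.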
 
Vizard_:

For PCA, the main thing is alignment, as long as the variation is reasonable. It is used to reduce dimensionality and to fight multicollinearity, and also as an exploratory tool. There will of course be some drift, and the better the preprocessing, the less of it. The point is to do it properly, though, not the way Fa does, because he usually eats up the useful information. The drift can be reduced not only by preprocessing but also by postprocessing, as shown earlier in the example for Fa of "bending" logloss, etc., which in turn can be used to correct the probabilities before feeding them wherever you like... but you shouldn't get too excited: the improvement is small (1-2%). After two or three runs, provided the preprocessing is adequate and the sample is large enough, we take the formula for the necessary components and make a feature out of it, instead of re-running Rattle every single time... etc. A simple example of how to look at it (it returns 2, but that's not the point)... All this stuff, like the other amateur toys, I looked over for myself long ago; not much use...


I spent a lot of time on all these principal components, and then I realized a very simple thing, and it is a general one.

Suppose we did a PCA and got the coefficients by which we should multiply the predictors.

Now the window shifts (a new bar arrives), and what should we do: recalculate the coefficients? That is what we do in the tester. And if we do not recalculate them, do we still have the principal components?
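The question can be made concrete: fit the loadings once on a window, then apply them unchanged to new bars. A toy 2-D sketch using the closed-form leading eigenvector of the covariance matrix (all names and the sample data are mine):

```python
import math
import statistics

def first_pc_2d(xs, ys):
    """Loadings of the first principal component of 2-D data, from the
    closed-form leading eigenvector of the 2x2 covariance matrix."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cx = [v - mx for v in xs]
    cy = [v - my for v in ys]
    n = len(xs)
    sxx = sum(v * v for v in cx) / n
    syy = sum(v * v for v in cy) / n
    sxy = sum(a * b for a, b in zip(cx, cy)) / n
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2.0 + math.sqrt(max(tr * tr / 4.0 - det, 0.0))  # top eigenvalue
    vx, vy = sxy, lam - sxx                                    # its eigenvector
    norm = math.hypot(vx, vy) or 1.0
    return vx / norm, vy / norm

def project(x, y, w):
    """Apply FIXED loadings to a new bar instead of refitting the PCA."""
    return x * w[0] + y * w[1]

# Two nearly collinear predictors; loadings fitted once on this window...
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.8]
w = first_pc_2d(xs, ys)
# ...then applied unchanged to a new observation. Whether this is still
# "the" principal component out of sample is exactly the question raised.
score = project(6.0, 12.0, w)
```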

Now consider ordinary linear regression. It has the same kind of coefficients, but it comes with a table showing that the coefficients are random variables, with everything that implies, up to the point where the standard error can exceed the coefficient's own value.
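The point about coefficients being random numbers is easy to demonstrate: with a weak signal and strong noise, the slope's standard error is of the same order as the slope itself. A stdlib sketch on hypothetical data:

```python
import math
import random
import statistics

def ols_slope_with_se(xs, ys):
    """OLS slope for y = a + b*x plus the slope's standard error."""
    n = len(xs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
    a = my - b * mx
    resid = [y - (a + b * x) for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (n - 2)  # residual variance
    return b, math.sqrt(s2 / sxx)

random.seed(7)
xs = [float(i) for i in range(50)]
# Tiny true slope (0.01) buried in noise with sigma = 5: the estimated
# slope and its standard error end up on the same scale, so the
# coefficient behaves like a random number.
ys = [0.01 * x + random.gauss(0.0, 5.0) for x in xs]
b, se = ols_slope_with_se(xs, ys)
```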


How are the principal components any better?

It is not about the principal components. We are not interested in analyzing the past; we take some parameters from the past only because there is nowhere else to take them from, but those parameters must NOT change. This is the general rule. When constructing a TS, it is necessary to prove the constancy, or at least the weak variability, of the obtained parameters.


Once again, we are faced with stationarity.

 
SanSanych Fomenko:

We are not interested in analyzing the past; we take some parameters from the past only because there is nowhere else to take them from, but those parameters must NOT change. This is the general rule. When constructing a TS, it is necessary to prove the constancy, or at least the weak variability, of the obtained parameters.


Once again we are faced with stationarity.

One can try to make assumptions about the structure of the non-stationarity. For example, an obvious option is the assumption of piecewise stationarity. In that case we must sometimes discard the obsolete history (by detecting the break point).
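Under the piecewise-stationarity assumption, "discarding obsolete history" requires a break detector. A deliberately naive sketch (compare two adjacent window means; the thresholds, names, and data are my own, not a method from the thread):

```python
import statistics

def find_break(series, window=20, z_thresh=3.0):
    """Naive piecewise-stationarity check: scan the series and flag the
    first index where the mean of the last `window` points deviates from
    the mean of the preceding `window` points by more than `z_thresh`
    standard errors. Everything before that index is treated as the old
    regime (the "obsolete history" to discard)."""
    for t in range(2 * window, len(series) + 1):
        past = series[t - 2 * window : t - window]
        recent = series[t - window : t]
        mu, sd = statistics.fmean(past), statistics.pstdev(past)
        if sd == 0.0:
            continue
        z = abs(statistics.fmean(recent) - mu) / (sd / window ** 0.5)
        if z > z_thresh:
            return t   # first index at which the shift is detectable
    return None

# Synthetic regime change: the mean jumps from 0 to 5 at index 60,
# with a tiny wiggle so the variance estimate is nonzero.
data = [0.0] * 60 + [5.0] * 40
data = [x + 0.1 * ((i % 3) - 1) for i, x in enumerate(data)]
brk = find_break(data)   # detected shortly after index 60
```

Real change-point methods (CUSUM, Bayesian online changepoint detection, etc.) are far more careful about variance and multiple testing; this only illustrates the idea of cutting the sample at a detected break.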

 
Aleksey Nikolayev:

One can try to make assumptions about the structure of the non-stationarity. For example, an obvious option is the assumption of piecewise stationarity. In that case we must sometimes discard the obsolete history (by detecting the break point).

Not obsolete history, but nonstationary sections.

Well done, Alexey - finally applied mathematics is coming to the fore, not just grail thinking.

 
Alexander_K2:

Not obsolete history, but nonstationary sections.

Well done, Alexey - finally applied mathematics is coming to the fore, not just grail thinking.

Yeah, like with a trend. It also looks stationary while it lasts, but by the time you realize it is a trend, it is often too late to enter. It's the same with stationary sections: by the time you figure out that the series has settled down, it will start moving again)))