Machine learning in trading: theory, models, practice and algo-trading - page 3251

 
Renat Fatkhullin #:

Build 3980 implements Conjugate methods for the complex, vector<complex> and matrix<complex> types. They perform conjugation of complex numbers.
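The semantics of these methods can be illustrated outside MQL5. A minimal Python sketch with NumPy (np.conj is NumPy's equivalent, not the MQL5 API itself):

```python
import numpy as np

# Conjugation flips the sign of the imaginary part: conj(a + bi) = a - bi
z = complex(3, 4)
print(z.conjugate())        # (3-4j)

# Element-wise over a vector and a matrix, analogous to the
# vector<complex> / matrix<complex> Conjugate methods
v = np.array([1 + 2j, -3j])
m = np.array([[1 + 1j, 2 - 2j], [5j, 4 + 0j]])
print(np.conj(v))
print(np.conj(m))
```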

Thanks!

 
Maxim Dmitrievsky #:
Thank you, Teacher.
PCA is not needed there; there are few dimensions. The more dimensions, the fewer instances of each pattern.

If there are many dimensions (features), even more than 5, it is NOT worth looking for direct proximity between rows; it is better to reduce the dimensionality first.
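The "reduce first, then look for neighbours" idea can be sketched in Python. This is a generic illustration with NumPy (PCA via SVD), not code from the thread:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 rows with 20 features, but most variance lives in 3 latent directions
latent = rng.normal(size=(200, 3))
X = latent @ rng.normal(size=(3, 20)) + 0.05 * rng.normal(size=(200, 20))

# PCA via SVD: project onto the top-k principal components
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
Z = Xc @ Vt[:k].T          # reduced 200 x 3 representation

# Nearest-neighbour search now runs in 3-D instead of 20-D,
# where distances between rows are far more meaningful
query = Z[0]
dist = np.linalg.norm(Z - query, axis=1)
nearest = np.argsort(dist)[1]   # closest row other than the query itself
print(nearest)
```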

 

Two enemies: overfitting and looking ahead.

Much has been written about overfitting: the model is too "similar" to the original series. Everyone is familiar with it, since overfitting is the usual result in the tester.

What is "looking ahead"?

 
fxsaber #:

Is it correct to say that this is the main task that ML addresses?

The situation is like in modern physics: do you want to understand, or do you just want it to work? Physicists used to try to understand how the world works; now they just fit formulas to data and invent virtual entities. Nobody understands anything, and everything is very complicated.

In data processing it is the same. In the past we took a problem, tried to understand it, then wrote an algorithm by hand and optimised the calculations. To simplify the task, some relationships were neglected and others were reduced to linear form. When enough computing power and data appeared, the solution was handed over to an optimiser (roughly speaking, as in the MT tester), which selects the coefficients of some polynomial. Nobody understands how anything is calculated and there is no full confidence in the result, but this approach can take non-linear and non-obvious relationships into account and speed up some scientific calculations by orders of magnitude.
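The "optimiser selecting the coefficients of some polynomial" idea can be sketched in a few lines. A generic Python illustration with NumPy least squares, not the MT tester itself:

```python
import numpy as np

# Noisy samples of an unknown non-linear relationship
rng = np.random.default_rng(1)
x = np.linspace(-1, 1, 100)
y = np.sin(3 * x) + 0.1 * rng.normal(size=x.size)

# Instead of deriving the relationship by hand, let least squares
# pick the coefficients of a degree-5 polynomial
coeffs = np.polyfit(x, y, deg=5)
y_hat = np.polyval(coeffs, x)

# The fit captures the non-linearity, but the coefficients themselves
# say little about "how the world works"
rmse = np.sqrt(np.mean((y - y_hat) ** 2))
print(rmse)
```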

When the solution is obvious, one should use the classical approach. But even under great uncertainty, ML is not a panacea (which is why captchas add noise to the pictures).

 
mytarmailS #:

If there are many dimensions (features), even more than 5, it is NOT worth looking for direct proximity between rows; it is better to reduce the dimensionality first.

One value of each feature is not enough.
I do it just for fun, so as not to spend my evenings on social networks and binge-watching videos.
Though sometimes it's more interesting to play Xbox :)
 
Maxim Dmitrievsky #:
One value of each feature is not enough.
Just for fun, so as not to spend my evenings on social networks and binge-watching videos.
Though sometimes it's more interesting to play Xbox :)

And where did I say one value?

 
mytarmailS #:

and where did I say one value?

That's what I'm saying.
 
Maxim Dmitrievsky #:
That's what I'm saying.

When reducing dimensionality, nobody reduces it to a single dimension. It is possible, but it isn't done.

 
Maxim Dmitrievsky #:
I said

Do you use convolution, or raw predictors run over the history?

 
СанСаныч Фоменко #:

Two enemies: overfitting and looking ahead.

Much has been written about overfitting: the model is too "similar" to the original series. Everyone is familiar with it, since overfitting is the usual result in the tester.

What is "looking ahead"?

Obviously: using information in decision making that was not known at the moment the decision was made.

IMHO, the main reason "looking ahead" occurs so often is that by its very nature a trading system is an online algorithm, while its creation is an offline algorithm.
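One common way look-ahead sneaks into offline model building can be sketched as follows. A generic Python illustration (not from the thread), assuming z-score normalisation of a price series:

```python
import numpy as np

rng = np.random.default_rng(2)
prices = 100 + np.cumsum(rng.normal(size=500))

# Look-ahead: z-scoring the whole series at once uses the mean/std of the
# FUTURE - information a live (online) system would not have had
z_lookahead = (prices - prices.mean()) / prices.std()

# Online-safe: at each bar, use only statistics of the data seen so far
z_online = np.zeros_like(prices)
for t in range(1, len(prices)):
    past = prices[: t + 1]
    s = past.std()
    z_online[t] = (prices[t] - past.mean()) / s if s > 0 else 0.0

# The two disagree, because the full-sample statistics
# "know" things the online version cannot
print(np.abs(z_lookahead - z_online).max())
```

A model trained on the look-ahead version can look great in the tester and fall apart live, which is exactly the offline-creation vs online-execution mismatch described above.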

Reason: