Machine learning in trading: theory, models, practice and algo-trading - page 2745

 
Maxim Dmitrievsky #:
It all started with people wanting to cooperate )) I started looking into it and found out that everyone dismisses what everyone else does. Plus they make up new definitions. In the end no one understood anything. I generally like Sanych's approach, so I asked for specifics. And for definitions too: what are "relation" and "connection", if not correlation?

Obviously he cherishes his know-how and doesn't reveal details.

The way I see it, there are two types of connection.

The first is causal: it is determined by a priori information about the object of research, from knowledge of the subject area, rather than by any calculations.

The second type is probabilistic dependence, which can be computed a posteriori from data obtained by observing the object's behaviour. This type includes correlation, deterministic dependence (as an extreme case), and so on, including dependence described by copulas and other methods. The basis for studying this type is the assumption that a joint distribution exists for the predictors and the target.
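A toy illustration of why correlation is only one member of this second type (my own example, not from the thread): for y = x² the Pearson correlation is essentially zero even though y is an almost deterministic function of x, while a crude histogram estimate of mutual information, built from the joint distribution, detects the dependence.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x**2 + 0.1 * rng.normal(size=5000)   # strong nonlinear, near-deterministic link

r = np.corrcoef(x, y)[0, 1]              # Pearson correlation: near zero

def mutual_info(a, b, bins=20):
    # crude plug-in estimate of mutual information (in nats) from a 2D histogram
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab = pab / pab.sum()
    pa = pab.sum(axis=1, keepdims=True)   # marginal of a
    pb = pab.sum(axis=0, keepdims=True)   # marginal of b
    nz = pab > 0
    return float((pab[nz] * np.log(pab[nz] / (pa @ pb)[nz])).sum())

mi = mutual_info(x, y)                   # clearly positive: dependence detected
```

Nothing here depends on the functional form being quadratic; any dependence expressible through the joint distribution shows up the same way, which is the point of the "second type".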

 
Aleksey Nikolayev #:

The way I see it, there are two types of connection.

The first is causal: it is determined by a priori information about the object of research, from knowledge of the subject area, rather than by any calculations.

The second type is probabilistic dependence, which can be computed a posteriori from data obtained by observing the object's behaviour. This type includes correlation, deterministic dependence (as an extreme case), and so on, including dependence described by copulas and other methods. The basis for studying this type is the assumption that a joint distribution exists for the predictors and the target.

Correct. In the absence of a priori assumptions, the second type is used. I wonder how Sanych sees it. For example, how the targets are constructed: they can simply be fitted to any feature, and vice versa.

My approach is well suited to such manipulations; perhaps something can be improved.
 
mytarmailS #:

There is a PCA that takes the target into account; it will highlight the components that characterise the target,

it will never do that. PCA doesn't account for the relationship to the target - it will just build an orthogonal projection in the feature space that explains the maximum variance... (I'm just not sure whether the rotation should be done manually [I think there was something about it in the Statistica package - some button] or whether it is figured out automatically - other libraries may already have rotation built in).
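A minimal sketch of that point, on toy data of my own devising (not from the thread): the target depends entirely on a low-variance feature, yet PCA - which never sees the target - puts its first component on the high-variance noise feature.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n = 1000
informative = rng.normal(scale=0.1, size=n)   # low variance, determines the target
noise = rng.normal(scale=10.0, size=n)        # high variance, pure noise
X = np.column_stack([informative, noise])
y = (informative > 0).astype(int)             # target never shown to PCA

pca = PCA(n_components=1).fit(X)              # fit uses X only
w = pca.components_[0]                        # loadings of the first component
# the component is dominated by the high-variance noise feature,
# not by the feature that actually explains y
```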


mytarmailS #:

but the sad thing is that the target is a subjective variable, and it will "float" as soon as the train period is over... and how is that different from ordinary supervised learning?

Yes, sadly, you have your own interpretation of the purpose of this method and your own ways and purposes of using it... I don't know how it can help you with such views on it... - so there's no need to ask me anything further either.
 
JeeyCi #:
1) it will never do that - it doesn't account for links to the target!!!! - it will just build an orthogonal projection in the feature space that explains the maximum variance... (I'm just not sure whether the rotation should be done manually [I think there was something about it in the Statistica package - some button] or whether it is figured out automatically - other libraries may already have rotation built in).


2) yes, sad - you have your own interpretation of the purpose of this method and your own ways and purposes of using it... I don't know how it can help you with such views on it... - so there's no need to ask me anything further either.

1) I'm saying that there are PCAs that take the target into account.

2) I didn't ask anything, I asserted it.

 

mytarmailS #:

1) I'm saying that there are PCAs that take the target into account.

2) I wasn't asking anything, I was asserting.

those are already algorithms that artificially tie the principal components to the target - different algorithms (sometimes implemented differently in different libraries)... you might as well use PLS... how you bind the target to the features is your own business; it doesn't turn PCA into a "supervised" method (the teacher is bound separately - in fact, in any way you like) -- again we're drifting from the point into a matter of taste (libraries).

I'm not even sure you know whether your library does the rotation or whether you have to code it yourself... so unverified and incomplete/inaccurate statements are of no interest to anyone here, especially when they change the meaning and essence of phenomena/objects, after which you shout that you don't understand the words... apparently you don't even understand your own words accurately.

 
JeeyCi #:

I'm not even sure you yourself know whether your library does the rotation or whether you have to code it yourself...

don't judge others by yourself

(I'm just not sure whether the rotation should be done manually [I think there was something about it in the Statistica package - some button] or whether it is figured out automatically - in other libraries it may be built in).

JeeyCi #:

so unverified and incomplete/inaccurate statements are of no interest to anyone here

And if you're not sure, then you have no right to assert what has or hasn't been checked, or who is interested and who isn't... if your head is in order, of course.


ps And whether there is a rotation in the matrix is easy to check if there is understanding, but apparently there is a problem with that...

 

So check before lashing out at people - and if you don't understand the first time, please direct your rude/inaccurate tone at your own code and your own results instead of looking for someone to blame.

 
How is this different from LDA? On price quotes you get a tight fit that doesn't work on new data, even though it reduces the error of a model trained on it to almost zero. A similar situation may arise with Sanych's approach. The principle is the same - fitting the flies to the cutlets.

But a sliding window should bring some benefit if statistics are collected over it. I can't picture it yet.
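The "almost zero error in-sample, useless out-of-sample" situation is easy to reproduce on synthetic noise (a sketch of my own using scikit-learn's LDA as a stand-in, with made-up random data): when there are many features relative to samples, LDA nearly memorises random labels while predicting no better than a coin flip on new data.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(4)
n_train, n_test, p = 100, 1000, 90   # many features relative to samples

# pure noise: features and labels are completely independent
X_train = rng.normal(size=(n_train, p))
y_train = rng.integers(0, 2, size=n_train)
X_test = rng.normal(size=(n_test, p))
y_test = rng.integers(0, 2, size=n_test)

lda = LinearDiscriminantAnalysis().fit(X_train, y_train)
train_acc = lda.score(X_train, y_train)  # near-perfect fit to the training noise
test_acc = lda.score(X_test, y_test)     # ~0.5: no real predictive power
```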
 

Some kind of spike in posts!


Once again.

I rank predictors by Predictive ability.

I use my own algorithm, as the algorithms from the numerous packages are too slow - mine is less accurate but very fast.

Predictive ability is NOT correlation and NOT the result of the feature selection that models produce.

Predictive ability is an informational relationship, and it is NOT:

1. Correlation, which is the "similarity" of one stationary series to another. It always takes some value - there is no value meaning "no relationship" - so you can easily use correlation to find a relationship between the teacher and coffee grounds.

2. Feature selection, which is the frequency with which features are used when building models. If we take predictors unrelated to the teacher, we still get a ranking of features.

An analogue of my understanding of "predictive ability" is, for example, caret::classDist(), which computes Mahalanobis distances from samples to each class's centre of gravity. Or woeBinning. There are many approaches and many packages in R; more are based on information theory.
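The classDist idea can be sketched in a few lines (my own Python approximation of the concept, not the R package's exact algorithm): pool the within-class covariance and compute the squared Mahalanobis distance from each sample to each class centroid.

```python
import numpy as np

def class_distances(X, y):
    """Squared Mahalanobis distance from every sample to each class centroid,
    using the pooled within-class covariance. A rough Python analogue of the
    idea behind R's caret::classDist (a sketch, not that package's exact code)."""
    classes = np.unique(y)
    # pool the within-class scatter so between-class spread doesn't inflate it
    centered = np.vstack([X[y == c] - X[y == c].mean(axis=0) for c in classes])
    inv = np.linalg.pinv(np.cov(centered, rowvar=False))
    out = np.empty((len(X), len(classes)))
    for j, c in enumerate(classes):
        d = X - X[y == c].mean(axis=0)
        out[:, j] = np.einsum('ij,jk,ik->i', d, inv, d)
    return out

# toy check: two well-separated Gaussian classes
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0.0, size=(200, 3)), rng.normal(4.0, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)
D = class_distances(X, y)
accuracy = (D.argmin(axis=1) == y).mean()  # samples sit nearest their own class
```

If the distances discriminate the classes well on held-out data, the features carry predictive ability in this sense; on irrelevant features the two distance columns become indistinguishable.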

 
JeeyCi #:

so check before you do.

Check what? Whether you have a rotation???

This isn't even funny anymore.
