Machine learning in trading: theory, models, practice and algo-trading - page 638

 
Mihail Marchukajtes:

To find the cross-entropy you first need to find the conditional entropy of the two events, which is actually what I'm doing now....

And estimating the model's entropy is needed once the model is running on live feedback. Having produced a signal, we can calculate the entropy of that signal and draw conclusions from it. The signal's entropy has risen - well, to hell with it; it has fallen - that's our steam locomotive....

For trend trading, yes, that's right. Mihail, let's move faster, because my father-in-law is already shaking his fists in my face in his rush for Forex money and won't let me concentrate on all this entropy/negentropy...
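As an aside, here is a minimal sketch of the signal-entropy monitoring idea from the quote above, in base R (the window length, the bin count and the rolling_entropy helper are my own illustrative assumptions, not anything posted in the thread):

# Rolling Shannon entropy of a model's output signal (illustrative sketch).
# `signal` is a numeric vector of recent model outputs; we discretize it
# into fixed bins and compute the empirical entropy over a sliding window.
rolling_entropy <- function(signal, window = 100, bins = 10) {
  breaks <- seq(min(signal), max(signal), length.out = bins + 1)
  sapply(seq(window, length(signal)), function(i) {
    x <- signal[(i - window + 1):i]
    p <- table(cut(x, breaks = breaks, include.lowest = TRUE))
    p <- p[p > 0] / sum(p)              # empirical bin probabilities
    -sum(p * log2(p))                   # Shannon entropy in bits
  })
}

set.seed(1)
sig <- c(rnorm(300), runif(300, -3, 3)) # regime change: entropy should rise
h <- rolling_entropy(sig)
plot(h, type = "l", ylab = "entropy, bits")

A rising curve means the signal is getting noisier; a falling one means it has become more regular.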

 
Mihail Marchukajtes:

To find the cross-entropy you first need to find the conditional entropy of two events, which is what I'm doing now....

Your cross-entropy is evidently something different from mine, so I can't help you there. Though I have my own home-grown approach too; I won't argue about which is better :)

Have a look through the R packages on the subject; it looks like https://cran.r-project.org/web/packages/EMVC/EMVC.pdf would suit you for finding both entropy and cross-entropy and for filtering out predictors.

 
Dr. Trader:


I haven't studied information theory, but I have some experience with entropy in R.

Essentially, the greater the entropy, the more chaos there is in the data. A predictor with high entropy is only weakly related to the target, while low entropy means the target is easily determined from the predictor.
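To make that concrete, here is a small sketch using the infotheo package from CRAN (the data and the bin count are invented purely for illustration); condentropy() gives the conditional entropy H(target | predictor):

library(infotheo)   # install.packages("infotheo")

set.seed(1)
n <- 1000
good   <- runif(n)                    # predictor that determines the class
target <- ifelse(good > 0.5, 1, 2)
noise  <- runif(n)                    # predictor unrelated to the class

gd <- discretize(good,  nbins = 10)   # infotheo works on discretized data
nd <- discretize(noise, nbins = 10)

condentropy(target, gd)  # near 0: the class is easy to read off the predictor
condentropy(target, nd)  # near entropy(target): the predictor tells us nothing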

Negentropy is the opposite of entropy; it adds no new knowledge beyond entropy and is introduced purely for convenience. If a predictor's entropy is large, its negentropy is small; if the entropy is small, the negentropy is large. It's like heat and cold or light and darkness - one flows smoothly into the other.

But that's not all - there is also cross-entropy, which describes how two predictors together relate to the target: high cross-entropy is bad, low cross-entropy is good. In machine learning it often happens that two predictors, each with high entropy, give low cross-entropy when used together, which is exactly what we all need. Each predictor on its own may be poorly related to the target (high entropy for both), yet together they can hit the bullseye (low cross-entropy). So you cannot simply measure the entropy of each predictor separately and pick a set from those estimates; you have to select the whole set of predictors for low cross-entropy. I, for one, don't look at their individual entropies at all.

Here are some examples.

1) A predictor with high entropy. It makes it impossible to predict the target class at all.

2) A predictor with low entropy. If you look closely: when the value of the predictor is from 0 to 0.25, or less than 0.4, the class = 1; otherwise the class = 2. This is a very handy predictor to use in ML.

3) Two predictors, each with high entropy; a model will never be able to predict the target using only the first or only the second. But plotting them together (the X-axis is the value of the first, the Y-axis the value of the second), we immediately see that jointly they give very good information about the target class (same sign for both predictors = class 1, different signs = class 2). This is an example of low cross-entropy.
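Example 3 - and the point about selecting predictors as a set - is easy to reproduce. A sketch along those lines, again with infotheo; note that what the thread calls "cross-entropy" is measured here via mutual information and conditional entropy, which is my own reading of what is meant:

library(infotheo)

set.seed(2)
n  <- 1000
x1 <- runif(n, -1, 1)
x2 <- runif(n, -1, 1)
class <- ifelse(sign(x1) == sign(x2), 1, 2)  # same sign = class 1

d1 <- discretize(x1, nbins = 10)
d2 <- discretize(x2, nbins = 10)

mutinformation(d1, class)               # ~0: x1 alone says nothing
mutinformation(d2, class)               # ~0: x2 alone says nothing
condentropy(class, data.frame(d1, d2))  # ~0: together they pin the class down
entropy(class)                          # ~log(2) nats, for comparison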


Maybe just the principal components?

 
I remember! Dennis Kirichenko was the first to suggest taking entropy/negentropy into account. I am practically in tears at the impending happiness of money.
 

http://padabum.com/d.php?id=223567

no thanks

Reading again... when will it end?

Обучение с подкреплением (Reinforcement Learning) - padabum.com
Reinforcement learning is one of the most actively developing areas in the creation of artificial intelligence systems. It is based on an agent trying to maximize the reward it receives while acting in a complex environment with a high degree of uncertainty. The book gives a comprehensive and clear exposition of the ideas...
 
Maxim Dmitrievsky:

http://padabum.com/d.php?id=223567

no thanks

Reading again... when will it end?

Thanks for the book.

No, links like that, with an installer attached, I don't download)).

 
Yuriy Asaulenko:

Thanks for the book.

No, links like that, with an installer attached, I don't download)).

It all downloaded fine for me, without any installer

The PDF is blank; I can convert it to DjVu and send it.
 
SanSanych Fomenko:

Maybe just the principal components?

Principal components are calculated without any reference to the target. You can find the principal components, but whether they will be useful for predicting the desired target is not known in advance.

Cross-entropy, by contrast, can be calculated against a specific target, and the result will tell you which predictors should be removed because they get in the way.
I want to try the EMVC package - I wish I had noticed it earlier. If it works, I will post examples of its usage here later.
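Meanwhile, the point about principal components ignoring the target is easy to demonstrate. A sketch with base R's prcomp() plus infotheo (the simulated data is made up so the effect is obvious):

library(infotheo)

set.seed(3)
n  <- 1000
x1 <- rnorm(n, sd = 10)   # high variance, irrelevant to the target
x2 <- rnorm(n, sd = 1)    # low variance, but it drives the target
target <- ifelse(x2 > 0, 1, 2)

pc <- prcomp(cbind(x1, x2))  # prcomp() never sees the target at all

# The first PC chases variance (x1) and carries no information about the
# target; the useful signal hides in the last component.
mutinformation(discretize(pc$x[, 1], nbins = 10), target)  # ~0
mutinformation(discretize(pc$x[, 2], nbins = 10), target)  # large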

 
Maxim Dmitrievsky:

It all downloaded fine for me, without any installer

Sorry, I pressed the wrong button - "download" - and behind it was an .exe.

All right.

 
Yuriy Asaulenko:

Sorry, I pressed the wrong "download" button. It's OK.

That's just an ad button - time to get more streetwise on pirate sites))
