Machine learning in trading: theory, models, practice and algo-trading - page 2411

 
 
mytarmailS:

very cool lecture.


https://www.youtube.com/watch?v=l30ejdQKGBg

Back in the spring I already suggested some approaches for adding/removing features, including by groups; I hoped to get Maksim interested, but alas. As I wrote earlier, this approach works, but for now I have it implemented in semi-automatic mode, purely for experiments, whereas I need an implementation in R or Python that runs in a loop, the essence of which is to create a new training task after analyzing the results of the previous training.
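The loop described above is not hard to prototype. Below is a minimal sketch in Python of one possible version, assuming scikit-learn and a synthetic dataset; the model, the data, and the greedy add-one-feature rule are placeholder choices for illustration, not the actual scheme being discussed.

```python
# Sketch of a feature add/remove loop: after each training run the
# results are analyzed and a new training task (feature set) is created.
# Model, dataset and stopping rule are illustrative placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf

while remaining:
    # Analyze training results: try adding each remaining feature.
    scores = {}
    for f in remaining:
        cols = selected + [f]
        model = RandomForestClassifier(n_estimators=20, random_state=0)
        scores[f] = cross_val_score(model, X[:, cols], y, cv=3).mean()
    f_best = max(scores, key=scores.get)
    if scores[f_best] <= best_score:
        break  # no candidate improves the score: stop creating new tasks
    best_score = scores[f_best]
    selected.append(f_best)
    remaining.remove(f_best)

print("selected features:", selected, "cv accuracy: %.3f" % best_score)
```

The same skeleton works for removal (start from all features and drop one per round) or for trying whole groups per iteration.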

But the FRiS-Stolp method advertised in the video is interesting to try; I just don't know whether there is an implementation of it in R or Python.
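For reference, the heart of FRiS-Stolp is the rival similarity function (FRiS), which is easy to write even without a ready-made package. The sketch below shows only that function on toy points, not the full stolp-selection algorithm from the lecture.

```python
# Sketch of the FRiS (Function of Rival Similarity) value that the
# FRiS-Stolp method is built on. Only the core formula is shown here;
# selecting the "stolps" (class prototypes) is a separate algorithm.
import numpy as np

def fris(z, a, b):
    """Similarity of z to candidate a in rivalry with b, in [-1, 1]."""
    ra = np.linalg.norm(z - a)  # distance to the candidate
    rb = np.linalg.norm(z - b)  # distance to the rival
    return (rb - ra) / (rb + ra)

z = np.array([0.0, 0.0])
a = np.array([1.0, 0.0])    # nearby candidate
b = np.array([10.0, 0.0])   # distant rival
print(fris(z, a, b))  # 9/11 ≈ 0.82: z is far more similar to a than to b
```

Values near +1 mean z clearly belongs with a, near -1 with the rival; FRiS-Stolp picks prototypes that maximize this margin over a class.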

 
Aleksey Vyazmikin:

Back in the spring I already suggested approaches for adding/removing features.

These methods are a dime a dozen, so I don't know what to suggest here.

Aleksey Vyazmikin:

But the FRiS-Stolp method is interesting to try; I just don't know whether there is an implementation of it in R or Python.

I don't understand it either ))))

There's such a thing as google ;)

 
Aleksey Vyazmikin:

Back in the spring I already suggested some approaches for adding/removing features, including by groups; I hoped to get Maksim interested, but alas. As I wrote earlier, this approach works, but for now I have it implemented in semi-automatic mode, purely for experiments, whereas I need an implementation in R or Python that runs in a loop, the essence of which is to create a new training task after analyzing the results of the previous training.

But the FRiS-Stolp method advertised in the video is interesting to try; I just don't know whether there is an implementation of it in R or Python.

There's the standard feature importance, it's quite enough
 
mytarmailS:

These methods are a dime a dozen, so I don't know what to suggest here.

I propose checking the effectiveness of these methods on trading-related tasks.

mytarmailS:

I do not understand it either ))))

there is such a thing as google ;)

Well, what is all this snark for?

I used a search engine and even found some code on GitHub, but that didn't make it clear to me whether it works or not.

That's why it would be interesting to hear from those who understand this, and to work out possible ways to investigate the question together.

I am for constructive work, not posturing.

 
Maxim Dmitrievsky:
There is a standard feature importance, it is quite enough

Importance is a statistic based on how often the algorithm chooses particular predictors when building the tree. This indicator tells you what the model is made of. Enumerating predictors allows you to build other models and find new dependencies and relationships, which may turn out to be stronger after a few splits.
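For concreteness, here is what that split-based statistic looks like in practice: a sketch using scikit-learn's `feature_importances_` (an impurity-based variant) on synthetic data. Nothing here is specific to either poster's setup.

```python
# Sketch of the "standard feature importance" under discussion: a
# statistic over how much each predictor contributes to the tree splits.
# scikit-learn exposes it on fitted tree ensembles as feature_importances_.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=8,
                           n_informative=3, random_state=1)
model = RandomForestClassifier(n_estimators=50, random_state=1).fit(X, y)

imp = model.feature_importances_  # non-negative, sums to 1
for i in np.argsort(imp)[::-1]:
    print("feature %d: %.3f" % (i, imp[i]))
```

Ranking by this score answers "what is the model made of", but, as noted above, it says nothing about alternative models that a different predictor subset might produce.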

 
Aleksey Vyazmikin:

Importance is a statistic based on how often the algorithm chooses particular predictors when building the tree. This indicator tells you what the model is made of. Enumerating predictors allows you to build other models and find new dependencies and relationships, which may turn out to be stronger after a few splits.

Your predictors are a pile of indicators accumulated over a trading career, hence the strong desire to somehow tidy up that mess. I don't have such a problem, but I do understand that this is a road to nowhere.
 
Maxim Dmitrievsky:
Your predictors are a pile of indicators accumulated over a trading career, hence the strong desire to somehow tidy up that mess. I don't have such a problem, but I do understand that this is a road to nowhere.

Even if all my predictors were based on the standard indicators included in the package, which is not the case at all, they are still derived from price and can carry useful information, and many indicators are not subject to non-stationarity.

In fact, I solved the problem of selecting predictors in a different way, but finding the best combinations remains an open and interesting question.

 
Aleksey Vyazmikin:

I propose checking the effectiveness of these methods on trading-related tasks.

Well, what is all this snark for?

I used a search engine and even found some code on GitHub, but that didn't make it clear to me whether it works or not.

That's why it would be interesting to hear from those who understand this, and to work out possible ways to investigate the question together.

I am for constructive work, not posturing.

Alexey, if you had studied Python or R and tried coding in them... Believe me, a thousand questions would disappear...

What's the point of checking the effectiveness of feature selection methods if they are already tested and working?

The problem is not in feature selection, it's in the features themselves: if you feed 10 indicators as input, you can select until you're blue in the face and you'll still get the same result from ANY selection algorithm...


Did you actually listen to the video? They select among tens of thousands of features, and they mention MGUA (the group method of data handling, GMDH), where they even talk about generating and enumerating billions of features.

That's what we should be talking about: systems that generate millions of ideas and check them automatically. That is the essence, those are original solutions; feature selection is only the small final part of the process and there's nothing interesting in it. You take any algorithm and go ahead; there's nothing to discuss, it's simply not interesting.
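A toy illustration of that kind of pipeline: mechanically generate candidate features from a raw price series, then filter them with an automatic check. The window list, the SMA-difference construction, and the correlation threshold are arbitrary choices for the sketch, not a recommendation.

```python
# Sketch of "generate features and check them automatically":
# build many candidate features from price mechanically, then keep
# only those passing a simple automatic filter. All choices here
# (windows, transform, threshold) are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(0)
price = rng.normal(0, 1, 1000).cumsum() + 100          # synthetic price
target = np.sign(np.diff(price, append=price[-1]))     # next-step direction

def sma(x, w):
    """Simple moving average via convolution (same length as input)."""
    return np.convolve(x, np.ones(w) / w, mode="same")

# Generate candidates: SMA differences over every pair of windows.
candidates = {}
for w1, w2 in itertools.combinations([2, 3, 5, 8, 13, 21, 34], 2):
    candidates[f"sma{w1}-sma{w2}"] = sma(price, w1) - sma(price, w2)

# Automatic check: keep features with non-trivial correlation to target.
kept = {name: f for name, f in candidates.items()
        if abs(np.corrcoef(f, target)[0, 1]) > 0.05}
print(f"generated {len(candidates)} features, kept {len(kept)}")
```

Scaling this from 21 candidates to millions is mostly an engineering problem (richer generators, cheaper filters, out-of-sample checks), which is exactly the part the post argues is worth discussing.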

 
In general, it is useful to study these algorithms in Python and read a couple of books with examples. Many questions will disappear on their own.