Machine learning in trading: theory, models, practice and algo-trading - page 1695

 
Kesha Rutov:

Yes, it's a reasonable approach: a bird in the hand, so to speak, and to hell with the crane in the sky, that's too much trouble)). But IMHO it's easier to get a job than to fleece small-time simpletons by selling them a sham "shovel"; we both know nobody would ever sell a real one)))

But I'm not judging; in principle I even respect fraud, as long as it's big, not petty.

Super! Your answer captures the essence of many people's position: you have to convince yourself that everyone else is just peddling shovels, otherwise you would have to leave your comfort zone.

About the job, I agree: the effort I spent on this project would have earned more as a salary.

 

Kesha Rutov:

But I'm not judging; in principle I even respect fraud, as long as it's big, not petty.

Would it still be fraud if someone were to:
- make an Expert Advisor that shows the neural network working in real time, on the real market;
- send the neural network's signals to a public Telegram channel;
- write an article, now at 6K views, describing the steps of solving the problem?

 
Kesha Rutov:

Yes, it's a reasonable approach: a bird in the hand, so to speak, and to hell with the crane in the sky, that's too much trouble)). But IMHO it's easier to get a job than to fleece small-time simpletons by selling them a sham "shovel"; we both know nobody would ever sell a real one)))

But I'm not judging; in principle I even respect fraud, as long as it's big, not petty.

Kesha, I was impressed by your answer about trend and flat, and I still can't get it out of my head.

And at last I found the answer.

Why do you think trend and flat exist at all, and how can one trigger the appearance of one or the other?

Judging by your post and intentions, you don't know...
 

Guys, forgive me, but it turns out I'm that stupid. Force majeure with the RStudio update and its packages: after updating, the script started throwing a fierce error that I could not overcome. All because it was poorly written from the start, so natural selection took its course. :-( So I thought: since the time has come, I'd better buckle down, fiddle with matrices, vectors and the like, and reorganize the script so that it saves the training file itself, without round trips through Excel. As the saying goes, the beast runs to the catcher. In the end I read the documentation and built a concrete example from the tutorial, but with my own parameters, and the errors kept coming. Not one of the examples could be applied to my own data. So if you have links to good tutorials, specifically an expanded command reference, a reference book but for dummies, don't keep them to yourself. Share!!!

I can't even create an elementary matrix of vectors, not because I don't understand the basics, but because R doesn't like this and doesn't like that. I get errors all the time... I'm very sad :-(

And the main thing: it started complaining about one of the variables, saying it's of the wrong type. It used to be that type, and suddenly it's the wrong one, even though R does automatic type conversion. What can I say about that :-(

 
Mihail Marchukajtes:

What can I say :-(

lamer ))

 
Aleksey Vyazmikin:

I studied CatBoost, so I will talk about it.

The recommended tree depth is 4-6 splits, and that is the range I am trying.

Predictor splitting is done by one of three algorithms you can choose from; a so-called grid is created.

The splitting results are interesting to pull out and examine for yourself. And does AlgLib divide predictors into equal parts when building a tree for the forest?
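The "grid" discussed here is just a set of candidate split borders per predictor. A minimal pure-Python sketch of two common ways such a grid can be built, equal-width intervals versus equal-frequency quantiles; this is illustrative only, not CatBoost's or AlgLib's actual code:

```python
# Illustrative sketch: two ways to build a grid of split borders
# for a single predictor (not CatBoost's or AlgLib's internals).

def uniform_borders(values, n_borders):
    """Equal-width grid: split the value range into equal intervals."""
    lo, hi = min(values), max(values)
    step = (hi - lo) / (n_borders + 1)
    return [lo + step * (i + 1) for i in range(n_borders)]

def quantile_borders(values, n_borders):
    """Equal-frequency grid: each bucket holds roughly the same number of points."""
    s = sorted(values)
    n = len(s)
    return [s[(i + 1) * n // (n_borders + 1)] for i in range(n_borders)]

values = [1, 2, 2, 3, 3, 3, 10, 50, 100]
print(uniform_borders(values, 2))   # [34.0, 67.0]
print(quantile_borders(values, 2))  # [3, 10]
```

On skewed data (as above), the uniform grid wastes borders on the empty upper range, while the quantile grid places them where the observations actually are; that is why the choice of grid-building algorithm matters.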

I found a way to view trees in Python: https://github.com/catboost/tutorials/blob/master/model_analysis/visualize_decision_trees_tutorial.ipynb
But I have a problem rendering the graph; apparently my graphviz module is outdated.

You can look at the JSON: https://github.com/catboost/tutorials/blob/master/model_analysis/model_export_as_json_tutorial.ipynb
This is what it looks like for a symmetric tree of depth 2:
"splits": [
  {
    "border": 4.550000190734863,
    "float_feature_index": 12,
    "split_index": 15,
    "split_type": "FloatFeature"
  },
  {
    "border": 2.423949956893921,
    "float_feature_index": 7,
    "split_index": 7,
    "split_type": "FloatFeature"
  }
]
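A small stdlib-only sketch of pulling the borders out of such a dump; the field names are taken from the snippet above, and the exact file layout may differ between CatBoost versions:

```python
import json

# Minimal sketch: list the split borders from a CatBoost-style JSON dump.
# Field names follow the snippet above; layout may vary between versions.
model_json = '''
{
  "splits": [
    {"border": 4.550000190734863, "float_feature_index": 12,
     "split_index": 15, "split_type": "FloatFeature"},
    {"border": 2.423949956893921, "float_feature_index": 7,
     "split_index": 7, "split_type": "FloatFeature"}
  ]
}
'''

tree = json.loads(model_json)
for split in tree["splits"]:
    print(f'feature {split["float_feature_index"]}: border {split["border"]}')
```

Collecting the borders per `float_feature_index` over all trees would reconstruct exactly the grid asked about above.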


 
mytarmailS:

lamer ))

Yes, I would even say a sucker, I'm not afraid of the word. But a sucker who ultimately achieves his goal, whatever the means. Yes, I killed a day on this, but I made R export the final file for the optimizer itself. I used to have to drag the data through Excel; now I don't. EHHHH, I'LL LIVE!!!
 
elibrarius:

I found a way to view trees in Python.

It's beautiful. But I'm interested in seeing the grid of predictor ranges that is then searched over.

 
Aleksey Vyazmikin:

It's beautiful. But I'm interested in seeing the grid breakdown of the predictor ranges that is then searched over.

I wonder why you don't use vtreat for R? It identifies the levels in the input data relative to the target and thereby selects the predictors that are significant for the target. Besides classification, there is also an option for numeric prediction. Honestly, I don't know what I would do without it...
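The core idea behind that kind of level scoring can be shown in a few lines. This is a pure-Python illustration of the concept, not vtreat's API or its actual statistics; the function name and the toy data are made up for the example:

```python
# Sketch of target-relative level scoring (the idea vtreat implements,
# NOT its API): measure how far each level of a categorical predictor
# shifts the target mean away from the overall base rate.

from collections import defaultdict

def level_impacts(levels, target):
    """Per level: deviation of the target mean from the global base rate."""
    base = sum(target) / len(target)
    sums = defaultdict(float)
    counts = defaultdict(int)
    for lvl, y in zip(levels, target):
        sums[lvl] += y
        counts[lvl] += 1
    return {lvl: sums[lvl] / counts[lvl] - base for lvl in counts}

levels = ["a", "a", "a", "b", "b", "c"]
target = [1, 1, 0, 0, 0, 1]
print(level_impacts(levels, target))
# base rate 0.5; "a" shifts it up, "b" down, "c" up
```

Levels whose impact is indistinguishable from noise (vtreat also checks significance, omitted here) mark predictors that carry no information about the target and can be dropped.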
 

Another little sorrow of the day, for those who are waiting for 2000 on RTS :-)

