Machine learning in trading: theory, models, practice and algo-trading - page 1240

 
Maxim Dmitrievsky:

We're not in it for the money, but for the idea)) money is made in much more trivial ways

Yeah, and what's the point of the idea? Sure, there's machine learning, but miracles don't happen; it's just a more sophisticated indicator and nothing more.

Maybe it's worth buying some ready-made calculations, but I don't think they would be worth much money either.

I don't even know any more whether it's better to trade forex or futures.

 
forexman77:

Yeah, and what's the point of the idea? Sure, there's machine learning, but miracles don't happen; it's just a more sophisticated indicator and nothing more.

I don't even know any more whether it's better to trade forex or futures.

It takes a lot of nerves and effort, and the payoff is like trying to get meat off a mosquito.

This isn't a job, don't even think of it as one: you sit around keeping your backside warm and suffering... you'd be better off driving a taxi than going into this industry.

 

To put it even more briefly, imagine that forex is a mountain you have to climb, but it's practically a smooth mountain with nothing to cling to.

And a 1-2% improvement in the ML model gives you practically nothing: there are no real predictors there, only noise, and everything else is just overfitting.

 
Maxim Dmitrievsky:

In short, ALGLIB has classification error and logloss... The logloss values don't make any sense at all; the forest's classification error drops to zero on the training sample at r > 0.8, with OOB around 0.2.

That's why I made the training sample small, so there is at least some error, but it's still small. I don't know how it compares with Python's results.

Looks more like overfitting... Trees can memorize the input data completely. You can reduce r, and there seems to be nothing else to tweak in ALGLIB. In xgboost, for example, you can limit the tree depth.
That's why I didn't use forests in the beginning and switched to neural networks instead. But the networks have their own problems.
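
To make the overfitting pattern described above concrete, here is a minimal sketch; scikit-learn's forest is my stand-in for ALGLIB's, not the code actually discussed in the thread. On pure-noise data a deep forest scores near 1.0 on the training sample while its out-of-bag score stays near chance, and capping tree depth, as xgboost allows, removes most of the memorization.

# Sketch only: scikit-learn used as a stand-in for the ALGLIB forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))      # pure-noise "predictors"
y = rng.integers(0, 2, size=2000)    # labels unrelated to X

deep = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
deep.fit(X, y)
print("deep forest:    train acc =", deep.score(X, y), " OOB acc =", deep.oob_score_)

# Limiting tree depth (what xgboost lets you do) curbs the memorization.
shallow = RandomForestClassifier(n_estimators=100, max_depth=3, oob_score=True, random_state=0)
shallow.fit(X, y)
print("shallow forest: train acc =", shallow.score(X, y), " OOB acc =", shallow.oob_score_)

The deep forest's training score comes out close to 1.0 even though there is nothing to learn, which is exactly the kind of memorization being complained about.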
 
Vizard_:

Hilarious... Maximka, that's why he's so stoned))) Nah, I already wrote about it here. In short: logloss is a penalty. Count in accuracy instead; it's easy to interpret, which is why it's used.

Logloss can be bent a little, and if you're lucky you can squeeze a percentage point or two of accuracy out of it. I showed Fa; there's a library for it in R...

I always forget half of it, and then a year later I'm surprised by the same thing all over again. I'll revisit it :) There's some nonsense in ALGLIB, it's not like the serious libraries, and the forest there is home-made.
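
For anyone following along, here is a small sketch of the difference between the two metrics being argued about (scikit-learn's metric functions assumed): logloss penalizes confident wrong probabilities, while accuracy only counts hard right/wrong decisions, which is why it is the easier one to interpret.

# Sketch only: log-loss (a penalty on probabilities) vs. accuracy (a count of correct calls).
from sklearn.metrics import log_loss, accuracy_score

y_true = [1, 0, 1, 1, 0]
p_hat  = [0.9, 0.2, 0.6, 0.95, 0.4]   # predicted probability of class 1

print("log-loss:", log_loss(y_true, p_hat))                            # lower is better, unbounded above
print("accuracy:", accuracy_score(y_true, [p > 0.5 for p in p_hat]))   # share of correct decisions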

 
Vizard_:

Hilarious... Maximka, that's why he's so stoned))) Nah, I already wrote about it here. In short: logloss is a penalty. Count in accuracy instead; it's easy to interpret, which is why it's used.

Logloss can be bent a little, and if you're lucky you can squeeze a percentage point or two of accuracy out of it. I showed Fa; there's a library for it in R... As for what's in ALGLIB, I don't know; something is screwed up in your settings...

turn down the greed on the rattle...

ALGLIB only has r for regularization.
 

How can you apply accuracy to markets at all? In my opinion it isn't applicable; the classes there are unbalanced to begin with.

Your accuracy can look close to 100% while everything gets dumped into the majority class; it's about the worst possible metric. It's better to look at the confusion matrix; it shows exactly how the classes are separated.
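
A minimal illustration of that point (scikit-learn assumed, made-up numbers): a "model" that always predicts the majority class looks fine by accuracy, and only the confusion matrix shows that every minority case was missed.

# Sketch only: why accuracy is misleading on unbalanced classes.
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

y_true = np.array([0] * 95 + [1] * 5)   # 95% majority class
y_pred = np.zeros_like(y_true)          # always predict the majority class

print("accuracy:", accuracy_score(y_true, y_pred))   # 0.95, looks great
print(confusion_matrix(y_true, y_pred))              # bottom row: all 5 minority cases missed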

 
elibrarius:
Looks more like overfitting... Trees can memorize the input data completely. You can reduce r, and there seems to be nothing else to tweak in ALGLIB. In xgboost, for example, you can limit the tree depth.
That's why I didn't use forests in the beginning and switched to neural networks instead. But the networks have their own problems.

Yes, there's only the number of trees and r. If you set r above 0.6, the classification error on the training set is almost always 0.0 :))) On the test set it varies, usually around 0.5.
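
For what it's worth, the effect of r can be sketched outside ALGLIB too. The analogy below is my assumption: scikit-learn's max_samples (the fraction of rows each tree is trained on) stands in for ALGLIB's r, and on noise labels a large fraction pushes the training accuracy up while the test accuracy stays near 0.5.

# Sketch only: max_samples as a rough analog of ALGLIB's r parameter.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(3000, 10))
y = rng.integers(0, 2, size=3000)     # noise labels: nothing real to learn
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

for r in (0.1, 0.3, 0.7):
    f = RandomForestClassifier(n_estimators=100, max_samples=r, random_state=1)
    f.fit(X_tr, y_tr)
    print(f"r={r}: train acc={f.score(X_tr, y_tr):.2f}  test acc={f.score(X_te, y_te):.2f}")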

 
forexman77:

How can you apply accuracy to markets at all? In my opinion it isn't applicable; the classes there are unbalanced to begin with.

Your accuracy can look close to 100% while everything gets dumped into the majority class; it's about the worst possible metric. It's better to look at the confusion matrix; it shows exactly how the classes are separated.

I don't really get accuracy either. The confusion matrix or the classification error is easier to understand.
 
Vizard_:

Take your time and try different ones in Python too. CatBoost, for example, gives quite a bit even out of the box + it digests anything you feed it + it has a visualizer and a stopping point (set it for when the metric stops improving much), and so on...

I've already installed it; I'll try it tomorrow, along with plain GBM, maybe LightGBM as well... xgboost is kind of a pain to set up, it takes a long time to figure out.
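
In case it helps anyone trying the same libraries, here is a bare-bones sketch of running them with near-default settings; the calls are the public CatBoost/LightGBM/xgboost APIs as I know them, run on synthetic data, not anyone's tuned setup from this thread.

# Sketch only: out-of-the-box runs of the boosting libraries mentioned above.
from catboost import CatBoostClassifier
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

cb = CatBoostClassifier(iterations=500, verbose=False)
cb.fit(X_tr, y_tr, eval_set=(X_val, y_val), early_stopping_rounds=50)   # the "stopping point" mentioned above

lgbm = LGBMClassifier(n_estimators=500).fit(X_tr, y_tr)
xgb  = XGBClassifier(n_estimators=500, max_depth=4).fit(X_tr, y_tr)     # explicit depth limit

for name, model in [("CatBoost", cb), ("LightGBM", lgbm), ("XGBoost", xgb)]:
    print(name, "validation accuracy:", model.score(X_val, y_val))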
