Machine learning in trading: theory, models, practice and algo-trading - page 711

 
Alexander_K2:

The Grail is not here; it's sitting next to me, watching this correspondence with bulging eyes.

I'm also slowly getting into all sorts of probability densities and such, but from a slightly different side, from machine learning :) I'm almost there; the only thing left is to understand Q-learning, more or less, in its different manifestations

entropy is used there and different distributions can be set for the agents
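For what it's worth, a minimal illustration of that remark (my sketch, not anything posted in the thread): in soft Q-learning, a softmax (Boltzmann) policy over Q-values gives the agent an action distribution whose entropy is set by a temperature parameter. All names below (softmax_policy, tau, the toy Q-values) are hypothetical:

```r
# Softmax (Boltzmann) policy over Q-values: the temperature tau controls the
# entropy of the action distribution (tau -> 0 recovers the greedy argmax).
softmax_policy <- function(q_values, tau = 1.0) {
  z <- q_values / tau
  z <- z - max(z)               # subtract the max for numerical stability
  exp(z) / sum(exp(z))
}

q <- c(buy = 1.2, sell = 0.3, hold = 0.9)  # toy Q-values for one state
softmax_policy(q, tau = 0.5)               # action probabilities for the agent
```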

 
Maxim Dmitrievsky:

I'm also slowly getting into all sorts of probability densities and such, but from a slightly different side, from machine learning :) I'm almost there; the only thing left is to understand Q-learning, more or less, in its different manifestations

That's right.

Just deal with the entries: they should not be random, but strictly tied to the trading intensity. Over time, in the long run.

If you run into financial difficulties just making a living, give me a shout, and I'll just drop the probability-based Grail so that there's strength to continue the epic.

Show must go on!
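One loose reading of the "entries tied to trading intensity" rule (my interpretation, not the poster's code): estimate the tick-arrival intensity over a sliding window and only allow an entry while it exceeds a threshold. A sketch in R, with tick_times, window, and the threshold all hypothetical:

```r
# Events per second over the last `window` seconds ending at `now`.
intensity <- function(tick_times, now, window = 60) {
  sum(tick_times > now - window & tick_times <= now) / window
}

set.seed(1)
tick_times <- cumsum(rexp(500, rate = 2))  # simulated tick stream (~2 ticks/sec)
now <- max(tick_times)
lambda_hat <- intensity(tick_times, now)   # estimated arrival intensity
entry_allowed <- lambda_hat > 1.5          # arbitrary example threshold
```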

 
Vizard_:

Open your eyes)))

opened))))

good luck!

)))

 
Alexander_K2:

That's right.

Just deal with the entries: they should not be random, but strictly tied to the trading intensity. Over time, in the long run.

If you run into financial difficulties just making a living, give me a shout, and I'll just drop the probability-based Grail so that there's strength to continue the epic.

Show must go on!

The main thing is that it stays interesting :) as long as there are interesting unexplored things, you can keep poking around

I see no point in supervised and unsupervised learning: it's mere approximation and feature search for narrow stationary problems

 

I'm all for boosting too, but I somewhat disagree with his fight against overfitting.

For him, defeating overfitting = a specific tree-partitioning criterion + a minimum number of trees.

For me: gbm + k-fold. I've settled on gbm for now; I also tried other packages in R - xgboost and catboost - which have built-in overfitting protections, but they didn't bring me as much joy as gbm.
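For concreteness, a minimal sketch of the gbm + k-fold setup described above, on synthetic stand-in data (the poster's dataset and parameters are not shown, so everything here is illustrative). gbm's cv.folds argument runs built-in k-fold cross-validation, and gbm.perf picks the tree count that minimizes the CV error, which is the overfitting guard being relied on:

```r
library(gbm)

# Synthetic stand-in data; the actual features and target are not in the post.
set.seed(42)
df <- data.frame(x1 = rnorm(500), x2 = rnorm(500))
df$y <- as.integer(df$x1 + rnorm(500) > 0)

model <- gbm(
  y ~ .,                            # all other columns as features
  data              = df,
  distribution      = "bernoulli",  # binary target
  n.trees           = 1000,         # deliberately generous; CV trims it below
  shrinkage         = 0.01,         # small learning rate
  interaction.depth = 3,
  cv.folds          = 5             # built-in k-fold cross-validation
)

best_iter <- gbm.perf(model, method = "cv")  # tree count minimizing CV error
pred <- predict(model, newdata = df, n.trees = best_iter, type = "response")
```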

 

Is it?


 

Anyway, here's the bottom line; the top one, I think, everyone will figure out on their own


 
Renat Akhtyamov:

In contrast to the neural networks, I showed mine a bit higher up; here it is from last night until now (a real account, of course):


The curve is certainly interesting and cannot be ignored, but the yield is too low. IMHO

 
Vladimir Gribachev:

Is it?

Yes. 1 to 1.

 
Renat Akhtyamov:

that's where the most interesting part is, including for the DC: the forecast

It can't be done

Teacher, the normal guys have figured you out once again)))

