Machine learning in trading: theory, models, practice and algo-trading - page 1621

 
Aleksey Vyazmikin:

The reality may not be what we imagine - we should try to reproduce the partitioning algorithms from CatBoost and see what really happens there and how correct it is.

Regarding the randomness: as I understand it, there is randomness in the choice of the predictor's grid partitioning, i.e. not the best split but a random one. And there are algorithms that divide the grid unevenly across ranges.

I think it works differently: each predictor is split at a random point, but the best resulting split is still chosen.
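
For anyone who wants to test this reading mechanically, here is a toy sketch of the mechanism just described: one random split point per predictor, with the best resulting split kept. This only illustrates the idea; it is not CatBoost's actual code, and all names in it are made up.

# Toy illustration (not CatBoost internals): split each predictor at a
# random point, then keep the split with the best impurity improvement.
import numpy as np

rng = np.random.default_rng(42)

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = y.mean()
    return 2.0 * p * (1.0 - p)

def best_of_random_splits(X, y):
    """Draw one random threshold per feature and return the
    (feature, threshold, gain) with the largest impurity decrease."""
    n, m = X.shape
    parent = gini(y)
    best = (None, None, -np.inf)
    for j in range(m):
        t = rng.uniform(X[:, j].min(), X[:, j].max())   # random split point
        left, right = y[X[:, j] <= t], y[X[:, j] > t]
        gain = parent - (len(left) * gini(left) + len(right) * gini(right)) / n
        if gain > best[2]:
            best = (j, t, gain)
    return best

X = rng.normal(size=(500, 5))
y = (X[:, 2] > 0.3).astype(int)        # label driven by feature 2
print(best_of_random_splits(X, y))     # usually selects feature 2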

 
mytarmailS:

I would say it's not bad, but it is frustrating.

In essence, it works like an ordinary overbought/oversold indicator.

Sometimes it's right, sometimes it's wrong; it shouldn't be like that...

Have you tested this net for trading at all? My experience tells me that it will not make money...

Unless you put a filter on the net's "confidence".

I won't argue about adequacy/inadequacy. Overnight I accumulated statistics and added a "confidence" filter. This is what the night looks like with the filter set high. If I set it to zero, the lines don't break at all, they only change sides.
I'll give it to you for testing in the near future.
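
For reference, a "confidence" filter of this kind can be as simple as thresholding the class probabilities. A minimal sketch, assuming predict_proba-style outputs; the names and the 0.8 threshold are illustrative, not the actual code from the post.

import numpy as np

def filtered_signal(proba, threshold=0.8):
    """proba: (n, 2) array of [P(down), P(up)] per bar.
    Returns +1 (long), -1 (short) or 0 (stay out): a side is taken
    only when the network's top probability clears the threshold."""
    side = np.where(proba[:, 1] >= proba[:, 0], 1, -1)
    confident = proba.max(axis=1) >= threshold
    return np.where(confident, side, 0)

proba = np.array([[0.55, 0.45],    # too uncertain  -> 0
                  [0.10, 0.90],    # confident up   -> +1
                  [0.85, 0.15]])   # confident down -> -1
print(filtered_signal(proba))      # [ 0  1 -1]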

 
Evgeny Dyuka:
The orange area at the top predicts a downward movement, the green area at the bottom an upward movement; the thickness shows the neural network's confidence level. It works only on BTCUSD M1 (for now).
Is it cool? ))

It looks like an ordinary trend indicator ))). How often do you retrain it?

 

Judging by this picture, retraining is either very infrequent or not correct enough, because otherwise such zones simply would not exist when the trained system is applied.

It probably won't be a revelation to anyone that the strategy tester in MT is, in fact, that very trained neural system.
 
Farkhat Guzairov:

It looks like an ordinary trend indicator ))). How often do you retrain it?

It only looks that way from a distance. ) A closer look shows that everything is not so simple.
I trained it once; this is the first trial. The training area runs until about the 1st of February, then the test dataset until the 24th of February.
All in all, this network was hacked together from scratch, so I am surprised it shows anything at all. At least now there is an understanding of where to go next.
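
The chronological split described above is easy to reproduce. A sketch assuming a pandas DataFrame of M1 bars with a DatetimeIndex; the year is an assumption, since the post only gives day and month.

import pandas as pd

def split_by_date(df, train_end, test_end):
    """Out-of-sample split with no shuffling: rows before train_end
    form the training set, rows in [train_end, test_end) the test set."""
    train = df[df.index < pd.Timestamp(train_end)]
    test = df[(df.index >= pd.Timestamp(train_end)) & (df.index < pd.Timestamp(test_end))]
    return train, test

# e.g. train up to Feb 1 and test through Feb 24 (year assumed):
# train, test = split_by_date(bars, "2020-02-01", "2020-02-25")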
 
Farkhat Guzairov:

Judging by this picture, retraining is either very infrequent or not correct enough, because otherwise such zones simply would not exist when the trained system is applied.

It probably won't be a revelation to anyone that the strategy tester in MT is, in fact, that very trained neural system.
Do you have any working examples with proper training? I'm not talking about the secret neural networks that make millions in profit, everyone has those ). I mean the ones that are public.
 
Evgeny Dyuka:
It only looks that way from a distance. ) A closer look shows that everything is not so simple.
I trained it once; this is the first trial. The training area runs until about the 1st of February, then the test dataset until the 24th of February.
All in all, this network was hacked together from scratch, so I am surprised it shows anything at all. At least now there is an understanding of where to go next.

So, in fact, you have not yet built a trading system on it; for now you only see prediction results that look more or less acceptable to you. Have you tried trading with it, and what rules do you apply?

 
Farkhat Guzairov:

So, in fact, you have not yet built a trading system on it; for now you only see prediction results that look more or less acceptable to you. Have you tried trading with it, and what rules do you apply?

A good question.
On principle, I don't think about turning it into trades. As soon as I start dealing with takes/stops/backtests, my brain immediately seizes up and I start tilting at windmills. I'm making an indicator; the rest will come together by itself.
 
Evgeny Dyuka:
Do you have any working examples with proper training? I'm not talking about the secret neural networks that make millions in profit, everyone has those ). I mean the ones that are public.

I do... I draw conclusions from backtests in the tester. What result do you think you get if your system is trained correctly? Almost 90% correct entries. Earlier the same backtests did not give such results, from which I conclude that the training in this case was correct.

Try the same.
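
For what it's worth, the "correct entries" figure is just the fraction of profitable trades in the backtest report. A minimal sketch with made-up numbers:

import numpy as np

def hit_rate(trade_profits):
    """Share of trades that closed in profit."""
    trades = np.asarray(trade_profits, dtype=float)
    return (trades > 0).mean()

profits = [12.0, -3.5, 8.1, 4.0, -1.2, 9.9, 5.5, 7.0, 3.3, 6.1]
print(f"{hit_rate(profits):.0%} correct entries")   # 80% on this toy sample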

 
elibrarius:

I think it works differently: each predictor is split at a random point, but the best resulting split is still chosen.

I went to look at their help, but I can't make sense of it; it's too confusing. I'll try to find this point in the video later, they explain it more clearly there.

But I did see that CatBoost has added new options for building trees:

--grow-policy

The tree growing policy. Defines how to perform greedy tree construction.

Possible values:
  • SymmetricTree - A tree is built level by level until the specified depth is reached. On each iteration, all leaves from the last tree level are split with the same condition. The resulting tree structure is always symmetric.
  • Depthwise - A tree is built level by level until the specified depth is reached. On each iteration, all non-terminal leaves from the last tree level are split. Each leaf is split by condition with the best loss improvement.

    Note. Models with this growing policy cannot be analyzed using the PredictionDiff feature importance and can be exported only to json and cbm.
  • Lossguide - A tree is built leaf by leaf until the specified maximum number of leaves is reached. On each iteration, the non-terminal leaf with the best loss improvement is split.
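
The three policies can be tried directly from the Python package. A sketch on synthetic data; grow_policy and max_leaves are documented CatBoost parameters, everything else here is illustrative.

import numpy as np
from catboost import CatBoostClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

for policy in ("SymmetricTree", "Depthwise", "Lossguide"):
    params = {"grow_policy": policy, "iterations": 100, "verbose": False}
    if policy == "Lossguide":
        params["max_leaves"] = 32   # leaf budget replaces symmetric depth
    model = CatBoostClassifier(**params)
    model.fit(X, y)
    print(policy, model.score(X, y))   # in-sample accuracy per policy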
