Machine learning in trading: theory, models, practice and algo-trading - page 654

 
Yuriy Asaulenko:

Yuri, don't worry: your help and wishes have been taken into account, and they are exactly what keeps me from describing the algorithm in full. I am still thinking about how to handle this; so far I don't know. That's all, I'm leaving this thread. Don't scold the pianist, he plays as best he can.

 
Dr. Trader:


Yes, Doc, if you're still running neural networks, you should run them on the transformed samples. You can draw from them either uniformly or exponentially. Now that really is all. I'm going home; the mood here seems to have improved.

 

In general, you're all talking about the wrong thing here...

It's all about the data, and then about implementing the ML output in a trading system (TS). I think it's better to discuss how to turn a noisy NN predictor with accuracy just over 50% into a TS that at least beats the spread.

 

***

 
Alexander_K2:

As evidence at this point in time:

Demo?
 
Please don't turn this thread into Renat Akhtyamov's inner world
 
Renat Akhtyamov:
demo?

So where was the ruble before 14? ))))))


 
pantural:

In general, you're all talking about the wrong thing here...

It's all about the data, and then about implementing the ML output in a trading system (TS). I think it's better to discuss how to turn a noisy NN predictor with accuracy just over 50% into a TS that at least beats the spread.

I've been thinking about this a lot as well.

If the regression model predicts the price gain per bar, and the R2 score is above zero on both backtests and fronttests, that's already a good start. The problem is that the result, although stable, is small: the spread cannot be beaten.

Analytically, the problem is that R2 penalizes the model heavily for large errors while ignoring small errors and wrong trade directions. If you look at the distribution of gains, most price movements are only a couple of pips. Instead of predicting the correct direction of those small movements, the model learns to predict the long tails of the distribution, for which it gets a higher R2. As a result, the model predicts large movements reasonably well but constantly gets the direction wrong on the small ones and loses to the spread.
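The point that squared error is dominated by the tails is easy to check numerically. The sketch below is my own illustration, not from the thread: the Student-t distribution with 3 degrees of freedom and the pip-scale factor are arbitrary stand-ins for real per-bar gains.

```python
import numpy as np

# Simulated fat-tailed per-bar price gains (t-distribution with 3 d.o.f.
# is an assumed stand-in for a real returns distribution; scale ~ pips).
rng = np.random.default_rng(0)
moves = rng.standard_t(df=3, size=10_000) * 0.0002

sq = moves ** 2
big = np.abs(moves) > np.quantile(np.abs(moves), 0.9)  # largest 10% of moves

# Fraction of the total squared error that an MSE/R2 objective assigns
# to the biggest 10% of movements.
tail_share = sq[big].sum() / sq.sum()
print(f"top 10% of moves carry {tail_share:.0%} of the squared-error mass")
```

A model minimizing squared error thus earns most of its score on the rare large moves; getting the direction right on the many small moves barely affects its R2.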

The conclusion is that standard regression metrics are a poor fit for forex. We need to invent some kind of fitness function that takes into account trade direction, the spread, and accuracy, and the function should be smooth. Then, even with accuracy a little over 50%, there is a chance of profit.
Accuracy, the Sharpe ratio, the recovery factor, and other functions that analyze the equity curve are too discrete; a neural network trained with standard backprop will get stuck in a local minimum and won't learn properly.
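One way to make such a fitness function smooth is to let the position size vary continuously with the signal strength, e.g. through tanh, so that direction, spread, and trade size all enter one differentiable quantity. A minimal sketch of the idea, with hypothetical names, an assumed fixed spread, and a made-up steepness parameter; this is not a function actually used in the thread:

```python
import numpy as np

def smooth_pnl_fitness(pred, actual, spread=0.0002, k=1000.0):
    """Differentiable profit proxy: position size follows tanh(k * prediction).

    pred, actual -- arrays of predicted and realized per-bar gains;
    spread       -- assumed fixed cost per unit of position traded;
    k            -- steepness (how quickly a signal saturates into a full position).
    """
    position = np.tanh(k * pred)                    # soft sign of the forecast
    pnl = position * actual - np.abs(position) * spread
    return pnl.mean()                               # maximize this (or minimize -pnl)
```

Because every operation here is smooth, gradient-based training can optimize it directly, unlike accuracy or a Sharpe ratio computed over discrete trades.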

An alternative conclusion: completely ignore the network's weak signals and trade only on strong ones. The problem here is that we can always find a threshold that gives good results on the backtest but bad ones on the fronttest. That needs thinking about too.
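The threshold search itself is trivial; the danger is exactly as described. The grid search below (my own sketch with made-up names and an assumed fixed per-trade spread) will happily overfit the threshold to the backtest, so whatever it returns has to be re-checked on a fronttest period that took no part in the search:

```python
import numpy as np

def threshold_profit(signals, returns, threshold, spread=0.0002):
    """Total P&L when trading only bars where |signal| exceeds the threshold."""
    take = np.abs(signals) > threshold
    return np.sum(np.sign(signals[take]) * returns[take] - spread)

def pick_threshold(signals, returns, grid):
    """Threshold maximizing *backtest* profit -- an estimate biased upward
    by construction, hence the need for an untouched fronttest."""
    profits = [threshold_profit(signals, returns, t) for t in grid]
    return grid[int(np.argmax(profits))]
```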

 
Dr. Trader:
An alternative conclusion: completely ignore the network's weak signals and trade only on strong ones. The problem here is that you can always find a threshold that gives good results on the backtest but bad ones on the fronttest. That needs thinking about too.

It makes sense to trade only on the strong ones. As for the forward test showing bad results: perhaps the network has simply memorized what was in the backtest and failed to generalize.
Maybe a validation set should be introduced?
But that may turn into fitting to the validation set, and the forward test will be bad again.

 
Dr. Trader:

and wrong transaction directions.

This may be of interest: rugarch::DACTest, a directional accuracy test. The most interesting thing is that its author is our Russian contemporary, Anatolyev.

Anatolyev, S. Predictability Testing. Kvantil, No. 1, September 2006, pp. 39-43.
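rugarch::DACTest is an R function; for readers without R, here is a rough Python sketch of the underlying idea, a Pesaran-Timmermann-style z-statistic that compares the observed hit rate against the hit rate expected by chance. This is my own simplified transcription, not the rugarch implementation, and it ignores ties at exactly zero:

```python
import numpy as np

def pt_directional_test(pred, actual):
    """Pesaran-Timmermann-style test of directional accuracy.

    Returns (hit_rate, z); z is approximately N(0,1) under the null
    of no directional predictability.
    """
    n = len(pred)
    up_pred = pred > 0
    up_act = actual > 0
    p_hat = np.mean(up_pred == up_act)          # observed hit rate
    py, px = np.mean(up_act), np.mean(up_pred)
    p_star = py * px + (1 - py) * (1 - px)      # hit rate expected by chance
    var_p = p_star * (1 - p_star) / n
    var_star = ((2 * py - 1) ** 2 * px * (1 - px)
                + (2 * px - 1) ** 2 * py * (1 - py)
                + 4 * py * px * (1 - py) * (1 - px) / n) / n
    z = (p_hat - p_star) / np.sqrt(var_p - var_star)
    return p_hat, z
```

A large positive z says the model's directional hits exceed what its marginal up/down frequencies would produce by luck, which is much closer to "beats the spread" than R2 is.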