Machine learning in trading: theory, models, practice and algo-trading - page 588

 
Yuriy Asaulenko:

I remember you had about 70% of adequate predictions. I wrote the post above.

Well, 70% correct is nothing by itself. Of those 70% correct predictions, only about a third are suitable for entering a trade. That leaves us with about 23%. That is not much against the 30% of wrong predictions (and we don't know in advance which are right and which are wrong). And the wrong predictions cluster in the inflection areas (where the direction changes), which are exactly the areas most suitable for trades.

On this basis, I believe it is futile to engage in prediction; one should engage in classification instead, i.e. determine whether a given moment is suitable for a trade. With such models you will get an entry error of 20-40%. I gave more exact figures earlier in this thread.
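
A minimal sketch reproducing the arithmetic above (the 70% and one-third figures are from the post; everything else follows from them):

```python
# Back-of-the-envelope check of the figures in the post above.
correct = 0.70           # share of adequate (correct) predictions
entry_share = 1 / 3      # of the correct ones, roughly a third suit an entry

usable = correct * entry_share   # correct AND suitable for a trade
wrong = 1 - correct              # wrong predictions, unknown in advance

print(f"usable entries:    {usable:.0%}")   # ~23%
print(f"wrong predictions: {wrong:.0%}")    # 30%
```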


Classification is the prediction of class membership, or of the probability of membership.

It is no different from regression, from which class membership can also be extracted.
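
For illustration, a minimal sketch of extracting class membership from a regression output (not from the original post; the return values and the logistic scale are arbitrary assumptions):

```python
import numpy as np

# Hypothetical regression output: predicted next-bar returns.
predicted_return = np.array([0.0012, -0.0005, 0.0001, -0.0020])

# Hard class membership extracted from the regression: 1 = up, 0 = down.
predicted_class = (predicted_return > 0).astype(int)

# A soft probability of membership can be had by squashing the regression
# output through a logistic function (the 0.001 scale is an assumption).
prob_up = 1.0 / (1.0 + np.exp(-predicted_return / 0.001))

print(predicted_class)      # [1 0 1 0]
print(prob_up.round(2))     # [0.77 0.38 0.52 0.12]
```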

 
SanSanych Fomenko:

He convinced me that the problem of non-stationarity has nothing to do with ML. Since I had never dealt with neural networks, I had no arguments to refute his opinion. Moreover, I had an intuitive sense that various trees and other models, apart from neural networks, work fine with non-stationary predictors.

I rely on the axiom that there are some persistent patterns in the behavior of prices; it is a non-Markovian process. And I try to find them using ML.

Various models can indeed separate the wheat from the chaff and find regularities in a price flow that consists mostly of noise and of its intentional distortion by dealing centers.
The problem is to find training parameters (for a neural network - the number of weights, the learning rate, etc.; for a forest - the number of trees, for example) such that the model does not simply memorize the training examples but, having overcome the non-stationarity, finds some stable regularities in all that noise. I find good training parameters by multiple cross-validations.
As a result, my model shows a very small but positive result on both training and new data (R² ~0.003). But I haven't beaten the spread yet.
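
A minimal sketch of that kind of parameter search (not Dr. Trader's actual code; the random data stands in for real predictors and target so the snippet runs):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit

# Placeholder data: in practice X would be price-based predictors and
# y the target (e.g. a future return).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = rng.normal(size=1000)

# Search over model capacity (here only the number of trees) with
# time-ordered cross-validation, so later data never leaks into
# earlier training folds.
search = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100, 200]},
    cv=TimeSeriesSplit(n_splits=5),
    scoring="r2",
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```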

 
Dr. Trader:

I rely on the axiom that there are some persistent patterns in the behavior of prices; it is a non-Markovian process. And I try to find them using ML.

Various models can indeed separate the wheat from the chaff and find regularities in a price flow that consists mostly of noise and of its intentional distortion by dealing centers.
The problem is to find training parameters (for a neural network - the number of weights, the learning rate, etc.; for a forest - the number of trees, for example) such that the model does not simply memorize the training examples but, having overcome the non-stationarity, finds some stable regularities in all that noise. I find good training parameters by multiple cross-validations.
As a result, my model shows a very small but positive result on both training and new data (R² ~0.003). But I haven't beaten the spread yet.

I haven't beaten it either, and there's no light at the end of the tunnel yet. But the system does work on FORTS.

SanSanych predicts one hour ahead. The spread does not matter to him. :)

 

The problem of non-stationarity does not exist for classification problems. It is real for regression problems.

Do not confuse forecasting with prediction; they are different things. A forecast, as a result, is a numerical value with a confidence interval. Classification predicts the class an example belongs to, the probability of the example belonging to that class, or the support for the hypothesis that the example belongs to the class.
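
A minimal sketch of the distinction (my illustration, not Vladimir's code; the data and models are placeholders, and the interval is approximated with quantile regression): regression returns a numerical value with an interval, classification returns a class and its probability:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y_num = X @ np.array([0.5, -0.3, 0.1, 0.0]) + rng.normal(scale=0.5, size=500)
y_cls = (y_num > 0).astype(int)   # class label derived from the same signal
x_new = X[:1]

# Forecast: a numerical value with an (approximate) confidence interval,
# here the 5%/95% bounds from quantile regression.
mid = GradientBoostingRegressor(loss="squared_error").fit(X, y_num)
lo = GradientBoostingRegressor(loss="quantile", alpha=0.05).fit(X, y_num)
hi = GradientBoostingRegressor(loss="quantile", alpha=0.95).fit(X, y_num)
print("value:", mid.predict(x_new),
      "interval:", lo.predict(x_new), hi.predict(x_new))

# Classification: the class and the probability of membership.
clf = LogisticRegression().fit(X, y_cls)
print("class:", clf.predict(x_new), "P(class):", clf.predict_proba(x_new))
```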

Good luck

 
Vladimir Perervenko:

The problem of non-stationarity does not exist for classification problems. It is real for regression problems.

Do not confuse forecasting with prediction; they are different things. A forecast, as a result, is a numerical value with a confidence interval. Classification predicts the class an example belongs to, the probability of the example belonging to that class, or the support for the hypothesis that the example belongs to the class.

Good luck


What do you mean? Where can I read about this absurdity? :)

 
Maxim Dmitrievsky:

What do you mean? Where can I read about this absurdity? :)

What seems absurd to you?
 
Vladimir Perervenko:
What seems absurd to you?

that due to non-stationarity the patterns between the predictors and the target will break, and class prediction will degrade just as forecasting does
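
A toy sketch of the effect being described (my illustration, not evidence either way): a classifier trained in one regime and scored after the predictor-target relation flips:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Regime A: positive predictor -> class 1.
X_train = rng.normal(size=(1000, 1))
y_train = (X_train[:, 0] > 0).astype(int)

# Regime B: the pattern between predictor and target has flipped.
X_test = rng.normal(size=(1000, 1))
y_test = (X_test[:, 0] < 0).astype(int)

clf = LogisticRegression().fit(X_train, y_train)
print("in-regime accuracy:     ", clf.score(X_train, y_train))  # ~1.0
print("after the pattern shift:", clf.score(X_test, y_test))    # ~0.0
```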

 
Maxim Dmitrievsky:

that due to non-stationarity the patterns between the predictors and the target will break, and class prediction will degrade just as forecasting does

Can you show me an example? Or is this a speculative conclusion?

Nowhere does the abundant literature on classification using NN/DNN mention non-stationarity as an influencing factor. My numerous experiments tell me the same thing.

Of course you are free to have your own opinion on the matter.

Good luck

 
Vladimir Perervenko:

Can you show me an example? Or is this a speculative conclusion?

Nowhere does the abundant literature on classification using NN/DNN mention non-stationarity as an influencing factor. My numerous experiments tell me the same thing.

Of course you are free to have your own opinion on the matter.

Good luck


And classification or regression... what's the difference?

 

There is a fresh, good book on deep learning. Unfortunately I can't link to it openly; it's on rutracker.org.

Deep Learning.
Year of publication: 2018
Authors: Nikolenko S. I., Kadurin A. A., Arkhangelskaya E. O.
Genre or theme: Neural networks
Publisher: Piter
Series: Programmer's Library
ISBN: 978-5-496-02536-2
Language: Russian
Format: PDF
Quality: Recognized text with errors (OCR)
Interactive table of contents: None
Number of pages: 479
