Machine learning in trading: theory, models, practice and algo-trading - page 3092

 
Forester #:

yes

not sure.

And in general, read the article to understand what you are doing; there are limitations. For example, you need to feed it obviously successful settings, not a range from -1000000 to +1000000. If you feed it everything indiscriminately, the average OOS will sit at the bottom and there is no point in comparing against it. A very narrow range like 0.95...0.98 is also bad: the results will be too close to each other.

I understand that you should feed it a profitable TS (trading system), not just anything...


I have already outlined the algorithm for testing this thing, but there is one nuance with the metrics.


Should I optimise all 4 + 1 metrics?

 p_bo       slope       ar^2      p_loss 
 0.1666667  2.1796000  -0.2100000  0.1670000 

+ profit

Or only

p_bo + profit
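Those four numbers look like the output of summary() from the R pbo package (Matt Barry's implementation of the CSCV procedure from Bailey et al.). A minimal sketch of such a call, assuming that package; the exact signature is from its vignette as I recall it, so treat it as an assumption rather than a confirmed API:

library(pbo)                    # CRAN package implementing CSCV
library(PerformanceAnalytics)   # Omega() serves as the performance measure

set.seed(42)
t <- 2400; n <- 100
# t bars of returns for each of n strategy variants ("trials")
m <- data.frame(matrix(rnorm(t * n, 0, 0.01), nrow = t, ncol = n))

res <- pbo(m, s = 8, f = Omega, threshold = 1)
summary(res)   # prints p_bo, slope, ar^2, p_loss -- the four metrics above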


 
But I don't understand how they do cross-validation without training. They just feed it a ready-made set of returns and then shuffle it into ~12,000 combinations. Shouldn't it be trained on each of the 12,000 IS sets and predicted on each corresponding OOS?
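A likely source of that "~12,000": CSCV splits the ready returns matrix into S blocks and evaluates every way of choosing half of them as the IS set; nothing is refitted. Assuming S = 16 blocks, which is my guess rather than something stated in the thread:

# CSCV re-partitions the precomputed returns matrix; no model is retrained.
choose(16, 8)   # 12870 -- roughly the "12,000 variants" mentioned above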
 
mytarmailS #:

I understand that you should feed it a profitable TS, not just anything...


I have already outlined the algorithm for testing this thing, but there is one nuance with the metrics.


Should I optimise all 4 + 1 metrics?


Or only...


I don't know. I guess any of them.
 
Forester #:
But I don't understand how they do cross-validation without training. They just feed it a ready-made set of returns and then shuffle it into ~12,000 combinations. Shouldn't it be trained on each of the 12,000 IS sets and predicted on each corresponding OOS?

That's exactly how it's trained.

Maybe it's time to look at the package.

 
mytarmailS #:

That's exactly how it's trained.

Where are the forest/NN hyperparameters? There are none, so it doesn't train anything. Predictors aren't fed in either.
I think it just assesses the stability of the external model's predictions.
 
Forester #:
Where are the forest/NN hyperparameters? There are none, so it doesn't train anything. Predictors aren't fed in either.
I think it just assesses the stability of the external model's predictions.

It estimates stability through linear regression, as I understand it.

Is there anything in the article about forests/NNs?
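For what it's worth, here is a hand-rolled sketch of what those metrics appear to mean in the CSCV scheme (my reading of the article, not the package's actual code): slope and ar^2 come from regressing the OOS performance of the IS winner on its IS performance, p_bo is the fraction of splits where the IS winner ranks below the OOS median, and p_loss is the fraction where its OOS performance is negative.

set.seed(1)
t <- 1200; n <- 50
m <- matrix(rnorm(t * n, 0, 0.01), t, n)   # returns: t bars x n strategy variants

s <- 8                                      # number of blocks
blocks <- split(seq_len(t), rep(seq_len(s), each = t / s))
combos <- combn(s, s / 2)                   # choose(8, 4) = 70 IS/OOS partitions

sharpe <- function(x) mean(x) / sd(x)
is_perf <- oos_perf <- logits <- numeric(ncol(combos))

for (k in seq_len(ncol(combos))) {
  is_idx  <- unlist(blocks[combos[, k]])
  oos_idx <- setdiff(seq_len(t), is_idx)
  s_is  <- apply(m[is_idx, ], 2, sharpe)    # IS score of every variant
  s_oos <- apply(m[oos_idx, ], 2, sharpe)   # OOS score of every variant
  best  <- which.max(s_is)                  # the variant selected in-sample
  is_perf[k]  <- s_is[best]
  oos_perf[k] <- s_oos[best]
  r <- rank(s_oos)[best] / (n + 1)          # relative OOS rank of the IS winner
  logits[k] <- log(r / (1 - r))
}

p_bo   <- mean(logits <= 0)                 # IS winner below the OOS median
fit    <- lm(oos_perf ~ is_perf)            # performance-degradation regression
slope  <- coef(fit)[2]
ar2    <- summary(fit)$adj.r.squared
p_loss <- mean(oos_perf < 0)                # OOS loss probability of the IS winner
c(p_bo = p_bo, slope = unname(slope), ar2 = ar2, p_loss = p_loss)

Note there is indeed no model, no predictors and no hyperparameters anywhere here: the only input is the already-computed returns of each variant.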
 

I don't quite understand the argument about the competition. Professional environment or not, there is a task; discussing whether the task is posed correctly is more to the point, and if it is correct, why not?

I respect the opinions of all participants in this flame war, but I have my own))))))

Without external or other parameters everything is very complicated, or rather close to a perpetual motion machine))))) But with external parameters there is the same big problem)

Mean reversion is apparently the easiest to understand and the most durable, and it is clear that on small timeframes the errors are smaller, but Saber's ticks also produce black swans)))))

 
mytarmailS #:

It estimates stability through linear regression, as I understand it.

Is there anything in the article about forests/NNs?

Or maybe keep it simple? Like in Rattle?

We take two files: the first one is large, the second one can be smaller, but with the latest dates relative to the first.

We divide the first file by random sampling into three parts, train/test/validation, in the proportions 70/15/15.

On the train part we fit the model with cross-validation, for example with 5 folds. If one fold is at least 1500 bars, then train = 7500 bars. In round figures, 15000 bars across the two source files will be enough.

We run the trained model on test and validation and get a classification error on each.

Then we run a 1500-bar window over the second file and collect the classification error there too.

If ALL the obtained classification errors fall within a 5% channel, then everything is fine: we can trust the obtained classification error and there is no overfitting.
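A self-contained sketch of that scheme, assuming R with the caret package and an rpart classifier; the synthetic data, the target column and the model choice are all placeholder assumptions, not anything from the post:

library(caret)   # train() with k-fold cross-validation
set.seed(7)

make_bars <- function(n) {                  # stand-in for reading a real file
  d <- data.frame(matrix(rnorm(n * 5), n, 5))
  d$target <- factor(ifelse(rowSums(d) > 0, "up", "down"))
  d
}
big   <- make_bars(15000)                   # first, large file
fresh <- make_bars(4500)                    # second file, later dates

idx <- sample(nrow(big))                    # random 70/15/15 split
n70 <- floor(0.70 * nrow(big)); n85 <- floor(0.85 * nrow(big))
train_set <- big[idx[1:n70], ]
test_set  <- big[idx[(n70 + 1):n85], ]
valid_set <- big[idx[(n85 + 1):nrow(big)], ]

fit <- train(target ~ ., data = train_set, method = "rpart",
             trControl = trainControl(method = "cv", number = 5))

err <- function(d) mean(predict(fit, d) != d$target)
errors <- c(test = err(test_set), valid = err(valid_set))

win <- 1500                                 # 1500-bar windows over the second file
for (i in seq(1, nrow(fresh) - win + 1, by = win))
  errors <- c(errors, err(fresh[i:(i + win - 1), ]))

max(errors) - min(errors) <= 0.05           # the "5% channel" check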

 
СанСаныч Фоменко #:

Or maybe keep it simple?

We'll see.


First you should try to run the algorithm and test it; if it doesn't work, throw it away and forget it... 99%.

If it works, then you can dig into the article, dig into the method, and try to improve / change / replace things... 1%.

 
Where is that coveted package that "works" :)