Machine learning in trading: theory, models, practice and algo-trading - page 3273

 
Aleksey Vyazmikin #:

That's exactly the point: it's often difficult to determine the distribution automatically.

Often it will look supposedly lognormal, but only because of outliers - there is no logical reason for it to be so.

And if you take a quantile, that means cutting across the entire range, which will not be enough to remove the outliers.


On the second sample I got a very strange result - it was learning quite briskly without any manipulations, but after removing the rows with outliers the learning effect dropped to almost zero.

Now I have switched to a slow learning rate - I'll leave it running overnight and see whether it gives anything.

Otherwise it turns out that the whole learning process comes down to memorising outliers, at least with the public predictors I use in the experiment.

I took the model all the way to an EA this winter (I posted the results in this thread). I had the opposite result: the classification errors - and they were under 20% - fell on the outliers. As a result, those errors wiped out the 80% of correct predictions.

One thing is clear to me: outliers have to be removed. The model's real result is the one without outliers.
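
For illustration, a minimal sketch of the quantile trimming mentioned above (the helper names, the column choice and the threshold q are assumptions of this sketch, not code from the thread): keep only the rows whose value in a given column falls inside the [q, 1-q] quantile range.

// Hypothetical helper: empirical quantile of an array.
double Quantile(const double &Values[], const double q)
{
  double Sorted[];
  ArrayCopy(Sorted, Values);
  ArraySort(Sorted);
  return(Sorted[(int)MathRound(q * (ArraySize(Sorted) - 1))]);
}

// Keep only the rows of Data whose value in column Col lies inside the [q, 1 - q] quantile range.
matrix<double> TrimOutlierRows(const matrix<double> &Data, const ulong Col, const double q = 0.01)
{
  double Column[];
  ArrayResize(Column, (int)Data.Rows());
  for (ulong r = 0; r < Data.Rows(); r++)
    Column[(int)r] = Data[r][Col];

  const double Lo = Quantile(Column, q);
  const double Hi = Quantile(Column, 1 - q);

  matrix<double> Res(Data.Rows(), Data.Cols());
  ulong Kept = 0;
  for (ulong r = 0; r < Data.Rows(); r++)
    if (Data[r][Col] >= Lo && Data[r][Col] <= Hi)
      Res.Row(Data.Row(r), Kept++);

  Res.Resize(Kept, Data.Cols()); // shrink to the rows actually kept
  return(Res);
}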

 
СанСаныч Фоменко #:

I took the model all the way to an EA this winter (I posted the results in this thread). I had the opposite result: the classification errors - and they were under 20% - fell on the outliers. As a result, those errors wiped out the 80% of correct predictions.

One thing is clear to me: outliers have to be removed. The model's real result is the one without outliers.

At the edges there is clearly a shift of probability towards one of the classes - and that is not bad in itself; what is bad is that there are too few such observations to draw statistically significant conclusions.

So it is normal that one person gets more zeros in the outliers and another gets more ones - it depends on the set of predictors.

It also happens that when outliers occur on both sides, one side leans towards zeros and the other towards ones.
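
To put a number on the "too few observations" point, here is a minimal sketch (my own illustration, not code from the thread) of a normal-approximation check on whether the class mix seen among the outliers differs significantly from 50/50.

// Hypothetical sketch: z-score for the share of class "1" among the tail observations.
// Normal approximation to the binomial; |z| above roughly 1.96 ~ significant at the 5% level.
double TailImbalanceZScore(const int Ones, const int Zeros)
{
  const int N = Ones + Zeros;
  if (N == 0)
    return(0);
  const double P = (double)Ones / N;
  return((P - 0.5) / MathSqrt(0.25 / N));
}

For example, 7 ones against 3 zeros gives a z-score of only about 1.26, so such a tail proves nothing; 70 against 30 gives about 4, which is a different story.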

 
Aleksey Vyazmikin #:

Where can I see the final code?

I posted everything in this thread.

 
Rorschach #:

The sources are open, you can take a look. Find the function that calculates correlation; on the right side there is a [source] link, and clicking it takes you to the code. We are interested in lines 2885-2907. Line 2889 uses the covariance; if you click on cov, all mentions of cov in the code appear on the right, and clicking the line with def cov... jumps to the covariance function, and so on. MQL is a C-like language, and all C-like languages are ~90% similar, so you can follow C#, Java, Python and JavaScript without much trouble.

Thanks. I've cooled off a bit on the algorithmic side; I'll take a look when the enthusiasm returns.
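
For reference, the relation described in the quoted post - correlation built on top of covariance - looks like this. This is my own MQL5 illustration, not the Python library source referred to above.

// Sample covariance of two equal-length arrays.
double Cov(const double &X[], const double &Y[])
{
  const int N = ArraySize(X);
  double MeanX = 0, MeanY = 0;
  for (int i = 0; i < N; i++) { MeanX += X[i]; MeanY += Y[i]; }
  MeanX /= N;
  MeanY /= N;

  double Sum = 0;
  for (int i = 0; i < N; i++)
    Sum += (X[i] - MeanX) * (Y[i] - MeanY);
  return(Sum / (N - 1));
}

// Pearson correlation: covariance normalised by the two standard deviations.
double Corr(const double &X[], const double &Y[])
{
  return(Cov(X, Y) / MathSqrt(Cov(X, X) * Cov(Y, Y)));
}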

 
fxsaber #:

I think I've posted everything in this thread.

I did read it, of course, but going by the chronology Forester found some error, you agreed with him, and then part of the code was corrected.

And in the end I didn't see the full version of the final code here. I don't mean that you are obliged to post the code, I was just asking...

 
Aleksey Vyazmikin #:

I did read it, of course, but going by the chronology Forester found some error, you agreed with him, and then part of the code was corrected.

And in the end I didn't see the full version of the final code here. I don't mean that you are obliged to post the code, I was just asking...

Fast and row-by-row versions (corrected).

Forum on trading, automated trading systems and testing trading strategies.

Machine learning in trading: theory, models, practice and algo-trading

fxsaber, 2023.10.01 09:38

#include <Math\Alglib\statistics.mqh> // https://www.mql5.com/ru/code/11077

// Full Pearson correlation matrix of the columns of Matrix, computed in a single ALGLIB call.
const matrix<double> CorrMatrix( const matrix<double> &Matrix )
{
  matrix<double> Res = {}; // stays empty if the ALGLIB call fails
  
  const CMatrixDouble MatrixIn(Matrix);
  CMatrixDouble MatrixOut;  

  if (CBaseStat::PearsonCorrM(MatrixIn, MatrixIn.Rows(), MatrixIn.Cols(), MatrixOut)) // https://www.mql5.com/ru/code/11077
    Res = MatrixOut.ToMatrix();
  
  return(Res);
}

// Same correlation matrix, computed one variable at a time:
// on each pass a single column is correlated against the whole matrix.
const matrix<double> CorrMatrix2( const matrix<double> &Matrix )
{
  matrix<double> Res = {};
  Res.Init(Matrix.Cols(), Matrix.Cols());
  
  const CMatrixDouble MatrixIn(Matrix);
  CMatrixDouble Vector(Matrix);
  CMatrixDouble Corr;

  for (int i = 0; i < (int)Matrix.Cols(); i++)
  {
    if (i)
      Vector.SwapCols(0, i); // bring the i-th original column into position 0
    
    // Correlation of that single column (m1 = 1) against all columns of the original matrix.
    CBaseStat::PearsonCorrM2(Vector, MatrixIn, MatrixIn.Rows(), 1, MatrixIn.Cols(), Corr);
      
    Res.Col(Corr.Row(0), i); // store the resulting correlations as column i of the result
  }
  
  return(Res);
}
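
A quick usage sketch (made-up data, my own addition rather than part of fxsaber's post): both functions take a matrix whose columns are the series to correlate and return the square correlation matrix.

void OnStart()
{
  // Made-up data: 1000 observations of 5 random "predictors".
  matrix<double> Data(1000, 5);
  for (ulong r = 0; r < Data.Rows(); r++)
    for (ulong c = 0; c < Data.Cols(); c++)
      Data[r][c] = MathRand() / 32767.0;

  matrix<double> Corr1 = CorrMatrix(Data);  // one ALGLIB call for the whole matrix
  matrix<double> Corr2 = CorrMatrix2(Data); // row-by-row variant, same result

  Print(Corr1);
}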
 
fxsaber #:

Fast and row-by-row versions (corrected).

Thank you!

 
Maxim Dmitrievsky #:

I've put it aside for now; the results are no better than with ML, although ML is also lame in terms of balance-curve smoothness.

5-minute timeframe, half of the period is training


is nothing more than an analogue of the 4-rosh grail with a short stop.

 
Renat Akhtyamov #:

is nothing more than an analogue of the 4-rosh grail with a short stop.

an analogue of your hedge fund with its negative balance.

 
Maxim Dmitrievsky #:

an analogue of your hedge fund with its negative balance.

by the way, are you aware what actions result in a negative balance?

You don't have to worry about it.

it is when you withdraw more than your deposit ;)

and since the margin is not yet zero, trading continues (for free !!!) - weird, huh? ;)))
