Machine learning in trading: theory, models, practice and algo-trading - page 3429

 

Please give me your thoughts on the topic of self-deception.

  1. I ran a genetic optimisation (interrupted early) on the Sample, sorted the results by the optimisation criterion, and kept only the top 50.
  2. Out of these 50, I chose 2-3 that pass the OOS perfectly both to the left and to the right of the Sample. Length_OOS = 2 * Length_Sample.

How strong is the curve-fitting factor in the second step?
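The two-step selection described above can be sketched roughly as follows. This is a minimal sketch under assumptions: the `Pass` record, function names, and the idea of a single OOS threshold are all hypothetical, not anything from the actual optimiser.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Hypothetical record for one optimisation pass: a parameter set
// plus its score on the Sample and on the two OOS segments.
struct Pass {
    std::vector<double> params;   // optimised inputs
    double sample_score;          // optimisation criterion on Sample
    double oos_left_score;        // OOS segment before the Sample
    double oos_right_score;       // OOS segment after the Sample
};

// Step 1: keep the top-N passes by the in-sample criterion.
std::vector<Pass> top_by_sample(std::vector<Pass> passes, std::size_t n) {
    std::sort(passes.begin(), passes.end(),
              [](const Pass& a, const Pass& b) {
                  return a.sample_score > b.sample_score;
              });
    if (passes.size() > n) passes.resize(n);
    return passes;
}

// Step 2: of those, keep only passes that clear a threshold on BOTH
// OOS segments (left and right of the Sample).
std::vector<Pass> pass_oos(const std::vector<Pass>& top, double min_oos) {
    std::vector<Pass> out;
    for (const Pass& p : top)
        if (p.oos_left_score >= min_oos && p.oos_right_score >= min_oos)
            out.push_back(p);
    return out;
}
```

The curve-fitting question is exactly about step 2: a pass can land in `pass_oos`'s output by luck alone, since 50 candidates were screened against the same two OOS segments.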

 
fxsaber #:
  • I ran a genetic optimisation (interrupted early) on the Sample, sorted the results by the optimisation criterion, and kept only the top 50.
  • Out of these 50, I chose 2-3 that pass the OOS perfectly both to the left and to the right of the Sample. Length_OOS = 2 * Length_Sample.
  • How strong is the curve-fitting factor in the second step?

    100%.
     
    fxsaber #:
  • I ran a genetic optimisation (interrupted early) on the Sample, sorted the results by the optimisation criterion, and kept only the top 50.
  • Out of these 50, I chose 2-3 that pass the OOS perfectly both to the left and to the right of the Sample. Length_OOS = 2 * Length_Sample.
  • How strong is the curve-fitting factor in the second step?

    The greater the differences in the parameters of these 3 sets, the less likely the fit is.

     
    Andrey Dik #:

    The greater the differences in the parameters of these 3 sets, the less likely the fit is.

    Very interesting statement, thank you. Although I had understood it a bit differently:


    The greater the differences in the parameters of the 50 sets (the genetic optimisation is interrupted early precisely for this purpose), the lower the probability of a fit in step 2.
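One illustrative way to quantify "how different the parameters of the sets are" is the mean pairwise distance between the parameter vectors. This is only a sketch of the idea, not anything either poster actually computes; the function name is hypothetical, and real parameters would need normalising to comparable scales first.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Mean pairwise Euclidean distance between parameter sets.
// A value near zero means the selected sets cluster in one spot of
// the parameter space (more suspicious of a lucky fit); a large
// value suggests genuinely distinct solutions.
double mean_pairwise_distance(const std::vector<std::vector<double>>& sets) {
    double sum = 0.0;
    std::size_t pairs = 0;
    for (std::size_t i = 0; i < sets.size(); ++i)
        for (std::size_t j = i + 1; j < sets.size(); ++j) {
            double d2 = 0.0;
            for (std::size_t k = 0; k < sets[i].size(); ++k) {
                double diff = sets[i][k] - sets[j][k];
                d2 += diff * diff;
            }
            sum += std::sqrt(d2);
            ++pairs;
        }
    return pairs ? sum / pairs : 0.0;
}
```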

     
    ))
     
    mytarmailS #:

    you don't get it.

    Okay, do it.

    The tester needs to be rewritten for multi-currency first.

    Then I'll add an option to embed fractals.

    Then I'll add a causal-inference option via cross-learning. So fractals are only a detail.

    That is, you throw in different pairs for better generalisation and add more patterns to the overall pot.

    If tree-based models don't work well with such "embeddings", I might consider a neural network.

    Then I'll switch to an LLM. Maybe I'll need a new computer with a new graphics card, or I'll have to buy a Colab subscription to get more power.
     
    Maxim Dmitrievsky #:

    The tester needs to be rewritten for multi-currency first.

    Then I'll add an option to embed fractals.

    Then I'll add a causal-inference option via cross-learning. So fractals are only a detail.

    That is, you throw in different pairs for better generalisation and add more patterns to the overall pot.

    If tree-based models don't work well with such "embeddings", I might consider a neural network.

    Then I'll switch to an LLM. Maybe I'll need a new computer with a new graphics card, or I'll have to buy a Colab subscription to get more power.

    That's not embedding. )

    Besides, Colab gives you a graphics card for free; it should be enough for your needs.

     
    mytarmailS #:

    That's not embedding. )

    Besides, Colab gives you a graphics card for free; it should be enough for your needs.

    It may not be enough for an LLM; they are very resource-hungry.

    "Embedding" here is loose terminology; I don't know what else to call it.

     

    I need a simple but very fast backtester written in C++ or Rust with support for limit entries, stop-losses and take-profits.

    If anyone knows of such a backtester, please advise.
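The core event loop such a backtester needs is small. Below is a minimal single-position C++ sketch of the requested mechanics for one long trade: fill on a buy-limit, then exit at stop-loss or take-profit; all names are hypothetical, ticks are bare prices, and spread, commission, and partial fills are deliberately ignored.

```cpp
#include <vector>

// Hypothetical long limit trade: buy-limit entry with protective
// stop-loss below and take-profit above.
struct LimitTrade {
    double entry;  // buy-limit price
    double stop;   // stop-loss price   (stop < entry)
    double take;   // take-profit price (take > entry)
};

// Walks the tick stream once. The trade is filled when the price
// trades down to the limit; after that, the first tick at or beyond
// the stop or take closes it. Returns realised profit in price
// units: 0.0 if never filled, last-tick close if still open.
double run_long_limit(const std::vector<double>& ticks, const LimitTrade& t) {
    bool filled = false;
    for (double price : ticks) {
        if (!filled) {
            if (price <= t.entry) filled = true;  // limit touched
            continue;  // note: no exit check on the fill tick itself
        }
        if (price <= t.stop) return t.stop - t.entry;  // stopped out
        if (price >= t.take) return t.take - t.entry;  // target hit
    }
    return (filled && !ticks.empty()) ? ticks.back() - t.entry : 0.0;
}
```

A real multi-trade backtester would keep a container of open positions and check each against every tick, but the per-tick logic stays exactly this shape, which is why a tick backtester in C++ or Rust can be so fast.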

     
    mytarmailS #:

    I need a simple but very fast backtester written in C++ or Rust with support for limit entries, stop-losses and take-profits.

    If anyone knows of such a backtester, please advise.

    Virtual is written in MQL5. It should be ported to C++ without any problems. I think it is one of the fastest tick backtesters in the world.
