Discussing the article: "Self Optimizing Expert Advisor With MQL5 And Python (Part VI): Taking Advantage of Deep Double Descent"

 

Check out the new article: Self Optimizing Expert Advisor With MQL5 And Python (Part VI): Taking Advantage of Deep Double Descent.

Traditional machine learning teaches practitioners to be vigilant not to overfit their models. However, this conventional wisdom is being challenged by new insights published by diligent researchers from Harvard, who have discovered that what appears to be overfitting may, in some circumstances, be the result of terminating your training procedure prematurely. We will demonstrate how we can use the ideas published in the research paper to improve our use of AI in forecasting market returns.

There are many techniques for detecting overfitting when developing AI models. The most trusted method is to examine plots of the model's training and test error. Initially, the two curves may fall together, which is a good sign. As we continue training, we reach an optimal error level; beyond that point, the training error keeps falling while the test error only gets worse. Many techniques have been developed to remedy this problem, such as early stopping. Early stopping terminates the training procedure when the model's validation error stops improving significantly or continually deteriorates. Afterward, the best weights are restored, and it is assumed that the best model has been located, as in Fig 1 below.

Fig 1: Overfitting
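The early-stopping procedure described above can be sketched in a few lines of Python. This is a minimal, framework-agnostic illustration, not the article's actual training code: `train_step` and `validation_loss` are hypothetical callables standing in for one training pass and a validation evaluation, respectively.

```python
import copy

def train_with_early_stopping(model, train_step, validation_loss,
                              max_epochs=100, patience=5):
    """Train until the validation error stops improving for `patience`
    consecutive epochs, then restore the best weights seen so far."""
    best_loss = float("inf")
    best_model = copy.deepcopy(model)      # checkpoint of the best weights
    epochs_without_improvement = 0

    for epoch in range(max_epochs):
        train_step(model)                  # one pass over the training data
        val_loss = validation_loss(model)  # evaluate on held-out data
        if val_loss < best_loss:
            best_loss = val_loss
            best_model = copy.deepcopy(model)  # new best: save a checkpoint
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1
            if epochs_without_improvement >= patience:
                break                      # stop early; test error is rising
    return best_model, best_loss
```

The `patience` parameter controls how long we tolerate a stagnant validation error before stopping; the double-descent result suggests this tolerance may sometimes be set far too low, causing us to stop before a second descent in test error can occur.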

Author: Gamuchirai Zororo Ndawana