Adaptive Weekly Optimization vs. Fixed Rules in EA Strategies – What's Your Take?

 
I'm curious about the effectiveness of a weekly adaptive optimization approach compared to sticking with a single, static rule for EA (Expert Advisor) strategies. Consider an EA that has already been optimized on one year of data. What if we update it every week with slight adjustments based on the past week's market conditions?

Is this adaptive method more beneficial for navigating current market dynamics, or is it better to maintain a fixed rule without frequent tweaks? I'm interested in hearing about your experiences, insights, and any potential pitfalls you might have encountered with either approach.
 

As with many things in life, it depends. Weekly optimization can lead to overfitting, depending on the parameters. For example: if you're optimizing every week and you're looking for the optimal TP and SL between 1 and 300 pips with a 1-pip step, you're probably going to overfit. On the other hand, if you're trading daily charts and all you want to do is figure out whether a moving average should be 20, 21, or 22 periods long, you'd probably be fine. It would also likely be a total waste of your time, but you wouldn't run into overfitting problems.
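
To make the degrees-of-freedom point concrete, here is a minimal sketch (a standalone MQL5 script; the step sizes are just the ones quoted above, not from any real EA) that simply counts how many parameter combinations each of those two optimizations has to rank. Ninety thousand TP/SL cells fitted to one week of trades almost guarantees the top cell is noise, while three MA candidates leave very little room to overfit.

```mql5
//+------------------------------------------------------------------+
//| Hypothetical sketch: how big is the search space in each case?   |
//+------------------------------------------------------------------+
void OnStart()
  {
   // Case 1: TP and SL each from 1 to 300 pips, 1-pip step
   int tpCandidates = 300;                        // 1..300
   int slCandidates = 300;                        // 1..300
   int fineGrid     = tpCandidates*slCandidates;  // 90,000 combinations

   // Case 2: moving-average length of 20, 21 or 22 on daily charts
   int coarseGrid   = 3;

   PrintFormat("TP/SL grid: %d combinations ranked on one week of trades", fineGrid);
   PrintFormat("MA length : %d combinations - little room to overfit", coarseGrid);
  }
```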

Weekly optimization would use, in your case, a one-year lookback period. But what you're actually doing isn't so much optimization as telling the computer "find the settings that would have worked best in the past on this data," which always overfits to some extent; the only question is how much. There's no exact answer to that, and we can't tell you it's always better to re-optimize than to just let the system keep doing what it does.

For all you know, the setting you just changed, say the SL from 20 to 30 pips, fits the past, while 20 pips would have been the optimal setting for the week ahead.
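
That trap can be sketched with a toy walk-forward loop. The script below uses synthetic random weekly P/L numbers instead of real backtests (the candidate count, week counts and scoring are all hypothetical): each week it picks the parameter that was best over the trailing year, then checks whether that pick was also best in the following week. On pure noise the freshly "optimized" setting rarely carries forward, which is exactly the SL 20-vs-30 situation above.

```mql5
//+------------------------------------------------------------------+
//| Walk-forward sketch on synthetic data: pick the best candidate   |
//| over a trailing one-year window, then see whether it was also    |
//| the best candidate in the week that followed.                    |
//+------------------------------------------------------------------+
#property script_show_inputs

input int Weeks      = 104;   // total synthetic weeks
input int Lookback   = 52;    // "one year" optimization window
input int Candidates = 5;     // e.g. SL candidates 10,20,30,40,50 pips

void OnStart()
  {
   MathSrand(42);
   // score[w*Candidates+c] = hypothetical weekly P/L of candidate c in week w
   double score[];
   ArrayResize(score, Weeks*Candidates);
   for(int i=0; i<Weeks*Candidates; i++)
      score[i] = (MathRand()/32767.0 - 0.5);      // random P/L in [-0.5, 0.5]

   int hits=0, tests=0;
   for(int w=Lookback; w<Weeks-1; w++)
     {
      // 1) "optimize": best candidate over the trailing lookback window
      int best=-1; double bestSum=-DBL_MAX;
      for(int c=0; c<Candidates; c++)
        {
         double sum=0.0;
         for(int k=w-Lookback; k<w; k++) sum += score[k*Candidates+c];
         if(sum>bestSum) { bestSum=sum; best=c; }
        }
      // 2) check whether that pick is also the best in the *next* week
      int bestNext=0;
      for(int c=1; c<Candidates; c++)
         if(score[(w+1)*Candidates+c] > score[(w+1)*Candidates+bestNext]) bestNext=c;
      if(best==bestNext) hits++;
      tests++;
     }
   PrintFormat("In-sample pick matched next week's best in %d of %d weeks", hits, tests);
  }
```

On real data the match rate should be noticeably better than chance if the parameter actually captures something persistent; if it isn't, the weekly re-optimization is just chasing noise.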

 

In general I totally agree with the previous answer, but I would like to add a few things.

In theory, the adaptive method is always better suited to perceiving the market, because it allows dynamic parameters with combined sensitivity. This is where a big brain and precise understanding are needed, because the real question is what exactly to optimize and for what purpose. I think you have asked a question that is too complicated to answer unequivocally, because the visitors here don't know exactly; probably nobody does, otherwise Elon Musk wouldn't be (or have been) the richest man in the world.

We can only construct reasonable predictions of the future from past data and statistical analysis. In other words, predictions rely on historical statistics to estimate future outcomes. Mathematically, if F represents the future, P the past, and S the predictive statistics derived from P, then: if P = 0 (i.e., no past data is available), F can only be placed in the interval [0, ∞], which represents complete uncertainty. Without any past data, there is complete uncertainty, and no reasonable prediction about the future can be established.

Mathematically, the larger the past data set (P), the more reliable and refined the predictive statistics (S) become, which in turn can increase the accuracy and scope of the future predictions (F). As P grows, statistical properties emerge that constrain the possible values of F to a narrower and more meaningful range. A larger P also expands the predictive scope: more historical data points allow us to detect rarer events, which extends the range of F we can say anything useful about. In short, since S represents the statistical knowledge derived from P, a larger P refines S, making F both more precise and potentially wider in scope.
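
Restated in symbols (just a compact form of the two paragraphs above, keeping the interval exactly as written there; not a rigorous derivation):

```latex
S = f(P), \qquad \hat{F} = g(S)                        % statistics come from the past, the prediction from the statistics
P = 0 \;\Rightarrow\; \hat{F} \in [0,\infty]            % no past data: complete uncertainty
|P_1| < |P_2| \;\Rightarrow\; I(\hat{F}\mid P_2) \subseteq I(\hat{F}\mid P_1)   % more past data: narrower interval
```

where I(F̂ | P) denotes the interval of plausible future outcomes given the past data P.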

Using the same concept to find the best values within a category: the best learned or optimized values are inversely proportional to the best statistically calculated values; otherwise, they wouldn't be the best in their respective category. It's as simple as that. Whether we treat them as a global optimum or a local optimum doesn't matter. If our current optimum values are no longer the best, a new optimum forms, and this shows up in the statistics. We must identify and incorporate these new values as quickly as possible. It is pointless to base logic on a prediction that does not statistically reflect the best optimum, or on a relationship that has not achieved statistically significant results, regardless of how those values are calculated or weighted. In retrospect, they must be the best values within their category.

Now imagine you don't have any new data at all: what do you use to calculate, and for what? Just hope that the distant past will repeat itself? Doesn't that look like a casino? So it is logical to increase the predictive potential by amplifying the more recent data and discarding old data as necessary. In extreme cases, very sensitive parameters could be recalibrated even daily. In fact, M1 scalpers should check every hour to make sure the market hasn't gone to an unprecedented extreme. If you use the DOM, it is probably a prerequisite that you change your strategies dynamically according to the order blocks generated, re-optimizing your strategy even every new minute. It doesn't matter whether you are an M1 scalper or you simply use more sensitive indicators to find a better entry point with the same indicators.
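
One simple way to "amplify the more recent data and discard the old", as described above, is an exponentially weighted average of whatever statistic you track. The sketch below is a minimal MQL5 script over a made-up per-trade P/L stream; the decay factor and the numbers are purely illustrative and not from the original post.

```mql5
//+------------------------------------------------------------------+
//| Recency-weighted statistic: exponentially weighted average of a  |
//| per-trade P/L stream, so newer trades outweigh older ones.       |
//+------------------------------------------------------------------+
void OnStart()
  {
   double alpha    = 0.10;                               // hypothetical decay factor in (0,1]
   double trades[] = { 12, -8, 5, -20, 7, 15, -3 };      // made-up pip results, newest last
   double ema      = trades[0];                          // seed with the first observation
   for(int i=1; i<ArraySize(trades); i++)
      ema += alpha*(trades[i] - ema);                    // old data fades out geometrically
   PrintFormat("Recency-weighted average P/L: %.2f pips", ema);
  }
```

A smaller alpha means a longer effective memory (roughly 2/alpha - 1 observations under the usual EMA-period convention); a larger alpha reacts faster to the most recent data but is noisier.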

It should be borne in mind that predicting the future is always a thankless task, and even the best statistical relationships from the past may not work in the future. We can also draw parallels here with machine learning, in that if you don't know what to teach and why, you are likely to make the results much worse.

This is a very important issue: knowing what to optimize, when, and how is the best foresight tool we will have until the time machine is invented. I could write hundreds of pages on this subject, but no one seems to bother to read... and to be honest, I probably wouldn't bother to write them either :D

But yes, MQL5 is a very fast language if you use it correctly (compared, for example, to Pine Script, which can't do this sort of thing nearly as fast). Ultra-fast optimization can be done with incremental computing (stream processing), using batch recomputation as little as possible. You can re-optimize several systems simultaneously, even every hour or minute, and there is no need to apply deep learning or the MQL5 optimizer to update your strategy parameters automatically :)
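
As a concrete example of the incremental (stream-processing) idea, here is a minimal MQL5 sketch of a rolling mean maintained in O(1) per new value instead of re-summing the whole lookback on every update; the window length and the synthetic price stream are hypothetical, and the same pattern extends to variances, hit rates or whatever statistic drives the hourly or weekly re-optimization.

```mql5
//+------------------------------------------------------------------+
//| Incremental (streaming) rolling mean: O(1) per new value,        |
//| instead of re-summing the whole window (batch) on every update.  |
//+------------------------------------------------------------------+
#define WINDOW 20

double g_buf[WINDOW];   // ring buffer of the last WINDOW values
int    g_count = 0;     // how many values seen so far
int    g_head  = 0;     // next slot to overwrite
double g_sum   = 0.0;   // running sum of the current window

// O(1) update: add the new value, drop the one leaving the window
double RollingMean(const double x)
  {
   if(g_count >= WINDOW)
      g_sum -= g_buf[g_head];          // value falling out of the window
   else
      g_count++;
   g_buf[g_head] = x;
   g_sum += x;
   g_head = (g_head + 1) % WINDOW;
   return g_sum / g_count;
  }

void OnStart()
  {
   MathSrand(7);
   double price = 1.1000;
   for(int i=0; i<100; i++)
     {
      price += (MathRand()/32767.0 - 0.5)*0.0010;   // fake price stream
      double mean = RollingMean(price);             // updated incrementally
      if(i%25==0)
         PrintFormat("step %3d  price %.5f  rolling mean %.5f", i, price, mean);
     }
  }
```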

cheers...