Discussing the article: "Self Optimizing Expert Advisors in MQL5 (Part 14): Viewing Data Transformations as Tuning Parameters of Our Feedback Controller"

 

Check out the new article: Self Optimizing Expert Advisors in MQL5 (Part 14): Viewing Data Transformations as Tuning Parameters of Our Feedback Controller.

Preprocessing is a powerful yet easily overlooked tuning parameter. It lives in the shadows of its bigger brothers: optimizers and shiny model architectures. Small percentage improvements here can have disproportionately large, compounding effects on profitability and risk. Too often, this largely unexplored science is boiled down to a simple routine, seen only as a means to an end, when in reality it is where signal can be directly amplified, or just as easily destroyed.

Preprocessing is a powerful and yet often overlooked tuning parameter in any machine learning framework or pipeline.

It is an important control knob in the pipeline, yet it is often hidden away in the shadows of its bigger brothers. Optimizers and shiny model architectures attract most of the focus and research work, and large amounts of academic effort are poured in those directions, while comparatively little time is spent studying the effects of preprocessing techniques.

Silently, the preprocessing we apply to the data at hand impacts model performance in ways that can be surprisingly large. Even small percentage improvements made in preprocessing can compound over time and materially affect the profitability and risk of our trading applications.
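To make the compounding claim concrete, here is a minimal arithmetic sketch. The 1% figure is purely hypothetical, chosen only to illustrate how a small, persistent monthly edge gained from better preprocessing grows over a year:

```python
# Hypothetical: a preprocessing change that adds 1% to monthly returns.
monthly_edge = 0.01

# Compounded over 12 months, the small edge grows noticeably.
annual_gain = (1 + monthly_edge) ** 12 - 1
print(f"Annualized effect of a 1% monthly edge: {annual_gain:.1%}")  # ~12.7%
```

The same arithmetic works in reverse: a small, systematic loss of signal at the preprocessing stage also compounds against us.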

All too often, we rush through preprocessing without giving much thought, or much time, to validating whether we have truly identified the best possible transformation for the input data.
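The idea of treating the transformation itself as a tuning parameter can be sketched in a few lines. The following is an illustrative Python example, not code from the article: it generates a synthetic random-walk price series, tries three candidate transformations (raw values, z-score, log returns), fits the same simple least-squares predictor on lagged values under each, and compares them with a scale-free skill score so the transforms can be ranked against one another:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic price series (random walk) -- purely illustrative data.
prices = 100 + np.cumsum(rng.normal(0, 1, 500))

def make_dataset(series, lags=5):
    """Lagged values as features, next value as the target."""
    n = len(series)
    X = np.column_stack([series[i:n - lags + i] for i in range(lags)])
    y = series[lags:]
    return X, y

# Candidate transformations, treated as one more tuning parameter.
transforms = {
    "raw":        lambda s: s,
    "z-score":    lambda s: (s - s.mean()) / s.std(),
    "log-return": lambda s: np.diff(np.log(s)),
}

def score(series):
    """Out-of-sample skill of a plain OLS model on the transformed series."""
    X, y = make_dataset(series)
    split = int(0.7 * len(y))
    A = np.c_[np.ones(split), X[:split]]          # design matrix with intercept
    coef, *_ = np.linalg.lstsq(A, y[:split], rcond=None)
    pred = np.c_[np.ones(len(y) - split), X[split:]] @ coef
    mse = np.mean((pred - y[split:]) ** 2)
    # 1 - MSE / variance: scale-free, so scores are comparable across transforms.
    return 1.0 - mse / np.var(y[split:])

results = {name: score(t(prices)) for name, t in transforms.items()}
for name, skill in sorted(results.items(), key=lambda kv: -kv[1]):
    print(f"{name:10s} skill = {skill:.4f}")
```

The key design choice is the scale-free score: raw prices, z-scores, and log returns live on very different scales, so comparing raw MSE across them would be meaningless. The same loop structure extends naturally to any other transformation candidates one wants to validate before committing to a pipeline.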


Author: Gamuchirai Zororo Ndawana