
Mastering Log Records (Part 10): Avoiding Log Replay by Implementing a Suppression System
Introduction
This article arose from a direct request from a user of the Logify library. He pointed out a problem that many people face in practice: when the volume of logs grows too much, repeated or irrelevant messages end up polluting the history, making it difficult to find what really matters. If you have any other ideas, questions or challenges that you'd like me to address, feel free to leave a comment at the end. This is our space, and it's by listening to you that the library evolves.
Before we go any further, it's important to understand what "log suppression" means. In a nutshell, suppression is the process of controlling which log messages are recorded, with the aim of avoiding excess, redundancy or pollution of information. Instead of simply dumping everything the system produces, you filter and limit what appears, ensuring that the log contains only the messages that are useful, relevant and at the right time.
In this article, we'll present a practical implementation of a log suppression system for Logify, designed to be flexible and efficient. You'll see how to combine different forms of control, such as avoiding identical messages repeated in sequence, limiting the frequency with which the same log appears, controlling the maximum number of repetitions, and even filtering by the source or file from which the log came. All this with an intelligent system based on bitwise modes, which allows you to activate several rules at the same time without complications.
After following this article, you'll have mastered the creation of a robust solution to keep your log lean and efficient, capable of suppressing excesses automatically. You will understand how to apply clear rules that facilitate analysis and reduce noise, saving resources and time. This improvement is especially useful for production environments, where excessive logs can hinder performance and make maintenance difficult. Remember that the final version of the library is attached at the end of the article and available for download.
File organization
In your Logify library project, create a new folder called Suppression inside the main Logify folder. This helps keep the code organized and makes it clear that everything that involves "suppressing" logs is concentrated there. Inside the Suppression folder, create a new file called LogifySuppression.mqh. This file will be the starting point for our new suppression class, which will control which messages should actually appear in the log, avoiding repetition and excess.
At first, the class may look simple, with just an empty constructor and destructor, like this:
//+------------------------------------------------------------------+
//|                                             LogifySuppression.mqh |
//|                                   Copyright 2023, MetaQuotes Ltd. |
//|                                              https://www.mql5.com |
//+------------------------------------------------------------------+
#property copyright "Copyright 2023, MetaQuotes Ltd."
#property link      "https://www.mql5.com"
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
class CLogifySuppression
{
private:

public:
   CLogifySuppression(void);
   ~CLogifySuppression(void);
};
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
CLogifySuppression::CLogifySuppression(void)
{
}
//+------------------------------------------------------------------+
//|                                                                  |
//+------------------------------------------------------------------+
CLogifySuppression::~CLogifySuppression(void)
{
}
//+------------------------------------------------------------------+
It's a clean starting point, with nothing implemented yet, just to structure and test the inclusion of the file.
Expanding: imports and definitions
To give this class functionality, we'll need to import the log data model (LogifyModel.mqh), which defines the format of the messages we receive for processing. We'll also define constants to validate the suppression parameters, guaranteeing minimum and maximum limits to avoid invalid configurations.
//+------------------------------------------------------------------+
//| Include files                                                    |
//+------------------------------------------------------------------+
#include "../LogifyModel.mqh"
//+------------------------------------------------------------------+
//| Validation constants                                             |
//+------------------------------------------------------------------+
#define MAX_SUPPRESSION_MODE 255 // Maximum valid mode combination
#define MIN_THROTTLE_SECONDS 1   // Minimum interval between messages
#define MIN_REPEAT_COUNT     1   // Minimum number of repetitions
Suppression modes with bitwise enum
To understand how to control different ways of avoiding repetition and excess logs, we need to talk about a fundamental concept: the use of bitwise enumerations.
But what exactly is this? In programming, an enum is an organized way of defining a set of named constants. For example, you could have an enum to represent log levels such as DEBUG, INFO, ERROR, etc. Bitwise is an operation that acts directly on the bits that make up an integer. These bits are like switches that can be on (1) or off (0). When we combine enum and bitwise, we create unique values that represent powers of 2, i.e. 1, 2, 4, 8, 16 and so on. Each of these values corresponds to a different bit in the binary number.
Why use bitwise enums? Imagine that we want to apply more than one suppression mode at the same time. For example, limiting consecutive repeated messages and also limiting messages that appear too many times in a row. If we only created simple values in the enum, we could only choose one mode at a time, right? That would limit the system too much. With bitwise enums, we can combine several modes by activating the corresponding bits. The combination is done using the bitwise OR operator (|), which "links" the bits of the desired modes into a single number.
Let's take a practical look at the definition we use:
enum ENUM_LOG_SUPRESSION_MODE
{
   LOG_SUPRESSION_MODE_NONE            = 0,      // No suppression
   LOG_SUPRESSION_MODE_CONSECUTIVE     = 1 << 0, // 00001 = 1: Identical consecutive messages
   LOG_SUPRESSION_MODE_THROTTLE_TIME   = 1 << 1, // 00010 = 2: Same message within X seconds
   LOG_SUPRESSION_MODE_BY_REPEAT_COUNT = 1 << 2, // 00100 = 4: After N repetitions
   LOG_SUPRESSION_MODE_BY_ORIGIN       = 1 << 3, // 01000 = 8: Based on message origin
   LOG_SUPRESSION_MODE_BY_FILENAME     = 1 << 4, // 10000 = 16: Based on source filename
};
Here, 1 << N means "1 shifted left N times". Each shift creates a single bit:
- 1 << 0 = 1 (binary 00001)
- 1 << 1 = 2 (binary 00010)
- 1 << 2 = 4 (binary 00100)
- 1 << 3 = 8 (binary 01000)
- 1 << 4 = 16 (binary 10000)
If you want to activate more than one mode at the same time, simply combine the values with the OR operator (|):
int mode = LOG_SUPRESSION_MODE_CONSECUTIVE | LOG_SUPRESSION_MODE_THROTTLE_TIME; // 1 | 2 = 3 (00011)
That number 3, in binary 00011, indicates that the first two modes are active simultaneously. Why does this help with log suppression?
- Flexibility: The system accepts several suppression rules at the same time, without having to create separate enums for each possible combination.
- Efficiency: Checking whether a mode is active is simple and quick: just use the bitwise AND operator (&) to check whether that mode's bit is on.
- Extensibility: If we want to add new suppression modes in the future, we can simply add new bits to the enum, without breaking anything that already works.
For example, to find out if the "suppression by origin" mode is active, we do:
if((mode & LOG_SUPRESSION_MODE_BY_ORIGIN) == LOG_SUPRESSION_MODE_BY_ORIGIN)
{
   // Apply filter by source
}
If the corresponding bit is on, the condition will be true.
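MQL5 inherits this bit arithmetic from C++, so the mechanics can be verified with a small, self-contained C++ sketch. The shortened enum names below are illustrative only and are not part of Logify:

```cpp
#include <cassert>

// Illustrative flags mirroring the article's suppression modes
// (shortened names; not the library's identifiers).
enum SuppressionMode
{
   MODE_NONE          = 0,
   MODE_CONSECUTIVE   = 1 << 0, // 1  (00001)
   MODE_THROTTLE_TIME = 1 << 1, // 2  (00010)
   MODE_BY_REPEAT     = 1 << 2, // 4  (00100)
   MODE_BY_ORIGIN     = 1 << 3  // 8  (01000)
};

// True when every bit of 'flag' is set in 'mode'.
bool HasMode(int mode, int flag)
{
   return (mode & flag) == flag;
}
```

Combining rules is `MODE_CONSECUTIVE | MODE_THROTTLE_TIME` (which equals 3, binary 00011), and turning a single rule off is `mode &= ~MODE_CONSECUTIVE;`, which clears only that bit.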
Configuration with struct
After defining the possible suppression modes, it's time to encapsulate all the configuration logic in one place, and that's where the MqlLogifySuppressionConfig struct comes in. This struct works like a "control panel" where you define how and when log messages should be suppressed. The idea is simple: to store the parameters that control the suppression behavior and allow a clean and reusable configuration, whether in EAs, indicators or auxiliary libraries.
Let's break down what's in there:
- mode: combining suppression modes
This is the heart of the configuration. Here we store a combination of suppression modes using an int with bitwise operations. This allows the developer to activate multiple suppression modes at the same time, such as:
- Suppressing identical consecutive logs
- Ignoring repeated messages in a time interval
- Stopping after X number of repetitions
- throttle_seconds: limiting by time
This field defines an interval (in seconds) that must exist between repeated messages for them to be displayed again. It is useful in cases where a function triggers the same log several times a second, which quickly pollutes the console and makes everything unreadable.
- max_repeat_count: limiting by quantity
Here we define the maximum number of times the same message can appear before being suppressed. This is useful, for example, to catch an error that has occurred a few times, but without continuing to display it indefinitely.
- Whitelist/Blacklist by source and file
Often, the developer wants to apply suppression only to messages coming from certain places in the code, or exclude specific origins altogether.
That's why the struct includes four arrays:
- allowed_origins[]: if filled in, only these origins will be allowed.
- blocked_origins[]: any origin listed here will always be blocked.
- allowed_filenames[]: same concept, but applied to the name of the source file.
- blocked_filenames[]: files listed here are always blocked.
These fields allow for extreme granularity in control. You could, for example, allow logs from your main EA, but suppress all logs coming from a noisy third-party library.
Let's define a constructor with default values. For a log suppression class in trading systems, it's important to have a balanced default setting that prevents log spam but doesn't hide important information. Here is a suggested default configuration, with its justification:
- mode = LOG_SUPRESSION_MODE_THROTTLE_TIME | LOG_SUPRESSION_MODE_CONSECUTIVE | LOG_SUPRESSION_MODE_BY_REPEAT_COUNT: combines the three basic suppression modes, preventing log spam without losing critical information while keeping a traceable history.
- throttle_seconds = 5: five seconds is a good balance for most cases, long enough not to miss important changes, but short enough to maintain traceability.
- max_repeat_count = 15: fifteen repetitions are enough to identify patterns and debug if necessary, while avoiding flooding when problems occur.
//+------------------------------------------------------------------+
//| Struct: MqlLogifySuppressionConfig                               |
//+------------------------------------------------------------------+
struct MqlLogifySuppressionConfig
{
   // Basic configuration
   int mode;                    // Combination of suppression modes
   int throttle_seconds;        // Seconds between messages
   int max_repeat_count;        // Max repetitions before suppression

   // Origin whitelist/blacklist
   string allowed_origins[];    // If not empty, only these are allowed
   string blocked_origins[];    // Always blocked

   // Filename whitelist/blacklist
   string allowed_filenames[];  // If not empty, only these are allowed
   string blocked_filenames[];  // Always blocked

   //--- Default constructor
   MqlLogifySuppressionConfig(void)
   {
      mode = LOG_SUPRESSION_MODE_THROTTLE_TIME | LOG_SUPRESSION_MODE_CONSECUTIVE | LOG_SUPRESSION_MODE_BY_REPEAT_COUNT;
      throttle_seconds = 5;
      max_repeat_count = 15;
      ArrayResize(allowed_origins, 0);
      ArrayResize(blocked_origins, 0);
      ArrayResize(allowed_filenames, 0);
      ArrayResize(blocked_filenames, 0);
   }

   //--- Destructor
   ~MqlLogifySuppressionConfig(void)
   {
   }
};
//+------------------------------------------------------------------+
We will also define auxiliary methods so the user doesn't have to manipulate the arrays manually with ArrayResize() and indexes. Practical methods such as AddAllowedOrigin() and AddBlockedFilename() make the configuration clear, readable and less error-prone:
//+------------------------------------------------------------------+
//| Struct: MqlLogifySuppressionConfig                               |
//+------------------------------------------------------------------+
struct MqlLogifySuppressionConfig
{
   //--- Helper methods for array configuration
   void AddAllowedOrigin(string origin)
   {
      int size = ArraySize(allowed_origins);
      ArrayResize(allowed_origins, size + 1);
      allowed_origins[size] = origin;
   }

   void AddBlockedOrigin(string origin)
   {
      int size = ArraySize(blocked_origins);
      ArrayResize(blocked_origins, size + 1);
      blocked_origins[size] = origin;
   }

   void AddAllowedFilename(string filename)
   {
      int size = ArraySize(allowed_filenames);
      ArrayResize(allowed_filenames, size + 1);
      allowed_filenames[size] = filename;
   }

   void AddBlockedFilename(string filename)
   {
      int size = ArraySize(blocked_filenames);
      ArrayResize(blocked_filenames, size + 1);
      blocked_filenames[size] = filename;
   }
};
//+------------------------------------------------------------------+
Finally, we've included the ValidateConfig() method. This checks that the values provided make sense, avoiding failures. Among the validations:
- throttle_seconds cannot be less than the minimum threshold (MIN_THROTTLE_SECONDS), avoiding zero or negative values.
- max_repeat_count must be at least MIN_REPEAT_COUNT.
- mode must be a valid combination of the defined flags, between LOG_SUPRESSION_MODE_NONE and MAX_SUPPRESSION_MODE.
This method returns false if something is wrong and fills in the error_message with a description of the problem. This is useful both for debugging and for use in applications that want to display user-friendly error messages.
//+------------------------------------------------------------------+
//| Struct: MqlLogifySuppressionConfig                               |
//+------------------------------------------------------------------+
struct MqlLogifySuppressionConfig
{
   //--- Validates configuration parameters
   bool ValidateConfig(string &error_message)
   {
      if(throttle_seconds < MIN_THROTTLE_SECONDS)
      {
         error_message = "throttle_seconds must be greater than or equal to " + (string)MIN_THROTTLE_SECONDS;
         return false;
      }
      if(max_repeat_count < MIN_REPEAT_COUNT)
      {
         error_message = "max_repeat_count must be greater than or equal to " + (string)MIN_REPEAT_COUNT;
         return false;
      }
      if(mode < LOG_SUPRESSION_MODE_NONE || mode > MAX_SUPPRESSION_MODE)
      {
         error_message = "invalid suppression mode";
         return false;
      }
      return true;
   }
};
//+------------------------------------------------------------------+
With this struct, you can configure log suppression with a few commands, have fine control over behavior and still ensure that everything is within acceptable limits. All this without relying on logic scattered throughout the code, centralizing everything in a clean and extensible way.
Evolving CLogifySuppression
Now that we have our configuration structure well defined, let's build the class responsible for applying this configuration in real time: CLogifySuppression. It will be responsible for deciding, with each log emitted, whether or not that message should appear on the console, based on the active rules.
Before we implement any real suppression logic, it is essential to allow the class to receive external instructions on how it should behave. This means that the configuration, with rules, limits and exception lists, needs to come from outside the class, decoupling the execution logic from the way it will be parameterized. This separation between logic and configuration is what gives the system flexibility and reusability. To do this, we added two methods to the suppression class:
class CLogifySuppression
{
public:
   //--- Configuration management
   void SetConfig(MqlLogifySuppressionConfig &config);
   MqlLogifySuppressionConfig GetConfig(void) const;
};
//+------------------------------------------------------------------+
//| Updates suppression configuration                                |
//+------------------------------------------------------------------+
void CLogifySuppression::SetConfig(MqlLogifySuppressionConfig &config)
{
   m_config = config;
   string err_msg = "";
   if(!m_config.ValidateConfig(err_msg))
   {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: "+err_msg);
   }
}
//+------------------------------------------------------------------+
//| Returns current configuration                                    |
//+------------------------------------------------------------------+
MqlLogifySuppressionConfig CLogifySuppression::GetConfig(void) const
{
   return m_config;
}
//+------------------------------------------------------------------+
The SetConfig() method receives by reference an object from the MqlLogifySuppressionConfig struct, and stores it internally. It then performs an automatic validation by calling the ValidateConfig() method of the struct itself. If any configuration is outside acceptable limits, such as throttle_seconds less than the minimum allowed or invalid mode, an error is printed immediately, signaling the problem. This avoids silent errors in the middle of execution, saves debugging time and keeps the system intact, even when configured dynamically at runtime.
The GetConfig() method allows you to query the current state of the stored configuration. This can be useful for diagnostics, debugging or building interfaces for displaying suppression rules in larger systems. With this, the suppression configuration is now something formal, validated and centralized.
Declaring the private variables
With the configuration in hand, the next step is to store data that allows you to apply this configuration based on the call history. As the suppression logic needs to "remember" what happened before, such as what the last recorded message was or how many times it was repeated, we need to keep this information accessible between calls. We've added the following private fields to the class:
class CLogifySuppression
{
private:
   //--- Configuration
   MqlLogifySuppressionConfig m_config;

   //--- State tracking
   string          m_last_message;
   ENUM_LOG_LEVEL  m_last_level;
   int             m_repeat_count;
   datetime        m_last_time;
};
Let's understand the role of each one:
- m_config: the current instance of the configuration struct. Every time a message is evaluated, it will be compared to the rules defined here, be it the minimum interval (throttle_seconds), the number of repetitions tolerated (max_repeat_count) or the source and file lists.
- m_last_message: stores the content of the last message that passed through the suppression filter. It serves as a reference for whether the new message is the same as the previous one, one of the key criteria for detecting consecutive repetitions.
- m_last_level: stores the level of the last message processed (info, warning, error etc). This is important because the same message string can have different meanings at different levels. For example, an Info: connection lost should not be treated the same as an Error: connection lost.
- m_repeat_count: counts how many times in a row the same message has been found. It is incremented whenever the message and level are identical to the previous call. When this number exceeds the configured limit, suppression can be activated.
- m_last_time: records the timestamp of the last log accepted. It is the basis for calculating the time elapsed since the last message and correctly applying throttle mode, which suppresses logs issued at very short intervals.
These variables together represent the internal state of suppression. They allow the system to apply rules with memory, i.e. with awareness of what happened before, which is essential for deciding, in a reliable and performant way, whether or not a new message should be shown.
Creating the ShouldSuppress() method
With the previous blocks already established, external configuration, internal state and definition of modes, we can now build the heart of the suppression system: the ShouldSuppress() method. This method is called whenever a log is issued. It receives as an argument an MqlLogifyModel, which contains all the data about the log in question: message, level, source, date and file name. The role of ShouldSuppress() is to make a decision, based on the active configuration, as to whether this log should be displayed or suppressed.
We start with the logical basis of the method, dealing with the simplest modes:
class CLogifySuppression
{
private:
   //--- Main suppression logic
   bool ShouldSuppress(MqlLogifyModel &data);
};
//+------------------------------------------------------------------+
//| Checks if a message should be suppressed based on active modes   |
//+------------------------------------------------------------------+
bool CLogifySuppression::ShouldSuppress(MqlLogifyModel &data)
{
   datetime now = data.date_time;

   //--- Reset counters if message or level changed
   if(data.msg != m_last_message || data.level != m_last_level)
   {
      m_repeat_count = 0;
      m_last_message = data.msg;
      m_last_level = data.level;
      m_last_time = now;
      return false;
   }

   //--- Increment counter once per check
   m_repeat_count++;

   //--- Check suppression modes
   if(((m_config.mode & LOG_SUPRESSION_MODE_BY_REPEAT_COUNT) == LOG_SUPRESSION_MODE_BY_REPEAT_COUNT)
      && m_repeat_count >= m_config.max_repeat_count)
   {
      return true;
   }
   if(((m_config.mode & LOG_SUPRESSION_MODE_THROTTLE_TIME) == LOG_SUPRESSION_MODE_THROTTLE_TIME)
      && (now - m_last_time) < m_config.throttle_seconds)
   {
      return true;
   }
   if((m_config.mode & LOG_SUPRESSION_MODE_CONSECUTIVE) == LOG_SUPRESSION_MODE_CONSECUTIVE)
   {
      return true;
   }

   m_last_time = now;
   return false;
}
//+------------------------------------------------------------------+
Here, we deal with three modes:
- By consecutiveness: if the same message is logged more than once in a row, only the first occurrence is displayed.
- By repetition count: instead of suppressing every repeat immediately, you set a tolerance, i.e. how many times the same message may appear before it starts to be suppressed.
- By time interval: identical messages are suppressed when they are issued closer together than the configured interval.
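To see how the repeat-count and throttle rules interact, here is a minimal, hypothetical C++ sketch of the same bookkeeping (MQL5 shares C++'s syntax; the consecutive mode is deliberately left out here so that repeats within the tolerance still show). The struct and its field names are illustrative, not the library's class:

```cpp
#include <cassert>
#include <ctime>
#include <string>

// Hypothetical sketch of the state-tracked decision described above.
// Time is passed in explicitly so the logic is easy to test.
struct MiniSuppressor
{
   std::string last_message;
   int         repeat_count     = 0;
   time_t      last_time        = 0;
   int         throttle_seconds = 5;
   int         max_repeat_count = 3;

   // Returns true when the message should be hidden.
   bool ShouldSuppress(const std::string &msg, time_t now)
   {
      if(msg != last_message)              // new message: reset state, always show
      {
         last_message = msg;
         repeat_count = 0;
         last_time    = now;
         return false;
      }
      repeat_count++;
      if(repeat_count >= max_repeat_count) // by repetition count
         return true;
      if(now - last_time < throttle_seconds) // by time interval
         return true;
      last_time = now;                     // accepted: restart the throttle window
      return false;
   }
};
```

With these defaults, a burst of identical messages behaves as the article describes: the first one is shown, rapid repeats are throttled, a repeat arriving after the five-second window is shown again, and once the repetition tolerance is reached everything is suppressed until the message changes.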
The behavior is already functional for most cases. However, a more refined layer is still missing: the ability to ignore logs depending on the origin (the origin field) or the file (filename). This feature is valuable when the developer wants to hide messages coming from a specific system component, such as internal logs from an external library, or very verbose debug messages from a single .mq5 or .mqh file.
Adding suppression by source and filename (with intelligent search)
In more complex environments, where several system components or modules produce logs simultaneously, it is common for the developer to want to suppress logs only from certain specific parts of the code, such as messages from an automatic trading system, or logs generated by secondary indicators. To this end, we have added the possibility of suppressing logs based on the origin field (the logical origin of the log) and the filename field (the name of the MQL file that generated it).
The first version of this logic can use direct comparisons with exact strings. But in practice, this proves to be limiting. For example, imagine your source is "Trade.Signal" and you set the string "signal" as blocked. In this exact approach, this wouldn't work, because "Trade.Signal" and "signal" are not identical. That's why we created a helper method called StringContainsIgnoreCase(). This method performs a case-insensitive substring check, making comparisons much more flexible and tolerant.
Here is its implementation:
class CLogifySuppression
{
private:
   //--- Helper methods for string comparison
   bool StringContainsIgnoreCase(string text, string search_term);
};
//+------------------------------------------------------------------+
//| Checks if a string contains another string (case insensitive)    |
//+------------------------------------------------------------------+
bool CLogifySuppression::StringContainsIgnoreCase(string text, string search_term)
{
   string text_lower = text;
   string term_lower = search_term;
   StringToLower(text_lower);
   StringToLower(term_lower);
   //--- Search for the term inside the text (note the argument order)
   return StringFind(text_lower, term_lower) >= 0;
}
//+------------------------------------------------------------------+
This function transforms both texts to lower case before looking for the occurrence of the substring. This means that "Trade.Signal" can be identified as related to "signal" or "trade" without you having to write out the exact full name of the source. This allows you to create a mini semantic hierarchy between sources. For example, by blocking "signal", you automatically suppress logs coming from "Trade.Signal", "Risk.Signal", or even "Execution.Signal". This strategy drastically reduces the effort required to set up useful filters, while keeping the logic clear and efficient.
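The same idea in plain C++ makes the argument order easy to verify: the search term must be looked for inside the text, never the other way around. This is a hedged sketch, not the library's method:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Hypothetical C++ equivalent of the case-insensitive containment check:
// lower-case both strings, then search for the term inside the text.
bool ContainsIgnoreCase(std::string text, std::string term)
{
   auto to_lower = [](std::string &s)
   {
      std::transform(s.begin(), s.end(), s.begin(),
                     [](unsigned char c) { return (char)std::tolower(c); });
   };
   to_lower(text);
   to_lower(term);
   return text.find(term) != std::string::npos; // term searched inside text
}
```

Swapping the arguments would make `ContainsIgnoreCase("Trade.Signal", "signal")` fail, because "Trade.Signal" is obviously not a substring of "signal".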
Applying this to our suppression logic:
//+------------------------------------------------------------------+
//| Checks if a message should be suppressed based on active modes   |
//+------------------------------------------------------------------+
bool CLogifySuppression::ShouldSuppress(MqlLogifyModel &data)
{
   datetime now = data.date_time;

   //--- Check origin-based suppression
   if((m_config.mode & LOG_SUPRESSION_MODE_BY_ORIGIN) == LOG_SUPRESSION_MODE_BY_ORIGIN)
   {
      //--- Check blacklist first
      if(ArraySize(m_config.blocked_origins) > 0)
      {
         for(int i = 0; i < ArraySize(m_config.blocked_origins); i++)
         {
            if(StringContainsIgnoreCase(data.origin, m_config.blocked_origins[i]))
            {
               return true;
            }
         }
      }

      //--- Then check whitelist
      if(ArraySize(m_config.allowed_origins) > 0)
      {
         bool origin_allowed = false;
         for(int i = 0; i < ArraySize(m_config.allowed_origins); i++)
         {
            if(StringContainsIgnoreCase(data.origin, m_config.allowed_origins[i]))
            {
               origin_allowed = true;
               break;
            }
         }
         if(!origin_allowed)
         {
            return true;
         }
      }
   }

   //--- Check filename-based suppression
   if((m_config.mode & LOG_SUPRESSION_MODE_BY_FILENAME) == LOG_SUPRESSION_MODE_BY_FILENAME)
   {
      //--- Check blacklist first
      if(ArraySize(m_config.blocked_filenames) > 0)
      {
         for(int i = 0; i < ArraySize(m_config.blocked_filenames); i++)
         {
            if(StringContainsIgnoreCase(data.filename, m_config.blocked_filenames[i]))
            {
               return true;
            }
         }
      }

      //--- Then check whitelist
      if(ArraySize(m_config.allowed_filenames) > 0)
      {
         bool filename_allowed = false;
         for(int i = 0; i < ArraySize(m_config.allowed_filenames); i++)
         {
            if(StringContainsIgnoreCase(data.filename, m_config.allowed_filenames[i]))
            {
               filename_allowed = true;
               break;
            }
         }
         if(!filename_allowed)
         {
            return true;
         }
      }
   }

   //--- Reset counters if message or level changed
   if(data.msg != m_last_message || data.level != m_last_level)
   {
      m_repeat_count = 0;
      m_last_message = data.msg;
      m_last_level = data.level;
      m_last_time = now;
      return false;
   }

   //--- Increment counter once per check
   m_repeat_count++;

   //--- Check suppression modes
   if(((m_config.mode & LOG_SUPRESSION_MODE_BY_REPEAT_COUNT) == LOG_SUPRESSION_MODE_BY_REPEAT_COUNT)
      && m_repeat_count >= m_config.max_repeat_count)
   {
      return true;
   }
   if(((m_config.mode & LOG_SUPRESSION_MODE_THROTTLE_TIME) == LOG_SUPRESSION_MODE_THROTTLE_TIME)
      && (now - m_last_time) < m_config.throttle_seconds)
   {
      return true;
   }
   if((m_config.mode & LOG_SUPRESSION_MODE_CONSECUTIVE) == LOG_SUPRESSION_MODE_CONSECUTIVE)
   {
      return true;
   }

   m_last_time = now;
   return false;
}
//+------------------------------------------------------------------+
We do the same for the allowed_origins and allowed_filenames fields, also allowing the creation of a whitelist, i.e. a filter that allows only certain logs through and blocks everything else, which is the opposite of the traditional blacklist.
This combination of filters by origin, filename and case-insensitive textual pattern results in an incredibly powerful selective suppression system. It can be configured to be permissive or extremely strict, depending on what the developer needs for that context, whether it's developing a robot, backtesting a strategy or analyzing a real-time system in production.
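The blacklist-then-whitelist precedence is easy to get wrong, so here is a hedged, simplified C++ sketch of just that decision (exact, case-sensitive substring matching is used here for brevity; the function and its parameters are illustrative):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Simplified sketch of the rule above: anything matching the blacklist is
// suppressed; if a whitelist exists, anything not matching it is suppressed too.
bool SuppressByOrigin(const std::string &origin,
                      const std::vector<std::string> &blocked,
                      const std::vector<std::string> &allowed)
{
   for(const auto &b : blocked)
      if(origin.find(b) != std::string::npos)
         return true;                  // blacklisted: suppress

   if(!allowed.empty())
   {
      for(const auto &a : allowed)
         if(origin.find(a) != std::string::npos)
            return false;              // whitelisted: keep
      return true;                     // whitelist exists but no match: suppress
   }
   return false;                       // no whitelist configured: keep
}
```

Note that an empty whitelist means "allow everything not blacklisted", while a non-empty whitelist flips the default to "suppress everything that doesn't match".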
Creating the Reset() method
As the log suppression system is used, it accumulates internal information, such as the last message that was logged, how many times it was repeated and when it last appeared. This internal state is essential for applying the suppression rules correctly. However, at certain times it makes sense to "reset" this state.
A classic example is when the execution context changes, such as when changing charts, symbols or timeframes. In these cases, keeping the previous history can lead to wrong suppression decisions, hiding relevant messages in a new context.
To solve this, we created the Reset() method. It clears the internal data that the class has stored so far, as if we were starting from scratch:
//+------------------------------------------------------------------+
//| Resets all internal state tracking                               |
//+------------------------------------------------------------------+
void CLogifySuppression::Reset(void)
{
   m_last_message = "";
   m_repeat_count = 0;
   m_last_time = 0;
   m_last_level = LOG_LEVEL_INFO;
}
//+------------------------------------------------------------------+
This method is extremely simple, but very important to ensure that the suppression logic is accurate. As soon as the class constructor is called, we also make a point of invoking Reset() internally. This ensures that every new instance of the class starts "clean", without carrying any previous history of messages, repetitions or timestamps.
Later on, you can also choose to call Reset() manually if you are implementing some more advanced control, such as restarting suppression between executions or after specific events in your code.
Creating the auxiliary Getters
Even with all the suppression automation, at many times the developer needs to know what's going on behind the scenes. If the system is hiding logs that you expected to see, it's useful to be able to inspect the internal state of the class and understand why.
To do this, we've added some simple public methods, called getters, which allow you to access the main control variables of the suppression. They don't change the state of the class, they just return values that are useful for diagnostics, logs or debugging tools:
class CLogifySuppression
{
public:
   //--- Monitoring getters
   int            GetRepeatCount(void) const     { return m_repeat_count; }
   datetime       GetLastMessageTime(void) const { return m_last_time; }
   string         GetLastMessage(void) const     { return m_last_message; }
   ENUM_LOG_LEVEL GetLastLevel(void) const       { return m_last_level; }
};
- GetRepeatCount() returns how many times in a row the same message appeared. This can help to understand why a message was or wasn't suppressed.
- GetLastMessageTime() tells you when a message last passed through the filter. This is essential for validating whether suppression by time is working as expected.
- GetLastMessage() shows the literal content of the last message that was processed.
- GetLastLevel() tells you the level (info, warning, error etc.) of the last message, allowing you to cross-reference it with your system's severity control.
These methods are optional in the general use of the class, but they become extremely valuable when some unexpected behavior arises, after all, suppressing messages automatically is a double-edged sword: it saves noise, but it can hide problems. Having ways of inspecting the logic from the inside drastically reduces investigation time when this happens.
Integrating log suppression into the CLogify main class
With CLogifySuppression up and running, it's time to integrate it into the core of our library: the CLogify class. This is where everything happens, from message routing to handler control. And now, it will also decide when to silence repeated or irrelevant logs. The first step is to import the suppression file and declare an instance of the class as a private member of CLogify:
//+------------------------------------------------------------------+
//| Imports                                                          |
//+------------------------------------------------------------------+
#include "LogifyModel.mqh"
#include "Suppression/LogifySuppression.mqh"
#include "Handlers/LogifyHandler.mqh"
#include "Handlers/LogifyHandlerComment.mqh"
#include "Handlers/LogifyHandlerConsole.mqh"
#include "Handlers/LogifyHandlerDatabase.mqh"
#include "Handlers/LogifyHandlerFile.mqh"
#include "Error/LogifyError.mqh"
//+------------------------------------------------------------------+
//| class : CLogify                                                  |
//|                                                                  |
//| [PROPERTY]                                                       |
//| Name        : Logify                                             |
//| Heritage    : No heritage                                        |
//| Description : Core class for log management.                     |
//|                                                                  |
//+------------------------------------------------------------------+
class CLogify
  {
private:
   CLogifySuppression *m_suppression;
  };
//+------------------------------------------------------------------+
This instance will be used to decide, internally, whether a particular message deserves to be logged. In the CLogify constructor, we initialize it:
//+------------------------------------------------------------------+
//| Constructor                                                      |
//+------------------------------------------------------------------+
CLogify::CLogify()
  {
   m_suppression = new CLogifySuppression();
  }
//+------------------------------------------------------------------+
And of course, in the destructor, we ensure that memory is freed:
//+------------------------------------------------------------------+
//| Destructor                                                       |
//+------------------------------------------------------------------+
CLogify::~CLogify()
  {
   //--- Delete handlers
   int size_handlers = ArraySize(m_handlers);
   for(int i=0;i<size_handlers;i++)
     {
      if(CheckPointer(m_handlers[i]) != POINTER_INVALID)
        {
         m_handlers[i].Close();
         delete m_handlers[i];
        }
     }
   delete m_suppression;
  }
//+------------------------------------------------------------------+
Now, within the Append() method, which is responsible for logging, we check for suppression right after the log template has been created. If the message is deemed unnecessary, it is ignored right there:
//+------------------------------------------------------------------+
//| Generic method for adding logs                                   |
//+------------------------------------------------------------------+
bool CLogify::Append(ENUM_LOG_LEVEL level,string msg,string origin="",string args="",string filename="",string function="",int line=0,int code_error=0)
  {
   //--- Ensures that there is at least one handler
   this.EnsureDefaultHandler();

   //--- Textual name of the log level
   string levelStr = "";
   switch(level)
     {
      case LOG_LEVEL_DEBUG: levelStr = "DEBUG"; break;
      case LOG_LEVEL_INFO : levelStr = "INFO";  break;
      case LOG_LEVEL_ALERT: levelStr = "ALERT"; break;
      case LOG_LEVEL_ERROR: levelStr = "ERROR"; break;
      case LOG_LEVEL_FATAL: levelStr = "FATAL"; break;
     }

   //--- Creating a log template with detailed information
   datetime time_current = TimeCurrent();
   MqlLogifyModel data("",levelStr,msg,args,time_current,time_current,level,origin,filename,function,line,m_error.Error(code_error));

   //--- Suppression
   if(m_suppression.ShouldSuppress(data))
     {
      return(true);
     }

   //--- Call handlers
   int size = this.SizeHandlers();
   for(int i=0;i<size;i++)
     {
      data.formated = m_handlers[i].GetFormatter().Format(data);
      m_handlers[i].Emit(data);
     }

   return(true);
  }
//+------------------------------------------------------------------+
This simple check is what prevents unnecessary messages from being processed. If the mechanism identifies that the current log is redundant, either because it occurred too many times in a row, because it came from the same place in the code, or because of any other configured criteria, it simply stops there. All done!
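To recall the idea behind that single call, the heart of a consecutive-duplicate check can be pictured as the minimal sketch below. This is not the library's full implementation (which combines several modes), and the model field name is illustrative:

```mql5
//--- Simplified sketch of a consecutive-duplicate check.
//--- Assumes member state m_last_message / m_repeat_count and that the
//--- model exposes its message text (field name here is illustrative).
bool ShouldSuppressSketch(const MqlLogifyModel &data)
  {
   if(data.msg == m_last_message)
     {
      m_repeat_count++;          // same message again: count it and suppress
      return(true);
     }
   m_last_message = data.msg;    // new content: update state and let it pass
   m_repeat_count = 0;
   return(false);
  }
```

The real ShouldSuppress() applies the same pattern for each active mode, so a message only reaches the handlers if it passes every configured rule.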
Testing
Now that we understand how the suppression system works internally, it's time to put it to the test with some practical tests. The idea here is to validate, in practice, whether each of the suppression modes is behaving as expected - suppressing duplicate or unwanted logs according to the chosen configuration. Let's go step by step.
Test 1. Suppression by consecutive messages (LOG_SUPPRESSION_MODE_CONSECUTIVE)

This is the most basic suppression mode. The logic is simple: if the same message is logged more than once in a row, only the first one is displayed. This is useful to avoid spamming the console when the same log repeats inside a loop, for example. Let's test this behavior with code that sends 11 identical logs, all with the same content and the same origin.
//+------------------------------------------------------------------+
//| Import                                                           |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
CLogify Logify;
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   MqlLogifySuppressionConfig config;
   config.mode = LOG_SUPRESSION_MODE_CONSECUTIVE;
   Logify.Suppression().SetConfig(config);

   for(int i=0;i<11;i++)
     {
      Logify.Info("Check signal buy", "Signal");
     }

   //---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
When you run the code above, the result on the console will be:
2025.07.31 04:34:26 [INFO]: Check signal buy
That's it. No repetition. Even though the message was logged 11 times, since they were all identical and consecutive, the system understood that showing it once was enough. This shows the mode is working correctly.
Test 2. Suppression by repetition count (LOG_SUPPRESSION_MODE_BY_REPEAT_COUNT)

This mode offers a little more flexibility than the previous one. Instead of suppressing every repeated message in a row, it lets you set a tolerance - that is, how many times the same message can be displayed before it starts being suppressed. This option is useful when you want to see a limited number of repetitions before the system silences the rest.
Let's configure the system to allow a maximum of 2 repetitions of the same message.
//+------------------------------------------------------------------+
//| Import                                                           |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
CLogify Logify;
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   MqlLogifySuppressionConfig config;
   config.mode = LOG_SUPRESSION_MODE_BY_REPEAT_COUNT;
   config.max_repeat_count = 2;
   Logify.Suppression().SetConfig(config);

   for(int i=0;i<11;i++)
     {
      Logify.Info("Check signal buy", "Signal");
     }

   //---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
Expected result on the console:
2025.07.31 04:40:49 [INFO]: Check signal buy
2025.07.31 04:40:49 [INFO]: Check signal buy
Exactly as we configured: only the first two messages appear. The rest were automatically discarded because they exceeded the repetition limit. This control is especially useful in environments that generate very verbose logs, but where we still want to capture the first warning signals.
Test 3. Suppression by time interval (LOG_SUPPRESSION_MODE_THROTTLE_TIME)

Here suppression is based on the time between messages. Even identical messages are only suppressed if they arrive less than the configured interval apart.
Let's configure the system to allow the same message only once per second. To simulate this, we'll print the same message 11 times with Sleep(200) between them (200 milliseconds between each log). That means roughly 5 messages per second - and the system should display only one per second, discarding the rest.
//+------------------------------------------------------------------+
//| Import                                                           |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
CLogify Logify;
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   MqlLogifySuppressionConfig config;
   config.mode = LOG_SUPRESSION_MODE_THROTTLE_TIME;
   config.throttle_seconds = 1;
   Logify.Suppression().SetConfig(config);

   for(int i=0;i<11;i++)
     {
      Logify.Info("Check signal buy", "Signal");
      Sleep(200);
     }

   //---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
When running, the console should display something like:
2025.07.31 04:45:26 [INFO]: Check signal buy
2025.07.31 04:45:27 [INFO]: Check signal buy
2025.07.31 04:45:28 [INFO]: Check signal buy
Three logs appear - one every second - while the rest were discarded because they were outside the allowed range. This mode is particularly interesting for events that can fluctuate in frequency, such as price updates or market condition checks.
Test 4. Suppression by origin (LOG_SUPPRESSION_MODE_BY_ORIGIN)

In this mode, the system blocks messages based on the origin recorded in the log. If a particular origin is on the blacklist, any message coming from it is ignored, regardless of content or the time between occurrences. In the example below, we block the "Signal" origin and only let the "Trade" origin through:
//+------------------------------------------------------------------+
//| Import                                                           |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
CLogify Logify;
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   MqlLogifySuppressionConfig config;
   config.mode = LOG_SUPRESSION_MODE_BY_ORIGIN;
   config.AddBlockedOrigin("signal");
   Logify.Suppression().SetConfig(config);

   for(int i=0;i<11;i++)
     {
      Logify.Info("Check signal buy", "Signal");
     }
   Logify.Info("Purchase order sent successfully", "Trade");

   //---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
Result in the console:
2025.07.31 04:48:36 [INFO]: Purchase order sent successfully
Only the message with origin "Trade" appeared. All the others were suppressed because they belong to an explicitly blocked source.
The file-based suppression mode works almost identically, but the blocking criterion is the name of the file from which the log was triggered, rather than the logical origin set in the log call. For this reason, the suppression tests by origin and by file share the same code structure; the only difference is the field inspected by the check - one compares the origin, the other the file name. We therefore consider that this test also validates file-based suppression.
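Because the modes are designed as bitwise flags, nothing stops you from activating several rules in a single configuration. A sketch of such a combination, assuming the enum values are declared as distinct bit flags (which the bitwise design described in the introduction implies):

```mql5
//--- Sketch: combining two suppression rules with bitwise OR.
//--- Consecutive duplicates are cut, and on top of that the same
//--- message is throttled to at most one occurrence per second.
MqlLogifySuppressionConfig config;
config.mode = LOG_SUPRESSION_MODE_CONSECUTIVE | LOG_SUPRESSION_MODE_THROTTLE_TIME;
config.throttle_seconds = 1;
Logify.Suppression().SetConfig(config);
```

A message then has to pass every active rule before it reaches the handlers, so combined modes are always at least as strict as each mode alone.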
Automatically detecting and adjusting the language
Until now, the CLogifyError class always started with the error messages in English. This worked well at first, but it had a major problem: the language of the errors was always the same, even if the user's terminal was configured in another language, such as Spanish, French or Portuguese. With the growth of multilingual support within Logify, it made sense to go one step further: to automatically adapt the default language of the error messages based on the language configured in the MetaTrader terminal. To do this, we made a small but powerful change to the class constructor. Instead of directly starting the error set in English, we now let the terminal itself tell us which language to use:
CLogifyError::CLogifyError()
  {
   SetLanguage(GetLanguageFromTerminal());
  }
The GetLanguageFromTerminal() method uses the native TerminalInfoString(TERMINAL_LANGUAGE) function to capture the current language configured in MetaTrader. This value is a string with the name of the language, such as "French", "Korean" or "Portuguese (Brazil)". We then map it to our ENUM_LOG_LANGUAGE, which represents the languages supported by Logify's error system:
ENUM_LOG_LANGUAGE CLogifyError::GetLanguageFromTerminal(void)
  {
   string lang = TerminalInfoString(TERMINAL_LANGUAGE);

   if(lang == "German")   return LOG_LANGUAGE_DE;
   if(lang == "Spanish")  return LOG_LANGUAGE_ES;
   if(lang == "French")   return LOG_LANGUAGE_FR;
   if(lang == "Italian")  return LOG_LANGUAGE_IT;
   if(lang == "Japanese") return LOG_LANGUAGE_JA;
   if(lang == "Korean")   return LOG_LANGUAGE_KO;
   if(lang == "Portuguese (Brazil)" || lang == "Portuguese (Portugal)") return LOG_LANGUAGE_PT;
   if(lang == "Russian")  return LOG_LANGUAGE_RU;
   if(lang == "Turkish")  return LOG_LANGUAGE_TR;
   if(lang == "Chinese (Simplified)" || lang == "Chinese (Traditional)") return LOG_LANGUAGE_ZH;

   //--- Default language: English
   return LOG_LANGUAGE_EN;
  }
This automatic adaptation is especially useful for distributors, traders or companies working with international audiences. The library now "speaks the language of the terminal", without the need for manual configuration. This reduces friction and saves less technical users from having to figure out how to set the right language for errors. What if for some reason the terminal language isn't recognized or mapped? No problem, the system automatically switches back to English, guaranteeing a functional experience even in exceptional cases.
This change, although simple to implement, drastically improves the usability of the library and aligns Logify's behavior with the principle of automatic convenience: the system adapts to the user, not the other way around.
Conclusion
With everything we've seen so far, Logify has become even smarter. It is now able to understand when it is being too repetitive in the logs and it also speaks automatically in the language of your terminal, without you having to configure anything.
Several ways of suppressing repeated messages have been created, which you can use alone or together, whichever makes the most sense for your project:
- Repeated messages in a row: cuts down on that flood of identical logs one after the other.
- Minimum time between logs: prevents the same message from appearing several times in a few seconds.
- Repetition only after a certain number: only shows again if the repetition exceeds a limit.
- Same code snippet: blocks duplicate logs coming from the same place.
- Same content coming from different files: blocks identical repetitions, even if they come from another file.
All of this can be activated simply, straight from the configuration, without complicating your code. What's more, with the new automatic language detection, the library already chooses the ideal language based on your terminal configuration, which helps a lot when working in international environments or with other teams.
If you have any new ideas, want to suggest a new suppression mode or have found something that could be improved, just leave it in the comments. Logify is always open to changes and improvements. As it evolves, we'll bring you new articles to keep you up to date.
| File Name | Description |
|---|---|
| Experts/Logify/LogiftTest.mq5 | File where we test the library's features, containing a practical example |
| Include/Logify/Error/Languages/ErrorMessages.XX.mqh | Contains the error messages in each language, where XX represents the language acronym |
| Include/Logify/Error/Error.mqh | Data structure for storing errors |
| Include/Logify/Error/LogifyError.mqh | Class for getting detailed error information |
| Include/Logify/Formatter/LogifyFormatter.mqh | Class responsible for formatting log records, replacing placeholders with specific values |
| Include/Logify/Handlers/LogifyHandler.mqh | Base class for managing log handlers, including level setting and log sending |
| Include/Logify/Handlers/LogifyHandlerComment.mqh | Log handler that sends formatted logs directly to the comment on the terminal chart in MetaTrader |
| Include/Logify/Handlers/LogifyHandlerConsole.mqh | Log handler that sends formatted logs directly to the terminal console in MetaTrader |
| Include/Logify/Handlers/LogifyHandlerDatabase.mqh | Log handler that sends formatted logs to a database (currently only a printout; saving to a real SQLite database is planned) |
| Include/Logify/Handlers/LogifyHandlerFile.mqh | Log handler that sends formatted logs to a file |
| Include/Logify/Suppression/LogifySuppression.mqh | Responsible for applying intelligent log message suppression rules, filtering out unnecessary repetitions |
| Include/Logify/Utils/IntervalWatcher.mqh | Checks whether a time interval has passed, allowing routines to be created within the library |
| Include/Logify/Logify.mqh | Core class for log management, integrating levels, models and formatting |
| Include/Logify/LogifyBuilder.mqh | Class responsible for building a CLogify object, simplifying configuration |
| Include/Logify/LogifyLevel.mqh | File that defines the log levels of the Logify library, allowing detailed control |
| Include/Logify/LogifyModel.mqh | Structure that models log records, including details such as level, message, timestamp, and context |
Warning: All rights to these materials are reserved by MetaQuotes Ltd. Copying or reprinting of these materials in whole or in part is prohibited.
This article was written by a user of the site and reflects their personal views. MetaQuotes Ltd is not responsible for the accuracy of the information presented, nor for any consequences resulting from the use of the solutions, strategies or recommendations described.