Mastering Log Records (Part 6): Saving logs to database

joaopedrodev

Introduction

Imagine a bustling marketplace of digital trades and financial wizardry where every move is tracked, recorded, and meticulously analyzed for success. What if you could not only access a chronicle of every decision and error made by your Expert Advisors (EAs) but also wield a powerful tool to optimize and refine these bots in real time? Welcome back to Mastering Log Records. Our journey began in Part 1: Fundamental Concepts and First Steps in MQL5, where we started crafting a sophisticated logging library tailored for MQL5 development.

There, we broke free of the constraints of MetaTrader 5's default logging interface to forge a robust, adjustable, and dynamic logging solution for the MQL5 landscape. Our journey started by defining critical requirements: a reliable Singleton structure for consistent access, database logs for comprehensive audit trails, versatile output flexibility, classification of log levels, and customizable formats for diverse project needs.

Join us as we now delve deeper into how you can transform raw data into actionable insights to better understand, control, and elevate your EAs' performance like never before.

In this article, we will explore everything from fundamental concepts to practical implementation of a log handler that writes and queries data directly from a database.


What are databases?

Logs are the pulse of a system, capturing everything that happens behind the scenes. But storing them efficiently is another story. Up until now, we have saved logs in text files, a simple and functional solution for many cases. The problem arises when the volume of data grows: searching for information among thousands of lines becomes a performance and management nightmare.

That's where databases come in. They offer a structured and optimized way to store, query and organize information. Instead of manually going through files, we can run quick queries and find exactly what we need. But after all, what is a database and why is it so essential?


The structure of a database

A database works as an intelligent storage system, where data is organized logically to facilitate searches and manipulation. Think of it as a well-cataloged filing cabinet, where each piece of information has its defined place. In the context of logs, instead of saving records in scattered files, we can store them in a structured way, quickly filtering by date, error type or any other relevant criteria.
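To make this concrete: instead of scanning a text file line by line, a single SQL query against a log table like the one we will build later in this article returns exactly the records you care about. A minimal sketch, assuming columns named levelname and date_time (the same names we will use in our table):

SELECT * FROM logs
WHERE levelname = 'ERROR'
  AND date_time BETWEEN '2025-02-12 00:00' AND '2025-02-12 23:59';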

To better understand, let's break down the structure of a database into its three fundamental components: tables, columns and rows.

  • Tables: the foundation of the database. A table works like a spreadsheet, where we group related data. In the case of logs, we could have a table called "logs", dedicated exclusively to storing these records.

    Each table is designed for a specific type of data, ensuring organization and efficiency in accessing information.

  • Columns: the data fields. Within a table, we have columns, which represent the different categories of information stored. Each column is equivalent to a data field and defines a specific type of information. For example, in a log table, we can have columns like:

  • id → Unique log identifier
  • timestamp → Date and time of the record
  • level → Log level (DEBUG, INFO, ERROR...)
  • message → Log message
  • source → Log origin (which system or module generated the record)

Each column has a well-defined role. The timestamp, for example, stores dates and times, while message contains text. This structure avoids redundancies and improves search performance.

  • Rows: the stored records. If the columns define what information will be stored, the rows represent the individual records within the table. Each row contains a complete set of values filling the corresponding columns. See a practical example of a log table:

    ID | Timestamp        | Level | Message                                   | Origin
    1  | 2025-02-12 10:15 | DEBUG | RSI indicator value calculated: 72.56     | Indicators
    2  | 2025-02-12 10:16 | INFO  | Buy order sent successfully               | Order Management
    3  | 2025-02-12 10:17 | ALERT | Stop Loss adjusted to breakeven level     | Risk Management
    4  | 2025-02-12 10:18 | ERROR | Failed to send sell order                 | Order Management
    5  | 2025-02-12 10:19 | FATAL | Failed to initialize EA: Invalid settings | Initialization

    Each row is a single record, describing a specific event.

Now that we understand the structure of the database, we can explore how to apply it in practice within MQL5 to store and query logs efficiently. Let's see it in action.


Databases in MQL5

MQL5 allows you to store and retrieve data in a structured manner, but its database support has some peculiarities that need to be understood before we move on to implementation.

Unlike languages aimed at web or corporate applications, MQL5 does not offer native support for robust relational databases such as MySQL or PostgreSQL. But that doesn't mean we're stuck with text files! We can get around this limitation in two ways:

  • Using SQLite, a lightweight file-based database that is natively supported in MQL5 (see database functions in MQL5).
  • Establishing external connections via APIs, allowing integration with more powerful databases.

For our purpose of storing and querying logs efficiently, SQLite is the ideal choice. It is simple, fast, and does not require a dedicated server, making it perfect for what we need. Before we move on to implementation, let's better understand the characteristics of a database based on a .sqlite file.

  • Advantages
    • No server required: SQLite is an "embedded" database, meaning it does not require installation or configuration of a server.
    • Ready to use: Just create the .sqlite file and start storing data.
    • Fast reading: Since it is stored in a single file, SQLite can be extremely fast for reading small to medium data.
    • Low latency: For simple queries, it can be faster than traditional relational databases.
    • High compatibility: Compatible with several programming languages.
  • Disadvantages
    • Risk of file corruption: If the file is damaged, recovering the data can be complicated.
    • Manual backup: Backup needs to be done by copying the .sqlite file, since SQLite does not have native support for automatic replication.
    • Does not scale well: For large data volumes and concurrent accesses, SQLite is not the best option. But since our goal is to store logs locally, this will not be a problem.

Now that we know the possibilities and limitations of databases in MQL5, we can delve into the basic operations that we will need to implement to store and retrieve logs effectively.


The basics of database operations we need

Before we implement our handler, we need to understand the fundamental operations we will use on the database. These operations include creating tables, inserting new records, retrieving data, and possibly deleting or updating logs when necessary.

In a logging context, we typically need to store information such as the date and time, log level, message, and possibly the name of the file or component that generated the entry. To do this, we must structure our table in a way that facilitates fast and efficient queries.
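One detail worth knowing, although the handler we build below does not rely on it: SQLite supports secondary indexes, which can speed up recurring filters on large log tables. A sketch, assuming the logs table we define later in this article:

CREATE INDEX IF NOT EXISTS idx_logs_level ON logs(level);
CREATE INDEX IF NOT EXISTS idx_logs_timestamp ON logs(timestamp);

Each index trades a little extra write cost for much faster lookups on the indexed column.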

First, let's create a simple testing Expert Advisor (EA) called DatabaseTest.mq5 inside the Experts/Logify folder. With the file created, we will have something similar to this:

//+------------------------------------------------------------------+
//|                                                 DatabaseTest.mq5 |
//|                                                     joaopedrodev |
//|                       https://www.mql5.com/en/users/joaopedrodev |
//+------------------------------------------------------------------+
#property copyright "joaopedrodev"
#property link      "https://www.mql5.com/en/users/joaopedrodev"
#property version   "1.00"
//+------------------------------------------------------------------+
//| Import CLogify                                                   |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
//---
   
//---
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+


Creating and connecting to the database

The first step is to create and connect to the database. To do this, we use the DatabaseOpen() function, which takes two parameters:

  • filename : Name of the database file, relative to the "MQL5\Files" folder.
  • flags : Combination of flags from the ENUM_DATABASE_OPEN_FLAGS enumeration. These flags determine how the database will be accessed. The available flags are:
    • DATABASE_OPEN_READONLY - Read-only access.
    • DATABASE_OPEN_READWRITE - Allows reading and writing.
    • DATABASE_OPEN_CREATE - Creates the database on disk if it does not exist.
    • DATABASE_OPEN_MEMORY - Creates a temporary database in memory.
    • DATABASE_OPEN_COMMON - The file will be stored in the folder common to all terminals.

For our example, we will use DATABASE_OPEN_READWRITE | DATABASE_OPEN_CREATE . This way, we ensure that the database will be created automatically if it does not already exist, avoiding manual checks.

The DatabaseOpen() function returns a database handle, which we store in a variable to use it in future operations. In addition, it is essential to close the connection at the end of use, which we do with the DatabaseClose() function.

Our code now looks like this:

//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   //--- Opening a database connection
   int dbHandle = DatabaseOpen("db\\logs.sqlite",DATABASE_OPEN_READWRITE|DATABASE_OPEN_CREATE);
   if(dbHandle == INVALID_HANDLE)
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Database error (Code: "+IntegerToString(GetLastError())+")");
      return(INIT_FAILED);
     }
   Print("Open database file");
   
   //--- Closing database after use
   DatabaseClose(dbHandle);
   Print("Closed database file");
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+

Now that we have managed to open and close the database, it is time to structure the stored data. Let's start by creating our first table: logs.


Creating a table

But before creating tables indiscriminately, we need to check whether the table already exists. To do this, we use the DatabaseTableExists() function. If the table is not already present in the database, then we create it with a simple SQL command. Speaking of SQL (Structured Query Language), this is the language used to interact with databases, allowing you to insert, query, modify or delete data. Think of SQL as a kind of "restaurant menu" for databases: you place an order (SQL query) and receive exactly what you asked for, as long as the order was well formulated, of course!

Now, we will see this in practice and structure our logs table, ensuring that it is created correctly whenever necessary.

For our purpose, we only need to know a few SQL commands, the first of which is used to Create a table:

CREATE TABLE {table_name} ({column_name} {type_data}, …);

  • {table_name}: Name of the table to be created.
  • {column_name} {type_data}: Definition of the columns, where {type_data} indicates the data type (text, number, date, etc.).

Now, we will use the DatabaseExecute() function to execute the table creation command. The table structure will be based on the MqlLogifyModel structure, containing the following fields:

  • id: Unique identifier of the row.
  • formated: Formatted message.
  • levelname: Name of the log level.
  • msg: Original message.
  • args: Message arguments.
  • timestamp: Date and time in numeric format.
  • date_time: Formatted date and time.
  • level: Severity level of the log.
  • origin: Log source.
  • filename: Source file name.
  • function: Function where the log was generated.
  • line: Line of code where the log was generated.

Our code now looks like this:

//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   //--- Open the database connection
   int dbHandle = DatabaseOpen("db\\logs.sqlite", DATABASE_OPEN_READWRITE | DATABASE_OPEN_CREATE);
   if(dbHandle == INVALID_HANDLE)
     {
      Print("[ERROR] [" + TimeToString(TimeCurrent()) + "] Unable to open database (Error Code: " + IntegerToString(GetLastError()) + ")");
      return(INIT_FAILED);
     }
   Print("[INFO] Database connection opened successfully");
   
   //--- Create the 'logs' table if it does not exist
   if(!DatabaseTableExists(dbHandle, "logs"))
     {
      DatabaseExecute(dbHandle,
         "CREATE TABLE logs ("
         "id INTEGER PRIMARY KEY AUTOINCREMENT," // Auto-incrementing unique ID
         "formated TEXT,"     // Formatted log message
         "levelname TEXT,"    // Log level (INFO, ERROR, etc.)
         "msg TEXT,"          // Main log message
         "args TEXT,"         // Additional details
         "timestamp BIGINT,"  // Log event timestamp (Unix time)
         "date_time DATETIME,"// Human-readable date and time
         "level BIGINT,"      // Log level as an integer
         "origin TEXT,"       // Module or component name
         "filename TEXT,"     // Source file name
         "function TEXT,"     // Function where the log was recorded
         "line BIGINT);");    // Source code line number
      Print("[INFO] 'logs' table created successfully");
     }
   
   //--- Close the database connection
   DatabaseClose(dbHandle);
   Print("[INFO] Database connection closed successfully");
   
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+

This completes the step of creating the database and the ‘logs’ table. After creating the table, the database file should appear in the Files folder in the file explorer:

Since MetaEditor supports this file type, clicking on the file opens a screen similar to this:

Here we have an interface where we can view the database data and execute different SQL commands. We will use this feature a lot to view the data in the editor.


How to insert data into the database

In SQL, the command used to insert data into a table is:

INSERT INTO {table_name} ({column}, ...) VALUES ({value}, ...)

In the context of MQL5, this statement can be simplified with the help of specific functions that make the process more intuitive and less error-prone. The main functions you will use are:

  • DatabasePrepare() - This function creates an identifier for the SQL query, preparing it for subsequent execution. It serves as the first step for the query to be interpreted by the database.
  • DatabaseBind() - Using this function, you associate real values with the query parameters. In the SQL command, values are represented by placeholders (for example, ?1 , ?2 , etc.), which will be replaced by the data provided at the time of execution.
  • DatabaseRead() - Responsible for executing the prepared query. In the case of commands that do not return records (such as INSERT), this function ensures the execution of the instruction and the transition to the next record, if necessary.
  • DatabaseFinalize() - After use, it is important to release the resources associated with the query. This function finalizes the previously prepared query, avoiding memory leaks.

When creating the query for data insertion, we can use placeholders to indicate where the values will be linked later. Consider the following example, which inserts a new record into the logs table, following the columns we created earlier:

INSERT INTO logs (formated, levelname, msg, args, timestamp, date_time, level, origin, filename, function, line) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11);

Note that all the fields in the table have been listed, except for the id field, which is automatically generated by the database. Also, the values to be inserted are indicated by ?1, ?2, etc.; each placeholder corresponds to an index that will later be used to associate the real value through the DatabaseBind() function.

//--- Prepare SQL statement for inserting a log entry
string sql = "INSERT INTO logs (formated, levelname, msg, args, timestamp, date_time, level, origin, filename, function, line) "
             "VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11);";
int sqlRequest = DatabasePrepare(dbHandle, sql);
if(sqlRequest == INVALID_HANDLE)
  {
   Print("[ERROR] Failed to prepare SQL statement for log insertion");
  }

//--- Bind values to the SQL statement
DatabaseBind(sqlRequest, 0, "06:24:00 [INFO] Buy order sent successfully"); // Formatted log message
DatabaseBind(sqlRequest, 1, "INFO");                                        // Log level name
DatabaseBind(sqlRequest, 2, "Buy order sent successfully");                 // Main log message
DatabaseBind(sqlRequest, 3, "Symbol: EURUSD, Volume: 0.1");                  // Additional details
DatabaseBind(sqlRequest, 4, 1739471040);                                     // Unix timestamp
DatabaseBind(sqlRequest, 5, "2025.02.13 18:24:00");                          // Readable date and time
DatabaseBind(sqlRequest, 6, 1);                                              // Log level as integer
DatabaseBind(sqlRequest, 7, "Order Management");                             // Module or component name
DatabaseBind(sqlRequest, 8, "File.mq5");                                     // Source file name
DatabaseBind(sqlRequest, 9, "OnInit");                                       // Function name
DatabaseBind(sqlRequest, 10, 100);                                           // Line number

After binding all values, the DatabaseRead() function is used to execute the prepared query. If the execution is successful, a confirmation message is printed; otherwise, an error is reported. The complete code now looks like this:

//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   //--- Open the database connection
   int dbHandle = DatabaseOpen("db\\logs.sqlite", DATABASE_OPEN_READWRITE | DATABASE_OPEN_CREATE);
   if(dbHandle == INVALID_HANDLE)
     {
      Print("[ERROR] [" + TimeToString(TimeCurrent()) + "] Unable to open database (Error Code: " + IntegerToString(GetLastError()) + ")");
      return(INIT_FAILED);
     }
   Print("[INFO] Database connection opened successfully");
   
   //--- Create the 'logs' table if it does not exist
   if(!DatabaseTableExists(dbHandle, "logs"))
     {
      DatabaseExecute(dbHandle,
         "CREATE TABLE logs ("
         "id INTEGER PRIMARY KEY AUTOINCREMENT," // Auto-incrementing unique ID
         "formated TEXT,"     // Formatted log message
         "levelname TEXT,"    // Log level (INFO, ERROR, etc.)
         "msg TEXT,"          // Main log message
         "args TEXT,"         // Additional details
         "timestamp BIGINT,"  // Log event timestamp (Unix time)
         "date_time DATETIME,"// Human-readable date and time
         "level BIGINT,"      // Log level as an integer
         "origin TEXT,"       // Module or component name
         "filename TEXT,"     // Source file name
         "function TEXT,"     // Function where the log was recorded
         "line BIGINT);");    // Source code line number
      Print("[INFO] 'logs' table created successfully");
     }
   
   //--- Prepare SQL statement for inserting a log entry
   string sql = "INSERT INTO logs (formated, levelname, msg, args, timestamp, date_time, level, origin, filename, function, line) "
                "VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11);";
   int sqlRequest = DatabasePrepare(dbHandle, sql);
   if(sqlRequest == INVALID_HANDLE)
     {
      Print("[ERROR] Failed to prepare SQL statement for log insertion");
     }
   
   //--- Bind values to the SQL statement
   DatabaseBind(sqlRequest, 0, "06:24:00 [INFO] Buy order sent successfully"); // Formatted log message
   DatabaseBind(sqlRequest, 1, "INFO");                                        // Log level name
   DatabaseBind(sqlRequest, 2, "Buy order sent successfully");                 // Main log message
   DatabaseBind(sqlRequest, 3, "Symbol: EURUSD, Volume: 0.1");                  // Additional details
   DatabaseBind(sqlRequest, 4, 1739471040);                                     // Unix timestamp
   DatabaseBind(sqlRequest, 5, "2025.02.13 18:24:00");                          // Readable date and time
   DatabaseBind(sqlRequest, 6, 1);                                              // Log level as integer
   DatabaseBind(sqlRequest, 7, "Order Management");                             // Module or component name
   DatabaseBind(sqlRequest, 8, "File.mq5");                                     // Source file name
   DatabaseBind(sqlRequest, 9, "OnInit");                                       // Function name
   DatabaseBind(sqlRequest, 10, 100);                                           // Line number
   
   //--- Execute the SQL statement
   if(!DatabaseRead(sqlRequest))
     {
      Print("[ERROR] SQL insertion request failed");
     }
   else
     {
      Print("[INFO] Log entry inserted successfully");
     }
   
   //--- Release the prepared statement resources
   DatabaseFinalize(sqlRequest);
   
   //--- Close the database connection
   DatabaseClose(dbHandle);
   Print("[INFO] Database connection closed successfully");
   
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
When running this Expert Advisor, the following messages will be displayed in the console:

[INFO] Database connection opened successfully
[INFO] 'logs' table created successfully
[INFO] Log entry inserted successfully
[INFO] Database connection closed successfully

Additionally, when you open the database in the editor, you will be able to view the logs table with all the data entered, as shown in the image below:



How to read data from the database

Reading data from a database involves a process very similar to inserting records, but with the goal of retrieving information already stored. In MQL5, the basic flow for reading data consists of:

  1. Prepare the SQL query: Using the DatabasePrepare() function, you create an identifier for the query that will be executed.
  2. Execute the query: With the prepared identifier, the DatabaseRead() function executes the query and positions the cursor to the first record of the result.
  3. Extract the data: From the current record, you use specific functions to obtain the values of each column, according to the expected data type. These functions include DatabaseColumnText(), DatabaseColumnInteger(), DatabaseColumnLong(), DatabaseColumnDouble() and DatabaseColumnBlob().

With these steps, you have a simple and effective flow to retrieve information and use it as needed by your application.
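One point worth noting before the full example: each successful call to DatabaseRead() advances the cursor by one record, so calling it in a loop walks through the entire result set. A minimal sketch, assuming dbHandle holds an open database whose logs table is already populated:

int request = DatabasePrepare(dbHandle, "SELECT id, msg FROM logs");
if(request != INVALID_HANDLE)
  {
   long id;
   string msg;
   //--- Each successful DatabaseRead() positions the cursor on the next row
   while(DatabaseRead(request))
     {
      DatabaseColumnLong(request, 0, id);   // column 0: id
      DatabaseColumnText(request, 1, msg);  // column 1: msg
      Print("Log #", IntegerToString(id), ": ", msg);
     }
   DatabaseFinalize(request);
  }

The full example below keeps things simple and reads only the first record; the Query() method we implement later uses exactly this loop pattern to return every row.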

For example, let's assume you want to retrieve all records from the logs table. The SQL query for this operation is quite straightforward:

SELECT * FROM logs

This query selects all columns from all records in the table. In MQL5, we use the DatabasePrepare() function to create the query identifier, just as we did when inserting data.

In the end, the code looks like this:

//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   //--- Open the database connection
   int dbHandle = DatabaseOpen("db\\logs.sqlite", DATABASE_OPEN_READWRITE | DATABASE_OPEN_CREATE);
   if(dbHandle == INVALID_HANDLE)
     {
      Print("[ERROR] [" + TimeToString(TimeCurrent()) + "] Unable to open database (Error Code: " + IntegerToString(GetLastError()) + ")");
      return INIT_FAILED;
     }
   Print("[INFO] Database connection opened successfully.");

   //--- Create the 'logs' table if it doesn't exist
   if(!DatabaseTableExists(dbHandle, "logs"))
     {
      string createTableSQL =
         "CREATE TABLE logs ("
         "id INTEGER PRIMARY KEY AUTOINCREMENT,"    // Auto-incrementing unique ID
         "formated TEXT,"                           // Formatted log message
         "levelname TEXT,"                          // Log level name (INFO, ERROR, etc.)
         "msg TEXT,"                                // Main log message
         "args TEXT,"                               // Additional arguments/details
         "timestamp BIGINT,"                        // Timestamp of the log event
         "date_time DATETIME,"                      // Human-readable date and time
         "level BIGINT,"                            // Log level as an integer
         "origin TEXT,"                             // Module or component name
         "filename TEXT,"                           // Source file name
         "function TEXT,"                           // Function where the log was recorded
         "line BIGINT);";                           // Line number in the source code

      DatabaseExecute(dbHandle, createTableSQL);
      Print("[INFO] 'logs' table created successfully.");
     }

   //--- Prepare SQL statement to retrieve log entries
   string sqlQuery = "SELECT * FROM logs";
   int sqlRequest = DatabasePrepare(dbHandle, sqlQuery);
   if(sqlRequest == INVALID_HANDLE)
     {
      Print("[ERROR] Failed to prepare SQL statement.");
     }

   //--- Execute the SQL statement
   if(!DatabaseRead(sqlRequest))
     {
      Print("[ERROR] SQL query execution failed.");
     }
   else
     {
      Print("[INFO] SQL query executed successfully.");

      //--- Bind SQL query results to the log data model
      MqlLogifyModel logData;
      DatabaseColumnText(sqlRequest, 1, logData.formated);
      DatabaseColumnText(sqlRequest, 2, logData.levelname);
      DatabaseColumnText(sqlRequest, 3, logData.msg);
      DatabaseColumnText(sqlRequest, 4, logData.args);
      DatabaseColumnLong(sqlRequest, 5, logData.timestamp);

      string dateTimeStr;
      DatabaseColumnText(sqlRequest, 6, dateTimeStr);
      logData.date_time = StringToTime(dateTimeStr);

      DatabaseColumnInteger(sqlRequest, 7, logData.level);
      DatabaseColumnText(sqlRequest, 8, logData.origin);
      DatabaseColumnText(sqlRequest, 9, logData.filename);
      DatabaseColumnText(sqlRequest, 10, logData.function);
      DatabaseColumnLong(sqlRequest, 11, logData.line);

      Print("[INFO] Data retrieved: Formatted = ", logData.formated, " | Level = ", logData.level, " | Origin = ", logData.origin);
     }

   //--- Release the prepared statement and close the database connection
   DatabaseFinalize(sqlRequest);
   DatabaseClose(dbHandle);
   Print("[INFO] Database connection closed successfully.");

   return INIT_SUCCEEDED;
  }
//+------------------------------------------------------------------+

With that, executing the code produces this result:

[INFO] Database connection opened successfully.
[INFO] SQL query executed successfully.
[INFO] Data retrieved: Formatted = 06:24:00 [INFO] Buy order sent successfully | Level = 1 | Origin = Order Management
[INFO] Database connection closed successfully.

With these basic operations in mind, we are ready to configure our database handler. Let's prepare the environment necessary to integrate the database with our logging library.


Configuring the database handler

In order to use a database to store logs, we need to properly configure our handler. This involves defining the attributes of the configuration structure, similar to what we did with the file handler. Let's create a configuration structure called MqlLogifyHandleDatabaseConfig, starting from a copy of the file handler's structure and making a few changes:

struct MqlLogifyHandleDatabaseConfig
  {
   string directory;                         // Directory for log files
   string base_filename;                     // Base file name
   ENUM_LOG_FILE_EXTENSION file_extension;   // File extension type
   ENUM_LOG_ROTATION_MODE rotation_mode;     // Rotation mode
   int messages_per_flush;                   // Messages before flushing
   uint codepage;                            // Encoding (e.g., UTF-8, ANSI)
   ulong max_file_size_mb;                   // Max file size in MB for rotation
   int max_file_count;                       // Max number of files before deletion
   
   //--- Default constructor
   MqlLogifyHandleDatabaseConfig(void)
     {
      directory = "logs";                    // Default directory
      base_filename = "expert";              // Default base name
      file_extension = LOG_FILE_EXTENSION_LOG;// Default to .log extension
      rotation_mode = LOG_ROTATION_MODE_SIZE;// Default size-based rotation
      messages_per_flush = 100;              // Default flush threshold
      codepage = CP_UTF8;                    // Default UTF-8 encoding
      max_file_size_mb = 5;                  // Default max file size in MB
      max_file_count = 10;                   // Default max file count
     }
  };

The attributes file_extension, rotation_mode, codepage, max_file_size_mb and max_file_count will be removed, as rotation, file type, maximum file count and encoding do not make sense in the database context. With the remaining attributes defined, we adjust the ValidateConfig() method; in the end the code looks like this:

//+------------------------------------------------------------------+
//| Struct: MqlLogifyHandleDatabaseConfig                            |
//+------------------------------------------------------------------+
struct MqlLogifyHandleDatabaseConfig
  {
   string directory;                         // Directory for log files
   string base_filename;                     // Base file name
   int messages_per_flush;                   // Messages before flushing
   
   //--- Default constructor
   MqlLogifyHandleDatabaseConfig(void)
     {
      directory = "logs";                    // Default directory
      base_filename = "expert";              // Default base name
      messages_per_flush = 100;              // Default flush threshold
     }
   
   //--- Destructor
   ~MqlLogifyHandleDatabaseConfig(void)
     {
     }

   //--- Validate configuration
   bool ValidateConfig(string &error_message)
     {
      //--- Saves the return value
      bool is_valid = true;
      
      //--- Check if the directory is not empty
      if(directory == "")
        {
         directory = "logs";
         error_message = "The directory cannot be empty.";
         is_valid = false;
        }
      
      //--- Check if the base filename is not empty
      if(base_filename == "")
        {
         base_filename = "expert";
         error_message = "The base filename cannot be empty.";
         is_valid = false;
        }
      
      //--- Check if the number of messages per flush is positive
      if(messages_per_flush <= 0)
        {
         messages_per_flush = 100;
         error_message = "The number of messages per flush must be greater than zero.";
         is_valid = false;
        }
   
      //--- Return the validation result
      return(is_valid);
     }
  };

With the configuration ready, we can finally start implementing the handler.


Implementing the database handler

Now that we have structured our configuration, let's move on to the practical part: implementing the database handler. I will detail each part of the implementation, explaining the choices made and ensuring that the handler is flexible for future improvements.

We start by defining the CLogifyHandlerDatabase class, which extends CLogifyHandler . This class needs to store the handler configuration, a time control utility ( CIntervalWatcher ) and a log message cache. This cache serves to avoid excessive writing to the database, temporarily storing the messages before writing them.

class CLogifyHandlerDatabase : public CLogifyHandler
  {
private:
   //--- Config
   MqlLogifyHandleDatabaseConfig m_config;
   
   //--- Update utilities
   CIntervalWatcher  m_interval_watcher;
   
   //--- Cache data
   MqlLogifyModel    m_cache[];
   int               m_index_cache;
   
public:
                     CLogifyHandlerDatabase(void);
                    ~CLogifyHandlerDatabase(void);
   
   //--- Configuration management
   void              SetConfig(MqlLogifyHandleDatabaseConfig &config);
   MqlLogifyHandleDatabaseConfig GetConfig(void);
   
   virtual void      Emit(MqlLogifyModel &data);         // Processes a log message and sends it to the specified destination
   virtual void      Flush(void);                        // Clears or completes any pending operations
   virtual void      Close(void);                        // Closes the handler and releases any resources
  };

The constructor initializes the attributes, ensuring that the handler name is "database", sets an interval for the m_interval_watcher , and clears the cache. In the destructor, we call Close() , ensuring that all pending logs are written before finalizing the object.

Another important method is SetConfig() , which allows you to configure the handler, storing the configuration and validating it to ensure there are no errors. The GetConfig() method simply returns the current configuration.

CLogifyHandlerDatabase::CLogifyHandlerDatabase(void)
  {
   m_name = "database";
   m_interval_watcher.SetInterval(PERIOD_D1);
   ArrayFree(m_cache);
   m_index_cache = 0;
  }
CLogifyHandlerDatabase::~CLogifyHandlerDatabase(void)
  {
   this.Close();
  }
void CLogifyHandlerDatabase::SetConfig(MqlLogifyHandleDatabaseConfig &config)
  {
   m_config = config;
   
   string err_msg = "";
   if(!m_config.ValidateConfig(err_msg))
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: "+err_msg);
     }
  }
MqlLogifyHandleDatabaseConfig CLogifyHandlerDatabase::GetConfig(void)
  {
   return(m_config);
  }

Now let's get to the heart of the database handler, saving the log records directly. To do this, we will implement the three basic methods of every handler:

  • Emit(MqlLogifyModel &data): Processes a log message and sends it to the cache.
  • Flush(): Completes any pending operations, writing the cached information to the correct destination (file, console, database, etc.).
  • Close(): Closes the Handler and releases any associated resources.

Starting with the Emit() method, which adds the data to the cache and, once the defined limit has been reached, calls Flush().

//+------------------------------------------------------------------+
//| Processes a log message and sends it to the specified destination|
//+------------------------------------------------------------------+
void CLogifyHandlerDatabase::Emit(MqlLogifyModel &data)
  {
   //--- Checks if the configured level allows
   if(data.level >= this.GetLevel())
     {
      //--- Resize cache if necessary
      int size = ArraySize(m_cache);
      if(size != m_config.messages_per_flush)
        {
         ArrayResize(m_cache, m_config.messages_per_flush);
         size = m_config.messages_per_flush;
        }
      
      //--- Add log to cache
      m_cache[m_index_cache++] = data;
      
      //--- Flush if cache limit is reached or update condition is met
      if(m_index_cache >= m_config.messages_per_flush || m_interval_watcher.Inspect())
        {
         //--- Save cache
         Flush();
         
         //--- Reset cache
         m_index_cache = 0;
         for(int i=0;i<size;i++)
           {
            m_cache[i].Reset();
           }
        }
     }
  }
//+------------------------------------------------------------------+

Continuing with the Flush() method: it reads the data from the cache and inserts it into the database, following the same structure shown earlier in the "How to insert data into the database" section, using the DatabasePrepare() function.

//+------------------------------------------------------------------+
//| Clears or completes any pending operations                       |
//+------------------------------------------------------------------+
void CLogifyHandlerDatabase::Flush(void)
  {
   //--- Get the full path of the file
   string path = m_config.directory+"\\"+m_config.base_filename+".sqlite";
   
   //--- Open database
   ResetLastError();
   int handle_db = DatabaseOpen(path,DATABASE_OPEN_CREATE|DATABASE_OPEN_READWRITE);
   if(handle_db == INVALID_HANDLE)
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: Unable to open log file '"+path+"'. Ensure the directory exists and is writable. (Code: "+IntegerToString(GetLastError())+")");
      return;
     }
   
   if(!DatabaseTableExists(handle_db,"logs"))
     {
      DatabaseExecute(handle_db,
         "CREATE TABLE logs ("
         "id INTEGER PRIMARY KEY AUTOINCREMENT,"
         "formated TEXT,"
         "levelname TEXT,"
         "msg TEXT,"
         "args TEXT,"
         "timestamp BIGINT,"
         "date_time DATETIME,"
         "level BIGINT,"
         "origin TEXT,"
         "filename TEXT,"
         "function TEXT,"
         "line BIGINT);");
     }
   
   //--- Prepare the insert statement
   string sql="INSERT INTO logs (formated, levelname, msg, args, timestamp, date_time, level, origin, filename, function, line) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11);";
   int request = DatabasePrepare(handle_db,sql);
   if(request == INVALID_HANDLE)
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: Failed to prepare SQL insert statement (Code: "+IntegerToString(GetLastError())+")");
     }
   
   //--- Loop through all cached messages
   int size = ArraySize(m_cache);
   for(int i=0;i<size;i++)
     {
      if(m_cache[i].timestamp > 0)
        {
         DatabaseBind(request,0,m_cache[i].formated);
         DatabaseBind(request,1,m_cache[i].levelname);
         DatabaseBind(request,2,m_cache[i].msg);
         DatabaseBind(request,3,m_cache[i].args);
         DatabaseBind(request,4,m_cache[i].timestamp);
         DatabaseBind(request,5,TimeToString(m_cache[i].date_time,TIME_DATE|TIME_MINUTES|TIME_SECONDS));
         DatabaseBind(request,6,(int)m_cache[i].level);
         DatabaseBind(request,7,m_cache[i].origin);
         DatabaseBind(request,8,m_cache[i].filename);
         DatabaseBind(request,9,m_cache[i].function);
         DatabaseBind(request,10,m_cache[i].line);
         DatabaseRead(request);
         DatabaseReset(request);
        }
     }
   
   //--- Release the prepared statement
   DatabaseFinalize(request);
   
   //--- Close database
   DatabaseClose(handle_db);
  }
//+------------------------------------------------------------------+

Finally, Close() ensures that all pending logs are written before exiting.

void CLogifyHandlerDatabase::Close(void)
  {
   Flush();
  }

With this, we have implemented a robust handler, ensuring that logs are stored efficiently and without data loss. Now that the handler can record logs, the next step is to create efficient methods to query these records. The idea is to have a generic base method, called Query(), that receives an SQL command as a string and returns the data in an array of type MqlLogifyModel. From it, we can build specific methods for recurring queries. Our Query() method is responsible for opening the database, executing the query and storing the results in the log structure; see the implementation below:

class CLogifyHandlerDatabase : public CLogifyHandler
  {
public:
   //--- Query methods
   bool              Query(string query, MqlLogifyModel &data[]);
  };
//+------------------------------------------------------------------+
//| Get data by sql command                                          |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::Query(string query, MqlLogifyModel &data[])
  {
   //--- Get the full path of the file
   string path = m_config.directory+"\\"+m_config.base_filename+".sqlite";
   
   //--- Open database
   ResetLastError();
   int handle_db = DatabaseOpen(path,DATABASE_OPEN_READWRITE);
   if(handle_db == INVALID_HANDLE)
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: Unable to open log file '"+path+"'. Ensure the directory exists and is writable. (Code: "+IntegerToString(GetLastError())+")");
      return(false);
     }
   
   //--- Prepare the SQL query
   int request = DatabasePrepare(handle_db,query);
   if(request == INVALID_HANDLE)
     {
      Print("[ERROR] ["+TimeToString(TimeCurrent())+"] Log system error: Failed to prepare SQL query (Code: "+IntegerToString(GetLastError())+")");
      DatabaseClose(handle_db);
      return(false);
     }
   
   //--- Clears array before inserting new data
   ArrayFree(data);
   
   //--- Reads query results line by line
   for(int i=0;DatabaseRead(request);i++)
     {
      int size = ArraySize(data);
      ArrayResize(data,size+1,size);
      
      //--- Maps database data to the MqlLogifyModel model
      DatabaseColumnText(request,1,data[size].formated);
      DatabaseColumnText(request,2,data[size].levelname);
      DatabaseColumnText(request,3,data[size].msg);
      DatabaseColumnText(request,4,data[size].args);
      DatabaseColumnLong(request,5,data[size].timestamp);
      string value;
      DatabaseColumnText(request,6,value);
      data[size].date_time = StringToTime(value);
      DatabaseColumnInteger(request,7,data[size].level);
      DatabaseColumnText(request,8,data[size].origin);
      DatabaseColumnText(request,9,data[size].filename);
      DatabaseColumnText(request,10,data[size].function);
      DatabaseColumnLong(request,11,data[size].line);
     }
   
   //--- Ends the query and closes the database
   DatabaseFinalize(request);
   DatabaseClose(handle_db);
   return(true);
  }
//+------------------------------------------------------------------+

This method gives us complete flexibility to perform any SQL query within the log database. However, to make it easier to use, we will create auxiliary methods that encapsulate common queries.

To prevent developers from having to write SQL every time they want to query logs, I created methods that already include the most commonly used SQL commands. They serve as shortcuts to search logs by filtering by severity level, date, source, message, arguments, file name, and function name. Below are the SQL commands corresponding to each of these filters:

SELECT * FROM 'logs' WHERE level=1;
SELECT * FROM 'logs' WHERE timestamp BETWEEN '{start_time}' AND '{stop_time}';
SELECT * FROM 'logs' WHERE origin LIKE '%{origin}%';
SELECT * FROM 'logs' WHERE msg LIKE '%{msg}%';
SELECT * FROM 'logs' WHERE args LIKE '%{args}%';
SELECT * FROM 'logs' WHERE filename LIKE '%{filename}%';
SELECT * FROM 'logs' WHERE function LIKE '%{function}%';

Now, we implement the specific methods that use these commands:

class CLogifyHandlerDatabase : public CLogifyHandler
  {
public:
   //--- Query methods
   bool              Query(string query, MqlLogifyModel &data[]);
   bool              QueryByLevel(ENUM_LOG_LEVEL level, MqlLogifyModel &data[]);
   bool              QueryByDate(datetime start_time, datetime stop_time, MqlLogifyModel &data[]);
   bool              QueryByOrigin(string origin, MqlLogifyModel &data[]);
   bool              QueryByMsg(string msg, MqlLogifyModel &data[]);
   bool              QueryByArgs(string args, MqlLogifyModel &data[]);
   bool              QueryByFile(string file, MqlLogifyModel &data[]);
   bool              QueryByFunction(string function, MqlLogifyModel &data[]);
  };
//+------------------------------------------------------------------+
//| Get logs filtering by level                                      |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByLevel(ENUM_LOG_LEVEL level, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE level="+IntegerToString(level)+";",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by start end stop time                        |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByDate(datetime start_time, datetime stop_time, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE timestamp BETWEEN '"+IntegerToString((ulong)start_time)+"' AND '"+IntegerToString((ulong)stop_time)+"';",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by origin                                     |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByOrigin(string origin, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE origin LIKE '%"+origin+"%';",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by message                                    |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByMsg(string msg, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE msg LIKE '%"+msg+"%';",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by args                                       |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByArgs(string args, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE args LIKE '%"+args+"%';",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by file name                                  |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByFile(string file, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE filename LIKE '%"+file+"%';",data));
  }
//+------------------------------------------------------------------+
//| Get logs filtering by function name                              |
//+------------------------------------------------------------------+
bool CLogifyHandlerDatabase::QueryByFunction(string function, MqlLogifyModel &data[])
  {
   return(this.Query("SELECT * FROM 'logs' WHERE function LIKE '%"+function+"%';",data));
  }
//+------------------------------------------------------------------+

We now have a set of efficient and flexible methods to access logs directly from the database. The Query() method lets you execute any SQL command, including more complex statements with multiple filters for your specific needs, while the auxiliary methods encapsulate common queries, making usage more intuitive and reducing errors.
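To illustrate, a short usage sketch, assuming a configured CLogifyHandlerDatabase pointer named handler_database (as in the test EA below) and that the library's ENUM_LOG_LEVEL defines LOG_LEVEL_ERROR:

//--- Retrieve all error-level records and print them
MqlLogifyModel results[];
if(handler_database.QueryByLevel(LOG_LEVEL_ERROR, results))
  {
   int total = ArraySize(results);
   for(int i=0; i<total; i++)
     {
      Print(results[i].formated);
     }
  }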

Now that our handler is implemented, it's time to test if everything is working correctly. Let's visualize the results and ensure that the logs are being stored and retrieved as expected.


Visualizing the result

After implementing the handler, the next step is to verify if it is working as expected. We need to test the insertion of logs, validate that the records are correctly stored in the database and ensure that the queries are fast and accurate.

In the tests, I will use the same LogifyTest.mq5 file, just adding some log messages at the beginning. We will also add some simple trading operations with no complex strategy: open a position if none is open, setting take profit and stop loss on the position to handle the exit.

//+------------------------------------------------------------------+
//| Import CLogify                                                   |
//+------------------------------------------------------------------+
#include <Logify/Logify.mqh>
#include <Trade/Trade.mqh>
CLogify logify;
CTrade trade;
//+------------------------------------------------------------------+
//| Expert initialization function                                   |
//+------------------------------------------------------------------+
int OnInit()
  {
   //--- Configs
   MqlLogifyHandleDatabaseConfig m_config;
   m_config.directory = "db";
   m_config.base_filename = "logs";
   m_config.messages_per_flush = 5;
   
   //--- Handler Database
   CLogifyHandlerDatabase *handler_database = new CLogifyHandlerDatabase();
   handler_database.SetConfig(m_config);
   handler_database.SetLevel(LOG_LEVEL_DEBUG);
   handler_database.SetFormatter(new CLogifyFormatter("hh:mm:ss","{date_time} [{levelname}] {msg}"));
   
   //--- Add handler in base class
   logify.AddHandler(handler_database);
   
   //--- Using logs
   logify.Info("Expert starting successfully", "Boot", "",__FILE__,__FUNCTION__,__LINE__);
   
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
//| Expert deinitialization function                                 |
//+------------------------------------------------------------------+
void OnDeinit(const int reason)
  {
//---
  }
//+------------------------------------------------------------------+
//| Expert tick function                                             |
//+------------------------------------------------------------------+
void OnTick()
  {
   //--- No positions
   if(PositionsTotal() == 0)
     {
      double price_entry = SymbolInfoDouble(_Symbol,SYMBOL_ASK);
      double volume = 1;
      if(trade.Buy(volume,_Symbol,price_entry,price_entry - 100 * _Point, price_entry + 100 * _Point,"Buy at market"))
        {
         logify.Debug("Transaction data | Price: "+DoubleToString(price_entry,_Digits)+" | Symbol: "+_Symbol+" | Volume: "+DoubleToString(volume,2), "CTrade", "",__FILE__,__FUNCTION__,__LINE__);
         logify.Info("Purchase order sent successfully", "CTrade", "",__FILE__,__FUNCTION__,__LINE__);
        }
      else
        {
         logify.Debug("Error code: "+IntegerToString(trade.ResultRetcode(),_Digits)+" | Description: "+trade.ResultRetcodeDescription(), "CTrade", "",__FILE__,__FUNCTION__,__LINE__);
         logify.Error("Failed to send purchase order", "CTrade", "",__FILE__,__FUNCTION__,__LINE__);
        }
     }
  }
//+------------------------------------------------------------------+

Running the EA in the strategy tester for one day on EURUSD was enough to generate 909 log records. As configured, they were saved in the .sqlite file. To access it, open the terminal data folder (or press "Ctrl/Cmd + Shift + D" to bring up the file browser) and follow the path "MQL5/Files/db/logs.sqlite". With the file in hand, we can open it directly in MetaEditor, as we did previously:
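Beyond browsing rows, MetaEditor's SQL field accepts aggregate queries, which is handy for a quick overview of a large run. For example, a sketch counting the stored records per level:

SELECT levelname, COUNT(*) AS total
FROM logs
GROUP BY levelname
ORDER BY total DESC;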


This completes another step forward in our log library. Our logs can now be efficiently stored and retrieved in a database, providing greater scalability and organization.


Conclusion

Throughout this article, we have explored the integration of databases into our log library, from the fundamental concepts to the practical implementation of a dedicated handler. We first discussed the importance of databases as a more scalable and structured alternative for storing logs, highlighting their advantages over conventional text files. We then examined the specifics of using databases in the context of MQL5, addressing their limitations and the solutions available to overcome them.

Finally, we analyzed the results of our implementation, ensuring that the logs were stored correctly and could be accessed quickly and efficiently. In addition, we discussed ways to view these logs, either through direct queries to the database or through specific tools for monitoring logs. This validation process was essential to ensure that the implemented solution was functional and effective in real-world scenarios.

With this, we have completed another stage in the development of our log library. The adoption of databases to store logs has brought significant benefits, making log management more organized, accessible, and scalable. This approach allows us to handle large volumes of data more efficiently, in addition to facilitating the analysis and monitoring of information recorded by the system.

Attached files |
LogifyePart6p.zip (22.67 KB)