Discussing the article: "Tuning LLMs with Your Own Personalized Data and Integrating into EA (Part 5): Develop and Test Trading Strategy with LLMs(I)-Fine-tuning"

 

Check out the new article: Tuning LLMs with Your Own Personalized Data and Integrating into EA (Part 5): Develop and Test Trading Strategy with LLMs(I)-Fine-tuning.

With the rapid development of artificial intelligence today, large language models (LLMs) have become an important part of it, so we should think about how to integrate these powerful models into our algorithmic trading. For most people, it is difficult to fine-tune such models to their own needs, deploy them locally, and then apply them to algorithmic trading. This series of articles takes a step-by-step approach to achieving that goal.

In the previous article, we introduced how to use GPU acceleration to train large language models, but we did not use the trained model to formulate trading strategies or run backtests. The ultimate goal of training a model, however, is to use it and let it serve us. So, starting with this article, we will use the trained language model to formulate trading strategies and test them on forex currency pairs. This is not a simple process and requires the appropriate technical means, so let's implement it step by step.

The whole process may take several articles to complete.

  • The first step is to formulate a trading strategy; 
  • The second step is to create a dataset according to that strategy and fine-tune (or train) the model so that its input and output conform to the strategy. There are many ways to do this, and I will provide as many examples as possible (a minimal fine-tuning sketch follows this list); 
  • The third step is model inference and fusing its output with the trading strategy, then creating an EA based on that strategy. There is still some work to do in the inference phase, such as choosing a suitable inference framework and optimization methods (e.g. flash-attention, model quantization and other speedups); see the second sketch below; 
  • The fourth step is to backtest the EA on historical data in the client terminal.
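
To make the second step concrete, below is a minimal fine-tuning sketch using the Hugging Face Trainer with a pre-trained GPT2. The file name price_sequences.txt, the block length and the hyperparameters are placeholders standing in for whatever dataset and settings your own strategy produces; this is an illustration of the technique, not the article's actual code.

from datasets import load_dataset
from transformers import (DataCollatorForLanguageModeling, GPT2LMHeadModel,
                          GPT2TokenizerFast, Trainer, TrainingArguments)

# Load the pre-trained model and tokenizer; GPT2 has no pad token, so reuse EOS.
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = GPT2LMHeadModel.from_pretrained("gpt2")

# "price_sequences.txt" is a placeholder: one encoded price sequence per line.
raw = load_dataset("text", data_files={"train": "price_sequences.txt"})

def tokenize(batch):
    # Fixed-length blocks keep batching simple for a causal language model.
    return tokenizer(batch["text"], truncation=True, padding="max_length",
                     max_length=128)

train_ds = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# mlm=False means causal language modelling; labels are derived from input_ids.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="gpt2_finetuned",      # checkpoints and final weights go here
    num_train_epochs=3,               # placeholder hyperparameters
    per_device_train_batch_size=8,
    learning_rate=5e-5,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=train_ds,
        data_collator=collator).train()

model.save_pretrained("gpt2_finetuned")
tokenizer.save_pretrained("gpt2_finetuned")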
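
For the third step, the saved directory can be loaded back for inference. The sketch below loads the fine-tuned weights in half precision when a GPU is available (one simple speed optimization) and generates a continuation for an encoded input sequence. The prompt string is purely illustrative; flash-attention and proper quantization are left to the article itself.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2_finetuned")
# float16 on GPU reduces memory use and speeds up generation.
model = GPT2LMHeadModel.from_pretrained(
    "gpt2_finetuned",
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)
model.eval()

prompt = "1.1052 1.1049 1.1047 1.1050"   # placeholder encoded price sequence
inputs = tokenizer(prompt, return_tensors="pt").to(device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=32,
                            pad_token_id=tokenizer.eos_token_id)

print(tokenizer.decode(output[0], skip_special_tokens=True))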


    Author: Yuqiang Pan

     
    Hello
    What is the primary difference between the training process and the fine-tuning process when working with language models?
     
    Christian Benjamin #:
    Hello
    What is the primary difference between the training process and the fine-tuning process when working with language models?

    Hello, from the examples in this article:

    1. The weights of the pre-trained GPT2 model used in this example contain nothing related to our data: without fine-tuning, the model cannot recognize the input time series, but after fine-tuning it can output the content we need.

    2. As mentioned in the article, training a language model from scratch until it converges is very time-consuming, whereas fine-tuning makes a pre-trained model converge quickly, saving a lot of time and computing power (see the short sketch after point 3). Because the model used in our example is relatively small, this effect is not very obvious here.

    3. Fine-tuning requires much less data than pre-training. If the amount of data available is limited, fine-tuning a pre-trained model on it gives much better results than training a model from scratch on the same data.
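
    To make point 2 concrete, here is a tiny sketch (using the Hugging Face GPT2 classes, not the article's exact code): training from scratch means starting from randomly initialized weights, while fine-tuning starts from the published pre-trained checkpoint; the training loop itself is the same in both cases.

from transformers import GPT2Config, GPT2LMHeadModel

# Training from scratch: same architecture, randomly initialized weights,
# so reaching convergence needs far more data and compute.
scratch_model = GPT2LMHeadModel(GPT2Config())

# Fine-tuning: start from the pre-trained checkpoint and only adapt it.
pretrained_model = GPT2LMHeadModel.from_pretrained("gpt2")

n_params = sum(p.numel() for p in pretrained_model.parameters())
print(f"Same architecture, about {n_params / 1e6:.0f}M parameters; "
      "only the starting weights differ.")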

     

    Hello, thanks for the amazing articles.


    Looking forward to seeing how we will integrate the fine-tuned model into MT5.