Discussion of article "Neural networks made easy (Part 29): Advantage Actor-Critic algorithm" - page 2

 
         // The raw reward is the body of the previous candlestick (close - open)
         double reward = Rates[i - 1].close - Rates[i - 1].open;
         switch(action)
           {
            case 0:                    // buy
               if(reward < 0)
                  reward *= -20;       // falling candle: body multiplied by -20 (result is positive)
               else
                  reward *= 1;         // rising candle: body kept as the reward
               break;
            case 1:                    // sell
               if(reward > 0)
                  reward *= -20;       // rising candle (loss on a sell): 20x penalty
               else
                  reward *= -1;        // falling candle (profit on a sell): sign flipped to positive
               break;
            default:                   // out of the market
               if(batch == 0)
                  reward = -fabs(reward);          // no previous action: lost-profit penalty
               else
                 {
                  switch((int)vActions[batch - 1]) // look at the previous action
                    {
                     case 0:           // previous action was buy
                        reward *= -1;  // sign of the body is inverted
                        break;
                     case 1:           // previous action was sell
                        break;         // body kept as is
                     default:          // was already out of the market
                        reward = -fabs(reward);    // lost-profit penalty
                        break;
                    }
                 }
               break;
           }
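
To make the branches concrete, here is a worked trace with a hypothetical bearish candle (my own numbers, not from the article):

         // Suppose body = close - open = -0.0010 (bearish candle) and batch == 0
         // action 0 (buy) : reward = -0.0010 * -20  = +0.0200  (a losing buy produces a positive value)
         // action 1 (sell): reward = -0.0010 * -1   = +0.0010  (a profitable sell produces the body size)
         // action 2 (out) : reward = -fabs(-0.0010) = -0.0010  (lost-profit penalty)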

Can you explain more about the code that calculates the reward? It differs from the reward policy described in Part 27, which was:

  1. A profitable position receives a reward equal to the candlestick body size (the system state is analyzed at each candlestick; we are in a position from the candlestick's opening to its closing).
  2. The "out of the market" state is penalized by the size of the candlestick body (the body size with a negative sign, to account for lost profit).
  3. A losing position is penalized by double the candlestick body size (loss + lost profit); see the code sketch after this list.
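
For comparison, here is a minimal sketch of the Part 27 policy in the same style (assuming the same Rates, action and reward conventions as the snippet above; this is my own illustration, not code from either article):

         double body = Rates[i - 1].close - Rates[i - 1].open;
         double reward;
         switch(action)
           {
            case 0:                                      // buy: profit when the candle rises
               reward = (body >= 0 ? body : 2 * body);   // loss is doubled (loss + lost profit)
               break;
            case 1:                                      // sell: profit when the candle falls
               reward = (body <= 0 ? -body : -2 * body); // loss is doubled (loss + lost profit)
               break;
            default:                                     // out of the market
               reward = -fabs(body);                     // lost-profit penalty
               break;
           }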
Files:
Capture.PNG  15 kb