The idea and the basic algorithm are described in the article "Random decision forest in reinforcement learning".
The library has advanced functionality allowing you to create an unlimited number of "Agents".
In addition, variations of the Group Method of Data Handling (GMDH) are used.
Using the library:
#include <RL gmdh.mqh>
CRLAgents *ag1=new CRLAgents("RlExp1iter",1,100,50,regularize,learn); // create 1 RL agent accepting 100 inputs (predictor values) and containing 50 trees
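Since the second constructor argument sets the number of agents, an ensemble of several agents can be created the same way. A minimal sketch (the name ag2, the group name and the parameter values are illustrative, not taken from the original):

CRLAgents *ag2=new CRLAgents("RlExp3iter",3,100,50,regularize,learn); // illustrative: a pool of 3 agents with the same input count and forest size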
An example of filling input values with normalized close prices:
// Fill each agent's input vector with normalized close prices and get the trade signal.
// sig1 is assumed to be a global double holding the latest ensemble signal.
void calcSignal()
  {
   sig1=0;
   double arr[];
   CopyClose(NULL,0,1,10000,arr);        // take the last 10000 close prices
   ArraySetAsSeries(arr,true);
   normalizeArrays(arr);                 // normalization helper provided by the library
   for(int i=0;i<ArraySize(ag1.agent);i++)
     {
      ArrayCopy(ag1.agent[i].inpVector,arr,0,0,ArraySize(ag1.agent[i].inpVector));
     }
   sig1=ag1.getTradeSignal();
  }
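For completeness, here is one way the resulting signal could drive trading from OnTick. This is a sketch, not the library's prescribed logic: the 0.5 threshold, the buy/sell convention, the fixed 0.1 lot and the use of the standard CTrade class are all assumptions.

#include <Trade\Trade.mqh>
CTrade trade;

void OnTick()
  {
   calcSignal();                 // refresh the input vectors and sig1
   if(PositionsTotal()==0)       // illustrative: open a position only when flat
     {
      if(sig1<0.5)               // assumed convention: low signal = buy
         trade.Buy(0.1);
      else                       // assumed convention: high signal = sell
         trade.Sell(0.1);
     }
  }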
Training takes place in the Strategy Tester in a single pass with the parameter learn=true. After training, it must be set to false.
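A convenient pattern is to expose these as EA inputs that are passed straight into the constructor shown above. A sketch (the regularize default and its type are illustrative assumptions):

input bool   learn=true;       // true: single training pass in the tester; false: trade with the trained model
input double regularize=0.6;   // regularization parameter forwarded to the agents (illustrative value)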
Demonstration of the trained "RL gmdh trader" EA on the training and test samples.
Translated from Russian by MetaQuotes Ltd.
Original code: https://www.mql5.com/ru/code/22915
