Thank you.
I tried this and was able to compile everything successfully with no errors.
Testing on EURUSD, I'm getting the error below.
Any idea?
2022.11.30 11:51:46.689 Core 08 genetic pass (0, 286) tested with error "OnInit returned non-zero code 1" at 0:00:00.000
thanks
Hi, to run the EA in the tester you need to copy the nnw file to the "MetaQuotes\Terminal\Common\Files" directory.
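For instance, a minimal check at the start of OnInit() (just a sketch, assuming FileName is the same input the EA uses for the network file; not the article's original code) would fail with a clear message instead of the generic non-zero code:

int OnInit()
  {
//--- verify the network file is visible in "MetaQuotes\Terminal\Common\Files"
   int handle=FileOpen(FileName+".nnw",FILE_READ|FILE_BIN|FILE_COMMON);
   if(handle==INVALID_HANDLE)
     {
      PrintFormat("%s.nnw not found in the common Files directory (error %d)",FileName,GetLastError());
      return(INIT_FAILED);   // this is what shows up as "OnInit returned non-zero code"
     }
   FileClose(handle);
//--- ... the rest of the original OnInit() goes here
   return(INIT_SUCCEEDED);
  }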
Thank you!
Your "productivity" is astounding. Don't stop!
It's people like you that keep everything going!
P.S.
I've been reading the NeuroNet news....
"Нейросети тоже нуждаются в состояниях, напоминающих сны.
This is the conclusion reached by researchers at Los Alamos National Laboratory..."
Good day.
Using your code, I made a similar "Sleep" for the NeuroNetwork.
The percentage of "predicted" outcomes increased by 3%. For my "Supercomp", that's a flight into space!
//+------------------------------------------------------------------+
//| dream                                                            |
//+------------------------------------------------------------------+
int Dream(int dream = 0)
  {
   Comment("!!! Dream !!! ");
   int sleep = (dream==0 ? 7 : dream);
   for(int j=0; j<sleep; j++)
     {
      TempData.Clear();
      for(int b=0; b<(int)HistoryBars; b++)
        {
         if(!TempData.Add(0.0) || !TempData.Add(0.0) || !TempData.Add(0.0) ||
            !TempData.Add(0)   || !TempData.Add(0)   || !TempData.Add(0)   ||
            !TempData.Add(0.0) || !TempData.Add(0.0) || !TempData.Add(0.0) ||
            !TempData.Add(0.0) || !TempData.Add(0.0) || !TempData.Add(0.0))
            break;
        }
      if(TempData.Total()<(int)HistoryBars*12)
         return(0);
      Net.feedForward(TempData);
      Net.getResults(TempData);
      //--- You can look at NeuroNet's "Dreams" ...
      switch(TempData.Maximum(0,3))
        {
         case 0:
            dPrevSignal=TempData[0];
            break;
         case 1:
            dPrevSignal=-TempData[1];
            break;
         default:
            dPrevSignal=0;
            break;
        }
      //--- ... but it's not essential.
      //--- ??? back-propagate a neutral (all-zero) target
      TempData.Clear();
      TempData.Add(0.0);
      TempData.Add(0.0);
      TempData.Add(0.0);
      Net.backProp(TempData);
     }
   return(0);
  }
I applied this at the end of each training epoch:
   if(add_loop)
      count++;
   if(!stop)
     {
      dError=Net.getRecentAverageError();
      if(add_loop)
        {
         Net.Save(FileName+".nnw",dError,dUndefine,dForecast,dtStudied,true);
         printf("Era %d -> error %.2f %% forecast %.2f",count,dError,dForecast);
        }
      ChartScreenShot(0,(string)FileName+(string)IntegerToString(count)+".png",750,400);
     }
   Dream(SleepPeriod);   //--- Sleep.
   printf("Dream period = %.2f !",SleepPeriod);
  }
Could you test it and then comment on how it works for you? What if "Dreams" really can help the AI?
P.S.
SleepPeriod=1;
I also replaced SleepPeriod with SleepPeriod + (Delta++), where Delta=0. But my computer is very, very weak... :-(
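If I read the change right, a minimal sketch of the modified call (Delta and SleepPeriod as named above; the training loop around it is assumed):

int Delta=0;                              // declared once, before the epoch loop
//--- inside the epoch loop, instead of Dream(SleepPeriod):
Dream(SleepPeriod + (Delta++));           // every epoch the network "sleeps" one pass longer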
New article Neural networks made easy (Part 34): Fully Parameterized Quantile Function has been published:
We continue studying distributed Q-learning algorithms. In previous articles, we have considered distributed and quantile Q-learning algorithms. In the first algorithm, we trained the probabilities of given ranges of values. In the second algorithm, we trained ranges with a given probability. In both of them, we used a priori knowledge of one distribution and trained another one. In this article, we will consider an algorithm which allows the model to train for both distributions.
This approach makes the trained model less sensitive to the 'number of quantiles' hyperparameter. The random distribution of the quantiles expands the range of approximated functions to non-uniformly distributed ones.
Before the data is input into the model, an embedding of randomly generated quantiles is created according to the formula below.
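The formula appears in the article as an image; for reference, a transcription of the cosine quantile embedding introduced in IQN and reused by FQF (my notation, assuming n cosine features per sampled quantile \tau):

\phi_j(\tau) = \mathrm{ReLU}\left(\sum_{i=0}^{n-1}\cos(\pi i \tau)\,w_{ij} + b_j\right)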
There are different options in combining the resulting embedding with the tensor of the original data. This can be either a simple concatenation of two tensors or a Hadamard (element-by-element) multiplication of two matrices.
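Just to illustrate the two options (a hypothetical helper on plain arrays, not the CNet classes from the article):

//--- Combines the original data with the quantile embedding either by
//--- concatenation or by a Hadamard (element-wise) product.
bool CombineWithEmbedding(const double &state[],const double &embed[],
                          double &out[],const bool use_hadamard)
  {
   int n=ArraySize(state);
   if(ArraySize(embed)!=n)
      return(false);
   if(use_hadamard)
     {
      //--- element-wise product: the model input size stays n
      ArrayResize(out,n);
      for(int i=0;i<n;i++)
         out[i]=state[i]*embed[i];
     }
   else
     {
      //--- simple concatenation: the model input size doubles to 2*n
      ArrayResize(out,2*n);
      for(int i=0;i<n;i++)
        {
         out[i]=state[i];
         out[n+i]=embed[i];
        }
     }
   return(true);
  }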
Below is a comparison of the considered architectures, presented by the authors of the article.
The model's effectiveness is confirmed by tests carried out on 57 Atari games. Below is a comparison table from the original article [8].
Hypothetically, given an unlimited model size, this approach allows learning any distribution of the predicted reward.
Author: Dmitriy Gizlyk