Errors, bugs, questions - page 2659

 
Igor Makanu:

I checked it by outputting the value in OnTester()

the picture is the same as above

hence the question of confidence: if you optimize the NS through the tester's genetic algorithm, there is no guarantee that all local processor cores won't start with the same initial configuration of NS weights - and that spoils the training results, imho

you can try to use some function of the pass number as the generator seed.

If there are input parameters that change during optimization, their combination (sum, product, power, ...) can be used as the seed for the pseudo-random number generator. Probably (but not for sure :D ) a side effect would be the ability to reproduce, in a single pass, code that has "random" numbers inside it.
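For illustration, a minimal sketch of that idea (the input names below are hypothetical, not from anyone's actual code): mix the optimized inputs into a single value and seed the generator with it in OnInit(), so every parameter combination gets its own pseudo-random sequence.

// Hypothetical inputs varied by the optimizer; names are illustrative only.
input int    InpPeriod = 14;
input double InpLevel  = 1.5;

int OnInit()
  {
   // Combine the optimized inputs into one seed value so that different
   // parameter combinations start different pseudo-random sequences.
   int seed = InpPeriod * 1000 + (int)(InpLevel * 100.0);
   MathSrand(seed);
   return(INIT_SUCCEEDED);
  }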

 
Igor Zakharov:

you can try to use some function of the pass number as the generator seed.

If there are input parameters that change during optimization, their combination (sum, product, power, ...) can be used as the seed for the pseudo-random number generator. Probably (but not for sure :D ) a side effect would be the ability to reproduce, in a single pass, code that has "random" numbers inside it.

You could also generate or find an array of relatively random characters, a few kilobytes in size (a text by Pushkin :) ). It all comes down to a cleverly crafted implementation of randomness that satisfies the task's requirements, if the one on offer hasn't been verified; I remember such problems in many libraries.
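As a toy illustration of that idea (a sketch only: Pool, SeedFromPool and pass_index are made-up names, and a fixed text is no substitute for real entropy), the characters of such a buffer can be mixed into a seed together with any value that differs between passes:

// Toy sketch: derive a seed from a fixed text buffer plus a per-pass value.
// Assumes pass_index >= 0 and Pool is non-empty.
string Pool = "My uncle, man of firm convictions...";   // put a few KB of arbitrary text here
int SeedFromPool(const int pass_index)
  {
   int len  = StringLen(Pool);
   int seed = 0;
   // walk the text starting from positions derived from the pass index
   for(int i = 0; i < 16; i++)
      seed = seed * 31 + StringGetCharacter(Pool, (pass_index + i * 7) % len);
   return(seed);
  }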

Colleagues, please advise: if I open a chart with ChartOpen(), how do I return focus (activity, visibility) to the chart the Expert Advisor is running on, without closing the newly opened one?
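One thing that may help here (a sketch, not a guaranteed recipe; whether it also restores keyboard focus is worth checking): remember the ID of the EA's own chart and bring it back to the foreground with CHART_BRING_TO_TOP after the new chart is opened.

// Sketch: open a new chart, then return the EA's own chart to the foreground.
void OpenAndRefocus(const string symbol, const ENUM_TIMEFRAMES period)
  {
   long own_chart = ChartID();                  // chart the Expert Advisor is attached to
   long new_chart = ChartOpen(symbol, period);  // opening a chart also activates it
   if(new_chart != 0)
     {
      ChartSetInteger(own_chart, CHART_BRING_TO_TOP, true);  // bring the original chart back to front
      ChartRedraw(own_chart);
     }
  }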

And does anyone know why Bid and Ask do not coincide with the chart price Close[0]? https://www.mql5.com/ru/forum/160683/page1082#comment_15152111

[Linked thread: "Any beginners' questions on MQL4 and MQL5, help and discussion of algorithms and code" - www.mql5.com, 2020.02.25]
 
Aleksey Mavrin:

It all comes down to a cleverly crafted implementation of randomness that satisfies the task's requirements

I don't think that's the problem here... the pseudo-random generator itself works quite well; the problem is the seed: with constant inputs (during optimization) the seed stays the same, so the results are always close. I suggested how to maximize the spread of the seed values.

 
Igor Zakharov:

I don't think that's the problem here... the pseudo-random generator itself works quite well; the problem is the seed: with constant inputs (during optimization) the seed stays the same, so the results are always close. I suggested how to maximize the spread of the seed values.

I agree. I just meant that, if memory serves, one often comes across libraries whose initialization doesn't work, i.e. the seed ends up the same for no apparent reason; if that isn't checked before use, many people never notice it.

 
Igor Makanu:

picture as above

A crude head-on solution: set the srand seed number in the EA's input properties, or take it from a hash of the parameters - then at least the random sequences will be initialized differently.
 
TheXpert:
A crude head-on solution: set the srand seed number in the EA's input properties, or take it from a hash of the parameters - then at least the random sequences will be initialized differently.

I came up with this solution yesterday; it works correctly:

input int param1 = 2147483647;    // seed passed in via the EA's input properties
//+------------------------------------------------------------------+
int OnInit()
  {
   srand(param1);                 // initialize the generator from the input
   return(INIT_SUCCEEDED);
  }
//+------------------------------------------------------------------+
void OnTick()
  {
  }
//+------------------------------------------------------------------+
double OnTester()
  {
   return(rand());                // report a pseudo-random value as the pass result
  }

My questions are really about the correctness of rand() in the tester. I've just searched alglib: it uses MathRand() once, in randomreal(), which is then used frequently throughout the alglib library.

 
Igor Makanu:

my questions are really about the correctness of rand() in the tester

The questions are precisely about initialization. Apparently, in the tester you can't seed with the time and still get a good rand().
 
TheXpert:
Apparently, in the tester you can't seed with the time and still get a good rand().

that's exactly what it is

I wrote yesterday that this contradicts the documentation: https://www.mql5.com/ru/forum/1111/page2657#comment_15165819 - and the MathSrand() example from the help gives a different result in the tester than expected.


UPD: my problem goes a bit deeper - I want to use the tester for deep learning and to save the best NS configurations via agent-to-agent exchange. I've planned it all out in general terms, but I wanted to resolve collisions by pausing an agent's start (or stop) by a random amount... but alas, not all random values are random in the tester agents )))

 
In most cases the tester needs to ensure reproducibility of results. Requiring results to be randomized without changing the input parameters or the state of the environment (all functions tied to virtual time) contradicts both the purpose and the implementation of the tester.
 
Stanislav Korotky:
In most cases the tester needs to ensure reproducibility of results. Requiring results to be randomized without changing the input parameters or the state of the environment (all functions tied to virtual time) contradicts both the purpose and the implementation of the tester.

Everything you write is correct, but I need to separate the order in which the agents access the database when saving.

In essence, what's needed is a local agent ID in the tester; it's not clear why such functionality doesn't exist.
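A possible workaround, offered only as a sketch: it assumes (worth verifying on your setup) that every local agent runs in its own data folder, so a value derived from TERMINAL_DATA_PATH differs between agents and can serve as a makeshift agent key for separating database access. AgentKey, ResultFileName and the file name are illustrative names, not standard functions.

// Sketch: derive a makeshift "agent key" from the agent's data folder path.
// Assumption: each local agent reports a distinct TERMINAL_DATA_PATH.
int AgentKey()
  {
   string path = TerminalInfoString(TERMINAL_DATA_PATH);
   int key = 0;
   for(int i = 0; i < StringLen(path); i++)
      key = key * 31 + StringGetCharacter(path, i);   // simple rolling hash of the path
   return(MathAbs(key));
  }

// Example use: tag each agent's output with its key so writes don't collide.
string ResultFileName()
  {
   return(StringFormat("ns_results_%d.bin", AgentKey() % 100000));
  }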
