optimizing an expert, profiling the time spent - page 2

 
Mikko Siltanen:

Thanks. Good to know.

But it still bothers me how that processing time behaves between the open-prices, control-points and every-tick models. My gut feeling says the processing time should be smaller in the every-tick case, since on average there is less for MT4 to do per tick: fewer new orders, fewer orders to close. Its contribution should go down. But it stays roughly the same (in my latest run) or increases (as in the comment I made a few days ago).

I am confused now

// timesData is assumed to be declared elsewhere in the EA, e.g. as
// ulong timesData[][3]:
//   timesData[i][0] = start timestamp of the current measurement (us),
//   timesData[i][1] = total accumulated time (us),
//   timesData[i][2] = number of completed measurements.

void startrectime(int i) {
   timesData[i][0] = GetMicrosecondCount();
}

void stoprectime(int i) {
   ulong latestcount = GetMicrosecondCount();
   timesData[i][1] += latestcount - timesData[i][0];
   timesData[i][2]++;
}

Are you using GetMicrosecondCount() or TBB tick_count? Are they giving the same results or different ones?

Actually, the resolution of GetMicrosecondCount() should not be enough.
 

I'm now using only TBB tick_count, originally because it gave me seconds directly (even though that sounds like a ridiculous reason). I also had trouble measuring that time with the microsecond count at first; maybe it was because of the resolution. From the TBB counts in my latest run I get 50.03 s / 34,709,364 ticks ≈ 1.44 µs per tick. That makes me think a 1 µs resolution isn't quite enough for measuring short times on a quick machine.

Despite my negative comments, I'm quite pleased with this platform. As far as I know there's nothing comparable; no other platform lets you program your system as freely as this one does. And even though my parallelization trial failed, there's still the GPU option to be tried. I'd like to find some meaningful work for the GPU as well. I noticed your earlier discussion on this forum about it and downloaded the Nvidia CUDA SDK. They have made it really easy to program for it: just install the SDK and Visual Studio, then take their examples and run them (though I had some issues and couldn't get them built yet).

 
Mikko Siltanen:

I'm now using only TBB tick_count, originally because it gave me seconds directly (even though that sounds like a ridiculous reason). I also had trouble measuring that time with the microsecond count at first; maybe it was because of the resolution. From the TBB counts in my latest run I get 50.03 s / 34,709,364 ticks ≈ 1.44 µs per tick. That makes me think a 1 µs resolution isn't quite enough for measuring short times on a quick machine.

Yes, exactly. I didn't think about that at first. But as I understand it, TBB tick_count gives you a better resolution.

Despite my negative comments, I'm quite pleased with this platform. As far as I know there's nothing comparable; no other platform lets you program your system as freely as this one does. And even though my parallelization trial failed, there's still the GPU option to be tried. I'd like to find some meaningful work for the GPU as well. I noticed your earlier discussion on this forum about it and downloaded the Nvidia CUDA SDK. They have made it really easy to program for it: just install the SDK and Visual Studio, then take their examples and run them (though I had some issues and couldn't get them built yet).

Just for your information, MT5 natively allows profiling with the Strategy Tester.

The topic below could interest you.

Do you use OpenCL to speed up calculations? (and a discussion of its use in trading)
  • 2017.12.10
  • www.mql5.com
"I don't know what it is", "It's impossible on my computer", "I know it exists in MT5, but I don't know how to use it", "I tried, but beyond primitive..."