GetTickCount() in tester? How to inject millisecond delays in tester... - page 2

 
cloudbreaker:

Okay, I think what you are saying is that in reality, there are occasions (such as file access) which introduce delays.

Let's say that this particular file access takes about 3 ticks.

In the tester, in a period of say 10 ticks, you may find that ticks 6 and 7 are spread apart by the file access, but all 10 ticks are executed.

In reality, ticks 7, 8 and 9 are ignored whilst the file access operation is being performed.

What you could do is set a tick counter when you perform a labour-intensive function. Then, each time start() is executed, that counter is decremented by one before an early return() call. In effect you'd be guessing how many ticks a process would take and causing that number of ticks to be ignored (a minimal sketch of this follows below).

Would that do what you want?
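
A minimal sketch of the tick-skipping idea described above (the counter name and the "3 ticks" guess are illustrative, not from the thread):

int gTicksToSkip = 0;   // set after a "labour intensive" operation

int start()
{
   // throw away the next few ticks to mimic the time the slow operation took
   if(gTicksToSkip > 0)
   {
      gTicksToSkip--;
      return(0);
   }

   // ... normal expert logic ...

   // after e.g. a slow file access, guess its cost in ticks:
   // gTicksToSkip = 3;

   return(0);
}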

Well, the main delays I'm talking about are the delays that occur during any interaction with the MT4 server (opening, closing, modifying, deleting orders, etc.). These delays are on the order of hundreds of milliseconds (rarely more than a second), while my entire start() run takes much less than that when there is no interaction... So it has nothing to do with labor-intensive functions; it has to do with the fact that any active interaction with the MT4 server takes time.

As a result, there are certain situations where the Tester greatly differs from demo/live account results. I know this because for every demo/live session I do, I take the history data from that session (the exact data saved to disk in M1) and run my expert in the Tester using the same inputs, starting from the exact same minute on the same data. One of the main differences that can be seen is that the Tester does everything instantaneously. For example, where it took the expert ~10 seconds to close about 15 orders in Demo, it took the Tester only 1 second. In another situation, where the Demo opened only 1 order, the Tester opened more... The reason is that the criterion for opening orders in the expert evaluated true more times in the Tester, simply because it finished opening the last order in zero time, while in the Demo, by the time it finished opening an order, the criterion evaluated false and so no other orders were opened.


Your solution would not achieve my goal. The reason is simple - ticks are asynchronous... some come after 100 milliseconds, some come after a few minutes... So skipping a fixed number of ticks is not a good solution. At different times of day, or in different situations, the average tick rate greatly differs. Not to mention that I still don't know how the Tester decides how many ticks to interpolate in one minute. That's why I want to do it in milliseconds. I have a lot of statistics about how much time interaction with the server takes (during different hours, in different situations, before news, etc.). But I can't "inject" these delays into the Tester... I'm just about at the point of giving up. I don't really think there is any solution to this, the way the Tester is built. It's a shame, because it makes it much less accurate than it could have been. Anyway, any more ideas?

 
gordon:

Well, the main delays I'm talking about are the delays that occur during any interaction with the MT4 server (opening, closing, modifying, deleting orders, etc.). These delays are on the order of hundreds of milliseconds (rarely more than a second), while my entire start() run takes much less than that when there is no interaction... So it has nothing to do with labor-intensive functions; it has to do with the fact that any active interaction with the MT4 server takes time.

As a result, there are certain situations where the Tester greatly differs from demo/live account results. I know this because for every demo/live session I do, I take the history data from that session (the exact data saved to disk in M1) and run my expert in the Tester using the same inputs, starting from the exact same minute on the same data. One of the main differences that can be seen is that the Tester does everything instantaneously. For example, where it took the expert ~10 seconds to close about 15 orders in Demo, it took the Tester only 1 second. In another situation, where the Demo opened only 1 order, the Tester opened more... The reason is that the criterion for opening orders in the expert evaluated true more times in the Tester, simply because it finished opening the last order in zero time, while in the Demo, by the time it finished opening an order, the criterion evaluated false and so no other orders were opened.


Your solution would not achieve my goal. The reason is simple - ticks are asynchronous... some come after 100 milliseconds, some come after a few minutes... So skipping a fixed number of ticks is not a good solution. At different times of day, or in different situations, the average tick rate greatly differs. Not to mention that I still don't know how the Tester decides how many ticks to interpolate in one minute. That's why I want to do it in milliseconds. I have a lot of statistics about how much time interaction with the server takes (during different hours, in different situations, before news, etc.). But I can't "inject" these delays into the Tester... I'm just about at the point of giving up. I don't really think there is any solution to this, the way the Tester is built. It's a shame, because it makes it much less accurate than it could have been. Anyway, any more ideas?


Sleep(200);

 
n8937g:


Sleep(200);


Sleep() doesn't work in the Tester.
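
(As the original post notes, the documentation says Sleep() is ignored by the Strategy Tester, so a call like the sketch below only throttles Live/Demo execution; it cannot inject a delay into a backtest. The 200 ms value is just the figure suggested above.)

int start()
{
   // ... send/modify/close requests here ...

   if(!IsTesting())
      Sleep(200);   // pause 200 ms between requests; the Tester ignores Sleep() anyway

   return(0);
}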

 
gordon:

Well, the main delays I'm talking about are the delays that occur during any interaction with the MT4 server (opening, closing, modifying, deleting orders, etc.). These delays are on the order of hundreds of milliseconds (rarely more than a second), while my entire start() run takes much less than that when there is no interaction... So it has nothing to do with labor-intensive functions; it has to do with the fact that any active interaction with the MT4 server takes time.

As a result, there are certain situations where the Tester greatly differs from demo/live account results. I know this because for every demo/live session I do, I take the history data from that session (the exact data saved to disk in M1) and run my expert in the Tester using the same inputs, starting from the exact same minute on the same data. One of the main differences that can be seen is that the Tester does everything instantaneously. For example, where it took the expert ~10 seconds to close about 15 orders in Demo, it took the Tester only 1 second. In another situation, where the Demo opened only 1 order, the Tester opened more... The reason is that the criterion for opening orders in the expert evaluated true more times in the Tester, simply because it finished opening the last order in zero time, while in the Demo, by the time it finished opening an order, the criterion evaluated false and so no other orders were opened.


Your solution would not achieve my goal. The reason is simple - ticks are asynchronous... some come after 100 milliseconds, some come after a few minutes... So skipping a fixed number of ticks is not a good solution. At different times of day, or in different situations, the average tick rate greatly differs. Not to mention that I still don't know how the Tester decides how many ticks to interpolate in one minute. That's why I want to do it in milliseconds. I have a lot of statistics about how much time interaction with the server takes (during different hours, in different situations, before news, etc.). But I can't "inject" these delays into the Tester... I'm just about at the point of giving up. I don't really think there is any solution to this, the way the Tester is built. It's a shame, because it makes it much less accurate than it could have been. Anyway, any more ideas?

Yes, I see what you're saying. It's all about simulating latency - not in real time, but in terms of its actual impact.

Agreed about the asynchronous nature of incoming ticks; I was thinking that as I posted - i.e. that you'd only ever get a very crude improvement, if any, from the approach I mentioned of arbitrarily disregarding a number of ticks after performing certain types of calls.

However, I don't think anything more scientific is possible. If it is, I'd like to hear about it.

 
cloudbreaker:

Yes, I see what you're saying. It's all about simulating latency - not in real time, but in terms of its actual impact.

Agreed about the asynchronous nature of incoming ticks; I was thinking that as I posted - i.e. that you'd only ever get a very crude improvement, if any, from the approach I mentioned of arbitrarily disregarding a number of ticks after performing certain types of calls.

However, I don't think anything more scientific is possible. If it is, I'd like to hear about it.

Well, I think it might be possible if it were known EXACTLY how the ticks are interpolated within a minute.


Let's say we have a function with M1 inputs: open, close, high, low, volume. The function's output is x interpolated ticks per minute. Then we know that each tick takes an average of 60/x seconds within that specific M1 bar (does the Tester assume the ticks within that minute are synchronous, i.e. evenly spaced, or not? Also a good question). Then we could indeed skip ticks to simulate time delays.
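
(To put rough numbers on it: if the Tester generated, say, x = 2000 interpolated ticks in a given minute, each tick would represent 60/2000 = 0.03 s of simulated time on average, so a ~300 ms server delay would correspond to roughly 10 skipped ticks in that minute.)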


In theory, even if we don't know the function, we can run a tick-counting expert that just outputs the number of ticks per minute to a file... and then later we can run our expert, which reads that data to know how many ticks to skip in a given minute to simulate a certain delay...
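
A sketch of such a tick-counting expert, assuming it keys the count off the current M1 bar time and uses an arbitrary CSV file name (both choices are illustrative, not from the thread):

// Count interpolated ticks per minute and append each finished minute's
// count to a CSV file for a later test run to read back.
// (When run in the Tester the file is created under tester/files.)
int      gTickCount = 0;
datetime gMinute    = 0;

int start()
{
   datetime thisMinute = iTime(NULL, PERIOD_M1, 0);   // opening time of the current M1 bar

   if(gMinute != 0 && thisMinute != gMinute)
   {
      int handle = FileOpen("ticks_per_minute.csv", FILE_CSV|FILE_READ|FILE_WRITE, ';');
      if(handle > 0)
      {
         FileSeek(handle, 0, SEEK_END);               // append to the end of the file
         FileWrite(handle, TimeToStr(gMinute), gTickCount);
         FileClose(handle);
      }
      gTickCount = 0;
   }

   gMinute = thisMinute;
   gTickCount++;
   return(0);
}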


Here we ASSUME that the Tester interpolates ticks regardless of the expert being run, and that the interpolation depends only on the M1 data (which I guess is an acceptable assumption). But this method is simply ridiculously complicated and impractical. And we still don't know whether the interpolated ticks within a minute are evenly spaced... If they are not, then this might not be worth the trouble. Well, I'm not doing it... I think it's NOT worth the trouble.


Main problem is lack of info... How the $^*@# does the Tester work technically?!

 
gordon wrote >>

No, that's not accurate. The backtester interpolates the M1 data into finer-grained data and simulates actual ticks. As proof, try running this simple expert advisor:

int      tick;      // ticks counted in the current minute
datetime time;      // minute currently being counted

int init()
{
   tick = 0;
   time = TimeCurrent();
   return(0);
}

int start()
{
   tick++;
   Print("Current price = ", DoubleToStr(Bid, 5));

   // when the simulated time rolls into a new minute, report and reset
   if( TimeMinute(TimeCurrent()) != TimeMinute(time) )
   {
      Print("Number of ticks in minute = ", tick);

      tick = 0;
      time = TimeCurrent();
   }
   return(0);
}

All the expert does is count ticks and print the current price as proof that the price actually changed. It also outputs and resets the tick counter once a minute. Example of output (log file):

10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.39900
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40200
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40100
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40000
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.39900
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40100
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40300
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40600
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40800
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41000
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40900
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40800
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40700
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40900
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41100
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41400
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41600
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41800
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41700
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41600
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41500
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41400
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41300
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41400
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41300
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41500
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41400
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41500
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41300
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41200
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.41000
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40900
10:37:54 2009.05.11 18:58 test USDJPY,M1: Current price = 97.40700
10:37:54 2009.05.11 18:59 test USDJPY,M1: Current price = 97.40800
10:37:54 2009.05.11 18:59 test USDJPY,M1: Number of ticks in minute = 2064

You can clearly see that prices change many times within a minute (a price change is the definition of a tick, no?) and that the backtester generates up to about 3000 ticks per minute (from what I've seen, anyway).

Ok, so let's say I want to simulate code that closes all open orders. Let's say we have 10 of those. In Live/Demo, even if we have an excellent internet connection and a very fast broker with very fast servers, this would still take at least 1 second to perform; a more realistic figure is ~2-5 seconds. In the backtester this always takes almost zero time. Obviously this is a big difference which affects simulation accuracy... So how do I "inject" millisecond delays after each close operation into the backtester?
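
For reference, a close-all routine of the kind being described might look like the sketch below (the function name and the simplified ticket/slippage handling are illustrative). In Live/Demo each OrderClose() call is a separate server round trip of a few hundred milliseconds; in the Tester the whole loop completes in effectively zero simulated time:

// Sketch: close every open market order for the current symbol.
void CloseAllOrders()
{
   for(int i = OrdersTotal() - 1; i >= 0; i--)
   {
      if(!OrderSelect(i, SELECT_BY_POS, MODE_TRADES)) continue;
      if(OrderSymbol() != Symbol()) continue;
      if(OrderType() != OP_BUY && OrderType() != OP_SELL) continue;

      RefreshRates();
      double price = (OrderType() == OP_BUY) ? Bid : Ask;
      OrderClose(OrderTicket(), OrderLots(), price, 3);   // each call blocks for one server round trip
   }
}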

Gordon, from what I understand this data is only attainable through the use of an EA. Do you know of an EA I can install to get access to this data on my Meta? Thanks for the help.

 
gordon:

As far as I can tell, GetTickCount() does not work in the Tester (if I'm wrong then PLEASE correct me). So my question is, how can I "inject" a delay into the Tester? For example, let's say I want a more realistic OrderClose() function - obviously in Live/Demo closing an order takes time - a few hundred milliseconds. Is there a way to put this delay into the Tester? I've tried using GetTickCount() to do this, but it doesn't seem to work in the Tester (and Sleep() doesn't work either, as mentioned in the documentation). Any way to achieve this?


Anybody know a better source of history data than downloading it from MetaQuotes in History Center (in the MT4 program)...?

Hi Gordon,

Have you tried any of the various 'tick collector' routines/code/apps?

I have a related problem in that I need to calculate the absolute angle or slope of a price vs. time relationship with the finest resolution that I can, on MANY charts simultaneously, and it HAS to stay consistent regardless of the [auto] scaling of the chart and/or variable latency, lags, dropouts, skipped/missed ticks, etc.

 
DougRH4x:

Hi Gordon,

Have you tried any of the various 'tick collector' routines/code/apps?

I am currently collecting ticks and building FXT files based on those ticks to optimize. Obviously these are more accurate, since the ticks are not interpolated but are actual ticks. But this method does not solve any of the problems this thread discusses (which is an old thread - I know a lot more now than I did back then)... There is no way to inject millisecond delays, since the Tester itself works on seconds only. On the other hand, there is a way to inject second-level delays, but it's inaccurate. You can't actually decide how much to inject; all you can do is skip a tick, and by doing that you simulate a "delay" of the number of seconds between that tick and the next tick. Since you can't control how many seconds are between ticks, this is an inaccurate method and will only work on average. Still, I have done some error-simulation tests and it does lead to some useful results, at least in theory.
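
A sketch of that second-granularity workaround, under my assumption (not stated in the thread) that one simply discards incoming ticks until the desired number of seconds of simulated time has passed; the actual injected delay then runs on to whenever the next accepted tick happens to arrive, which is exactly the inaccuracy described above:

// In the Tester, ignore incoming ticks until at least gDelaySeconds of
// simulated time has elapsed since the last simulated server interaction.
// The names are illustrative.
datetime gResumeAt     = 0;
int      gDelaySeconds = 1;      // the Tester's time resolution is one second

int start()
{
   if(IsTesting() && TimeCurrent() < gResumeAt)
      return(0);                 // this tick falls inside the simulated delay

   // ... trading logic ...

   // after a successful server interaction, e.g. OrderClose()/OrderSend():
   // if(IsTesting()) gResumeAt = TimeCurrent() + gDelaySeconds;

   return(0);
}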


DougRH4x:

I have a related problem in that I need to calculate the absolute angle or slope of a price vs. time relationship with the finest resolution that I can, on MANY charts simultaneously, and it HAS to stay consistent regardless of the [auto] scaling of the chart and/or variable latency, lags, dropouts, skipped/missed ticks, etc.

Sorry, I don't understand what you want to do or how it relates to this thread. Maybe you can explain more clearly...

 
gordon:

I am currently collecting ticks and building FXT files based on those ticks to optimize. Obviously these are more accurate, since the ticks are not interpolated but are actual ticks. But this method does not solve any of the problems this thread discusses (which is an old thread - I know a lot more now than I did back then)... There is no way to inject millisecond delays, since the Tester itself works on seconds only. On the other hand, there is a way to inject second-level delays, but it's inaccurate. You can't actually decide how much to inject; all you can do is skip a tick, and by doing that you simulate a "delay" of the number of seconds between that tick and the next tick. Since you can't control how many seconds are between ticks, this is an inaccurate method and will only work on average. Still, I have done some error-simulation tests and it does lead to some useful results, at least in theory.


Sorry, I don't understand what you want to do or how it relates to this thread. Maybe you can explain more clearly...

Perhaps you can reiterate your (original) thread topic and/or just what it is that you are trying to accomplish and what the problem is? This seems like an obvious answer, but you say the maximum resolution is 1 second and you want more. Will the algorithm accept fractions of a second while still being stated in seconds? E.g. 0.25 seconds?

Another approach would be to take your data points and multiply them all by a constant factor such as 10, 100 or 1,000, then do your calculations, then divide your resultant answer by your original scaling factor. This may still leave you with the original problem of a minimum resolution of 1 second, so perhaps you can just work in data sets that are multiplied by 1,000, as you are wanting to use milliseconds.

 
DougRH4x:

Perhaps you can reiterate your (original) thread topic and/or just what it is that you are trying to accomplish and what the problem is? This seems like an obvious answer, but you say the maximum resolution is 1 second and you want more. Will the algorithm accept fractions of a second while still being stated in seconds? E.g. 0.25 seconds?

Nope, there is no way around it.


DougRH4x:

Another approach would be to take your data points and multiply them all by a constant factor such as 10, 100 or 1,000, then do your calculations, then divide your resultant answer by your original scaling factor. This may still leave you with the original problem of a minimum resolution of 1 second, so perhaps you can just work in data sets that are multiplied by 1,000, as you are wanting to use milliseconds.

That might be possible in theory but quite impractical. I would rather just switch to another testing platform.
