Why not just keep it simple and let int division do its thing? There's no need to over-complicate it.
Because standard integer division simply discards the fractional part, which is the equivalent of applying the "floor()" function for positive values.
So, in order to get the equivalent of "round()", you add half the value of the divisor, just as @whroeder1 explained.
It is not "over-complicating" things, but in fact the correct way to handle it.
Under what circumstances would this ever be true?
((TimeCurrent()-TimeGMT())%3600 != 0)
Edit: other than a closed market
Under what circumstances would the modulo return anything other than 0? What am I missing? Server time always runs a whole number of hours ahead of or behind GMT, never minutes. Therefore...
((TimeCurrent()-TimeGMT())%3600 == 0)
Should always return true... unless they used different clocks. Is that what I'm missing? Do they somehow use different clocks?
As @whroeder1 explained, "there will be jitter on reading both timestamps". That slight delay can cause the two readings to be off at the very transition, for example from 23h59 to 00h00.
Let's say that you were able to read both functions simultaneously without any jitter, so that one would read 23h59 and the other would read 22h59 (a 1-hour difference between the two).
However, due to a slight delay (called "jitter" in several fields) between reading the two functions, what you actually get is 23h59 and 23h00, the time having changed during that slight delay.
How would YOU handle that jitter?
PS! The clocks being slightly out of sync can be resolved this way as well.
EDIT: Please note that jitter is caused mainly by network delays, but it can also be caused by execution delays (or even other reasons).
Actually I would say: almost always.
((TimeCurrent()-TimeGMT())%3600 == 0)
TimeCurrent is broker server time.
TimeGMT is based on local computer time.
It's very unlikely they will always be exactly synchronized. Anyway, while coding you need to take the worst-case scenario into account.
The main reason is the "out of sync" one. Highly probable.
This "jitter" is completely negligible. We are working with datetime at one-second precision, and calling a function like TimeGMT() takes a matter of microseconds. That's about one chance in 1,000,000 of catching a time shift at "second" precision. (OK, maybe 2 or 3 chances, I didn't measure.)
I disagree. Network jitter, especially in high latency connections, is the main problem!
The server time update can even lag several minutes behind (not just seconds) when network problems are more severe!
Worse even, especially when no ticks are arriving, the server time does not even update at all sometimes (I have seen that happen many times).
EDIT: Just watch the "Market Watch" panel closely for a minute and see how irregularly it keeps time!
EDIT2: Anyway, that is not really the issue here. The rounding being applied in the calculations was in fact the main point in this discussion!
That's another point entirely.
Calling TimeCurrent() doesn't imply any network access or "high latency". It's executed locally, not on the server.
Of course your last point is valid (no ticks, delays, etc.). That's why TimeCurrent() should not be used at all for such a purpose. But we are getting far from the OP's question.
Better read the documentation again:
You could say that it is most accurate during the initial stages of the OnTick() event, but it becomes increasingly stale the longer you stay in the code execution of that event.
Besides, @whroeder1's calculations have to be valid no matter where they run - be it in OnInit, OnTimer, OnStart, OnChartEvent, etc.
EDIT: I say "most accurate" when jitter is low, but if the packet was delayed, by the time it arrives and the OnTick() event occurs, it could be really out of sync with the current time on the PC.