Recall that the protracted discussion arose from a proposed method for creating optimal mini-algorithms. For example, the lines in this code that were obtained through this mechanism are highlighted.
Forum on trading, automated trading systems and testing trading strategies
Peculiarities of mql5 language, subtleties and techniques of work
fxsaber, 2024.05.06 23:50
I agree in general, but just the other day I had a task where TimeToStruct was called about 1,000 times per second. A small thing, of course, but a nice one.
I use caching, for example, when calculating the HMAC SHA-256 key for a REST API: it takes a long time to compute and is used several times a minute at random moments.
But a function that has 1-second precision and is called many times per second I would calculate once per second (i.e. external control). Your solution is also an option, but implementing internal caching in every such case seems like crutches.
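The "external control" idea can be sketched as follows. This is a C++ illustration, not the thread's actual MQL5 code, and all names here are invented: the caller recomputes the broken-down time only when the integer second changes, so hundreds of calls per second cost just one real conversion per second.

```cpp
#include <ctime>

// Cached result of the last conversion and the second it belongs to.
static struct tm g_cached;
static time_t    g_lastSecond = -1;

// Returns the broken-down UTC time for 'now', recomputing only when the
// integer second actually changes ("external control" of the cache).
const struct tm& ToStructOncePerSecond(time_t now)
{
    if (now != g_lastSecond)
    {
        gmtime_r(&now, &g_cached);   // the expensive conversion
        g_lastSecond = now;
    }
    return g_cached;                 // cheap path: same second as before
}
```

Callers on a fast path (e.g. a chart redraw loop) simply call `ToStructOncePerSecond(TimeCurrent())`-style code every iteration; the conversion itself runs at most once per second.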
1,000 times a second? I can't imagine how that's possible.
Well, maybe I exaggerated a bit with 1,000, but a few hundred times per second for sure. On 4K screens, maybe 1,000.
Time Scale for Dynamic Peak Charting
I think there are still tasks outside of the GUI. But for now, I agree, it's hard to imagine it.
Where are the crutches?
I don't see crutches in not recomputing what has already been computed before, even if it costs 20 extra bytes of memory allocated for static variables.
I mean that the cache has to be implemented separately for each case where it is applicable, instead of simply not running calculations when they are not necessary.
I'm not saying a cache isn't useful. Files in the OS are cached and can be accessed very quickly under frequent operations, but in a program you can and should keep frequently needed data in memory rather than in files; I gave that as an example, or an analogy.
I have already said that your method is also applicable. But you can approach it from another angle.
I don't understand which angle you mean. It seems you have not looked into the essence of the code. In this case the calculations always run; only part of them, the most costly part, reuses values stored in static variables when the input time falls on the same day as the previous input time.
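The day-level caching described here can be sketched in C++ (the thread's original is MQL5, and the real internals of TimeToStruct2100 are not shown in this excerpt, so every name below is illustrative). Only the costly year/month/day resolution is cached in static variables; hour, minute, and second are always derived arithmetically from the remainder.

```cpp
#include <ctime>

struct DateTime { int year, mon, day, hour, min, sec; };

// Converts a timestamp to a date-time struct, caching the expensive
// year/month/day part per day. Assumes t >= 0 (seconds since epoch, UTC).
DateTime TimeToStructCached(long t)
{
    static long     cachedDayStart = -1;   // midnight of the cached day
    static DateTime cachedDate;            // year/mon/day for that day

    long dayStart = t - (t % 86400);       // midnight of t's day
    if (dayStart != cachedDayStart)        // costly branch: a new day
    {
        time_t tt = (time_t)dayStart;
        struct tm g;
        gmtime_r(&tt, &g);
        cachedDate = { g.tm_year + 1900, g.tm_mon + 1, g.tm_mday, 0, 0, 0 };
        cachedDayStart = dayStart;
    }

    DateTime r = cachedDate;               // cheap branch: same day as before
    long rem = t - dayStart;
    r.hour = (int)(rem / 3600);
    r.min  = (int)((rem / 60) % 60);
    r.sec  = (int)(rem % 60);
    return r;
}
```

Unlike the once-per-second external control, this cache helps even when the inputs are arbitrary (non-current) timestamps, as long as consecutive calls tend to land on the same day.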
It is precisely because you are the author of the idea that you will defend it to the last. There's no reason to get worked up; you're the one who misunderstood me.
I'm not talking about the calculation being within the same day, but about the fact that you calculate hundreds of times per second even though the result is accurate to 1 second, so it is enough to call the function once per second (that's the other angle).
I'm not saying that a once-a-day cache isn't useful; I'm saying the example of calling the function many times a second is unfortunate.
I'll leave you to it. I don't like protracted debates, and holy wars even less.
It is hard to understand what you are talking about.
I am not the author of the idea; I only tried to speed up and extend existing functions. Ah, if we are talking about hashing, then yes, that's my idea.
I didn't start this at all.
It seems you assume that the current time is passed to this function, since you speak of 1-second accuracy. The input of this function is not the current time. To demonstrate this, I recorded the animated GIF above, which clearly shows that this function is used to build the lower date-time scale, and that TimeToStruct2100() is called hundreds of times per second to build time scales at different zoom levels. As for hashing, this function is on average about twice as fast.
Well, that's it. Thank you all, everyone is free ))
Please post a file with the final versions of these functions.