In the last variant I made a check for it as well. Whoever needs it can use it.
Right. But how do we know that the submitted algorithms do not leave any gaps? The checksum doesn't prove it. Neither does the element count. After all, the function counts the elements that were there before the array was resized.
In almost all of the presented variants this is handled without problems by making a query with NULL.
There is another requirement for the algorithm: the correct placement of elements inside the array after the unnecessary ones are removed. That check should be performed first; only then should speed be checked.
The code shows what each variant returns: some shift elements within the array, some place the result into another array.
P.S. And just for the record: my function has some checks, but they are not used. It already has all of this and more.
I am not questioning the professionalism of the participants or their placings. I merely pointed out a defect in the checksum check, and also the need for additional verification that the elements are arranged correctly in the new array.
If all of that is correct, then I deservedly took second-to-last place.
In my practice, I rarely think about the speed of specific operations. I'm more concerned with conciseness and clarity of the solution. It came as a surprise to me that this entry
if(Arr[q]==val){deleted++; q--;}
could be slow.
But if you add one more criterion for evaluating the algorithms, Solution Compactness, I'm probably in first place.
If you combine the two criteria, Speed and Compactness, and calculate the average score, I take a higher place in the table.
Fedoseyev's version, though, is even more concise than mine.
Your version of the main loop:
and this is Fedoseyev's:
Both variants do the same thing. Whose is more concise?
His. Instead of ... he has
for(;i<sz;i++)
It's more concise.)
It's just the same for the compiler.
You simply have a lot of unnecessary operations, so your version works slower than everyone else's (I'm not even considering the first variant from the topic).
Fedoseyev's code has one check, one assignment and one increment per loop pass (not counting the checks and increments that organize the loop itself).
You, in contrast, have two checks, one addition of two variables, three increments and one assignment.
All hope is for the compiler to optimise, otherwise ArraySize is executed at every iteration.
Yes, the compiler apparently checks this: if the array's size does not change inside the loop, it replaces the function call with a single value and computes it only once.
At any rate, if you do this:
the execution time of the function will not change.
So for more compactness it makes sense to write it exactly the way Peter did it :)
But I agree, personally it bothers me too. It feels like the function will be called on every iteration.
imho it's better not to give the compiler a choice)
Let me ask you a question:
if(count>6) { ArrayCopy
"more than six": is that value obtained by scientific gut feeling, or is there a justification for it?)
Yes, that's exactly it. The method of scientific gut feeling. Actually, it's from 5 to 8 according to my observations. You could automate this by auto-tuning the number each time; after all, it may differ between processors and systems.
For example, if you change all the ArrayCopy and ArrayFill calls in the CCanvas class according to this principle, you can get a nice gain in canvas speed.