AMD or Intel as well as the memory brand

 
joo >> :

You are right, mmm, although I am a supporter of the AMD camp. The first two tests in the script do a good job of showing the raw computing power of a single core on code compiled by the MT platform. Both tests are deliberately tiny so that they don't "touch" RAM, for the purity of the experiment. The third test is designed as a memory-write test and involves no special calculations.
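
For illustration only, a pair of tests of that kind might look roughly like this in MQL5 (the loop counts, array size and messages here are my own example, not the actual script):

// Compute-only test: pure arithmetic on a tiny working set, so RAM is not touched.
// Memory-write test: filling a large array, with no real calculations.
void OnStart()
  {
   uint start=GetTickCount();
   double x=1.0;
   for(int i=0; i<10000000; i++)
      x=x*1.0000001+0.5;                  // stays entirely in registers/L1
   Print("Compute test: ",GetTickCount()-start," ms (x=",x,")");

   double buf[];
   ArrayResize(buf,4000000);              // ~32 MB, far larger than any L2 cache
   start=GetTickCount();
   for(int i=0; i<4000000; i++)
      buf[i]=i;                           // sequential writes straight to memory
   Print("Memory-write test: ",GetTickCount()-start," ms");
  }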

In order to use all the processor cores, the test logic must allow the calculations to be parallelised, and the compiler and the MT platform itself must allow it too. As far as I know (unless I am mistaken), the MQL5 language, like MQL4, provides no facilities for expressing parallelism in code. A pity.

Later I will post an updated script that runs more "smoothly" and outputs a final performance index relative to my processor (AMD Athlon X2 3800).

To Kombat -> processing graphical objects doesn't really need hardware power: even handling tens of thousands of objects per second won't affect performance, and in the tester it isn't really needed anyway. IMHO.



Speed is only critical during optimisation, so that is what the tests should measure. Well, I've already said it all: a standard Expert Advisor, a quote archive bundled with it, and a *.set file with the set of parameters to optimise. That's all.

 

Yes. I'm not a supporter of anyone's camp. I still like the Zilog Z80! ))) Because of my childhood: I designed its board myself, assembled and soldered it...

By the way, the new AMD is very attractive. The cache is fine, the frequency is normal too, and AMD has always been very good at working with memory. Intel only recently moved the memory controller onto the processor, while AMD did it first.

But I don't remember whether the cache can be redistributed under partial load, so that the loaded core is allocated a larger share.

 
Svinozavr >> :

Speed is only critical during optimisation, so that is what the tests should measure. Well, I've already said it all: a standard Expert Advisor, a quote archive bundled with it, and a *.set file with the set of parameters to optimise. That's all.

Maybe you could put together such a test? Whoever is interested would be happy to run it. ;)

 
Svinozavr >> :

You can check it by "cutting" the script down to a single section - it may then fit into your 256 KB.

Amazing!

I took your advice and tried each kind of operation separately on the same computer. This is what I got:

Total: just over 37 milliseconds!

So what conclusion should we draw from this? That code size affects performance? But why so strongly? You remove just a couple of lines of code and get this much of an effect.
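
In case anyone wants to repeat it, the per-section timing was done roughly like this (Section1/Section2/Section3 are placeholders for the script's actual blocks; the bodies below are only stand-ins):

// Stand-in section bodies; the real script's blocks are different.
void Section1() { double x=0;   for(int i=0; i<1000000; i++) x+=i;      }
void Section2() { double x=1.0; for(int i=0; i<1000000; i++) x*=1.0001; }
void Section3() { double buf[]; ArrayResize(buf,1000000); for(int i=0; i<1000000; i++) buf[i]=i; }

void OnStart()
  {
   uint t0=GetTickCount();
   Section1();
   uint t1=GetTickCount();
   Section2();
   uint t2=GetTickCount();
   Section3();
   uint t3=GetTickCount();
   Print("Section1: ",t1-t0," ms; Section2: ",t2-t1," ms; Section3: ",t3-t2,
         " ms; total: ",t3-t0," ms");
  }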

 


 
begemot61 >> :


Man, I don't get it at all. The L2 cache is 2 MB and the processor frequency is 3.8 GHz. Why is it lagging behind Svinozavr's Celeron?

The only weak spot is the RAM, at 266 MHz versus his 400.

begemot61, could you please test it section by section, as I did above? It will be very interesting to see the results for a 3.8 GHz processor.

 
begemot61 >> :


Mmmm... Well, the memory is slow, okay. But firstly, it's not that slow, and secondly, it shouldn't have anything to do with it. So what is it then?

Could you, like benik, run it section by section? The picture very much resembles a quarter-megabyte cache...
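
If anyone wants to check the cache hypothesis directly, a rough probe like this (my own sketch, not the thread's script) shows it: walk arrays of growing size and watch the time for a fixed amount of work jump once the working set no longer fits in L2.

// Probe the effective cache size: the time for a fixed amount of work
// grows sharply once the working set exceeds the L2 cache.
void OnStart()
  {
   for(int kb=64; kb<=4096; kb*=2)
     {
      int n=kb*1024/8;                    // number of doubles in 'kb' kilobytes
      double buf[];
      ArrayResize(buf,n);
      for(int i=0; i<n; i++) buf[i]=1.0;
      int passes=100000000/n;             // keep total work roughly constant
      uint start=GetTickCount();
      double sum=0;
      for(int p=0; p<passes; p++)
         for(int i=0; i<n; i++)
            sum+=buf[i];
      Print(kb," KB working set: ",GetTickCount()-start," ms (sum=",sum,")");
     }
  }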

 
Mathemat >> :

Oh, boy. I'm extremely surprised. I didn't realise the old Celeron was so fast...


I guess that's the point: it's not that old. It's built on 45 nm technology, whereas mine is the old one, on 90 nm, and begemot61's is 90 nm as well. Could that be the reason for the lag?
 
There is another thing: Hyper-Threading (two virtual cores) may interfere. If the BIOS allows it, or with special software, you can disable it.
 
benik >> :


I think that's the point: it's not that old. It's built on 45 nm technology, whereas mine is the old one, on 90 nm, and begemot61's is 90 nm as well. Maybe that's what's causing the lag.

All three models are different. Maybe it's the branch predictor logic - it keeps improving, and the Pentium 4 in question is the oldest of the three. And the branch predictor directly affects cache efficiency. It's still not entirely clear, though.

Here's another thing: the cache is split into an instruction cache and a data cache. Maybe, again, that split is "crooked" in the Pentium 4?

In short, although it's not completely clear, it looks very much like a cache-efficiency problem. There is, however, another parameter - the number of operations per clock - but that should be the same here. No, I don't think that's it.

Quite a riddle, though!!! ))
