AMD or Intel as well as the memory brand

 
Docent >> :

If we are talking specifically about SSDs (not USB flash drives or the various SD/MMC/CF cards, etc.), modern controllers schedule write operations so that the load is distributed evenly across all cells. That is, even if the same file is overwritten over and over, the physical cells involved are almost always different. So even ~10,000 erase cycles (for MLC) will last a very long time.
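A minimal toy sketch of the wear-leveling idea (the cell count and the least-worn-first policy are made up for illustration; this is not any real controller's algorithm):

```python
# Toy illustration of wear leveling: every logical overwrite is redirected to
# the least-worn physical cell, so no single cell soaks up all the erase
# cycles. A simplification, not a real controller's algorithm.
NUM_CELLS = 8
wear = [0] * NUM_CELLS                   # erase-cycle count per physical cell

def write_block():
    # Pick the physical cell with the fewest erase cycles so far.
    target = min(range(NUM_CELLS), key=lambda i: wear[i])
    wear[target] += 1
    return target

# Overwrite "the same file" 80 times: wear spreads evenly, ~10 cycles per
# cell instead of all 80 landing on one cell.
for _ in range(80):
    write_block()
print(wear)                              # [10, 10, 10, 10, 10, 10, 10, 10]
```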

In particular, Intel guarantees for its M-series SSDs that the drive can write up to 100 GB per day for 5 years. True, the warranty is "only" 3 years, but we are unlikely to be writing 100 GB per day in testing anytime soon.
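A quick back-of-the-envelope check of what that figure implies (the 80 GB capacity is an assumption for illustration; the ~10,000-cycle MLC rating is the one quoted above):

```python
# What the quoted endurance figure implies in total writes.
total_gb = 100 * 365 * 5        # 100 GB/day for 5 years = 182,500 GB (~178 TB)
print(total_gb)

# For an assumed 80 GB MLC drive rated at ~10,000 erase cycles per cell,
# ideal wear leveling yields 80 GB * 10,000 = 800,000 GB of raw endurance,
# comfortably above the 182,500 GB implied by the 100 GB/day figure.
print(80 * 10_000)
```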

As for the speed of accessing files of ANY size (big or small), modern SSDs can be inferior to a modern HDD only in linear (sequential) operations, and even there their speed is already good enough. In random access (the main bottleneck), meanwhile, even server-grade 15,000 RPM HDDs cannot come anywhere close to them: access times differ by orders of magnitude, and the same goes for the number of I/O operations per second.
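For anyone who wants to see the gap for themselves, here is a rough sketch of such a comparison in Python (the file path is hypothetical, and the OS page cache will hide true device latency unless the file is much larger than RAM):

```python
import os, random, time

# Rough comparison of sequential vs random 4 KB reads. Illustrative only.
PATH = "testfile.bin"      # hypothetical pre-created test file, several GB
BLOCK = 4096
blocks = os.path.getsize(PATH) // BLOCK
n = min(blocks, 10_000)    # cap the number of reads in each pass

with open(PATH, "rb") as f:
    t0 = time.perf_counter()
    for i in range(n):                           # sequential pass
        f.seek(i * BLOCK)
        f.read(BLOCK)
    seq = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(n):                           # random pass
        f.seek(random.randrange(blocks) * BLOCK)
        f.read(BLOCK)
    rnd = time.perf_counter() - t0

print(f"sequential: {seq:.3f} s   random: {rnd:.3f} s")
# On an HDD the random pass is typically far slower (seek + rotational delay);
# on an SSD the two are close, which is the point made above.
```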

So the only limiting factor is price.

Wow, how detailed... thanks for the primer - I didn't know that) I'd only read about this out of the corner of my eye and don't have such deep knowledge... Thanks, maybe now I'll save up for an SSD without fear =)


By the way, according to reviews, the first Eee PC had a rubbish SSD... so in terms of speed and quality I don't think they were as good as....

 

Tested the laptop: Intel Mobile Core 2 Duo P8600 @ 2.4 GHz, 3072 KB L2 cache, 4 GB DDR2 PC-6400.

Script test: 44.99 s; optimization: 201 s; by ticks: 29:46.

With the terminal in RAM it shows 45.14 s, optimization 185 s, and 29:38 by ticks.


To Olga_trader: for testing I use SuperSpeed RamDisk 9.0.1.0, the first one I came across. In both cases I created an 800 MB disk, because the tester generates the history in advance. One year is about 524 MB, so 5 years is roughly 2.5 GB; add the history folder, which I've already bloated to 1.85 GB, and I still need to leave 3-4 GB of RAM for the terminal and Linux. That totals about 8 GB. The performance gain from all this is questionable, and in any case I can't work comfortably with this setup.
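A quick check of those sizing figures (the 3.5 GB for terminal plus OS is an assumed midpoint of the "3-4 GB" above):

```python
# Sizing check for the RAM-disk setup above, using the figures from the post.
per_year_mb = 524
history_5y_gb = per_year_mb * 5 / 1024   # ~2.56 GB of tester history
history_folder_gb = 1.85                 # already-accumulated history folder
terminal_and_os_gb = 3.5                 # assumed midpoint of the "3-4 GB"

total = history_5y_gb + history_folder_gb + terminal_and_os_gb
print(f"{total:.1f} GB")                 # ~7.9 GB, hence the 8 GB total
```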


If I run 2 terminals at once on my little Athlon, the by-minutes test gives 188 s and 166 s - take the higher result. By ticks I get 1:23:24 and 1:18:23 (it was 1:09:06 on one core): RAM ran short and the computer started actively using the hard drive.

On the laptop, optimization by minutes gives 113 s and 96 s; by ticks, 20:26 and 16:48.

 

Yep, Imp120, I've put the laptop test into the table.

If I run 2 terminals at once on my little Athlon, the by-minutes test gives 188 s and 166 s - take the higher result. By ticks, 1:23:24 and 1:18:23 (it was 1:09:06 on one core): RAM ran short and the computer started actively using the hard drive.

On the laptop, optimization by minutes gives 113 s and 96 s; by ticks, 20:26 and 16:48.

This is where I don't get it. Which results should I leave in the table? And how is it that running two terminals improves the results?

 
Mathemat, sorry, I didn't express myself precisely. I split the optimization across two terminals: 102 passes in one and the same number in the other. The values of the second variable were 1 and 4 in the first and 7 and 10 in the second. I thought it would be 2 times faster, but it came out 1.5 times ) It just makes me think that, at equal cost, optimization runs faster on two computers than on one powerful one.
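The arithmetic behind the ~1.5x, using the laptop tick-mode times quoted earlier in the thread: the wall-clock time of the split run is the slower of the two halves, and the halves were not equally heavy:

```python
# Wall-clock speedup of the split run = single-terminal time divided by the
# MAXIMUM of the two concurrent halves (laptop tick-mode times from the thread).
single = 29 * 60 + 46     # one terminal, all passes: 1786 s
half_a = 20 * 60 + 26     # terminal 1 (second variable = 1 and 4): 1226 s
half_b = 16 * 60 + 48     # terminal 2 (second variable = 7 and 10): 1008 s
print(single / max(half_a, half_b))   # ~1.46x: uneven halves, shared resources
```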
 

Imp120, by the way, I noticed that your laptop and my desktop are very close in terms of tabular performance. And the results are similar (yours is faster, but not by much).

 
Imp120 >>:
Mathemat, sorry, I didn't express myself precisely. I split the optimization across two terminals: 102 passes in one and the same number in the other. The values of the second variable were 1 and 4 in the first and 7 and 10 in the second. I thought it would be 2 times faster, but it came out 1.5 times ) It just makes me think that, at equal cost, optimization runs faster on two computers than on one powerful one.

It would be faster, beep-beep, if the developers would stop beep-beeping around and make a proper multithreaded tester. Otherwise, beep-beep, we have to multiply terminals like beep-beep just to keep the cores busy.

And characteristically, the main obstacle to using multiple cores is usually poor parallelizability of the algorithm. The tester is perfectly parallelizable, which we demonstrate in an idiotic way by running several copies of it. But that problem is everyone else's, not our dear developers'! They have a different problem, judging by their reaction: qualified staff. OOP, obviously, is more important! (By the way, not a single piece of code published for MT5 uses OOP - it's all procedural!)
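Since every tester pass is independent, a minimal sketch of what a multithreaded optimizer conceptually does (in Python, since the terminal exposes no such control; run_pass and the 10x10 grid are hypothetical stand-ins):

```python
from itertools import product
from multiprocessing import Pool

def run_pass(params):
    # Hypothetical stand-in for a single tester pass: a real one would run
    # the EA over history with these parameter values and return its score.
    a, b = params
    return params, a * 10 + b        # dummy score

if __name__ == "__main__":
    # Every pass is independent of every other pass, which is exactly why
    # the optimizer is trivially parallelizable.
    grid = list(product(range(1, 11), range(1, 11)))   # made-up 10x10 grid
    with Pool() as pool:             # one worker per CPU core by default
        results = pool.map(run_pass, grid)
    print(max(results, key=lambda r: r[1]))
```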

Pardon me - the optimization has been running for 20 hours now, and that's what's got me on edge.

 

I have this stupid and naive hypothesis that this is what the developers are trying to do with the tester. The strategy tester button is still disabled, and there's a reason for that, isn't there?

 
Mathemat >> :

I have this stupid and naive hypothesis that this is what the developers are trying to do with the tester. The strategy tester button is still disabled, and there's a reason for that...

Optimism is the essence of a cheerful, life-affirming outlook.

>> It has been said repeatedly and unequivocally that there will not be one.

 

In the beginning, the developers also said that there would be no inheritance.

And that there would be no objects in indicators.

But that's not how it turned out...

P.S. Well, what you say is true: the tester (or rather the optimizer) is the easiest thing to parallelize. I wonder whether it will be possible to control this process, i.e. to specify directly how parameter sets are distributed among the cores?
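Nothing says the platform will expose such control, but as a hypothetical sketch, explicitly assigning parameter sets to workers (rather than letting a pool hand them out) could look like this; run_pass is again a dummy stand-in, and the four values are the ones from Imp120's earlier post:

```python
from multiprocessing import Process, Queue

def run_pass(v):
    # Dummy stand-in for one tester pass with parameter value v.
    return v * v

def worker(chunk, out):
    # Each worker receives a fixed, pre-assigned slice of the parameter sets.
    for v in chunk:
        out.put((v, run_pass(v)))

if __name__ == "__main__":
    cores = 2
    values = [1, 4, 7, 10]               # second-variable values from the post
    chunks = [values[i::cores] for i in range(cores)]  # round-robin: [1,7], [4,10]
    out = Queue()
    procs = [Process(target=worker, args=(c, out)) for c in chunks]
    for p in procs:
        p.start()
    results = [out.get() for _ in values]   # drain before joining to avoid blocking
    for p in procs:
        p.join()
    print(results)
```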

 

God willing...

As for objects, or rather their absence in indicators, that one is clear: the idiocy of the situation was obvious.

By the way, has the new build been released yet?
