Global recession over the end of Moore's Law - page 7

 
Vladimir:

GPU = graphics processing unit (produced mainly by Nvidia)

CPU = central processing unit (manufactured by Intel or AMD)

Both are processors. Don't you get it? Call the GPU a card or whatever else you want, but it is a processor, with 3000 cores if you have the latest model. If you have a computer, it has a GPU; read in your documentation which model you have and how many cores it has.

The CPU and the GPU are different processors; I am talking about the CPU.
 
Vladimir:

No melancholy, but fear for the future, both my own and others'...

Why the fear? What's the problem?

So there won't be new computers, so what? Everything will stay as it was before, and we won't see yet another Windows with an interface that is no longer transparent but shiny and that eats up all the newly added resources.

 
Vladimir:

GPU = graphics processing unit (produced mainly by Nvidia)

CPU = central processing unit (made by Intel or AMD)

Both are processors. Don't you get it? Call the GPU a card or whatever else you want, but it is a processor, with 3000 cores if you have the latest model. If you have a computer, it has a GPU; read in your documentation which model you have and how many cores it has.

The talk is about the CPU. The card can be one thing or another, it can be a simple card, but there will still be a CPU. Where are the CPUs with 3000 cores?
 
Dmitry Fedoseev:
The talk is about the CPU. The card can be one thing or another, it can be a simple card, but there will still be a CPU. Where are the CPUs with 3000 cores?
I am curious about that as well.
 
Dmitry Fedoseev:
The talk is about the CPU. The card can be one thing or another, it can be a simple card, but there will still be a CPU. Where are the CPUs with 3000 cores?
I don't understand the point of the argument. You argued that processors have evolved by increasing the number of cores. I agree, and gave the example of a GPU with 3000 cores. And you want to know why there is no CPU with the same number of cores, am I right? And what is it that you don't like about GPUs? The CPU may be the same or different. If you need many cores, then buy a corresponding GPU and program for it. And if you don't need many cores, then there is no point in arguing about it. My point of view is that the development of processors along the multi-core path started in 2004-2005, so it is unlikely to be a novelty after Moore's law is over. If you don't need many cores now, you won't need them after 2020 either.
 
Vladimir:
I don't understand the point of the argument. You argued that processors have evolved by increasing the number of cores. I agree, and gave the example of a GPU with 3000 cores. And you want to know why there is no CPU with the same number of cores, am I right? And what is it that you don't like about GPUs? The CPU may be the same or different. If you need many cores, then buy a corresponding GPU and program for it. And if you don't need many cores, then there is no point in arguing about it. My point of view is that the development of processors along the multi-core path started in 2004-2005, so it is unlikely to be a novelty after Moore's law is over. If you don't need many cores now, you won't need them after 2020 either.
Reminds me of the anecdote about removing tonsils... with a welding torch... through the rear end.
 
Dmitry Fedoseev:
Reminds me of the anecdote about removing tonsils... with a welding torch... through the rear end.

Is it easier to write a program for parallel CPU cores than for a GPU? The problem is the same: the programmer has to rack his brains and decide which pieces of the program can be parallelized, write special parallelizing code, and so on. Most programmers don't bother and write single-core programs without any tricks. So what is the problem here: the lack of cores, or the lack of programs that use multiple cores? I think it's the latter. Even if I give you a CPU with 3000 cores you will still write single-core programs, since there is no difference in difficulty between writing programs for 3000 cores and for 4 cores. What is needed is a new compiler that can automatically detect pieces of code that can be parallelized. But again, progress in creating such a compiler depends not on the hardware but on programmers' willingness to write such a compiler.

Throughout this thread I have been saying that the possibility of creating new hardware after 2020 is diminishing because semiconductor technology is reaching its limits in reducing the size and power consumption of transistors. New materials and new transistors are still beyond the horizon. Intel tried to produce its Knights Hill generation of processors on a 10nm process in 2016 and has postponed that generation until late 2017. Samsung, too, has problems with its 10nm process for its application processors. Already at 10nm the transistors give only a small reduction in size and power compared to 14nm, and heat dissipation becomes a big problem. A leap in technology is needed.

One of the indicators of a technology is the price per transistor. That price was falling up to 28nm, and after that it started rising exponentially. Many companies stopped at 28nm because of the price. So progress to 10nm and on to 7nm and, finally, 5nm will be accompanied not only by heat problems but also by high prices.
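
To make concrete what the "special parallelizing code" mentioned at the start of that post looks like, here is a minimal C# sketch of the manual approach: the programmer decides that the iterations of a loop are independent and splits the index range across tasks by hand. The workload (filling an array with a computed value) and the chunking scheme are illustrative assumptions, not something taken from the post.

using System;
using System.Threading.Tasks;

class ManualPartitionDemo
{
    static void Main()
    {
        var data = new double[1_000_000];
        int workers = Environment.ProcessorCount;   // one chunk per core
        int chunk = data.Length / workers;
        var tasks = new Task[workers];

        // The "racking of brains": deciding that the iterations are independent
        // and splitting the index range into per-core chunks by hand.
        for (int w = 0; w < workers; w++)
        {
            int start = w * chunk;
            int end = (w == workers - 1) ? data.Length : start + chunk;
            tasks[w] = Task.Run(() =>
            {
                for (int i = start; i < end; i++)
                    data[i] = Math.Sqrt(i) * Math.Sin(i);
            });
        }

        Task.WaitAll(tasks);
        Console.WriteLine("All chunks done.");
    }
}

The same code runs unchanged whether the machine has 4 cores or 3000; only the number of chunks reported by Environment.ProcessorCount changes, which is the point made above about the difficulty being the same regardless of core count.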

 
There is no problem with parallelization now, at least when using the CPU. In C# it is done in 3 seconds. The point is rather that there is no need for a large number of cores. A usual, average program is not much of a problem to parallelize. If we make many cores, the only benefit will be that many different programs can run at once, but that is not really needed.
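
For comparison, a minimal sketch of the "3 seconds" version this reply refers to, assuming the standard Parallel.For from System.Threading.Tasks; the array and the per-element work are again made up for the example.

using System;
using System.Threading.Tasks;

class ParallelForDemo
{
    static void Main()
    {
        var results = new double[1_000_000];

        // One call turns the ordinary loop into a parallel one: the runtime
        // splits the index range across whatever cores are present.
        Parallel.For(0, results.Length, i =>
        {
            results[i] = Math.Sqrt(i) * Math.Sin(i);
        });

        Console.WriteLine($"results[42] = {results[42]}");
    }
}

PLINQ (for example, data.AsParallel().Select(...)) offers the same kind of one-line parallelization for query-style code.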
 
Vladimir:

Is it easier to write a program for parallel CPU cores than for a GPU?

It's really hard to write efficient programs on GPUs.

In fact, there is still a whole untapped area in terms of interaction between devices and between devices and humans.

There is the area of clouds, stuffing everything and everyone into them, and of decentralization.

There is the area of intelligent assistants and devices in general.

There is the area of augmented and virtual reality.

In short, precisely because of Moore's law there will be no recession; there will be a search for new paths of development. A recession will come for a different reason.

 
Vladimir:

Is it easier to write a program for parallel CPU cores than for a GPU? The problem is the same: the programmer has to rack his brains and decide which pieces of the program can be parallelized, write special parallelizing code, and so on. Most programmers don't bother and write single-core programs without any tricks. So what is the problem here: the lack of cores, or the lack of programs that use multiple cores? I think it's the latter. Even if I give you a CPU with 3000 cores you will still write single-core programs, since there is no difference in difficulty between writing programs for 3000 cores and for 4 cores. What is needed is a new compiler that can automatically detect pieces of code that can be parallelized. But again, progress in creating such a compiler depends not on the hardware but on programmers' willingness to write such a compiler.

R doesn't have the described problems.

1. Most likely, there is no need to write programs for computationally complex algorithms at all. Everything has already been written, and if a computationally complex algorithm (for example, optimization or boosting) admits parallelism, that parallelism is already implemented, and you don't have to go looking for it anywhere. A developer starts from the substance of the problem and fits the tool to that substance, and everything in the tool is already implemented to the maximum extent. The tool has a very powerful help system, intended to make working with it less labor-intensive.

2. If you need, for example, to execute sections of a loop in parallel, parallelization tools are available. There is no need to waste your time on the syntax of such constructions and the conditions for using them.

3. A program can load not only the cores of the given computer but neighbouring computers as well, without any clouds, so to speak, just using the computers that are available.

The problems you describe arise because you start from the hardware and go through general-purpose algorithmic languages without ever reaching the problem itself. If you go the other way, from the problem to the choice of tool, you may not need to discuss the hardware at all.

That's why I repeat the conclusion I drew above: all these gigahertz have very little to do with program execution efficiency. Just as we haven't noticed the growth of gigahertz over the last 15 years, we won't notice the end of that growth either.
