submitted 1 year ago* (last edited 1 year ago) by Benjamin@jlai.lu to c/compsci@lemmy.ml

Hello, I'm trying to understand what has slowed down the progress of CPUs. Is it a strategic/political choice, or a real technical limitation?

[-] BartyDeCanter@lemmy.sdf.org 0 points 1 year ago

It’s a real technical limitation. The major driver of CPU progress was the ability to make transistors smaller, so you could pack more in without needing so much power that they burn themselves up. However, we’ve reached the point where they are so small that it is getting extremely difficult to make them any smaller, because of physics. Sure, there is and will continue to be progress in shrinking transistors even further, but it’s just damn hard.
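To put a rough shape on the power problem (a standard first-order model, with symbols I'm introducing here rather than figures from above), the dynamic power of CMOS logic is approximately:

```latex
P_{\text{dynamic}} \approx \alpha \, C \, V_{dd}^{2} \, f
```

where α is the switching activity, C the switched capacitance, V_dd the supply voltage, and f the clock frequency. Classic Dennard scaling shrank C and V_dd along with the transistors, so power density stayed flat even as f rose. Once V_dd could no longer be lowered (leakage current takes over), raising f just pours more heat into the same area, which is why clock rates have been stuck in the same few-GHz range for well over a decade.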

[-] Benjamin@jlai.lu 0 points 1 year ago

Quite right; the argument seemed coherent to me until I looked at the performance of recent GPUs. There, the limits no longer seem to exist.

I think it's a legitimate question to ask. My hypothesis is that the industry is trying to restrict the computing power of consumer machines (for military/defence interests?), but the very large market for video games and 3D graphics, on the contrary, is constantly demanding more computing power, and hardware manufacturers are obliged to keep up with that demand.

What confuses me, I think, is that about 15 years ago I read a serious technical article that described a 70 GHz CPU core prototype.

[-] BartyDeCanter@lemmy.sdf.org 1 points 1 year ago

First, GPUs and CPUs are very different beasts. GPU workloads are by definition highly parallel, so a GPU has an architecture where it is much easier to just throw more cores at the problem, even if you don't make each core any faster. That makes the power problem less severe. Take a look at GPU clock rates vs CPU clock rates.
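To illustrate (a minimal sketch in CUDA, with made-up names, not code from any real renderer): a typical GPU workload touches every element independently, so extra cores translate directly into extra throughput.

```cuda
#include <cstdio>

// Embarrassingly parallel GPU workload: every output element is
// independent, so adding more cores directly adds throughput.
__global__ void scale(float *out, const float *in, float k, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
    if (i < n)
        out[i] = in[i] * k;  // no dependency on any other element
}

int main() {
    const int n = 1 << 20;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.0f;

    // Launch enough threads to cover all elements; the GPU spreads them
    // over however many cores the chip happens to have.
    scale<<<(n + 255) / 256, 256>>>(out, in, 2.0f, n);
    cudaDeviceSynchronize();

    printf("out[0] = %f\n", out[0]);  // prints 2.0
    cudaFree(in);
    cudaFree(out);
    return 0;
}
```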

CPU workloads tend to have much, much less parallelism, so there are diminishing returns on adding more cores.
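The usual way to quantify that diminishing return is Amdahl's law (a textbook result, not my own figures): if a fraction p of the work can run in parallel, the best possible speedup on n cores is

```latex
S(n) = \frac{1}{(1 - p) + \frac{p}{n}}
```

With p = 0.95, the ceiling is 20x no matter how many cores you add. GPU-friendly workloads sit near p ≈ 1; typical CPU workloads don't.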

Second, GPUs have started to show lower year-over-year lift. Your chart is almost a decade out of date. Take a look at, say, a 2080 vs a 3080 vs a 4080 and you'll see that the overall compute lift per generation is shrinking.
