186
submitted 9 months ago by L4s@lemmy.world to c/technology@lemmy.world

Firm predicts it will cost $28 billion to build a 2nm fab and $30,000 per wafer, a 50 percent increase in chipmaking costs as complexity rises::As wafer fab tools get more expensive, so do fabs and, ultimately, chips. A new report claims that

top 32 comments
[-] toiletobserver@lemmy.world 114 points 9 months ago

Or, and hear me out, we could just write less shitty software...

[-] the_q@lemmy.world 42 points 9 months ago

You're right. This is the biggest issue facing computing currently.

[-] tailiat@lemmy.ml 17 points 9 months ago

The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

[-] go_go_gadget@lemmy.world 11 points 9 months ago

> The ratio of people who are capable of writing less-shitty software to the number of things we want to do with software ensures this problem will not get solved anytime soon.

Eh, I disagree. Every software engineer I've ever worked with knows how to make some optimizations to their code base. But it's literally never prioritized by the business. I suspect this will shift as IaaS takes over and it becomes a lot easier to generate the graphs showing that your product's stability was maintained while the resources it consumes were reduced.

[-] winterayars@sh.itjust.works 16 points 9 months ago

But what if i want to do all my work inside a JavaScript "application" inside a web browser inside a desktop?

(We really do have so much CPU power these days that we're inventing new ways to waste it...)

[-] TacoButtPlug@sh.itjust.works 14 points 9 months ago* (last edited 9 months ago)

But where's the fun in that?

[-] SynonymousStoat@lemmy.world 5 points 9 months ago* (last edited 9 months ago)

As long as humans have some hand in writing and designing software we'll always have shitty software.

[-] AA5B@lemmy.world 5 points 9 months ago

While I agree with the cynical view of humans and shortcuts, I think it’s actually the “automated” part of the process to blame. If you develop an app, there’s only so much you can code. However if you start with a framework, now you’ve automated part of your job for huge efficiency gains, but you’re also starting off with a much bigger app and likely lots of functionality you aren’t really using

[-] SynonymousStoat@lemmy.world 2 points 9 months ago

I was more getting at the fact that with software development it's never just the developers making all of the decisions. There are always stakeholders who force time and attention elsewhere and set unrealistic deadlines, while most software developers I know would love to take the time to do everything the right way first.

I also agree with the example you provided. Back when I used to work on more personal projects I loved it when I found a good minimal framework that allowed you to expand it as needed so you rarely ever had unused bloat.

[-] go_go_gadget@lemmy.world 2 points 9 months ago* (last edited 9 months ago)

If you're not using the functionality it's probably not significantly contributing to the required CPU/GPU cycles. Though I would welcome a counter example.

[-] filister@lemmy.world 51 points 9 months ago* (last edited 9 months ago)

And NVIDIA will use this as an excuse to hike up their prices by 100+%.

On a serious note, this will progressively come down in price as time passes, and not everyone needs to use cutting-edge 2nm technology. The transition to 2nm will also increase density, so comparing wafer prices without acknowledging the increased density doesn't give you the whole picture.

Plus, DRAM scaling is becoming cumbersome and a lot of components cannot scale to 2nm, so "2nm" is mostly a marketing term, and there are a lot of challenges that make this tech so expensive and difficult to design and produce.
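To make the density point concrete, here's a back-of-the-envelope sketch. All of the die counts, the yield, and the old-node wafer price are made-up illustrative numbers; only the ~$30,000/wafer figure comes from the article:

```python
# Wafer price alone doesn't tell you chip cost: a denser node fits more
# dies per wafer, which can offset a higher wafer price.

def cost_per_die(wafer_price, dies_per_wafer, yield_rate):
    """Effective cost of one good die."""
    return wafer_price / (dies_per_wafer * yield_rate)

# Hypothetical: the shrink lets ~50% more dies fit on the same 300mm wafer.
old = cost_per_die(wafer_price=20_000, dies_per_wafer=500, yield_rate=0.8)
new = cost_per_die(wafer_price=30_000, dies_per_wafer=750, yield_rate=0.8)

print(f"old node: ${old:.2f}/die, new node: ${new:.2f}/die")
# Both come out to $50.00/die: a 50% wafer price hike is fully offset
# by a 50% increase in dies per wafer (at equal yield).
```

With these made-up numbers the per-die cost is unchanged, which is exactly why quoting wafer prices in isolation is misleading.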

[-] drmoose@lemmy.world 30 points 9 months ago

Afaik 2nm is the theoretical limit for current transistor tech, so this is sort of the end-game for this type of tech.

[-] Earthwormjim91@lemmy.world 52 points 9 months ago* (last edited 9 months ago)

2nm process doesn’t actually mean 2nm though. Hasn’t in over a decade.

The current 3nm process has a 48nm gate pitch and a 24nm metal pitch. The 2nm process will have a 45nm gate pitch and a 20nm metal pitch.

“Nm” is just “generation” today. After 5nm was 3nm, next is 2nm, then 1nm. They’ll change the name after that even though they’re still nowhere near actual nm size.
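For what it's worth, you can see this from the pitches above: the gate-pitch × metal-pitch product (a crude proxy for logic cell area) shrinks far less than the node names suggest.

```python
# Pitches quoted above: "3nm" = 48nm gate pitch x 24nm metal pitch,
# "2nm" = 45nm x 20nm. Pitch product is a rough proxy for cell area.
n3_area = 48 * 24   # 1152 nm^2
n2_area = 45 * 20   # 900 nm^2

scaling = n3_area / n2_area
print(f"rough density gain: {scaling:.2f}x")
# ~1.28x -- nowhere near the (3/2)^2 = 2.25x the node names would imply.
```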

[-] SpaceNoodle@lemmy.world 12 points 9 months ago

Where can I read more about this?

[-] Ludrol@szmer.info 16 points 9 months ago* (last edited 9 months ago)

Depending on how in-depth you want to delve into this:

- Newsletter: semianalysis.com
- YouTube: Asianometry
- Wikipedia
- Some lithography university textbooks. Sadly I don't know which ones.

[-] weew@lemmy.ca 6 points 9 months ago

Intel already has plans to name further generations xxA, after angstroms

[-] AA5B@lemmy.world 1 points 9 months ago* (last edited 9 months ago)

Yeah I’m a bit curious what the marketing will be as they have to get more vertical, 3D. Will there be naming to reflect that or will they just follow existing naming, 0.5nm?

[-] terminhell@lemmy.world 4 points 9 months ago

I didn't think the ~5nm limit could be broken due to quantum tunneling.

[-] crazyminner@lemmy.ml 18 points 9 months ago

The nm number is just the smallest feature on the wafer. It's not actually the transistor size.

[-] BetaDoggo_@lemmy.world 9 points 9 months ago

They solved this problem by making the nanometer bigger.

https://en.m.wikipedia.org/wiki/5_nm_process

[-] OrangeCorvus@lemmy.world 19 points 9 months ago

Your device will be 11% faster and the battery will last 6% longer, but it will dramatically change the way you interact with your device.

[-] AA5B@lemmy.world 5 points 9 months ago* (last edited 9 months ago)

If it’s enough to run on-device AI, it’s a win. Imagine autocorrect being able to mangle your texting without ever connecting to the cloud. Huge privacy win.

With the goggles coming soon, I think they’ll focus chip improvements on GPU and neural engine to better support that

[-] ExLisper@linux.community 10 points 9 months ago

Autocorrect doesn't send anything to the cloud, it's just a dictionary. If your keyboard is sending your texts to the cloud you have to change your keyboard, not run AI. AI doesn't do autocorrect, it could maybe do word suggestions but would be super inefficient at it and probably not much better than current methods.

I'm writing this on a 22 nm CPU and the letters appear hella fast.
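A dictionary-based corrector along those lines really is just edit distance against a local word list. A minimal sketch (the word list and function names are made up for illustration):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic Levenshtein distance via dynamic programming, one row at a time."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def autocorrect(word: str, dictionary: list[str]) -> str:
    """Return the dictionary word closest to the typo. No cloud, no AI."""
    return min(dictionary, key=lambda w: edit_distance(word, w))

words = ["letter", "letters", "appear", "hella", "fast"]
print(autocorrect("leter", words))  # letter
```

That runs in microseconds on any CPU from the last couple of decades, which is the point: this is not a workload that needs new silicon.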

[-] Yearly1845@reddthat.com 5 points 9 months ago

I've read that analog chips will take over AI computations

[-] billwashere@lemmy.world 5 points 9 months ago

And cost 4000% more.

[-] profdc9@lemmy.world 4 points 8 months ago

Not so far-fetched:

"I predict in 100 years computers will be twice as powerful, 10,000 times larger, and only the five richest kings of Europe will own one."

https://www.youtube.com/watch?v=ykxMqtuM6Ko

[-] Sensitivezombie@lemmy.zip 2 points 9 months ago

Use that money to speed up the development of quantum computing so it will make these transistor chips obsolete

[-] QuadratureSurfer@lemmy.world 7 points 9 months ago

Quantum computing wouldn't make these transistors obsolete.

Quantum computing is only really good at very specific types of calculations. You wouldn't want it being used for the same type of job that the CPU handles.

[-] ExLisper@linux.community 3 points 9 months ago
[-] profdc9@lemmy.world 3 points 8 months ago

Quantum computers are only useful where you don't deliberately want decoherence. Decoherence happens when you erase a bit, for example when you overwrite a memory bit with a new value; this requires dissipating energy and interacting with the outside world to reject the heat of the calculation. While in principle a quantum computer can do any calculation that a classical computer can, it would not be useful unless it was observed, and that happens pretty much every time a logic gate output flips in a classical computer.

[-] QuadratureSurfer@lemmy.world 2 points 9 months ago

I know you're joking, but I feel like answering anyway.

I'm sure you could get it to do that if you forced that through engineering, but it wouldn't be anywhere near as efficient as just using a CPU.

CPUs need to be able to handle a large number of instructions quickly one after the next, and they have to do it reliably. Think of a CPU as an assembly line, there are multiple stages for each instruction, but they are setup so that work is already happening for the next instruction at each step (or clock cycle). However, if there's a problem with one of the stages (or a collision) then you have to flush out the entire assembly line and start over on all of the work among all of the stages. This wouldn't be noticeable at all to the user since the speed of each step/clock cycle is the speed of the CPU in GHz, and there are only a few stages.
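That assembly-line picture can be sketched as a toy cycle count (hypothetical and wildly simplified; real pipelines are far more complex):

```python
# Toy model of the pipeline analogy: one instruction retires per cycle in
# steady state, but a hazard flushes everything in flight and the pipeline
# must refill from scratch.

def run_pipeline(n_instructions: int, stages: int = 5, mispredicts=()) -> int:
    """Total cycles to retire n_instructions through a simple in-order pipeline."""
    cycles = stages - 1  # initial fill of the assembly line
    for i in range(n_instructions):
        if i in mispredicts:
            cycles += stages   # flush: pay the full pipeline depth again
        else:
            cycles += 1        # steady state: overlapping stages hide the depth
    return cycles

print(run_pipeline(100))                                    # 104
print(run_pipeline(100, mispredicts=set(range(0, 100, 10))))  # 144
```

Even in this toy, 10 flushes out of 100 instructions cost ~40% more cycles, which is why a "worker" whose results constantly need re-checking would wreck the line.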

Just like how GPUs are excellent at specific use cases, quantum processing will be great at solving complex problems very quickly. But, compared to a CPU handling the mundane every day instructions, it would not handle this task well. It would be like having a worker on the assembly line that could do everything super quickly... but you would have to take a lot more time to verify that the worker did everything right, and there would be a lot of times that things were done wrong.

So, yeah, you could theoretically use quantum processing for running vim... but it's a bad idea.

[-] BetaDoggo_@lemmy.world 4 points 9 months ago

Quantum computing is useless in most cases because of how fragile and inaccurate it can be, due in part to the near-zero temperatures it's required to operate at.

this post was submitted on 23 Dec 2023