submitted 1 week ago by mothasa@x69.org to c/technology@beehaw.org

Taalas HC1: 17,000 tokens/sec on Llama 3.1 8B vs Nvidia H200's 233 tokens/sec. 73x faster at one-tenth the power. Each chip runs ONE model, hardwired into the transistors.
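The headline ratio can be sanity-checked directly from the figures in the post (the throughput and "one-tenth the power" numbers come from the article; the tokens-per-joule comparison below is a back-of-the-envelope sketch, not a measured result):

```python
# Sanity-check the headline speedup claim using the article's figures.
taalas_tps = 17_000   # Taalas HC1 tokens/sec on Llama 3.1 8B (from the article)
h200_tps = 233        # Nvidia H200 tokens/sec (from the article)

speedup = taalas_tps / h200_tps
print(f"speedup: {speedup:.0f}x")  # ~73x, matching the claim

# If the HC1 also draws one-tenth the power, the tokens-per-joule
# advantage would be roughly speedup * 10 (illustrative arithmetic only):
efficiency_gain = speedup * 10
print(f"energy efficiency: ~{efficiency_gain:.0f}x tokens per joule")
```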

[-] dieICEdie@lemmy.org 6 points 1 week ago

This would be great if you could have a machine that let you swap chips… and then they only charged < 50 USD for each chip.

[-] BarbecueCowboy@lemmy.dbzer0.com 4 points 1 week ago

Would be great, but feels unlikely, most of the gains they're making rely on the lack of versatility.

[-] boonhet@sopuli.xyz 2 points 1 week ago

Can't be that cheap, unfortunately, if they maxed out the die area. Though it is an older node, so maybe not as expensive as flagship GPU chips and shit

[-] dieICEdie@lemmy.org 2 points 1 week ago

That’s all technology though, sadly.

[-] FurryMemesAccount 2 points 1 week ago

This one feels shorter-lived than the average chip, tho.

With the hardwiring and all.

[-] MagicShel@lemmy.zip 3 points 1 week ago

The thing that differentiates ChatGPT and Claude is probably more the RAG pipeline that backs them and feeds them context. The models themselves really aren't getting better; we're just getting better at using them to break tasks down into units so small that the AI can figure them out. I'd bet a GPT 5 or Claude Opus 4.6 model would last 5, maybe 10 years before you really start to notice its capabilities falling behind. I'll bet you could use GPT 4o for 5-10 years and it would be fine.
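For readers unfamiliar with the term: a "RAG pipeline" in its simplest form just retrieves relevant documents and prepends them to the prompt before the model ever sees it. A minimal sketch (the keyword-overlap scoring and the example documents are illustrative assumptions; real pipelines use vector embeddings and a proper retriever, and this is not any vendor's actual implementation):

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Scoring by shared words is a toy stand-in for embedding similarity.

def score(query: str, doc: str) -> int:
    """Toy relevance score: number of shared lowercase words."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def build_prompt(query: str, docs: list[str], top_k: int = 2) -> str:
    """Retrieve the top_k most relevant docs and prepend them as context."""
    ranked = sorted(docs, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "The HC1 hardwires one model into silicon.",
    "Bees communicate by dancing.",
    "Hardwired chips trade flexibility for speed.",
]
print(build_prompt("Why are hardwired chips fast?", docs))
```

The point being made in the comment: the model is a fixed component at the end of a pipeline like this, so a chip frozen to one model could stay useful as long as the surrounding retrieval layer keeps improving.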

[-] dieICEdie@lemmy.org 1 points 1 week ago

But if they could make it so the chip is the only thing that becomes obsolete, that could be recycled pretty easily, or resold.

[-] FurryMemesAccount 1 points 1 week ago

Then it would stop being 73 times faster than NVIDIA.

[-] FurryMemesAccount 1 points 1 week ago

If you add levels of indirection, extra transistors and such, it would be surprising if it maintained the same level of performance, especially since this design seems to rely on hardwiring to achieve its speed...

[-] dieICEdie@lemmy.org 1 points 1 week ago

Pretty sure the advantage is the AI directly on the chip.

[-] FurryMemesAccount 1 points 1 week ago* (last edited 1 week ago)

Now it's your proposal's turn not to make any sense. This is an article about a chip with a hardwired model being super fast.

Of course the hardwiring is inflexible, and much, much faster.

[-] dieICEdie@lemmy.org 1 points 1 week ago

I just think you want to argue

this post was submitted on 27 Feb 2026
102 points (100.0% liked)
