this post was submitted on 26 Nov 2025
418 points (100.0% liked)
PC Gaming
And who's going to be powering that NPC's LLM? Unless all you want is a free hotlinked chatbot window disguised as a character? Because the publishers and developers sure as hell won't power it on their end, and if they do you'll be paying out the ass for it. Otherwise that LLM for an NPC will have to run locally on your own hardware... in addition to the game itself.
So yeah, have fun with that.
And dialogue generation is ALL they can do, btw. They can't navigate a character around a 3D environment or even play against you in a grand strategy game. So, looking at RAM and GPU prices... yeah, the novelty of LLMs in games will run its course pretty quick.
It'd be a small model run locally, taking up maybe half a GB of VRAM.
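The "half a GB" figure is plausible as back-of-envelope arithmetic. A rough sketch, assuming (illustratively, not from the thread) a ~1B-parameter model quantized to 4 bits per weight, counting only the weights and ignoring KV cache and runtime overhead:

```python
def model_vram_gib(num_params: float, bits_per_weight: int) -> float:
    """Approximate memory footprint of model weights in GiB."""
    # bits -> bytes (/8), bytes -> GiB (/1024^3)
    return num_params * bits_per_weight / 8 / (1024 ** 3)

# Hypothetical small NPC model: 1 billion params at 4-bit quantization.
size = model_vram_gib(1e9, 4)
print(f"{size:.2f} GiB")  # prints "0.47 GiB" -- roughly half a GiB
```

The real footprint would be somewhat higher once context cache and the inference runtime are included, but it stays in the same ballpark.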
Bruh, that's like 25-50% on an Nvidia card. Too much overhead! /s
In theory they could offer a setting to offload it to the NPU if one is available.
Basically the same situation as it was with ray tracing.