I think it could work for giving secondary characters dynamic and varied responses, given good prompts and other guardrails to preserve the immersion. As long as the core elements of the game are not AI-generated slop, and developers are honest about where it was used.
You'd think that's the one thing LLMs should be good at – having characters respond to arbitrary input, in character, according to the game state. Unfortunately, restricting output to strictly match the game state is mathematically impossible with LLMs; hallucinations are inevitable and can cause characters to randomly start lying or talking about things they can't know about. Plus, LLMs are very heavy on resources.
There are non-generative AI techniques that could be interesting for games, of course – especially ones that can afford to run at a slower cadence, on the order of seconds or tens of seconds. For example, something that makes characters dynamically re-plan their medium-term actions based on the situation every once in a while could work well. But I don't think we're going to see useful AI-driven dialogue anytime soon.
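For what it's worth, here is a minimal sketch of the kind of slow-cadence, non-generative technique gestured at above: a utility-style re-planner that re-picks an NPC's medium-term plan every few seconds. The world-state fields, plan names, and utility functions are all invented for illustration, not taken from any particular game or from this thread.

```python
import random
from dataclasses import dataclass

# Hypothetical world state for one NPC; all fields are made up for illustration.
@dataclass
class WorldState:
    hunger: float    # 0.0 (full) .. 1.0 (starving)
    danger: float    # 0.0 (safe) .. 1.0 (under attack)
    shop_open: bool

# Candidate medium-term plans with hand-written utility functions.
# A real game would drive these from designer data rather than hard-coding them.
PLANS = {
    "find_food":  lambda s: s.hunger,
    "take_cover": lambda s: s.danger * 2.0,            # danger dominates
    "tend_shop":  lambda s: 0.3 if s.shop_open else 0.0,
    "wander":     lambda s: 0.1,                        # fallback
}

def choose_plan(state: WorldState) -> str:
    """Pick the highest-utility plan; re-run this every few seconds of game time."""
    scored = {name: score(state) for name, score in PLANS.items()}
    return max(scored, key=scored.get)

if __name__ == "__main__":
    state = WorldState(hunger=0.4, danger=0.0, shop_open=True)
    for tick in range(3):
        print(tick, choose_plan(state))
        # Simulate the world changing between re-planning ticks.
        state.danger = random.random()
        state.hunger = min(1.0, state.hunger + 0.2)
```

Nothing here generates text; it only decides what the character tries to do next, which is cheap enough to run for many NPCs at once.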
You seem to imply we can only use the raw output of the LLM, but that's not true. We can add deterministic safeguards afterwards to reduce hallucinations and increase relevancy. For example, if you use an LLM to generate SQL, you can verify that the answer respects the data schema and the relationship graph. That's a pretty hot subject right now; I don't see why it couldn't be done for video game dialogue.
That said, I do agree that the resources it consumes may not be worth the output.
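As a rough illustration of the SQL example, here is a toy post-hoc check on model-generated SQL. The schema and the rules are invented for the sketch, and a real system would use a proper SQL parser instead of regexes; the point is only that the check itself is deterministic.

```python
import re

# Hypothetical schema for illustration; a real validator would load this
# from the database's own metadata rather than hard-coding it.
SCHEMA = {
    "npcs":     {"id", "name", "faction_id"},
    "factions": {"id", "name"},
}

def validate_sql(sql: str) -> list[str]:
    """Toy deterministic safeguard: reject statements that are not SELECTs
    or that reference tables outside the known schema."""
    problems = []
    if not re.match(r"^\s*select\b", sql, re.IGNORECASE):
        problems.append("only SELECT statements are allowed")
    # Naively collect identifiers that follow FROM/JOIN keywords.
    for table in re.findall(r"\b(?:from|join)\s+([A-Za-z_]\w*)", sql, re.IGNORECASE):
        if table.lower() not in SCHEMA:
            problems.append(f"unknown table: {table}")
    return problems

if __name__ == "__main__":
    print(validate_sql("SELECT name FROM npcs JOIN factions ON npcs.faction_id = factions.id"))
    print(validate_sql("DELETE FROM save_games"))  # rejected on both checks
```

If the check fails, the caller can retry the generation or fall back to a canned response instead of executing whatever the model produced.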
If you could define a formal schema for what appropriate dialogue options would be, you could just pick from it randomly – no need for the AI.
It would not be a schema that fully determines the output – I would guess that is impossible for natural language, and if it were possible, it might as well be used for procedural generation directly. It would just need to be enough to tell whether an LLM's output is good enough. It doesn't have to be perfect, because human output isn't perfect either.
Yeah that's kind of my point. That's a vastly more complicated thing than SQL.
But it also doesn't need to be as exact as SQL, which removes some of that complexity.
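To make the "good enough, not exact" idea concrete, here is one possible shape for such a partial check: a guardrail that only rejects generated dialogue when it clearly breaks immersion, with a canned fallback if the model keeps failing. The character names, forbidden topics, and retry loop are all invented for the example.

```python
# Hypothetical partial guardrail: it does not try to fully "verify" natural
# language, just to catch the worst hallucinations for a given character.
FORBIDDEN_TOPICS = {
    # Spoilers or meta-knowledge this character should never reference.
    "blacksmith": {"the player's real name", "the hidden traitor"},
}

def dialogue_is_acceptable(character: str, line: str) -> bool:
    """Reject a line only when it clearly breaks immersion."""
    text = line.lower()
    return not any(topic in text for topic in FORBIDDEN_TOPICS.get(character, set()))

def get_dialogue(character: str, prompt: str, generate, max_tries: int = 3) -> str:
    """`generate` stands in for whatever model call the game would actually make."""
    for _ in range(max_tries):
        line = generate(prompt)
        if dialogue_is_acceptable(character, line):
            return line
    return "Hmph. I have work to do."  # safe canned fallback

if __name__ == "__main__":
    fake_model = lambda p: "Everyone whispers about the hidden traitor these days."
    print(get_dialogue("blacksmith", "greet the player", fake_model))
```

An imperfect filter like this lets some mediocre lines through, which is the trade-off being argued for above: human dialogue writers aren't perfect either.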
I am really praying for the day corporate drops this foolish nonsense of foisting it on their companies and employees – and maybe even, *gasp*, starts enabling their teams to access and use the tools that actually help them do better, more creative work.
Because AI can fit into a lot of people's toolsets really nicely, especially in creative fields like game design. We just need to drop the idea that AI is an authoritative final answer to our design problems and instead treat it as just another tool that helps us get to those solutions.