388
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
this post was submitted on 16 May 2025
388 points (100.0% liked)
Technology
70048 readers
3513 users here now
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech related news or articles.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
- Check for duplicates before posting, duplicates may be removed
- Accounts 7 days and younger will have their posts automatically removed.
Approved Bots
founded 2 years ago
MODERATORS
Yeah, billionaires are just going to randomly change AI around whenever they feel like it.
That AI you've been using for 5 years? Wake up one day, and it's been lobotomized into a trump asshole. Now it gives you bad information constantly.
Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?
Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?
Joke's on you, LLMs already give us bad information
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had someone call in, furious because ChatGPT told them that he was having a sale that she couldn't find. The customer didn't believe him when he said that the promotion didn't exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
"Unintentionally" is the wrong word, because it attributes the intent to the model rather than the people who designed it.
Hallucinations are not an accidental side effect, they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system that is designed to map out a human-sounding path from a given system prompt to a particular query is going to take those same shortcuts that people used in its training data.
Unintentionally is the right word because the people who designed it did not intend for it to be bad information. They chose an approach that resulted in bad information because of the data they chose to train and the steps that they took throughout the process.
Incorrect. The people who designed it did not set out with a goal of producing a bot that reguritates true information. If that's what they wanted they'd never have used a neural network architecture in the first place.
Yep, I knew this from the very beginning. Sadly the hype consumed the stupid, as it always will. And we will suffer for it, even though we knew better. Sometimes I hate humanity.
That's a good reason to use open source models. If your provider does something you don't like, you can always switch to another one, or even selfhost it.
Or better yet, use your own brain.
Yep, not arguing for the use of generative AI in the slightest. I very rarely use it myself.
While true, it doesn't keep you safe from sleeper agent attacks.
These can essentially allow the creator of your model to inject (seamlessly, undetectably until the desired response is triggered) behaviors into a model that will only trigger when given a specific prompt, or when a certain condition is met. (such as a date in time having passed)
https://arxiv.org/pdf/2401.05566
It's obviously not as likely as a company simply tweaking their models when they feel like it, and it prevents them from changing anything on the fly after the training is complete and the model is distributed, (although I could see a model designed to pull from the internet being given a vulnerability where it queries a specific URL on the company's servers that can then be updated with any given additional payload) but I personally think we'll see vulnerabilities like this become evident over time, as I have no doubts it will become a target, especially for nation state actors, to simply slip some faulty data into training datasets or fine-tuning processes that get picked up by many models.
I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.
Pretty sad that the current state would be considered "good"
With accuracy rates declining over time, we are at the 'as good as it gets' phase!
If that's the case, where's Jack Nicholson?