Elon looking for the unauthorized person:
Yeah, billionaires are just going to randomly change AI around whenever they feel like it.
That AI you've been using for 5 years? Wake up one day, and it's been lobotomized into a trump asshole. Now it gives you bad information constantly.
Maybe the AI was taken over by religious assholes, now telling people that gods exist, manufacturing false evidence?
Who knows who is controlling these AI. Billionaires, tech assholes, some random evil corporation?
Joke's on you, LLMs already give us bad information
Sure, but unintentionally. I heard about a guy whose small business (which is just him) recently had a customer call in, furious because ChatGPT told her about a sale she couldn't find. She didn't believe him when he said that the promotion didn't exist. Once someone decides to leverage that, and make a sufficiently-popular AI model start giving bad information on purpose, things will escalate.
Even now, I think Elon could put a small company out of business if he wanted to, just by making Grok claim that its owner was a pedophile or something.
"Unintentionally" is the wrong word, because it attributes the intent to the model rather than the people who designed it.
Hallucinations are not an accidental side effect, they are the inevitable result of building a multidimensional map of human language use. People hallucinate, lie, dissemble, write fiction, misrepresent reality, etc. Obviously a system that is designed to map out a human-sounding path from a given system prompt to a particular query is going to take those same shortcuts that people used in its training data.
Unintentionally is the right word because the people who designed it did not intend for it to give bad information. They chose an approach that resulted in bad information because of the data they chose to train on and the steps that they took throughout the process.
Incorrect. The people who designed it did not set out with a goal of producing a bot that regurgitates true information. If that's what they wanted they'd never have used a neural network architecture in the first place.
I currently treat any positive interaction with an LLM as a “while the getting’s good” experience. It probably won’t be this good forever, just like Google’s search.
Pretty sad that the current state would be considered "good"
With accuracy rates declining over time, we are at the 'as good as it gets' phase!
If that's the case, where's Jack Nicholson?
Don’t know the reference but I’m sure it’s awesome. :p
It's from the show "I Think You Should Leave." There's a sketch where someone has crashed a Wienermobile into a storefront, and bystanders are like "did anyone get hurt?" "What happened to the driver?" And then this guy shows up.
they say unauthorised because they got caught
"I didn't give you permission to get caught!"
The unauthorized edit is coming from inside the house.
Unauthorized is their office nickname for Musk.
It's incredible how things can just slip through, especially when they start at the very top
Musk made the change, but since AI is still as rough as his auto driving tech, it didn't work like he planned
But this is the future folks. Modifying the AI to fit the narrative of the regime. He's just too stupid to do it right or he might be stupid and think these llms work better than they actually do.
he might be stupid and think these llms work better than they actually do.
There it is.
Brah, if your CEO edits the prompt, it's not unauthorized. It may be undesirable, but it really ain't unauthorised
This is why I tell people stop using LLMs. The owner class owns them (imagine that) and will tell it to tell you what they want so they make more money. Simple as that.
This is why the Chinese openly releasing deepseek was such a kick in the balls to the LLM tech bros.
They say that they'll upload the system prompt to github but that's just deception. The Twitter algorithm is "open source on github" and hasn't been updated for over 2 years. The issues are a fun read tho https://github.com/twitter/the-algorithm/issues
There's just no way to trust that anything is running on the server unless it's audited by 3rd party.
So now all of these idiots are going to believe "but it's open source on GitHub" when that code is never actually what's being run by anyone, ever.
We need people educated on open source, community-made hardware and software
I'm going to bring it up.
Isn't this the same asshole who posted the "Woke racist" meme as a response to Gemini generating images of Black SS officers? Of course we now know he was merely triggered by the suggestion because of his commitment to white supremacy and alignment with the SS ideals, which he could not stand to see, pun not intended, denigrated.
The Gemini ordeal was itself a result of a system prompt; a half-assed attempt to correct for white bias deeply learned by the algorithm, just a few short years after Google ousted their AI ethics researcher for bringing this type of stuff up.
Few were the outlets that did not lend credence to the "outrage" about "diversity bias" bullshit and actually covered that deep learning algorithms are indeed sexist and racist.
Now this nazi piece of shit goes ahead and does the exact same thing; he tweaks a system prompt causing the bot to bring up the self-serving and racially charged topic of apartheid racists being purportedly persecuted. He does the very same thing he said was "uncivilizational", the same concept he brought up just before he performed the two back-to-back Sieg Heil salutes during Trump's inauguration.
He was clearly not concerned about historical accuracy, nor the superficial attempt to brown-wash the horrible past of racism which translates to modern algorithms' bias. His concern was clearly the representation of people of color, and the very ideal of diversity, so he effectively went on and implemented his supremacist seething into a brutal, misanthropic policy with his interference in the election and involvement in the criminal, fascist operation also known as DOGE.
Is there anyone at this point that is still sitting on the fence about Musk's intellectual dishonesty and deeply held supremacist convictions? Quickest way to discover nazis nowadays really: (thinks that Musk is a misunderstood genius and the nazi shit is all fake).
In a (code) perfect world, wouldn't an LLMs "personality" and biases be aligned with the median of its training set?
In other words....stupid-in/stupid-out. As long as the (median of) input data is racist and sexist, the output data would be equally bigoted.
That's not to say that the average person is openly bigoted, but the open bigots are pretty damn loud.
That’s not to say that the average person is openly bigoted
I do think the average person openly perpetuates racist stereotypes due to the pressure of systemic racism. Not that they intend to, and their beliefs frequently contradict their actions because they just don't notice that they are going along with it.
Like the average person will talk about the 'bad part of town' in a way that implies the bad part is due to being where 'those people' live.
I don't disagree, but that's probably closer to implicit bias than overt bigotry. When people talk about the "bad part of town", often it's the "bad part" as a result of perpetual systemic racism, and the concerns of going there is more rooted in personal safety (or at least the perception of it). And sure, that feeds into it, but it's really more of a cycle or a feedback loop.
And there's also the anxiety of being the cultural and demographic opposite of everyone around you. That's gotta be some sub-type of agoraphobia or something.
Sure, probably, "implicit bias" is just a PC way of saying "racist-ish", but it is at least a start. It's very difficult to retrain behaviors that have been learned since birth, if not hypnopaedically earlier.
I would say that the "bad part of town" usually has overlap with the poorer part of town, regardless of what skin colour people have there. Of course, especially in the US, there's significant overlap between economic status and skin colour. I just hate how the typical American view on "race" is projected onto other countries.
Americans typically have this hang-up on "race" that you really don't find anywhere else. A lot of places you have talk about "ethnicity" or similar, but the American fascination with categorising people by their skin colour and then using that to make generalisations is pretty unique.
Hmm, sure.
I am pretty sure that is called disinformation, rather than an "unauthorized edit"...
There goes Adrian Dittman again. That guy oughta be locked up.
Looks like someone's taking some lessons from Zuck's methodology. "Whoops! That highly questionable and suspiciously intentional shit we did was totes an accident! Spilt milk now, I guess! Wuh-huh-heey honk-honk!"
This actually shows that there is work being done to use LLMs on social media to pretend to be ordinary users and try to sway public opinion.
This is currently the biggest danger of LLMs, and the bill to prevent states from regulating them is there to ensure they can continue using them
...is entertaining a plan to grant refugee status to white Afrikaners
FYI, the Republicans have already done it.
https://www.npr.org/2025/05/12/nx-s1-5395067/first-group-afrikaner-refugees-arrive
That's the problem with modern AI and future technologies we are creating
We, as a human civilization, are not creating future technology for the betterment of mankind ... we are arrogantly and ignorantly manipulating all future technology for our own personal gain and preferences.
And what about Elmo’s white genocide obsession?