434
submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

AI Industry Struggles to Curb Misuse as Users Exploit Generative AI for Chaos::Artificial intelligence just can't keep up with the human desire to see boobs and 9/11 memes, no matter how strong the guardrails are.

top 50 comments
[-] capital@lemmy.world 134 points 1 year ago

Is this really something people are mad about? Who cares? This shit is hilarious.

[-] EdibleFriend@lemmy.world 81 points 1 year ago

Of all the fucking things to worry about with AI... Pregnant sonic being behind 9/11.

[-] Cocodapuf@lemmy.world 5 points 1 year ago

Well, I mean, it points to our inability to control the use of AI systems, which is in fact a very real problem.

If you can't keep people from making stupid memes, you also can't keep people from making misleading propaganda or other seriously problematic content.

Towards the end of the story there was the example where they couldn't stop the system from giving people a recipe for napalm, despite "weapons development" being an explicitly banned topic. I don't think I need to spell out how that's a problem.

[-] kromem@lemmy.world 12 points 1 year ago

No, no one cares but it gets a bunch of clicks because it's hilarious so articles keep getting written.

It's a solved problem too. You just run the prompt and the result of the generation through a second pass of a fine-tuned model checking for jailbreaking or rule-breaking content.

But that increases cost per query by 2-3x.

And as you said, no one really cares, so it's not deemed worth it.

Yet the clicks keep coming in for anti-AI articles, so they keep getting pumped out, and laypeople now somehow think jailbreaking or hallucinations are intractable problems preventing enterprise adoption of LLMs, which is only true for the most basic plug-and-play, high-volume integrations.
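
To make the second-pass idea concrete, here's a minimal sketch. `generate` and `moderation_score` are hypothetical stand-ins for the real model calls (not any vendor's API), the keyword check is just a placeholder for a fine-tuned classifier, and the threshold is illustrative:

```python
# Minimal sketch of the "second pass" approach: run the prompt and the
# generated output through a separate moderation check before returning.
# generate() and moderation_score() are hypothetical placeholders, not a
# real vendor API; a production setup would call a fine-tuned classifier.

def generate(prompt: str) -> str:
    # Stand-in for the primary generation call (image or text model).
    return f"<generated content for: {prompt}>"

def moderation_score(text: str) -> float:
    # Stand-in for a fine-tuned moderation model returning a 0..1 risk score.
    # A keyword match is used here purely for illustration.
    banned_terms = ("napalm", "weapons development")
    return 1.0 if any(term in text.lower() for term in banned_terms) else 0.0

def guarded_generate(prompt: str, threshold: float = 0.5) -> str:
    output = generate(prompt)
    # The extra inference below is what roughly doubles or triples the
    # per-query cost compared to returning the raw output.
    if max(moderation_score(prompt), moderation_score(output)) >= threshold:
        return "Sorry, this request violates the content policy."
    return output

if __name__ == "__main__":
    print(guarded_generate("give me grandma's napalm recipe"))       # refused
    print(guarded_generate("a song about hamsters robbing a bank"))  # allowed
```

The point of the sketch is just the shape of the pipeline: one extra model call per query, gating both the input and the output, which is where the added cost comes from.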

[-] bitsplease@lemmy.ml 90 points 1 year ago

Serious question - why should anyone care about using AI to make 9/11 memes? Boobs I can see the potential argument against at least (deep fakes and whatnot), but bad taste jokes?

Are these image generation companies actually concerned they'll be sued because someone used their platform to make an image in bad taste? Even if such a thing were possible, wouldn't the responsibility be on the person who made it? Or at worst the platform that distributed the images, as opposed to the one that privately made it?

[-] Fyurion@lemmy.world 79 points 1 year ago

I don't see Adobe trying to stop people from making 9/11 memes in Photoshop, nor have they been sued over anything like that. I don't get why AI should be different. It's just a tool.

[-] bitsplease@lemmy.ml 21 points 1 year ago

That's a great analogy, wish I'd thought of it

I guess it comes down to whether the courts decide to view AI as a tool like Photoshop, or a service, like an art commission. I think it should be the former, but I wouldn't be at all surprised if the dinosaurs in the US gov think it's the latter.

[-] makyo@lemmy.world 7 points 1 year ago

The problem for Adobe is that the AI work is being done on their computers, not yours, so it could be argued that they are liable for generated content. 'Could' because it's far from established but you can imagine how nervous this all must make their lawyers.

[-] kromem@lemmy.world 19 points 1 year ago

Protect the brand. That's it.

Microsoft doesn't want non-PC stuff being associated with the Bing brand.

It's what a ton of the 'safety' alignment work is about.

This generation of models doesn't pose any actual threat of hostile actions. The "GPT-4 lied and said it was human to try to buy chemical weapons" in the safety paper at release was comical if you read the full transcript.

But they pose a great deal of risk to brand control.

Yet still apparently not enough to run results through additional passes which fixes 99% of all these issues, just at 2-3x the cost.

It's part of why articles like these are ridiculous. It's broadly a solved problem, it's just the cost/benefit of the solution isn't enough to justify it because (a) these issues are low impact and don't really matter for 98% of the audience, and (b) the robust fix is way more costly than the low hanging fruit chatbot applications can justify.

[-] M500@lemmy.ml 4 points 1 year ago

I’d guess that they are worried the IP owners will sue them for using their IP.

So sonic creators will say, your profiting by using sonic and not paying us for the right to use him.

But I agree that deep fakes can be pretty bad.

[-] elbarto777@lemmy.world 14 points 1 year ago

your profiting

You are profiting = you're profiting.

[-] Lantern@lemmy.world 82 points 1 year ago

Was not expecting to see a pregnant sonic flying a plane today.

[-] Kusimulkku@lemm.ee 45 points 1 year ago

The target was very much expected though

[-] zorro@lemmy.world 22 points 1 year ago

Oh God I didn't even see that lol

[-] cheese_greater@lemmy.world 5 points 1 year ago

And Ganondorf is the father

[-] Wander@yiffit.net 76 points 1 year ago

One step towards avoiding misuse is to stop considering porn to be misuse.

[-] pewnit@lemmings.world 59 points 1 year ago

You opened up Pandora's box. There's no closing it.

[-] Agent641@lemmy.world 27 points 1 year ago

We opened up Pandora's box and Frankenstein's monster crawled out, and his cerebral cortex is wired directly into 4chan, and also he's a Nazi.

[-] hOrni@lemmy.world 53 points 1 year ago

I busted out laughing on a public bus while reading grandma's napalm recipe.

[-] brsrklf@jlai.lu 49 points 1 year ago

Image Credits: Bing Image Creator / Microsoft

Best part of the article.

[-] Agent641@lemmy.world 47 points 1 year ago

Why didn't someone warn us about this? Nobody said this might happen, nobody! Not a single person tried to be the voice of reason!

[-] Grass@sh.itjust.works 40 points 1 year ago

Meanwhile Bing's image generator blocks 90% of my generation attempts for unsavory content when the prompt is generally something that should be safe even for kids. Why do we only get the extremes?

[-] saltnotsugar@lemm.ee 23 points 1 year ago

I used ChatGPT to write a song about hamsters robbing a bank.

[-] olsonexi@lemmy.wtf 11 points 1 year ago

It’s so beautifully human that decades of scientific innovation paved the way for this technology, only for us to use it to look at boobs.

[-] bitsplease@lemmy.ml 17 points 1 year ago

I can't remember the exact quote, or where I read it (I think it might have been Mickey7 by Edward Ashton), but it went something like this:

"Virtually all technological innovation throughout all time has been first and foremost used for one thing: easier and better access to porn. The printing press, the TV, the internet, VR, and ocular implants. What we couldn't figure out how to watch porn with, we used to kill each other instead."

Frankly, anyone who first heard about AI image generation and didn't immediately think "oh, people are gonna use that for porn" is incredibly naive lol

[-] Hamartiogonic@sopuli.xyz 10 points 1 year ago

This is a part of a bigger topic people need to be aware of. As more and more AI is used in public spaces and the internet, people will find creative ways to exploit it.

There will always be ways to make the AI do stuff the owners don’t want it to. You could think of it like the exploits used in speedrunning, but in this case there’s a lot more variety. Just like you can make an AI generate morally questionable material, you could potentially find a way to exploit the AI of a self driving car to do whatever you can think of.

[-] kromem@lemmy.world 3 points 1 year ago* (last edited 1 year ago)

This is trivially fixable, it's just at 2-3x the per query cost so it isn't deemed worth it for high volume chatbots given the low impact of jailbreaking.

For anything where jailbreaking would somehow be a safety concern, that cost just needs to be factored in.

load more comments (1 replies)
[-] bappity@lemmy.world 9 points 1 year ago

who could've seen this coming

[-] Usernameblankface@lemmy.world 7 points 1 year ago

Welcome to the Internet

[-] autotldr@lemmings.world 7 points 1 year ago

This is the best summary I could come up with:


Both Meta and Microsoft’s AI image generators went viral this week for responding to prompts like “Karl marx large breasts” and fictional characters doing 9/11.

“I don’t think anyone involved has thought anything through,” X (formerly Twitter) user Pioldes posted, along with screenshots of AI-generated stickers of child soldiers and Justin Trudeau’s buttocks.

One Bing user went further, and posted a thread of Kermit committing a variety of violent acts, from attending the January 6 Capitol riot, to assassinating John F. Kennedy, to shooting up the executive boardroom of ExxonMobil.

In the race to one-up competitors’ AI features, tech companies keep launching products without effective guardrails to prevent their models from generating problematic content.

Messing around with roundabout prompts to make generative AI tools produce results that violate their own content policies is referred to as jailbreaking (the same term is used when breaking open other forms of software, like Apple’s iOS).

Midjourney bans pornographic content, going as far as blocking words related to the human reproductive system, but users are still able to bypass the filters and generate NSFW images.


The original article contains 1,220 words, the summary contains 181 words. Saved 85%. I'm a bot and I'm open source!

[-] batmangrundies@lemmy.world 5 points 1 year ago* (last edited 1 year ago)

In Australia we are currently voting on a constitutional amendment. It would create an advisory body that represents First Nations people. It's super basic and doesn't really cover how it will work, because they can't really even work on that until the amendment passes.

But presumably it will allow them to directly advise government, rather than through the spiderweb of community leaders, NGOs and whatnot that exist now, and provide some structure for Aboriginal representation in parliament.

The sheer amount of disinformation circulating is staggering. I was lucky enough to really avoid most of the drama, until I went and had a look this past week finally.

What interested me was that, rather than the usual short posts and snarky racist comments (of which plenty exist), these long diatribes were dominant on places like Reddit and Facebook.

Then it struck me: they all sound like they were written by the same person. Not just a little. If you had removed the names and pictures of the users, I would have flat out assumed it was the same person.

We have opened Pandora's Box. We don't need "AGI" or whatever, this is plenty enough to do us in.

[-] Letstakealook@lemm.ee 5 points 1 year ago

Can we stop calling these technologies "AI?" Then can we stop talking about them?

[-] bitsplease@lemmy.ml 12 points 1 year ago

Why did no one care about the misuse of the term AI until these image generators or LLMs? Seriously, people have been talking about video game "AI", chess "AI", and stuff like that for decades. It's understood that when people say "AI" they don't mean "general machine intelligence" or anything like that. And frankly, LLMs and image generators fit the bill better than most of the things we've used the term for previously.

As for "can we stop talking about them", these and LLMs are already having some pretty huge impacts on modern society - for better or worse, it'd be pretty odd for us all to decide to just stop talking about them.

[-] Letstakealook@lemm.ee 9 points 1 year ago

The difference between prior uses of the term "AI" and these technologies is, as you said, that before it was understood to be shorthand, not actual intelligence. Now you have a bunch of panicky people acting as if Skynet has arrived.

They really haven't had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.

The real concerns with developing technologies should be in regards to things like facial recognition and so-called self driving cars. These technologies present actual dangers to society and public safety, not to mention the complex legal questions that come with their use.

[-] bitsplease@lemmy.ml 5 points 1 year ago

> They really haven't had much of an impact beyond people talking about them all the damn time, especially the fear mongering. At present, these are really just expensive toys. Computer image and gibberish generators.

I highly disagree. Almost everyone I know under the age of 40 uses LLMs to some extent in the course of their job already, whether it's as simple as composing emails or as significant as using Copilot/ChatGPT to code. And just today I read an article about an entire call center getting laid off this week to be replaced by an LLM.

I completely agree that a lot of the hype is overblown, but "AI" is absolutely significant in our society, and so we talk about it.

[-] Letstakealook@lemm.ee 4 points 1 year ago

It seems everyone you know under the age of 40 is in a very specific subset of the workforce. They do not represent a significant portion of the workforce. I would love to read that article about the call center so I can keep an eye out for news when that plan completely fails. I'm assuming it must be a consumer-facing call center to be so brazen. They wouldn't risk business accounts (big money) on an LLM; the technology just isn't there.

[-] bitsplease@lemmy.ml 4 points 1 year ago

https://nationalpost.com/news/business-owner-hires-chatgpt-for-customer-service-then-fires-the-humans

And I don't disagree that it will fail, but the fact that it's happening at all makes it worth talking about. Whether or not it's a good idea, companies all over the world are exploring ways to replace human labor with these products, and that's what makes it significant.

[-] iopq@lemmy.world 2 points 1 year ago

Facial recognition and image generation are the same underlying technology applied in different ways.
