submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Google apologizes for ‘missing the mark’ after Gemini generated racially diverse Nazis

Google says it’s aware of historically inaccurate results for its Gemini AI image generator, following criticism that it depicted historically white groups as people of color.

[-] xantoxis@lemmy.world 140 points 1 year ago

I don't know how you'd solve the problem of making a generative AI accurately create a slate of images that both a) inclusively produces people with diverse characteristics and b) understands the context of what characteristics could feasibly be generated.

But that's because the AI doesn't know how to solve the problem.

Because the AI doesn't know anything.

Real intelligence simply doesn't work like this, and every time you point it out someone shouts "but it'll get better". It still won't understand anything unless you teach it exactly what the solution to a prompt is. It won't, for example, interpolate its knowledge of what US senators look like with the knowledge that all of them were white men for a long period of American history.

[-] random9@lemmy.world 49 points 1 year ago

You don't do what Google seems to have done - inject diversity artificially into prompts.

You solve this by training the AI on actual, accurate, diverse data for the given prompt. For example, for "american woman" you definitely could find plenty of pictures of American women from all sorts of racial backgrounds, and use that to train the AI. For "german 1943 soldier" the accurate historical images are obviously far less likely to contain racially diverse people in them.

If Google has indeed already done that, and then still had to artificially force racial diversity, then their AI training model is bad: it can't handle the fact that a single prompt can map to many different images, rather than just the most prominent or average image in its training set.
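A minimal sketch of the "inject diversity artificially into prompts" approach described above (this is a guess at the general technique, not Google's actual code; all names here are invented for illustration):

```python
import random

# Hypothetical middleware that rewrites user prompts before they reach
# the image model. This illustrates why a blanket rewrite misfires:
# the same rule is applied whether or not the context is historical.
DIVERSITY_TERMS = ["Black", "Asian", "Hispanic", "Indigenous"]

def rewrite_prompt(prompt: str) -> str:
    # Naive version: bolt a random descriptor onto any prompt about
    # people, with no check for historical or contextual constraints.
    if any(word in prompt for word in ("person", "soldier", "woman", "man")):
        return f"{random.choice(DIVERSITY_TERMS)} {prompt}"
    return prompt

# "american woman"      -> broadened, arguably fine for an ambiguous prompt
# "german 1943 soldier" -> historically wrong, but the same rule fires blindly
```

The failure mode is that the rewrite layer has no notion of which prompts are ambiguous (where diversity is appropriate) and which are historically constrained.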

[-] xantoxis@lemmy.world 17 points 1 year ago

Ultimately this is futile though, because you can do that for these two specific prompts until the AI appears to "get it", but it'll still screw up a prompt like "1800s Supreme Court justice" or something because it hasn't been trained on that. Real intelligence requires agency to seek out new information to fill in its own gaps; and a framework to be aware of what the gaps are. Through exploration of its environment, a real intelligence connects things together, and is able to form new connections as needed. When we say "AI doesn't know anything" that's what we mean--understanding is having a huge range of connections and the ability to infer new ones.

[-] TheGreenGolem@lemmy.dbzer0.com 11 points 1 year ago

That's why I hate that they started to call them artificial intelligence. There is nothing intelligent in them at all. They work on probability based on a shit ton of data, that's all. That's not intelligence, that's basically brute force. But there is no going back at this point, I know.

[-] TORFdot0@lemmy.world 32 points 1 year ago* (last edited 1 year ago)

Edit: further discussion on the topic has changed my viewpoint on this. It's not that it's been trained wrong on purpose and is now confused; it's that everything it's being asked is secretly being changed. It's like a child being told to make up a story by their teacher when the principal asked for the right answer.

Original comment below


They’ve purposely overridden its training to make it create more PoC. It’s a noble goal to have more inclusivity, but we purposely trained it wrong and now it’s confused. It's the same as if you lied to a child during their education and then asked them for real answers: they'll tell you the lies they were taught instead.

[-] TwilightVulpine@lemmy.world 16 points 1 year ago

This result is clearly wrong, but it's a little more complicated than saying that adding inclusivity is purposely training it wrong.

Say, if "entrepreneur" only generated images of white men, and "nurse" only generated images of white women, that wouldn't be right either; it would just be reproducing and magnifying human biases. Yet this is the sort of thing AI does a lot, because AI is a pattern-recognition tool inherently inclined to collapse data into an average, and data sets seldom have equal or proportional samples of every single thing. Human biases affect how many images we have of each group of people.

It's not even limited to image generation AIs. Black people often bring up how facial recognition technology is much spottier for them, because the training data and even the camera technology were tuned and tested mainly on white people. Usually that's not even done deliberately; it happens because of who gets to work on it and where it gets tested.

Of course, secretly adding "diverse" to every prompt is also a poor solution. The real solution here is providing more contextual data. Unfortunately, clearly, the AI is not able to determine these things by itself.
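The "collapse data into an average" effect described above can be sketched with a toy example (the labels and proportions here are invented purely for illustration):

```python
from collections import Counter

# Toy illustration of bias amplification: if a "generator" simply
# reproduces the dominant pattern in its training data, a skewed
# dataset yields a uniformly skewed output.
training_labels = ["white man"] * 90 + ["white woman"] * 6 + ["Black woman"] * 4

def generate(n: int) -> list[str]:
    # Degenerate generator that always emits the most common label --
    # an exaggeration of how pattern-matching collapses to the mode.
    mode, _ = Counter(training_labels).most_common(1)[0]
    return [mode] * n

# 90% of the training data is one group; 100% of the output is.
```

Real models sample rather than always taking the mode, but the direction of the effect is the same: imbalance in, amplified imbalance out.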

[-] FooBarrington@lemmy.world 25 points 1 year ago* (last edited 1 year ago)

I'll get the usual downvotes for this, but:

Because the AI doesn't know anything.

is untrue, because current AI fundamentally is knowledge. Intelligence fundamentally is compression, and that's what the training process does - it compresses large amounts of data into a smaller size (and of course loses many details in the process).

But there's no way to argue that AI doesn't know anything if you look at its ability to recreate a great number of facts etc. from a small amount of activations. Yes, not everything is accurate, and it might never be perfect. I'm not trying to argue that "it will necessarily get better". But there's no argument that labels current AI technology as "not understanding" without resorting to a "special human sauce" argument, because the fundamental compression mechanisms behind it are the same as behind our intelligence.

Edit: yeah, this went about as expected. I don't know why the Lemmy community has so many weird opinions on AI topics.

[-] eatthecake@lemmy.world 25 points 1 year ago

This is all the same as saying a book is intelligent.

[-] sxt@lemmy.world 16 points 1 year ago

Part of the problem with talking about these things in a casual setting is that nobody is using precise enough terminology to approach the issue so others can actually parse specifically what they're trying to say.

Personally, saying the AI "knows" something implies a level of cognizance which I don't think it possesses. LLMs "know" things the way an excel sheet can.

Obviously, if we're instead saying the AI "knows" things due to it being able to frequently produce factual information when prompted, then yeah it knows a lot of stuff.

I always have the same feeling when people try to talk about aphantasia or having/not having an internal monologue.

[-] shiftymccool@lemm.ee 12 points 1 year ago

I think you might be confusing intelligence with memory. Memory is compressed knowledge, intelligence is the ability to decompress and interpret that knowledge.

[-] thehatfox@lemmy.world 9 points 1 year ago

Knowledge is a bit more than just handling data, and in terms of intelligence it also involves understanding. I don’t think knowledge in an intelligent sense can be reduced to summarising data to keywords, and the reverse.

In those terms an encyclopaedia is also knowledge, but not in an intelligent way.

[-] redcalcium@lemmy.institute 15 points 1 year ago

Easy, just add "no racism please, except for nazi-related stuff" into the ever expanding system prompt.

[-] kautau@lemmy.world 7 points 1 year ago* (last edited 1 year ago)

And for the source of this:

https://twitter.com/dylan522p/status/1755118636807733456

It’s hilarious that someone was able to make the GPT unload its directive.

[-] jacksilver@lemmy.world 85 points 1 year ago

It's great seeing, time and time again, that no one really understands these models, and that their preconceived notions of what biases exist end up shooting them in the foot. It truly shows that they don't understand how systematically problematic the underlying datasets are, or the repercussions of relying on them too heavily.

[-] Nomad@infosec.pub 16 points 1 year ago

It's not an issue. Gemini can generate the apology for you.

[-] Jeom@lemmy.world 56 points 1 year ago

Inclusivity is obviously good, but what Google's doing just seems all too corporate and plastic.

[-] Guajojo@lemmy.world 30 points 1 year ago* (last edited 1 year ago)

It's trying so hard not to be racist that it's being even more racist than other AIs. It's hilarious.

[-] RGB3x3@lemmy.world 53 points 1 year ago

A Washington Post investigation last year found that prompts like “a productive person” resulted in pictures of entirely white and almost entirely male figures, while a prompt for “a person at social services” uniformly produced what looked like people of color. It’s a continuation of trends that have appeared in search engines and other software systems.

This is honestly fascinating. It's putting human biases on full display at a grand scale. It would be near-impossible to quantify racial biases across the internet with so much data to parse. But these LLMs ingest so much of it and simplify the data all down into simple sentences and images that it becomes very clear how common the unspoken biases we have are.

There's a lot of learning to be done here and it would be sad to miss that opportunity.

[-] BurningnnTree@lemmy.one 46 points 1 year ago* (last edited 1 year ago)

No matter what Google does, people are going to come up with gotcha scenarios to complain about. People need to accept the fact that if you don't specify what race you want, then the output might not contain the race you want. This seems like such a silly thing to be mad about.

[-] OhmsLawn@lemmy.world 10 points 1 year ago

It's really a failure of one-size-fits-all AI. There are plenty of non-diverse models out there, but Google has to find a single solution that always returns diverse college students and never diverse Nazis.

If I were to use A1111 to make brown Nazis, it would be my own fault. If I use Google, it's rightfully theirs.

[-] FinishingDutch@lemmy.world 41 points 1 year ago

Honestly, this sort of thing is what’s killing any sort of enjoyment and progress of these platforms. Between the INCREDIBLY harsh censorship that they apply and injecting their own spin on things like this, it’s nigh on impossible to get a good result these days.

I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

It’s even more annoying that you can’t even PAY to get rid of these restrictions and filters. I’d gladly pay to use one if it didn’t censor any prompt to death…

[-] mellowheat@suppo.fi 11 points 1 year ago* (last edited 1 year ago)

I want the tool to just do its fucking job. And if I specifically ask for a thing, just give me that. I don’t mind it injecting a bit of diversity in say, a crowd scene - but it’s also doing it in places where it’s simply not appropriate and not what I asked for.

The thing is, if it's injecting diversity into a place where there shouldn't have been diversity, this can usually be fixed by specifying better in the next prompt. Not by writing ragebait articles about it.

But yeah, I'd also be happy to be able to use an unhinged LLM once in a while.

[-] AnonStoleMyPants@sopuli.xyz 8 points 1 year ago

Taking responsibility of how I use the tools that I use? How dare you.

[-] kaffiene@lemmy.world 34 points 1 year ago* (last edited 1 year ago)

Why would anyone expect "nuance" from a generative AI? It doesn't have nuance, it's not an AGI, it doesn't have EQ or sociological knowledge. This is like that complaint about LLMs being "warlike" when they were quizzed about military scenarios. It's like getting upset that the clunking of your photocopier clashes with the peaceful picture you asked it to copy

[-] UlrikHD@programming.dev 14 points 1 year ago

I'm pretty sure it's generating racially diverse nazis due to companies tinkering with the prompts under the hood to counterweight biases in the training data. A naive implementation of generative AI wouldn't output black or Asian nazis.

it doesn't have EQ or sociological knowledge.

It sort of does (in a poor way), but they call it bias and try to dampen it.

[-] heavy@sh.itjust.works 28 points 1 year ago

Now that shit is funny. I hope more people take more time to laugh at companies scrambling to pour billions into projects they don't understand.

Laugh while it's still funny, anyway.

[-] Kusimulkku@lemm.ee 27 points 1 year ago

If the black Scottish man post is anything to go by, someone will come in explaining how this is totally fine because there might've been a black Nazi somewhere, once.

[-] glowie@h4x0r.host 26 points 1 year ago
[-] Kusimulkku@lemm.ee 25 points 1 year ago

Someone needs to edit this to feature Kanye

[-] ArmoredThirteen@lemmy.ml 19 points 1 year ago

Looks like they scrubbed swastikas out of the training set? I have mixed feelings about this. Between wanting historical accuracy and my own personal opinions on censorship, I feel that shouldn't be scrubbed. But this is also the perfect tool to churn out endless amounts of pro-Nazi propaganda, so maybe it's safer to keep it removed?

[-] Excrubulent@slrpnk.net 9 points 1 year ago

I wonder if it's just a hard shape to get right, like hands.

[-] PhlubbaDubba@lemm.ee 8 points 1 year ago

Hey! If Demoman catches you talkin' anymore shit like that he's gonna turn the lot of us into a fine red spray!

[-] yildolw@lemmy.world 24 points 1 year ago

Oh no, not racial impurity in my Nazi fanart generator! /s

Maybe you shouldn't use a plagiarism engine to generate Nazi fanart. Thanks

[-] NotJustForMe@lemmy.ml 22 points 1 year ago

It's okay when Disney does it. What a world. Poor AI, how is it supposed to learn if all its data is created by mentally ill and crazy people. ٩(。•́‿•̀。)۶

[-] echodot@feddit.uk 22 points 1 year ago

Who exactly are they apologizing to? Is it the Nazis?

[-] blahsay@lemmy.world 21 points 1 year ago

Kanye has entered the chat.

[-] Underwaterbob@lemm.ee 10 points 1 year ago

This could make for some hilarious, alternate history satire or something. I could totally see Key and Peele heading a group of racially diverse nazis ironically preaching racial purity and attempting to take over the world.

this post was submitted on 22 Feb 2024
501 points (100.0% liked)