668 points · submitted 01 Aug 2023 by L4s@lemmy.world to c/technology@lemmy.world

An Asian MIT student asked AI to turn an image of her into a professional headshot. It made her white with lighter skin and blue eyes.

Rona Wang, a 24-year-old MIT student, was experimenting with the AI image creator Playground AI to create a professional LinkedIn photo.

[-] ExclamatoryProdundity@lemmy.world 249 points 1 year ago

Look, I hate racism and inherent bias toward white people, but this is just ignorance of the tech. Willfully or otherwise, it's still misleading clickbait. Upload a picture of an anonymous white chick and ask the same thing; it's going to make a similar image of another white chick. To get it to reliably recreate your facial features, it needs to be trained on your face. It works for celebrities for this reason, not for a random "Asian MIT student". This kind of shit sets us back and makes us look reactionary.

[-] AbouBenAdhem@lemmy.world 164 points 1 year ago* (last edited 1 year ago)

It’s less a reflection on the tech, and more a reflection on the culture that generated the content that trained the tech.

Wang told The Globe that she was worried about the consequences in a more serious situation, like if a company used AI to select the most "professional" candidate for the job and it picked white-looking people.

This is a real potential issue, not just “clickbait”.

[-] HumbertTetere@feddit.de 35 points 1 year ago

If companies pick the most "professional" applicant by their photo, that is a reason for concern, but it has little to do with the AI's image training data.

[-] luthis@lemmy.nz 29 points 1 year ago

A company using a photo to choose a candidate is really concerning, regardless of whether they use AI to do it.

[-] AbouBenAdhem@lemmy.world 21 points 1 year ago* (last edited 1 year ago)

Some people (especially in business) seem to think that adding AI to a workflow will make obviously bad ideas somehow magically work. Dispelling that notion is why articles like this are important.

(Actually, I suspect they know they’re still bad ideas, but delegating the decisions to an AI lets the humans involved avoid personal blame.)

[-] hardypart@feddit.de 24 points 1 year ago

It still perfectly and visibly demonstrates the big point of criticism of AI: the tendencies the training material exhibits.

[-] heartlessevil@lemmy.one 11 points 1 year ago

This is like a demonstration of a lack of self-awareness.

[-] Buddahriffic@lemmy.world 9 points 1 year ago

The AI might associate lighter skin with white-person facial structure. That kind of correlation would need to be specifically accounted for, I'd think, because even with some examples of lighter-skinned Asians, the majority of photos of people with light skin will have white-person facial structure.

Plus, it's becoming more and more apparent that AIs just aren't that good at what they do in general at this point. Yes, they can produce some pretty interesting things, but those seem to be the exception rather than the norm. In hindsight, a lot of what has impressed me about the results I've seen so far is simply that an algorithm produced them at all, even though the algorithm isn't directly related to the output but sits a few steps back from it.

I bet for the instances where it does produce good results, it's still actually doing something simpler than what it looks like it's doing.

[-] gorogorochan@lemmy.world 99 points 1 year ago

Meanwhile every trained model on Civit.ai produces 12/10 Asian women...

Joking aside, what you feed the model is what you get. You train it on white people, it's going to create white people; you train it on big-titty anime girls, it's not going to produce WWII images either.

Then there's a study cited that claims DALL-E has a bias toward producing images of CEOs and directors as cis white males. Think of the CEOs that you know. Better yet, google them. It's shit, but it's the world we live in. I think the focus should be on not having so many privileged white people in the real world, not on telling the AI to discard the data.

[-] locuester@lemmy.zip 8 points 1 year ago

Yeah, there are a lot of cases of claims of AI "bias" which are in fact just a reflection of the real world (from which it was trained). Forcing AI to fake equal representation is not fixing a damn thing in the real world.

[-] AbouBenAdhem@lemmy.world 93 points 1 year ago* (last edited 1 year ago)

They should just call AIs “confirmation bias amplifiers”.

[-] ghariksforge@lemmy.world 28 points 1 year ago

AI learns what is in the data.

[-] brambledog@infosec.pub 8 points 1 year ago

The AIs we have aren't "learning". They are pre-trained.

[-] rebelsimile@sh.itjust.works 13 points 1 year ago

The “pre-training” is learning; they are often then fine-tuned with additional training (that's the training that isn't the “pre-training”), i.e. more learning, to achieve specific results.
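
To make that concrete, here's a toy sketch (the network, data, and hyperparameters are all made up, this is not any real model's code): "pre-training" and "fine-tuning" are the same mechanism, gradient descent, just run on different data.

```python
import torch
from torch import nn

# Toy stand-in for a large pretrained network.
model = nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

def train(dataset, epochs):
    # The same learning loop is used in both phases.
    for _ in range(epochs):
        for x, y in dataset:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()

pretrain_data = [(torch.randn(10), torch.randn(2)) for _ in range(100)]
finetune_data = [(torch.randn(10), torch.randn(2)) for _ in range(10)]

train(pretrain_data, epochs=3)  # "pre-training": broad, generic data
train(finetune_data, epochs=3)  # "fine-tuning": more learning, narrower data
```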

[-] GenderNeutralBro@lemmy.sdf.org 66 points 1 year ago

This is not surprising if you follow the tech, but I think the signal boost from articles like this is important because there are constantly new people just learning about how AI works, and it's very very important to understand the bias embedded into them.

It's also worth actually learning how to use them, too. People expect them to be magic, it seems. They are not magic.

If you're going to try something like this, you should describe yourself as clearly as possible. Describe your eye color, hair color/length/style, age, expression, angle, and obviously race. Basically, describe any feature you want it to retain.

I have not used the specific program mentioned in the article, but the ones I have used simply do not work the way she's trying to use them. The phrase she used, "the girl from the original photo", would have no meaning in Stable Diffusion, for example (which I'd bet Playground AI is based on, though they don't specify). The img2img function makes a new image, with the original as a starting point. It does NOT analyze the content of the original or attempt to retain any features not included in the prompt. There's no connection between the prompt and the input image, so "the girl from the original photo" is garbage input. Garbage in, garbage out.
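
For what it's worth, here's roughly what img2img looks like with the diffusers library (a minimal sketch assuming a public Stable Diffusion checkpoint; the file names are placeholders, and there's no guarantee Playground AI works this way). Notice that every feature you want kept has to be restated in the prompt:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# "selfie.png" is a placeholder for your own photo.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("selfie.png").convert("RGB").resize((512, 512))

# The prompt must restate every feature you want kept, because the
# pipeline never "reads" the input image; it only starts denoising from it.
prompt = ("professional LinkedIn headshot of a young Asian woman, "
          "dark eyes, long black hair, studio lighting")
result = pipe(prompt=prompt, image=init, strength=0.5).images[0]
result.save("headshot.png")
```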

There are special-purpose programs designed for exactly the task of making photos look professional, which presumably go to the trouble to analyze the original, guess these things, and pass those through to the generator to retain the features. (I haven't tried them, personally, so perhaps I'm giving them too much credit...)

[-] CoderKat@lemm.ee 22 points 1 year ago

If it's Stable Diffusion img2img, then totally, this is a misunderstanding of how that works. It usually only looks at things like the outlines or depth of the input image. The text-based prompt that the user provides is otherwise everything.

That said, these kinds of AI are absolutely still biased. If you tell the AI to generate a photo of a professor, it will likely generate an old white dude 90% of the time. The models are very biased by their training data, which often reflects society's biases (though really more a subset of society that created whatever training data the model used).

Some AI actually does try to counter bias a bit by injecting details to your prompt if you don't mention them. Eg, if you just say "photo of a professor", it might randomly change your prompt to "photo of a female professor" or "photo of a black professor", which I think is a great way to tackle this bias. I'm not sure how widespread this approach is or how effective this prompt manipulation is.
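
I don't know how any particular vendor implements it, but the idea is simple enough to sketch in a few lines (everything below, the word lists included, is hypothetical and not any real system's code):

```python
import random

# Hypothetical sketch of prompt augmentation for bias mitigation:
# if the prompt names a person but no demographic attribute, inject one.
ATTRIBUTES = ["female", "male", "Black", "Asian", "Hispanic", "white"]
PERSON_WORDS = {"professor", "doctor", "ceo", "engineer", "person"}

def augment_prompt(prompt: str) -> str:
    words = [w.strip(".,").lower() for w in prompt.split()]
    mentions_person = any(w in PERSON_WORDS for w in words)
    mentions_attribute = any(a.lower() in words for a in ATTRIBUTES)
    if mentions_person and not mentions_attribute:
        # e.g. "photo of a professor" -> "photo of a female professor"
        target = next(w for w in prompt.split()
                      if w.strip(".,").lower() in PERSON_WORDS)
        return prompt.replace(target, f"{random.choice(ATTRIBUTES)} {target}", 1)
    return prompt

print(augment_prompt("photo of a professor"))       # attribute injected
print(augment_prompt("photo of a male professor"))  # left unchanged
```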

[-] BURN@lemmy.world 57 points 1 year ago

Garbage in = Garbage out

ML training data sets are only as good as their data, and almost all data is inherently flawed. Biases are just more pronounced in these models because the bias scales with the size of the model, becoming more and more noticeable.

[-] notapantsday@feddit.de 43 points 1 year ago

Can we talk about how a lot of these AI-generated faces have goat pupils? That's some major bias that is often swept under the rug. An AI that thinks only goats can be professionals could cause huge disadvantages for human applicants.

[-] sirswizzlestix@lemmy.world 37 points 1 year ago

These biases have always existed in the training data used for ML models (society and all that influencing the data we collect, and the inherent biases latent within it), but it's definitely interesting that generative models now make these biases much, much more visible (figuratively, and literally with image models) to the layperson.

[-] ryannathans@lemmy.fmhy.net 30 points 1 year ago

User error, be more specific

[-] pacoboyd@lemm.ee 28 points 1 year ago

It also depends on what model was used, the prompt, the strength of the prompt, etc.

No news here, just someone who doesn't know how to use AI generation.

[-] ghariksforge@lemmy.world 20 points 1 year ago

Why is anyone surprised at this? People are using AI for things it was never designed or optimized for.

[-] IndisposedShakerCup@lemmy.world 19 points 1 year ago

Hm. Probably trained on more white people than Asians. *shrug*

[-] RobotToaster@infosec.pub 19 points 1 year ago

She asked the AI to make her photo more like what society stereotypes as professional, and it made her photo more like what society stereotypes as professional.

[-] Haha@lemmy.world 19 points 1 year ago

Garbage post

[-] starcat@lemmy.world 16 points 1 year ago

A racial-bias-propagating, click-baity article.

Did anyone bother to fact-check this? I ran her exact photo and prompt through Playground AI and it pumped out a bad photo of an Indian woman. Are we supposed to play the racial bias card against Indian women now?

This entire article can be summarized as "Playground AI isn't very good, but that's boring news, so let's dress it up as something else."

[-] Kinglink@lemmy.world 14 points 1 year ago

Media: "I don't understand technology" even though writing about the technology multiple times.

AIs are completely based on the training data that they'll use. If they only loaded professional headshots of Asian people, a white person would turn Asian if added.

Besides which you run it multiple times, and choose the one you want, I'm sure if you did that, it'd change her eye color multiple times.

Really blame the AI, not AI in general. Or blame the media for making clickbait articles in the first place.

[-] mojo@lemm.ee 14 points 1 year ago

Rage bait to push the "ethics in AI" narrative.

[-] cloudless@feddit.uk 13 points 1 year ago

White = professional?!

Blonde hair = Super Saiyan

[-] morhp@lemmy.wtf 11 points 1 year ago

Interestingly, many stable diffusion models are trained on pictures of Asian people and thus often generate people that look more or less Asian if there's no specific input or tuning otherwise. It's all in the training data and tuning.

See for example here for reference.

[-] sw2de3fr4gt@lemmy.world 11 points 1 year ago

AI turned her into a White Walker

[-] gmtom@lemmy.world 11 points 1 year ago

This is just dumb rage bait. At worst this shows a bias in the training data, probably because the AI was developed in a majority-white country that used images of majority-white people to train it.

And likely it's not even that. The AI has no concept of race, so it doesn't know to make white people white and Asian people Asian, and would be just as likely to do the reverse.

[-] EmotionalMango22@lemmy.world 11 points 1 year ago

So? There are white people in the world. Ten bucks says she tuned it to make her look white for the clicks. I've seen this in person several times at my local college. People are dying for attention, and shit like this is an easy in.

[-] funkajunk@lemm.ee 10 points 1 year ago

Sigh...

It's not racial bias; it works from a limited dataset and from what it understands a "professional headshot" even to be.

Seems like some ragebait to me.

[-] HobbitFoot@thelemmy.club 13 points 1 year ago

It isn't intended to be racial bias; it just happens to be racial bias.

load more comments (1 replies)
[-] Mereo@lemmy.ca 7 points 1 year ago

It reminds me of Google back in the day (probably the early 2010s). If you searched for "white women", it returned professional and respectable images. But if you searched for "black women", it returned explicit images.

Machine learning algorithms are like sponges; they soak up existing social biases.
