submitted 1 year ago* (last edited 1 year ago) by Elephant0991@lemmy.bleh.au to c/technology@beehaw.org

Summary

  • Detroit woman wrongly arrested for carjacking and robbery due to facial recognition technology error.
  • Porcha Woodruff, 8 months pregnant, mistakenly identified as the culprit based on an outdated 2015 mug shot.
  • Surveillance footage did not match Woodruff; the victim also wrongly picked her from a photo lineup built around the same outdated 2015 photo.
  • Woodruff arrested and detained for 11 hours; charges later dismissed; she has filed a lawsuit against the City of Detroit.
  • Case highlights facial recognition technology's higher error rates for women and people with dark skin.
  • Several US cities banned facial recognition; debate continues due to lobbying and crime concerns.
  • Law enforcement prioritized the technology's output over conflicting visual evidence, raising questions about how the tool is integrated into investigations.
  • ACLU of Michigan is involved; the lawsuit's outcome, and its impact on how law enforcement uses the technology, remains uncertain.
[-] scrubbles@poptalk.scrubbles.tech 13 points 1 year ago

Facial recognition (and AI) is a great tool to use as a double check.

"Hey I think we found her because her car is parked there, is that her?" AI says it's 90% sure it's her, sounds good let's go.

Or entering through an airport gate after scanning your passport. Great double check.

It should never be the first line to find people out of a crowd. That's how we slide into the dystopia.

[-] storksforlegs@beehaw.org 5 points 1 year ago* (last edited 1 year ago)

You're right. Hopefully they will have rules like this. But knowing police, even if these rules did exist, many departments would still go "Facial match? Good enough."

[-] Hamartiogonic@sopuli.xyz 4 points 1 year ago

I think the appropriate procedure depends on the ratio of true/false positives and negatives. This is basically the same discussion that occurred with covid tests, because the mathematics behind them is identical. Looking at the Positive/Negative Predictive Value (PPV/NPV) should give you an idea of how much you should trust each assessment.

Based on the articles we've seen recently, it seems that the false positives are the main problem here, so perhaps the PPV isn't high enough. Ideally, you would combine two types of methods so that at the end of the day you get a very high PPV and NPV. However, I'm pretty sure humans have a very low NPV. Hopefully, the PPV is a lot higher, but racism clearly isn't helping here. Augmenting that appalling mess with a flawed system is still a step forward IMO. Well, at least it would be if the system weren't used to oppress and discriminate against people.
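A minimal sketch of the base-rate point above, in Python; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not measurements of any real facial recognition system. It shows why a matcher that sounds very accurate can still have a poor PPV when the true culprit is a tiny fraction of the searched population, while the NPV stays near 100%.

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Return (PPV, NPV) for a binary test via Bayes' rule."""
    tp = sensitivity * prevalence              # true positive rate x base rate
    fp = (1 - specificity) * (1 - prevalence)  # false alarms on the innocent majority
    tn = specificity * (1 - prevalence)        # correctly cleared non-matches
    fn = (1 - sensitivity) * prevalence        # missed culprits
    ppv = tp / (tp + fp)  # P(actually the culprit | flagged as a match)
    npv = tn / (tn + fn)  # P(not the culprit | not flagged)
    return ppv, npv

# Hypothetical numbers: a "99% accurate" matcher searching a gallery where
# only 1 in 100,000 people is the real culprit.
ppv, npv = predictive_values(sensitivity=0.99, specificity=0.99, prevalence=1e-5)
print(f"PPV: {ppv:.2%}, NPV: {npv:.5%}")  # with these inputs: PPV ≈ 0.1%, NPV ≈ 100%
```

Under these assumed numbers, roughly 999 out of every 1,000 flagged matches are false positives, which is why pairing the match with an independent check (alibi, physical evidence, a lineup not seeded by the same photo) matters so much for the combined PPV.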
