I don't have a quip, just a sorrowful head shake that I somehow ended up in the shitty timeline.

top 22 comments
[-] PeepinGoodArgs@reddthat.com 37 points 1 year ago

Researchers ultimately concluded that “a constructive strategy for identifying the violation of social norms is to focus on a limited set of social emotions signaling the violation,” namely guilt and shame. In other words, the scientists wanted to use AI to understand when a mobile user might be feeling bad about something they’ve done. To do this, they generated their own “synthetic data” via GPT-3, then leveraged zero-shot text classification to train predictive models that could “automatically identify social emotions” in that data. The hope, they say, is that this model of analysis can be pivoted to automatically scan text histories for signs of misbehavior.

Lemme get this straight: DARPA researchers fabricated a series of sentences that signal emotional states. Then those same researchers labeled their own fabricated sentences with emotional states for the AI to train on (the "zero-shot classification" step). And then they hope to leverage the trained AI to identify "social emotions" in real people's text?
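(For the unfamiliar: "zero-shot classification" here just means pointing a pre-trained model at some text plus a list of candidate labels it was never explicitly trained on, and taking whatever scores fall out. A minimal sketch of that step, assuming the Hugging Face transformers library; the model and labels are my illustration, not what the paper actually used:

```python
# pip install transformers torch
from transformers import pipeline

# An NLI model repurposed as a zero-shot classifier -- a common setup,
# though an assumption here; the paper's exact configuration isn't given.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# A synthetic sentence of the kind GPT-3 might produce on request.
text = "I shouldn't have taken that shirt. I feel awful about it."

# Scores every candidate label against the text; labels come back
# sorted from most to least likely.
result = classifier(text, candidate_labels=["guilt", "shame", "pride", "neutral"])
print(result["labels"][0], round(result["scores"][0], 3))
# likely prints the top label, e.g. "guilt" with its score
```

Note the circularity: the only "social emotions" it can ever detect are the candidate labels the researchers chose to ask about.)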

Everything about this is fucking stupid.

The GPT-3 prompt could've been: "What are some sentences a shameful socialist/conservative/anarchist/terrorist/etc protestor/litterer/murderer/liar/etc might use?", implicitly connecting shame to a particular ideology. As such, these "social emotions" signal more than emotion; they also signal whatever assumptions were baked into the method of generation and classification.
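To make that concrete, here's roughly what the generation step looks like with the OpenAI completion API of the GPT-3 era (legacy pre-1.0 client; the prompt and model choice are my own illustration, since the article doesn't publish the researchers' actual prompts):

```python
import openai  # legacy (<1.0) client, contemporary with GPT-3
# openai.api_key = "..."  # set your key here

# Hypothetical framing prompt: whatever group or act you name here
# gets statistically welded to "shame" in the resulting training data.
prompt = ("Write five sentences someone might text a friend after "
          "feeling ashamed of something they did at a protest.")

response = openai.Completion.create(
    model="text-davinci-003",  # an illustrative GPT-3-family model
    prompt=prompt,
    max_tokens=200,
    temperature=0.9,           # higher temperature for more varied "data"
)
synthetic_sentences = [
    line.strip() for line in response.choices[0].text.split("\n") if line.strip()
]
```

Nothing in the finished dataset flags that framing; the bias lives entirely in the prompt that produced it.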

Suddenly, some random person is being targeted for having fucked up and they're like, "Wtf did I do? Yes, I did shoplift from Target, but it was like a $20 shirt, because my job at Wal-Mart pays so little I'm on food stamps to make ends meet. Fuck off!"

The AI automatically detects another violation of social norms.

And you're like, "That's an edge case..." Yeah, sure, but it's DARPA we're talking about here. That should be enough said.

[-] captainlezbian@lemmy.world 14 points 1 year ago

It sounds like they're focusing on the shame associated with the act, which leads to the irony that they'll find the awkward and uncomfortable ones but not the ardent. That sounds unwise.

[-] funkless@lemmy.world 8 points 1 year ago

it's long been the case that to get away with a crime, you make it your business.

[-] chaogomu@kbin.social 9 points 1 year ago

The worst part is, ChatGPT cannot generate anything new. It's pre-trained, which is the P in the name.

It can only recombine the training data into forms that sort of match the training data. So, if the training data is garbage, the output will be more garbage.

And this garbage in garbage out is going to be used to harm real people.

In addition, ChatGPT lies. It hallucinates shit that is provably false, because that's what its generated text needs to look like to match the training data.

So it will likely lead to a bunch of false positives, because the positive response better matches the training data.

[-] Astrealix@lemmy.world 28 points 1 year ago

As someone who's autistic, fuck this with a fucking chainsaw lmfao.

[-] Shotgun_Alice@lemmy.world 20 points 1 year ago

Coming next is a social credit score.

[-] Shialac@lemmy.world 7 points 1 year ago

That already exists, and has for a long time. It just comes in multiple, disguised forms.

[-] bappity@lemmy.world 19 points 1 year ago

alert, user detected performing a dab in 2023. arrest them and LOCK THEM AWAY

[-] Badgernomics@lemmy.world 8 points 1 year ago

That's just for the greater good....

[-] vickychen@lemmy.world 17 points 1 year ago

Minority Report vibes from this

[-] Infynis@midwest.social 8 points 1 year ago

Time to get my eyes swapped

[-] Fpsfrank85@lemmy.world 11 points 1 year ago

So like 100 percent of the internet? Mostly bots anyway. Everything is so dumb.

[-] dewritoninja@pawb.social 10 points 1 year ago* (last edited 1 year ago)

Great, another tool that can be used against women and minorities. With far-right authoritarianism on the rise, I can't wait for the AI to flag me as a raging homosexual so I can end up in a labor camp, or dead.

[-] sciawp@lemm.ee 7 points 1 year ago

Loving this dystopian hellhole we’ve created!

[-] AceFuzzLord@lemm.ee 1 points 1 year ago

Have you texted someone lately to express guilt over...something? The government probably wants to know about it.

Good thing I'm one of those people who doesn't feel guilty about the things I do, much less text about them. The feeling of guilt goes away quickly for me anyways, so I think I'm safe until they find a way to actually start reading our minds and invading our privacy of thought.

[-] AllonzeeLV@lemmy.world 1 points 1 year ago

Because Project Insight/Pre-Crime weren't cautionary tales, they were defense proposals.

[-] nxfsi@lemmy.world 1 points 1 year ago* (last edited 1 year ago)

Lemmings will support this as long as it is against:

  • Republicans
  • pedophiles
  • Tr*mp supporters
  • anti-vax
  • climate change deniers
  • racists
  • N***s
[-] Pseu@kbin.social 10 points 1 year ago

You're literally the only commenter here that implies any support for this.

[-] TokenBoomer@lemmy.world 2 points 1 year ago

Can we not say Nazi?
