Key Facts:

  • The AI system uses ten categories of social emotions to identify violations of social norms (a rough sketch of this kind of scoring follows the list below).

  • The system has been tested on two large datasets of short texts, validating its models.

  • This preliminary work, funded by DARPA, is seen as a significant step in improving cross-cultural language understanding and situational awareness.
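
The article doesn't describe how the system is implemented, but the general idea it reports (scoring short texts against a fixed set of social-emotion categories and flagging likely norm violations) can be sketched with an off-the-shelf zero-shot classifier. This is only an illustration, not the DARPA-funded system: the ten label names, the model choice (facebook/bart-large-mnli), the negative-emotion heuristic, and the threshold are all assumptions made for the sake of the example.

```python
# Rough sketch only: score a short text against ten illustrative social-emotion
# labels with a generic zero-shot classifier. The label list below is a
# placeholder; the article does not name the system's actual ten categories.
from transformers import pipeline

SOCIAL_EMOTIONS = [
    "guilt", "shame", "embarrassment", "pride", "gratitude",
    "admiration", "anger", "contempt", "disgust", "envy",
]

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")


def score_social_emotions(text: str) -> dict[str, float]:
    """Return a score for each emotion label for one short text."""
    result = classifier(text, candidate_labels=SOCIAL_EMOTIONS, multi_label=True)
    return dict(zip(result["labels"], result["scores"]))


def flags_norm_violation(text: str, threshold: float = 0.8) -> bool:
    """Heuristic: a strongly expressed negative social emotion suggests a possible norm violation."""
    scores = score_social_emotions(text)
    negative = ("guilt", "shame", "embarrassment", "anger", "contempt", "disgust")
    return any(scores[label] >= threshold for label in negative)


if __name__ == "__main__":
    sample = "He apologized after realizing he had insulted his host."
    print(score_social_emotions(sample))
    print("possible norm violation:", flags_norm_violation(sample))
```

In practice the category set, the classifier, and any decision threshold would have to come from the paper's own methodology; the point here is only to show the shape of a text-to-emotion-to-violation pipeline.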

[-] MasterBlaster@lemmy.world 16 points 1 year ago

As long as this doesn't get repurposed to regulate "social credit", this is fine.

That's the scary part of it. Idc how good it is, but if it starts to be used to censor information and rate humans, that's the line.

[-] kroy@lemmy.world 9 points 1 year ago

The line will come far far FAR before that

[-] graphite@lemmy.world 5 points 1 year ago

but if it starts to be used to censor information and rate humans, that's the line.

That line has already been crossed. Since it's already been crossed, it's inevitable that this will be used in that way.
