submitted 8 months ago by L4s@lemmy.world to c/technology@lemmy.world

ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

[-] abhibeckert@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

... sure ... but you don't prepare a kid for racism with a sheltered upbringing in a pretend world where discrimination doesn't exist. You point out bad behaviour and tell them why it's not OK.

My son is three years old and has two close friends. One is from an ethnic minority (you could live an entire year in my city without walking past a single person of their ethnic background on the street). His other close friend is a girl. My kid is already witnessing (but not understanding) discrimination against both of his closest friends in the playground, and we're doing what we can to help him navigate it. Things like "I don't like him, he looks funny" and "she's a girl, she can't ride a bicycle".

Large language model training is exactly the same: you need to include discrimination in your training set. That's a necessary step towards a model that doesn't discriminate. Reddit has worse discrimination than some other places, and for training data that's actually a good thing.

The worst behaviour is easier to recognise, and it can help you learn to spot more subtle discrimination, such as "I don't want to play with that kid", which is not an obviously discriminatory statement but definitely could be discrimination (and you should probably investigate before agreeing with the person).

[-] Paragone@lemmy.world 9 points 8 months ago

Yes, you need to include ideology/prejudice (two sides of the same coin) when training a new mind, BUT

  • you must segregate the "thinking this way is good" training-data from the "thinking this way is wrong" training-data, AND

  • doing that takes work, which is why I doubt it's being done as actually required by any AI company, anywhere.
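The segregation step above can be sketched in a few lines. This is a purely illustrative toy (the labels, field names, and example sentences are mine, not from any real training pipeline): the point is simply that every example needs an explicit "imitate this" vs. "recognise this, never imitate it" tag before training starts.

```python
# Toy sketch of segregating training data by intent. The labels and
# example sentences are hypothetical; no real pipeline is implied.

GOOD = "good"  # behaviour for the model to imitate
BAD = "bad"    # behaviour for the model to recognise, but never imitate

examples = [
    {"text": "She can ride a bike as well as anyone.", "label": GOOD},
    {"text": "She's a girl, she can't ride a bicycle.", "label": BAD},
]

def split_corpus(examples):
    """Separate imitation data from counter-examples."""
    imitate = [e["text"] for e in examples if e["label"] == GOOD]
    recognise = [e["text"] for e in examples if e["label"] == BAD]
    return imitate, recognise

imitate, recognise = split_corpus(examples)
```

The work the comment is talking about is producing those labels in the first place, at web-corpus scale; the split itself is trivial once every example is tagged.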

As Musk said about the training data for their mythological self-driving neural net: classification was too costly, so they created an AI to do it for them.
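The "create an AI to classify it for you" approach can be sketched as below. Everything here is a stand-in I've invented for illustration (a crude keyword score instead of a real classifier, a made-up confidence threshold): the structure to note is that auto-labelling usually routes low-confidence cases back to humans, and the cost saving comes from how few cases cross that threshold.

```python
# Hypothetical sketch of model-assisted labelling: a cheap classifier
# auto-labels raw text, and uncertain cases go to human review.
# The "classifier" is a trivial keyword check, purely for illustration.

def auto_label(text, blocklist=("can't", "looks funny")):
    """Return (label, confidence) for one piece of text."""
    hits = sum(1 for phrase in blocklist if phrase in text.lower())
    confidence = min(1.0, 0.5 + hits / len(blocklist))  # crude score
    label = "bad" if hits else "good"
    return label, confidence

def triage(texts, threshold=0.9):
    """Split texts into auto-labelled and needs-human-review piles."""
    auto, human_review = [], []
    for text in texts:
        label, confidence = auto_label(text)
        bucket = auto if confidence >= threshold else human_review
        bucket.append((text, label))
    return auto, human_review
```

The catch the comment is gesturing at: the auto-labeller is only as good as whatever trained *it*, so its mistakes flow straight into the downstream model.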

"I wonder" why it is that their full-self-driving never got reliable enough for release..

_ /\ _

this post was submitted on 21 Feb 2024
298 points (100.0% liked)
