677 points · submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Over just a few months, ChatGPT went from correctly answering a simple math problem 98% of the time to just 2%, study finds. Researchers found wild fluctuations, called drift, in the technology's ability…

[-] blue_zephyr@lemmy.world 29 points 1 year ago* (last edited 1 year ago)

This paper is pretty unbelievable to me in the literal sense. From a quick glance:

First of all, they couldn't even be bothered to check for simple spelling mistakes. Second, all they're doing is asking whether a number is prime or not and then extrapolating the results to be representative of solving math problems in general.
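For context, the task being evaluated is trivial to ground-truth: primality of a number is fully decidable, so the "correct answer" side of the benchmark is never in doubt. A minimal sketch (my own illustration, not code from the paper) of checking primality by trial division, which is plenty fast for numbers of the size such benchmarks typically use:

```python
def is_prime(n: int) -> bool:
    """Ground-truth primality by trial division.

    Fine for the small (few-digit) integers a benchmark like this would ask about;
    real number-theory work would use a faster test.
    """
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:  # only need divisors up to sqrt(n)
        if n % d == 0:
            return False
        d += 2  # skip even candidates
    return True

print(is_prime(7919))  # True: 7919 is prime
print(is_prime(7917))  # False: 7917 = 3 * 7 * 13 * 29
```

So any disagreement between evaluations has to come from the model's output or from how that output is graded, not from ambiguity in the task itself.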

But most importantly, I don't believe for a second that the same model, with a few adjustments over a 3-month period, would completely flip performance on any representative task. I suspect there's something seriously wrong with how they collect and evaluate the answers.

And finally, according to their own results, GPT-3.5 did significantly better at the second evaluation. So the title is a blatant misrepresentation.

Also the study isn't peer-reviewed.

this post was submitted on 20 Jul 2023