849

But the explanation and Ramirez’s promise to educate himself on the use of AI wasn’t enough, and the judge chided him for not doing his research before filing. “It is abundantly clear that Mr. Ramirez did not make the requisite reasonable inquiry into the law. Had he expended even minimal effort to do so, he would have discovered that the AI-generated cases do not exist. That the AI-generated excerpts appeared valid to Mr. Ramirez does not relieve him of his duty to conduct a reasonable inquiry,” Judge Dinsmore continued, before recommending that Ramirez be sanctioned for $15,000.

Falling victim to this a year or more after the first guy made headlines for the same is just stupidity.

you are viewing a single comment's thread
view the rest of the comments
[-] webghost0101@sopuli.xyz 10 points 2 days ago* (last edited 2 days ago)

Its actually been proven that AI can and will lie. When given a ability to cheat a task and the instructions not to use it. It will use the tool and fully deny doing so.

Edit:

Not sure why the downvotes because when i say proven i mean the research has been done and the results have been known for while

https://arxiv.org/abs/2407.12831

[-] Moose@moose.best 6 points 2 days ago

I don't know if I would call it lying per-se, but yes I have seen instances of AI's being told not to use a specific tool and them using them anyways, Neuro-sama comes to mind. I think in those cases it is mostly the front end agreeing not to lie (as that is what it determines the operator would want to hear) but having no means to actually control the other functions going on.

[-] webghost0101@sopuli.xyz 1 points 2 days ago* (last edited 2 days ago)

Neurosama is a fun example but we dont really know the sauce vedal coocked up.

When i say proven i mean 32 page research paper specifically looking into it.

https://arxiv.org/abs/2407.12831

They found that even a model trained specifically on honesty will lie if it has an incentive.

The reasoning models will output that they used the forbidden tool in their reasoning window before lying in the final output.

this post was submitted on 03 Mar 2025
849 points (100.0% liked)

Technology

63897 readers
4997 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS