top 9 comments
[-] BlueMonday1984@awful.systems 13 points 3 months ago

Artificial intelligence and cheating/lying: two great tastes that go together

[-] diz@awful.systems 8 points 3 months ago* (last edited 3 months ago)

When they tested on bugs not in SWE-Bench, the success rate dropped to 57‑71% on random items, and 50‑68% on fresh issues created after the benchmark snapshot. I’m surprised they did that well.

“After the benchmark snapshot” could still be before the LLM training data cutoff, or available via RAG.

edit: For a fair test you have to use git issues that have not yet been resolved by a human (roughly the kind of query sketched below).
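
A minimal sketch of what that could look like, purely illustrative and not from the paper: it just asks the GitHub search API for issues that are still open (i.e. not yet resolved by a human) and were created after some assumed training-data cutoff. The repo name and cutoff date are placeholders.

    # Purely illustrative sketch: fetch open (not yet human-resolved) issues
    # created after an assumed training-data cutoff, as candidates for an
    # uncontaminated eval set. Repo and date below are placeholders.
    import requests

    REPO = "psf/requests"      # placeholder repository
    CUTOFF = "2024-06-01"      # assumed model training-data cutoff

    resp = requests.get(
        "https://api.github.com/search/issues",
        params={
            "q": f"repo:{REPO} is:issue is:open created:>{CUTOFF}",
            "per_page": 50,
        },
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()

    for issue in resp.json()["items"]:
        print(issue["number"], issue["title"])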

This is how these fuckers talk, all of the time. Also see Sam Altman's not-quite-denials of training on Scarlett Johansson's voice: they just asserted that they had hired a voice actor, but didn't deny training on actual Scarlett Johansson's voice. edit: because anyone with half a brain knows that not only did they train on her actual voice, they probably gave it and their other pirated movie soundtracks massively higher weighting, just as they did for books and NYT articles.

Anyhow, I fully expect that by now they just use everything they can to cheat benchmarks, up to and including RAG from solutions past the training dataset cutoff date. With two of the paper authors being from Microsoft itself, expect that their “fresh issues” are gamed too.

[-] abcdqfr@lemmy.world 1 point 3 months ago

I also like to cheat on tests by studying every answer on the subject the test giver might put in the test??? We've got a computer that can study and pass tests, cmon. Where's the real story?

[-] self@awful.systems 22 points 3 months ago

it’s appropriate that you think your brain works like an LLM, because you regurgitated this shitty opinion from somewhere else without giving it any thought at all

[-] diz@awful.systems 7 points 3 months ago

Yeah I'm thinking that people who think their brains work like LLMs may be somewhat correct. Still wrong in some ways, as even their brains learn from several orders of magnitude less data than LLMs do, but close enough.

[-] YourNetworkIsHaunted@awful.systems 19 points 3 months ago

This isn't studying possible questions, this is memorizing the answer key to the test and being able to identify that the answer to question 5 is "17" but not being able to actually answer it when they change the numbers slightly.

[-] V0ldek@awful.systems 10 points 3 months ago

Hey mate what do you think learning is. Like genuinely, if you were to describe the process of learning a subject to me.

[-] Seminar2250@awful.systems 7 points 3 months ago* (last edited 3 months ago)

i have a potato that can study, send me your venmo if interested

[-] o7___o7@awful.systems 7 points 3 months ago

LLMs are seven or eight bipartite graphs in a trench coat. Is your brain seven neurons thick? Because that would explain a few things.
