Via; probably already posted here, since the article is a month old: "Stack Overflow data reveals the hidden productivity tax of ‘almost right’ AI code".
What the hell does that even mean, lmao?
What kind of drugs are they on?
The second quote is classic "you must be prompting it wrong". No, it couldn't possibly be that people who find a tool less useful use it less often.
Feels like people are just reaching for things: 'clearly we need tools to help us with the process, so let's just call them debuggers for AI'.
So the normal debuggers we've had for ages, right?
I assume these 'AI debuggers' are for looking inside the AI black box when the AI goes 'yes, I'm very sorry, I will not do it again' before doing it again.
but can you grift using these?
AI innovation in this space usually means automatically adding stuff to the model's context.
It probably started out meaning the (failed) build output gets added on every iteration, but it's entirely possible to feed the LLM debugger output from a runtime crash and hope something usable happens.
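Presumably something like the sketch below, where `run_build`, `call_llm`, and the `make build` command are made-up placeholders and not any actual tool's API:

```python
# Minimal sketch of the "keep stuffing failure output into the context" loop
# described above. run_build(), call_llm(), and "make build" are hypothetical
# placeholders, not the API of any real product.
import subprocess


def run_build() -> tuple[bool, str]:
    """Run the project's build and return (succeeded, combined output)."""
    proc = subprocess.run(["make", "build"], capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr


def call_llm(prompt: str) -> str:
    """Placeholder for whatever LLM endpoint the tool wraps."""
    raise NotImplementedError


def fix_loop(source: str, max_iterations: int = 3) -> str:
    context = f"Fix this code:\n{source}"
    suggestion = ""
    for _ in range(max_iterations):
        suggestion = call_llm(context)
        # ...apply the suggestion to the working tree here...
        ok, output = run_build()
        if ok:
            return suggestion
        # The "innovation": append the failed build (or crash) output to the
        # prompt and hope something usable happens on the next iteration.
        context += f"\n\nThat didn't build. Build output:\n{output}"
    return suggestion
```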