this post was submitted on 02 Mar 2026
20 points (100.0% liked)
TechTakes
As a layperson skimming the paper, this strikes me as equivalent to a dashed-off letter to the editor coming from someone in Knuth's position. It's an incomplete, second-hand report of somebody else's results that doesn't really investigate any of the interesting features of the system at hand. The implicit claim (here and elsewhere) is that we have a runtime for natural-language programming in English, and the main method reported for demonstrating this is the partial prompt:
and later on, a slightly longer prompt from a correspondent using GPT-5.2 Pro, which also loads a PDF of Knuth's article into the context window. No discussion of debugging how these systems arrive at their output, or of programmatically constraining them for more targeted output in their broader vector space. Just more of the braindead prompting-and-hoping approach, which eventually, unsurprisingly, diverges from outputting any viable code whatsoever. This all strikes me as being an exercise similar to
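To be concrete about what "programmatically constraining" could mean, here is a toy sketch of grammar-masked sampling: instead of prompting and hoping, you zero out the probability of every next token the grammar forbids, so the decoder literally cannot emit invalid code. Everything here is made up for illustration (the `fake_logits` stand-in model, the six-token vocabulary, the one-statement "grammar"); real systems do this against an actual model's logits and a real grammar.

```python
import math
import random

# Toy vocabulary; a real tokenizer would have tens of thousands of entries.
VOCAB = ["print", "(", ")", "\"hi\"", "lol", "vibes"]

def fake_logits(prefix):
    # Stand-in for a real model's next-token scores: uniform, i.e. the
    # unconstrained model is equally happy to emit "vibes" as valid code.
    return [1.0 for _ in VOCAB]

def allowed(prefix):
    # A trivial "grammar" accepting exactly one statement: print ( "hi" )
    expected = ["print", "(", "\"hi\"", ")"]
    if len(prefix) < len(expected):
        return {expected[len(prefix)]}
    return set()  # grammar complete: nothing more may be emitted

def constrained_decode():
    out = []
    while True:
        ok = allowed(out)
        if not ok:
            return out
        logits = fake_logits(out)
        # Mask out every token the grammar forbids before sampling.
        masked = [l if t in ok else -math.inf for t, l in zip(VOCAB, logits)]
        probs = [math.exp(l) for l in masked]
        total = sum(probs)
        r = random.random() * total
        for t, p in zip(VOCAB, probs):
            r -= p
            if r <= 0:
                out.append(t)
                break
```

Even with a "model" that assigns garbage tokens the same score as valid ones, the mask guarantees the output parses; the interesting engineering is in the grammar and in debugging the unmasked distribution, which is exactly the investigation that's missing here.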
The cargo-cult system-prompt approach is like banging two rocks together compared to what a computational system should be capable of, and I would be much more impressed, and much more interested, if someone like Knuth were investigating such capabilities instead of blogging about somebody else pretending to have the Star Trek computer.