More of a notedump than a sneer. I have been saying every now and then that there is research showing that LLMs require exponentially more effort for linear improvements. This post by Iris van Rooij (Professor of Computational Cognitive Science) mentions something along those lines (what I said was slightly different, but the intractability proof / Ingenia theorem might be worth looking into): https://bsky.app/profile/irisvanrooij.bsky.social/post/3lpe5uuvlhk2c
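To make the "exponentially more effort for linear improvements" intuition concrete, here is a toy sketch of my own (not van Rooij's argument, and the constants are made up): if benchmark score grows only logarithmically with compute, then each fixed gain in score multiplies the compute bill by a constant factor.

```python
import math

# Toy illustration (my own sketch, not from the linked post or paper):
# suppose score grows only logarithmically with compute,
#   P(C) = a * log(C) + b.
# Inverting gives C(P) = exp((P - b) / a), so every fixed gain in P
# multiplies the required compute by a constant factor -- linear
# improvement, exponential cost.

a, b = 1.0, 0.0  # hypothetical scaling constants, not fitted to anything

def compute_needed(target_score: float) -> float:
    """Compute C needed to hit target_score under P(C) = a*log(C) + b."""
    return math.exp((target_score - b) / a)

for score in range(1, 6):
    print(f"score {score}: compute ~ {compute_needed(score):8.1f}")
# With these constants, each +1 in score costs e (~2.7x) more compute.
```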
I think this theorem is worthless for practical purposes. They define the "AI vs learning" problem in such general terms that I'm not even sure it's well-defined. In any case, it is not a serious CS paper. I also really don't believe that NP-hardness is the right tool for measuring the difficulty of machine learning problems.