Ran across a piece of AI hype titled "Is AI really thinking and reasoning — or just pretending to?".
In lieu of sneering the thing, here are some unrelated thoughts:
The AI bubble has done plenty to broach the question of "Can machines think?" that Alan Turing first asked in 1950. Through the myriad failures and embarrassments it's given us, it's provided plenty of evidence to suggest they can't - to repeat an old prediction of mine, I expect this bubble is going to kill AI as a concept, utterly discrediting it in the public eye.
On another unrelated note, I expect we're gonna see a sharp change in how AI gets depicted in fiction.
With AI's public image being redefined by glue pizzas and gen-AI slop on one end, and by ethical contraventions and Geneva Recommendations on another end, the bubble's already done plenty to turn AI into a pop-culture punchline, and support of AI into a digital "Kick Me" sign - a trend I expect to continue for a while after the bubble bursts.
For an actual prediction: I expect AI is gonna pop up a lot less in science fiction going forward. Even assuming this bubble hasn't turned audiences and writers alike off AI as a concept, it's likely gonna make it a lot harder to use AI as a plot device or some such without shattering willing suspension of disbelief.
I do love a good middle ground fallacy.
EDIT:
I do abhor a "Because the curtains were blue" take.
EDIT^2:
Of course "Jagged intelligence" is also—stealthily?—believing in the "g-factor".
OK, I speed-read that thing earlier today, and am now reading it properly.
Here's how they describe this term, about 2000 words in:
So basically, this term is just pure hype, designed to play up the "intelligence" part of it, to suggest that "AI can be great". The article just boils down to "use AI for the things that we think it's good at, and don't use it for the things we think it's bad at!" As they say on the internet, completely unserious.
Demonstrably no.
Fuck right off.
Ah, yes, as we all know, the burden of proof lies on skeptics.
Again, fuck off.
Moving on...
vs
An LW-level analysis shows that the article spends 650 words on the skeptic's case and 889 on the believer's case. BIAS!!!!! /s.
Anyway, here are the skeptics quoted:
Great, now the believers:
You will never guess which two of these four are regular wrongers.
Note that the article only really has examples of the dumbass nature of LLMs. All the smart things it reportedly does are anecdotal, i.e. the author just says shit like "AI can solve some really complex problems!" Yet it still has the gall to both-sides this and suggest we've boiled the oceans for something more than a simulated idiot.
Humans have bouba intelligence, computers have kiki intelligence. This makes so much more sense than considering how a chatbot actually works.
But if Bouba is supposed to be better, why is "smooth-brained" used as an insult? Checkmate, Inbasilifidelists!
you can't make me do anything
my brain is too smooth, smoothest there is
your prompt injection slides right off
people knotting themselves into a pretzel to avoid recognising that they've been deeply and thoroughly conned for years
I love how thoroughly inconcrete that suggestion is. supes a great answer for this thing we're supposed to be putting all of society on
it's also a hell of a trip to frame it as "believers" vs "skeptics". I get that it's vox and it's basically a captured mouthpiece, and that it's probably wildly insane to expect even scientism (much less an acknowledgement of science/evidence), but fucking hell
I'm thinking stupid and frustrating AI will become a plot device.
"But if I don't get the supplies I can't save the town!"
"Yeah, sorry, the AI still says no"
Sounds pretty likely to me. With how much frustration AI has given us, I expect comedians and storytellers alike will have plenty of material for that kinda shit.
Does vox do anything other than vomit hot garbage?
Ian Millhiser's reports on Supreme Court cases have been consistently good (unlike the Supreme Court itself). But Vox's reporting on anything touching TESCREAL seems pretty much captured.