Four Eyes Principle
(discuss.tchncs.de)
Except we didn't call all of that AI then, and it's silly to call it AI now. In chess, they're called "chess engines". They are highly specialized tools for analyzing chess positions. In medical imaging, that's called computer vision, which is a specific, well-studied field of computer science.
The problem with using the same meaningless term for everything is the precise issue you're describing: it associates specialized computer programs that solve specific tasks with the misapplication of the generative capabilities of LLMs to areas in which they have no business being applied.
chess engines are, and always have been, called AI. computer vision is, and always has been, AI.
the only reason you might think they're not is that during the most recent AI winter, when those technologies experienced a boom, researchers avoided terminology like "AI" when requesting funding and advertising their work — because of people like you, who had decided that they were the arbiters of what is and isn't intelligence.
turing once said that if we were to gather the meaning of intelligence from a gallup poll, the result would be patently absurd, and i agree.
but sure, computer vision and chess engines, the two most prominent use cases for AI and ML technologies, aren't actual artificial intelligence, because you said so. why? idk. i guess because we can do those things well now, and the moment we as a society understand something well, people start getting offended if you call it intelligence rather than computation. can't break the "i'm a special and unique snowflake" spell for people, god forbid…
There’s a whole history of people, both inside and outside the field, shifting the definition of AI to exclude any problem that had been the focus of AI research as soon as it’s solved.
Bertram Raphael said “AI is a collective name for problems which we do not yet know how to solve properly by computer.”
Pamela McCorduck wrote “it’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, but that’s not thinking” (Page 204 in Machines Who Think).
In Gödel, Escher, Bach: An Eternal Golden Braid, Douglas Hofstadter named “AI is whatever hasn’t been done yet” Tesler’s Theorem (crediting Larry Tesler).
https://praxtime.com/2016/06/09/agi-means-talking-computers/ reiterates the “AI is anything we don’t yet understand” point, but also touches on one reason why LLMs are still considered AI - because in fiction, talking computers were AI.
The author also quotes Jeff Hawkins’ book On Intelligence:
Another reason why LLMs are still considered AI, in my opinion, is that we still don't understand how they work: by that I of course mean that LLMs have emergent capabilities we don't understand, not that we don't understand how the technology itself works.
We absolutely did call it "AI" then. The same applies to chess engines when they were being researched.
Machine Learning is the general field, and I think that if we weren't wrapped up in the AI hype, we could be training models to do important things like diagnosing disease, instead of writing shitty code or generating fantasy artwork.
We are. Why do you think we stopped?