submitted 1 month ago* (last edited 1 month ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

(This is basically an expanded version of a comment on the weekly Stubsack - I've linked it above for convenience's sake.)

This is pure gut instinct, but I’m starting to get the feeling this AI bubble’s gonna destroy the concept of artificial intelligence as we know it.

On the artistic front, there's the tidal wave of AI-generated slop (which I've come to term "the slop-nami") drowning the Internet in zero-effort garbage - interesting only when the art's utterly insane or its prompter gets publicly humiliated, and, to quote Line Goes Up, "derivative, lazy, ugly, hollow, and boring" the other 99% of the time.

(And all while the AI industry steals artists' work, destroys their livelihoods and shamelessly mocks their victims throughout.)

On the "intelligence" front, the bubble's given us public and spectacular failures of reasoning/logic like Google gluing pizza and eating onions, ChatGPT sucking at chess and briefly losing its shit, and so much more - even in the absence of formal proof LLMs can't reason, its not hard to conclude they're far from intelligent.

All of this is, of course, happening whilst the tech industry as a whole is hyping the ever-loving FUCK out of AI, breathlessly praising its supposed creativity/intelligence/brilliance and relentlessly claiming that they're on the cusp of AGI/superintelligence/whatever-the-fuck-they're-calling-it-right-now, they just need to raise a few more billion dollars and boil a few more hundred lakes and kill a few more hundred species and enable a few more months of SEO and scams and spam and slop and soulless shameless scum-sucking shitbags senselessly shitting over everything that was good about the Internet.


The public's collective consciousness was ready for a lot of futures regarding AI - a future where it took everyone's jobs, a future where it started the apocalypse, a future where it brought about utopia, et cetera. But a future where AI ruins everything by being utterly, fundamentally incompetent, like the one we're living in now?

That's a future the public was not ready for - sci-fi writers hadn't played much with the idea of "incompetent AI ruins everything" (Paranoia is the only example I know of), and the tech press wasn't gonna run stories about AI's faults until they became unignorable (like that lawyer who got in trouble for taking ChatGPT at its word).

Now, of course, the public's had plenty of time to let the reality of this current AI bubble sink in, to watch as the AI industry tries and fails to fix the unfixable hallucination issue, to watch the likes of CrAIyon and Midjourney continually fail to produce anything even remotely worth the effort of typing out a prompt, to watch AI creep into and enshittify every waking aspect of their lives as their bosses and higher-ups buy the hype hook, line and fucking sinker.


All this, I feel, has built an image of AI as inherently incapable of humanlike intelligence/creativity (let alone Superintelligence^tm^), no matter how many server farms you build or oceans of water you boil.

Especially so on the creativity front - publicly rejecting AI, as Procreate and Schoolism did, earns you an instant standing ovation, whilst openly shilling it (like PC Gamer or The Bookseller) or showcasing it (like Justine Moore, Proper Prompter or Luma Labs) gets you publicly and relentlessly lambasted. To quote Baldur Bjarnason, the “E-number additive, but for creative work” connotation of “AI” is more-or-less a permanent fixture in the public’s mind.

I don't have any pithy quote to wrap this up, but to take a shot in the dark, I expect we're gonna see a particularly long and harsh AI winter once the bubble bursts - one fueled not only by disappointment in the failures of LLMs, but also by widespread public outrage at the massive damage the bubble inflicted, with AI funding facing heavy scrutiny as the public comes to treat any research into the field as potentially malicious in intent.

Masonicon@awful.systems 2 points 3 weeks ago* (last edited 3 weeks ago)

apparently, the only way AIs can have human-level intelligence and creativity (if not superintelligence) is to use human brains wired together, photonics, quantum computing, or some combination thereof, instead of conventional silicon chips, for processing data
