On a semi-related note, I suspect we're gonna see a pushback against automation in general at some point, especially in places already plagued by "shitty automation".
In other news, Brian Merchant's going full-time on Blood in the Machine.
Did notice a passage in the announcement which caught my eye:
Meanwhile, the Valley has doubled down on a grow-at-all-costs approach to AI, sinking hundreds of billions into a technology that will automate millions of jobs if it works, might kneecap the economy if it doesn’t, and will coat the internet in slop and misinformation either way.
I'm not sure if it's just me, but it strikes me as telling about how AI's changed the cultural zeitgeist that Merchant can happily present automation as a bad thing without getting backlash (at least in this context).
To reference a previous sidenote, DeepSeek gives corps and randos a means to shove an LLM into their shit for dirt-cheap, so I expect it's gonna blow up in popularity.
But Apple Intelligence has its good points. “I find the best thing about Apple intelligence is that since I haven’t enabled it, my phone optimized for onboard AI has incredible battery life,” responded another Bluesky user. [Bluesky]
Y'know, if Apple had simply removed the AI altogether and gone with that as a marketing point, people would probably buy more iPhones.
At the bare minimum, AI wouldn't be actively driving people away from buying them.
Here's a better idea - treat anything from ChatGPT as a lie, even if it offers sources.
On the one hand, Google's still the dominant search engine, having used every dirty trick in the book to reach that position and maintain it. If you aren't on Google, you arguably might as well not exist.
On the other hand, Google's already under heavy scrutiny after being officially declared an illegal monopoly, and the public is pissed about how Google's search quality has declined - deliberately so.
Part of me says we're about to see some truly wild shit go down.
I've already seen people go absolutely fucking crazy with this - from trans-supportive Muskrat pictures to fucked-up images of Nintendo/Disney characters, the utter lack of guardrails has led to predictable chaos.
Between the cost of running an LLM and the potential lawsuits this can unleash, part of me suspects this might end up being what ultimately does in Twitter.
AI companies work around this by paying human classifiers in poor but English-speaking countries to generate new data. Since the classifiers are poor but not stupid, they augment their low piecework income by using other AIs to do the training for them. See, AIs can save workers labor after all.
On the one hand, you'd think the AI companies would check to make sure their classifiers aren't using AI themselves and damaging their models.
On the other hand, AI companies are being run by some of the laziest and stupidest people alive, and are probably just rubber-stamping everything no matter how blatantly garbage the results are.
The good news is I barely use Protonmail (or email at all, for that matter).
The bad news is I have a fucking Proton account. Fuck.
Do you really think "cult" is a useful category/descriptor here?
My view: things identified as "cults" have a bunch of good traits. EA should, where possible, adopt the good traits and reject the bad ones, and ignore whether they're associated with the label "cult" or not.
New piece from Brian Merchant: 'AI is in its empire era'
Recently finished it; here's a personal sidenote:
This AI bubble's done a pretty good job of destroying the "apolitical" image that tech's done so much to build up (Silicon Valley jumping into bed with Trump definitely helped, too) - as a matter of fact, it's provided plenty of material to build an image of tech as a Nazi bar writ large (once again, SV's relationship with Trump did wonders here).
By the time this decade ends, I anticipate tech's public image will be firmly in the toilet, viewed as an unmitigated blight on all our daily lives at best and as an unofficial arm of the Fourth Reich at worst.
As for AI itself, I expect its image will go into the shitter as well - assuming the bubble's burst doesn't destroy AI as a concept like I anticipate, it'll probably be viewed as a tech with no ethical use, a tech built first and foremost to enable and perpetrate atrocities to its wielder's heart's content.