ya know the Nicole the Fediverse Chick spam? this poster thinks it's a revenge Joe job:
Nice detective work. The second time I got one of those I figured it resembles the false flag channel ad spammers on IRC. I still wonder occasionally what #superbowl at supermets did for someone to go on a multi-year spamming campaign against them.
TV Tropes got an official app, featuring an AI "story generator". Unsurprisingly, backlash was swift, to the point where the admins were promising to nuke it "if we see that users don't find the story generator helpful".
Razer claims that its AI can identify 20 to 25 percent more bugs compared to manual testing, which it says can reduce QA time by up to 50 percent and deliver cost savings of up to 40 percent
as usual this is probably going to catch only the simplest shit, and I don’t even want to think about what the secondary downstream impacts of just acting on this shit without thought will be
If I had to judge Razer’s software quality based on what little I know about them, I’d probably raise my eyebrows, because they ship an insane 600+ MiB driver with a significant memory footprint alongside their mice and keyboards, required just to use basic features like DPI buttons and LED settings, when the alternative is a 900 kiB open source driver that provides essentially the same functionality.
And now their answer to optimization is to staple a chatbot onto their software? I think I'll pass.
Here's my audio/video dispatch about framing tech through conservation of energy to kill the magical thinking of generative AI and the like.
podcast ep: https://pnc.st/s/faster-and-worse/968a91dd/kill-magic-thinking
video ep: https://www.youtube.com/watch?v=NLHmtYWzHz8
Roundup of the current bot scourge hammering open source projects
https://thelibre.news/foss-infrastructure-is-under-attack-by-ai-companies/
We can add that to the list of things threatening to bring FOSS as a whole crashing down.
Plus the culture being utterly rancid, the large-scale AI plagiarism, the decline of the industry surplus FOSS has taken for granted, Richard Stallman tainting the whole movement by association, the likely-tanking popularity of FOSS licenses, AI being a general cancer on open source, and probably a bunch of other things I've failed to recognise or make note of.
FOSS culture being a dumpster fire is probably the biggest long-term issue - fixing that requires enough people within the FOSS community to recognise they're in a dumpster fire, and care about developing the distinctly non-technical skills necessary to un-fuck the dumpster fire.
AI's gonna be the more immediately pressing issue, of course - it's damaging the commons by merely existing.
oh would you look at that, something some people made proved helpful and good, and now cloudflare is immediately taking the idea to deploy en masse with no attribution
double whammy: every one of the people highlighted is a dude
"it's an original idea! we're totes doing the novel thing of model synthesis to defeat them! so new!" I'm sure someone will bleat, but I want them to walk into a dark cave and shout at the wall forever.
(anubis isn't strictly in the same category, but I link it both for completeness and for subject relevance)
https://github.com/TecharoHQ/anubis/issues/50 and of course we already have chatgpt's friends on the case of stopping the mean programmer from doing something the Machine doesn't like. This person doesn't even seem to understand what anubis does, but they certainly seem confident chatgpt can tell them.
New piece from Brian Merchant: DOGE's 'AI-first' strategist is now the head of technology at the Department of Labor, which is about...well, exactly what it says on the tin. Gonna pull out a random paragraph which caught my eye, and spin a sidenote from it:
“I think in the name of automating data, what will actually end up happening is that you cut out the enforcement piece,” Blanc tells me. “That's much easier to do in the process of moving to an AI-based system than it would be just to unilaterally declare these standards to be moot. Since the AI and algorithms are opaque, it gives huge leeway for bad actors to impose policy changes under the guise of supposedly neutral technological improvements.”
How well Musk and co. can impose those policy changes is gonna depend on how well they can paint them as "improving efficiency" or "politically neutral" or some random claptrap like that. Between Musk's own crippling incompetence, AI's utterly rancid public image, and a variety of factors I likely haven't considered, imposing them will likely prove harder than they thought.
(I'd also like to recommend James Allen-Robertson's "Devs and the Culture of Tech", which goes deep into the philosophical and ideological factors behind this current technofash-stravaganza.)
TechTakes
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community