[-] swlabr@awful.systems 7 points 1 day ago

I’ve heard a lot of influencer types are leaving twitter/going private over this. Even “spicy” accounts are leaving. I’ve heard some women say they might come back if they block this functionality but uh, bad news, that’s not how the tech works, and musk doesn’t give a shit about moderation anyway.

[-] swlabr@awful.systems 4 points 2 days ago

All good friendo

[-] swlabr@awful.systems 6 points 3 days ago

aw, well, i'm not precious about the term. All I meant was that if you look at someone's post history and they're a chud, that should inform how you read whatever they write.

[-] swlabr@awful.systems 6 points 3 days ago* (last edited 3 days ago)

My following response is a little rambly and unfocused, sorry!

I also hear the “everything is political” and “do your own research” lines from the absolute looniest cranks and conspiracists.

Yes, I acknowledge that you will hear this from them. What they mean can differ and usually is pretty extreme, e.g. "democrats are making the frogs gay with fluoride", "lizard people illuminati", or even "there's a war on Christmas" type shit. And when they say "do your own research", they don't mean "seek out a variety of sources and verifiable data", they mean "read the stuff that agrees with what I'm saying".

When I say that everything is political, I mean that at minimum, language is political, and because you need language to talk about anything, everything becomes political. How things are named skews perception; the most relevant example to us is AI. We know that there is no "intelligence" in an LLM, but does the public? etc. I'll admit that many might find this trivial, but I would counter that most of these strawmen are the same ones who are scared of pronouns and say they don't know what they are allowed to say in the workplace anymore.

And I generally agree with your second paragraph :) I don't think anyone here needs this reminder, but I'll note that an open mind means that you don't just reject everything new that comes to you; you at least look at it for a bit, see if it passes whatever metaphorical sniff tests you have, and then choose to toss it or engage further. I'm not saying everyone has a nefarious agenda they are trying to push; there are definitely spaces where people are attempting purely informational reporting.

And to bring it back to the original question. If you read something and it's not exactly within your purview, and you're not sure if it's being said in good faith, you should try to see what else the person has said, especially about things you know about.

E: redaction of fluff

[-] swlabr@awful.systems 3 points 3 days ago

I see. I guess I was thinking too abstractly about how a system like this might work.

[-] swlabr@awful.systems 6 points 3 days ago* (last edited 3 days ago)

Doesn't look interesting to me. NB I'm not a Swifty. If you're someone looking to make a compile-time dependency injection validation framework, cycle detection seems like an early feature to add, and an early unit test to write.

E: read response from BurgersMcSlopshot please :)
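For anyone unfamiliar with what "cycle detection" means for a DI framework: a dependency graph with a cycle (A needs B, B needs C, C needs A) can never be constructed, so a validator's first job is to find such loops. A minimal sketch (in Python rather than Swift, and purely illustrative — not the framework in question):

```python
def find_cycle(graph):
    """Return a list of nodes forming a cycle in `graph` (a dict mapping
    each node to the list of nodes it depends on), or None if acyclic."""
    visited = set()   # nodes fully explored, known cycle-free from here
    in_path = []      # nodes on the current depth-first search path

    def dfs(node):
        if node in in_path:
            # We walked back into the current path: that's a cycle.
            return in_path[in_path.index(node):] + [node]
        if node in visited:
            return None
        visited.add(node)
        in_path.append(node)
        for dep in graph.get(node, ()):
            cycle = dfs(dep)
            if cycle:
                return cycle
        in_path.pop()
        return None

    for start in graph:
        cycle = dfs(start)
        if cycle:
            return cycle
    return None
```

So `find_cycle({"A": ["B"], "B": ["C"], "C": ["A"]})` reports the A→B→C→A loop, while an acyclic graph returns `None`.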

[-] swlabr@awful.systems 9 points 4 days ago

*pig turned into a frog

[-] swlabr@awful.systems 4 points 4 days ago

Still the one baby

[-] swlabr@awful.systems 15 points 4 days ago

part of what helps is coming here and seeing the spectrum of chud output to inoculate yourself a little.

What REALLY helps is broadening your own knowledge and worldview, in the sense that when you realise that everything is political, you start asking yourself the meta questions. Like, what’s the agenda here, what’s not being said, etc. I mean, understanding author intention is already part of reading comprehension, it’s going a little further beyond the face-value meaning. As the memes say, you are not immune to propaganda.

[-] swlabr@awful.systems 9 points 6 days ago

Just confirming that none of what is described really approaches engineering.

11
submitted 1 month ago* (last edited 1 month ago) by swlabr@awful.systems to c/techtakes@awful.systems

Thought this essay had some interesting things to say. It speaks directly to the existence of tech takes overall, specifically those coming from the “oligarch-intellectuals”. Tried to quote some things to give an overview:

There is a certain disorienting thrill in witnessing, over the past few years, the profusion of bold, often baffling, occasionally horrifying ideas pouring from the ranks of America’s tech elite.

To write off these founders and executives as mere showmen—more “public offering” than “public intellectual”—would be a misreading. For one, they manufacture ideas with assembly-line efficiency: their blog posts, podcasts, and Substacks arrive with the subtlety of freight trains. And their “hot takes,” despite vulgar packaging, are often grounded in distinct philosophical traditions. Thus, what appears as intellectual fast food – the ultra-processed thought-nuggets deep fried in venture capital – often conceals wholesome ingredients sourced from a gourmet pantry of quite some sophistication.

Today, it’s increasingly clear that it’s the tech oligarchs — not their algorithmically-steered platforms—who present the greater danger. Their arsenal combines three deadly implements: plutocratic gravity (fortunes so vast they distort reality’s basic physics), oracular authority (their technological visions treated as inevitable prophecy), and platform sovereignty (ownership of the digital intersections where society’s conversation unfolds). Musk’s takeover of Twitter (now X), Andreessen’s strategic investments into Substack, Peter Thiel’s courting of Rumble, the conservative YouTube: they’ve colonized both the medium and the message, the system and the lifeworld.

E: this was linked closer to its original publish date here

16
submitted 2 months ago* (last edited 2 months ago) by swlabr@awful.systems to c/sneerclub@awful.systems

Peep the signatories lol.

Edit: based on some of the messages left, I think many, if not most, of these signatories are just generally opposed to AI usage (good) rather than the basilisk of it all. But yeah, there’s some good names in this.

29

Hi folks, another shitty story from the slop-pocalypse ((AI-)slopalypse?).

Archive link

Article from billboard, archive

NB: I think this story is bullshit. I imagine some parts are true, but there's no concrete source given for the "$3 million" figure. So it's my speculation that this story is hype cooked up by Suno (the AI company enabling this all) and thrown at publishers for an easy headline. Also the human behind this has their name spelled differently in the two articles, so clearly some quality journalism is happening.

18

originally posted to the stubsack but it makes more sense as a top level post.

13
45
submitted 7 months ago* (last edited 7 months ago) by swlabr@awful.systems to c/techtakes@awful.systems

(Archive)

Tickled pink that BI has decided to platform the AI safety chuds. OFC, the more probable reason of “more dosh” gets mentioned, but most of the article is about how Anthropic is more receptive to addressing AI safety and alignment.

48

Burns said the driving force behind the Runway deal was to allow filmmakers to “make movies and television shows we’d otherwise never make. We can’t make it for $100 million, but we’d make it for $50 million because of AI… We’re banging around the art of the possible. Let’s try some stuff, see what sticks.”

read: "I huffed my own farts and passed out. This gave me a dream where we made a film via promptfondling. I decided that I'll make a press release with made up numbers based on that dream."

As reported by New York Magazine: “With a library as large as Lionsgate’s, they could use Runway to repackage and resell what the studio already owned, adjusting tone, format and rating to generate a softer cut for a younger audience or convert a live-action film into a cartoon.”

read: "There's no need to do requels like disney does. The serfs will gobble the slop and they'll like it. After all, why risk creating new jobs or any creative output when we could just melt the ice caps instead?"

As for another example of how the studio can use AI, Burns said to consider this scenario: “We have this movie we’re trying to decide whether to green-light. There’s a 10-second shot — 10,000 soldiers on a hillside with a bunch of horses in a snowstorm.” Using Runway’s AI technology, the studio can avoid a pricey film shoot that would cost millions and take a few days, and instead use AI to create the shot for about $10,000.

read: "Here's a bottle of my farts. Smell it. Feeling dizzy? Good. Now imagine a scenario where you're looking at your bank account, and instead of number go down, number go up. Isn't that nice? Have another whiff."

56
submitted 7 months ago* (last edited 7 months ago) by swlabr@awful.systems to c/techtakes@awful.systems

Take that, Saltman! Bet you never thought it was possible!

28
submitted 8 months ago* (last edited 8 months ago) by swlabr@awful.systems to c/techtakes@awful.systems

Original Title: Elizabeth Holmes’s Partner Has a New Blood-Testing Start-Up

Billy Evans has two children with the Theranos founder, who is in prison for fraud. He’s now trying to raise money for a testing company that promises “human health optimization.”

Original link: https://www.nytimes.com/2025/05/10/business/elizabeth-holmes-partner-blood-testing-startup.html

11

Original NYT title: Billionaire Airbnb Co-Founder Is Said to Take Role in Musk’s Government Initiative

18
submitted 11 months ago* (last edited 11 months ago) by swlabr@awful.systems to c/techtakes@awful.systems

Original link

OFC if there were any real sense or justice in the world, LLMs would be banned outright.

132
[-] swlabr@awful.systems 78 points 1 year ago

A wallpaper app? What is this, 2008?

[-] swlabr@awful.systems 86 points 1 year ago

LLMs, and everyone who uses them to process information:
