[-] theluddite@lemmy.ml 21 points 15 hours ago* (last edited 15 hours ago)

I just want to emphasize that to set up a truly independent and unpaywalled piece of media, you probably need to abandon hope of it being even a viable side hustle. Quasi-independent media on, say, YouTube or Substack can make some money, but you're then stuck on those corporate platforms. If you want to do your own website or podcast or whatever, that's more independent, but you're still dependent on Google if you run ads, or on Patreon if you do that sort of thing. The lesson of Twitter should make pretty clear the danger inherent to that ecosystem. Even podcasts that seem independent can easily get into huge trouble if, say, Musk were to buy Patreon or iHeart.

I've been writing on my website for over two years now. My goal has always been to be completely independent of these kinds of platforms for the long term, no matter what, and the site's popularity has frankly exceeded my wildest dreams. For example, I'm the #1 Google result for "anticapitalist tech":

[Screenshot of the Google results]

But I make no money. If I wanted this to be anything but a hobby, I'd have to sacrifice something that I think makes it valuable: I'd have to paywall something, or run ads, or have a paid Discord server, or restrict the RSS feed. As things stand now, I don't know my exact conversion rate because I don't do any analytics and delete all web logs after a week, but I did keep the web logs from the most recent time that I went viral (top of Hacker News and several big subreddits). I made something like 100 USD in tips, even though the web logs have millions of unique IPs. That's a conversion rate of something like 0.00002 USD per unique visitor.
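For what it's worth, the back-of-the-envelope math checks out; here's a sketch of the arithmetic, assuming 5 million unique IPs (the post only says "millions"):

```python
# Rough conversion-rate arithmetic from the post.
# The 5,000,000 unique-visitor figure is an assumption;
# the post only says the logs contained "millions" of unique IPs.
tips_usd = 100
unique_visitors = 5_000_000

usd_per_visitor = tips_usd / unique_visitors
print(f"{usd_per_visitor:.5f} USD per unique visitor")  # prints: 0.00002 USD per unique visitor
```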

Honestly, if I got paid even $15/hr, I would probably switch to doing it at least as a part-time job, because I love it. Compare that to the right wing ecosystem, where there's fracking money and Thiel money just sloshing around, and it's very, very obvious why Democrats are fucked, much less an actual, meaningful left. Even Thiel himself was a right wing weirdo before he was a tech investor, and a right wing think tank funded his anti-DEI book. He then went on to fund Vance. It's really hard to fight that propaganda machine part time.

[-] theluddite@lemmy.ml 8 points 1 day ago* (last edited 1 day ago)

I'm probably going to get shit for this here, but you have to meet people where they are. If elections are where they are, that's where you have to go. The best way to get people to work with you will always be working with them first. That's going to involve doing shit that you don't want to do. In the same way that a good teacher in school is in a two-way relationship with students, effective organizers don't organize at people, but build meaningful and mutual relationships with them. People will open up to you when they feel you're open to them.

So my advice is to join the DSA work, do their elections, but take it upon yourself to keep up the organizational momentum once the election is over and work on something else. Yes, you're going to have to canvass for some shitty Democrat, but, if you knock enough doors, you'll really learn the situation on the ground where you live, and you can roll that over, hopefully with a few friends. If your personal philosophy doesn't let you compromise enough to go that route, so be it, but that's what I'd do.

[-] theluddite@lemmy.ml 2 points 3 days ago

Jesus yeah that's a great point re:Musk/Twitter. I'm not sure that it's true as you wrote it quite yet, but I would definitely agree that it's, at the very least, an excellent prediction. It might very well be functionally true already as a matter of political economy, but it hasn't been tested yet by a sufficiently big movement or financial crisis or whatever.

+1 to everything that you said about organizing. It seems that we're coming to the same realization that many 19th century socialists already had. There are no shortcuts to building power, and that includes going viral on Twitter.

I've told this story on the fediverse before, but I have this memory from occupy of when a large news network interviewed my friend, an economist, but only used a few seconds of that interview, but did air the entirety of an interview with a guy who was obviously unwell and probably homeless. Like you, it took me a while after occupy to really unpack in my head what had happened in general, and I often think on that moment as an important microcosm. Not only was it grossly exploitative, but it is actually good that the occupy camps welcomed and fed people like him. That is how our society ought to work. To have it used as a cudgel to delegitimize the entire camp was cynical beyond my comprehension at the time. To this day, I think about that moment to sorta tune the cynicism of the reaction, even to such a frankly ineffectual and disorganized threat as occupy. A meaningful challenge to power had better be ready for one hell of a reaction.

[-] theluddite@lemmy.ml 3 points 3 days ago

Same, and thanks! We're probably a similar age. My own political awakening was occupy, and I got interested in theory as I participated in more and more protest movements that just sorta fizzled.

I 100% agree re:Twitter. I am so tired of people pointing out that it has lost 80% of its value or whatever. Once you have a few billion, there's nothing that more money can do to your material circumstances. Don't get me wrong, Musk is a dumbass, but, in this specific case, I actually think that he came out on top. That says more about what you can do with infinite money than anything about his tactical genius, because it doesn't exactly take the biggest brain to decide that you should buy something that seems important.

[-] theluddite@lemmy.ml 2 points 3 days ago

I actually also reviewed that one, except my review of it was extremely favorable. I'm so glad that you read it and I'd welcome your thoughts on my very friendly amendment to his analysis if you end up reading that post.

8
submitted 3 days ago by theluddite@lemmy.ml to c/technology@lemmy.ml

#HashtagActivism is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context, but its analysis overlooks that Twitter is actually a heavy-handed intermediary. It imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

2
submitted 1 week ago* (last edited 1 week ago) by theluddite@lemmy.ml to c/luddite@lemmy.ml

The book "#HashtagActivism" is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context. But the book overlooks that Twitter is actually a heavy-handed intermediary. Twitter imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

2
submitted 1 month ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

Regulating tech is hard, in part because computers can do so many things. This makes them useful but also complicated. Companies hide in that complexity, rendering undesirable behavior illegible to regulation: Regulating tech becomes regulating unlicensed taxis, mass surveillance, illegal hotels, social media, etc.

If we actually want accountable tech, I argue that we should focus on the tech itself, not its downstream consequences. Here's my (non-environmental) case for rationing computation.

5
Capture Platforms (theluddite.org)
submitted 3 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

Until recently, platforms like Tinder and Uber couldn't exist. They need the intimate data that only mobile devices can provide, which they use to mediate human relationships. They never own anything. In some ways, this simplifies their task, because owning things is hard, but human activities are complicated, making them illegible to computers. As tech companies become more powerful and push deeper into our lives, here's a post about that tension and its consequences.

[-] theluddite@lemmy.ml 113 points 3 months ago* (last edited 3 months ago)

Investment giant Goldman Sachs published a research paper

Goldman Sachs researchers also say that

It's not a research paper; it's a report. They're not researchers; they're analysts at a bank. This may seem like a nit-pick, but journalists need to (re-)learn to carefully distinguish between the thing that scientists do and corporate R&D, even though we sometimes use the word "research" for both. The AI hype in particular has been absolutely terrible for this. Companies have learned that putting out AI "research" that's just them poking at their own product but dressed up in a science-lookin' paper leads to an avalanche of free press from lazy credulous morons gorging themselves on the hype. I've written about this problem a lot. For example, in this post, which is about how Google wrote a so-called paper about how their LLM performs compared to doctors, only for the press to uncritically repeat (and embellish on) the results all over the internet. Had anyone in the press actually fucking bothered to read the paper critically, they would've noticed that it's junk science.

5
submitted 4 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml
8
submitted 5 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

I've seen a few articles like this one from Futurism: "CEOs Could Easily Be Replaced With AI, Experts Argue." I totally get the appeal, but these articles are more anti-labor than anti-CEO. Because CEOs can't actually be disciplined with threats of automation, these articles further entrench an inherently anti-labor logic, telling readers that losing our livelihoods to automation is part of some natural order, rather than the result of political decisions that benefit capital.

8
submitted 5 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

Lots of skeptics are writing lots of good things about the AI hype, but so far, I've encountered relatively few attempts to explain why it's happening at all. Here's my contribution, mostly based on Philip Agre's work on the (so-called) internet revolution, which focuses less on the capabilities of the tech itself, as most in the mainstream did (and still do), and more on the role of a new technology in the ever-present and continuous renegotiation of power within human institutions.

12
submitted 6 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

The video opens with Rober standing in front of a fancy-looking box, saying:

Hiding inside this box is an absolute marvel of engineering you might just find protecting you the next time you're at a public event that's got a lot of people.

When he says "protecting you," the video momentarily cuts to stock footage of a packed sports stadium, the first of many "war on terror"-coded editorial decisions, before returning to the box, which opens and releases a drone. This is no ordinary drone, he says, but a particularly heavy and fast drone, designed to smash "bad guy drones trying to do bad guy things." He explains how "it's only a matter of time" before these bad guys' drones attack infrastructure "or worse," cutting to a photo of a stadium for the third time in just 30 seconds.

10
submitted 6 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml

In "If We Burn," Vincent Bevins recaps the mass protests of the 2010s. He argues that they're communicative acts, but power has no way of negotiating with or interpreting them. They're "illegible."

Here's a "yes and" to Bevins. I argue that social media companies have a detailed map of all protesters' connections, communications, topics of interests, locations, etc., such that, to them, there has never been a more legible form of social organization, giving them too much power over ostensibly leaderless movements.

I also want to plug Bevins's book, independently of my post. It's extremely well researched. For many of the things that he describes, he was there, and he productively challenges many core values of the movements in which I, and many others probably reading this, have participated.

4
submitted 7 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml
5
submitted 7 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml
4
submitted 8 months ago by theluddite@lemmy.ml to c/luddite@lemmy.ml
[-] theluddite@lemmy.ml 207 points 8 months ago* (last edited 8 months ago)

I cannot handle the fucking irony of that article being in Nature, one of the organizations most responsible for fucking it up in the first place. Nature is a peer-reviewed journal that charges people thousands upon thousands of dollars to publish (that's right, charges, not pays), asks peer reviewers to volunteer their time, and then charges the very institutions that produced the knowledge exorbitant rents to access it. It's all upside. Because they're the most prestigious journal (or maybe one of two or three), they can charge rent on that prestige, then leverage it to buy and start other subsidiary journals. Now they have this beast of an academic publishing empire that is a complete fucking mess.

[-] theluddite@lemmy.ml 169 points 10 months ago

You can tell that technology is advancing rapidly because now you can type short-form text on the internet and everybody can read it. Truly innovative stuff.

[-] theluddite@lemmy.ml 149 points 11 months ago* (last edited 11 months ago)

Gen Zers are increasingly looking for ways to prioritize quality of life over financial achievement at all costs. The TikTok trend of “soft life”—and its financial counterpart “soft saving”—is a stark departure from their millennial predecessors’ financial habits, which were rooted in toxic hustle culture and the “Girlboss” era.

"Soft savings" is, to my understanding, the opposite of savings -- it's about investing resources into making yourself happy now versus forever growing your savings for some future good time. It sounds ridiculous because they're hitting on good critiques of capitailsm, but using the language of capitalism itself.

I think this really bolsters my argument that the self-diagnosis trend might be better understood as young people being critical of society, but their education system completely failed them. Since they lack access to critical, social, and political theory, they don't have a vocabulary to express their critiques, so they've used the things we have taught them, like the language of mental health, to sorta make up their own critical theory. When mental health experts are super concerned and talk about how all these teens' self-diagnoses are "wrong," they're missing the point. It's a new theory using existing building blocks.

[-] theluddite@lemmy.ml 136 points 11 months ago

This is bad science at a very fundamental level.

Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management.

I've written about basically this before, but what this study actually did is that the researchers collapsed an extremely complex human situation into generating some text, and then reinterpreted the LLM's generated text as the LLM having taken an action in the real world, which is a ridiculous thing to do, because we know how LLMs work. They have no will. They are not AIs. It doesn't obtain tips or act upon them -- it generates text based on previous text. That's it. There's no need to put a black box around it and treat it like it's human while at the same time condensing human tasks into a game that LLMs can play and then pretending like those two things can reasonably coexist as concepts.

To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.

Part of being a good scientist is studying things that mean something. There's no formula for that. You can do a rigorous and very serious experiment figuring out how many cotton balls the average person can shove up their ass. As far as I know, you'd be the first person to study that, but it's a stupid thing to study.

[-] theluddite@lemmy.ml 140 points 1 year ago

"I gave an LLM a wildly oversimplified version of a complex human task and it did pretty well"

For how long will we be forced to endure different versions of the same article?

The study said 86.66% of the generated software systems were "executed flawlessly."

Like I said yesterday, in a post celebrating how ChatGPT can answer medical questions with less than 80% accuracy, that is trash. A company with absolute shit code still has virtually all of it "execute flawlessly." Whether or not code executes is not the bar by which we judge it.

Even if it were to hit 100%, which it does not, there's so much more to making things than this obviously oversimplified simulation of a tech company. Real engineering involves getting people in a room, managing stakeholders, navigating conflicting desires from different stakeholders, getting to know the human beings who need a problem solved, and so on.

LLMs are not capable of this kind of meaningful collaboration, despite all this hype.

[-] theluddite@lemmy.ml 118 points 1 year ago* (last edited 1 year ago)

I know this is just a meme, but I'm going to take the opportunity to talk about something I think is super interesting. Physicists didn't build the bomb (edit: nor were they particularly responsible for its design).

David Kaiser, an MIT professor who is both a physicist and a historian (aka the coolest guy possible) has done extensive research on this, and his work is particularly interesting because he has the expertise in all the relevant fields to dig through the archives.

It’s been a long time since I’ve read him, but he concludes that the physics was widely known outside of secret government operations, and the fundamental challenges to building an atomic bomb are engineering challenges – things like refining uranium or whatever. In other words, knowing that atoms have energy inside them which will be released if they are split was widely known, and it’s a very, very long engineering road from there to a bomb.

This cultural understanding that physicists working for the Manhattan Project built the bomb exists precisely because the engineering effort was so big and so difficult, while the physics was already so widely known internationally, that the government didn’t redact the physics part of the story. In other words, because people only read about physicists’ contributions to the bomb, and the government kept secret everything about the much larger engineering and manufacturing effort, we are left with this impression that a handful of basic scientists were the main, driving force in its creation.


theluddite

joined 1 year ago