[-] BlueMonday1984@awful.systems 9 points 4 hours ago

I genuinely thought therapists were gonna avoid the psychosis-inducing suicide machine after seeing it cause psychosis and suicide. Clearly, I was being too optimistic.

[-] BlueMonday1984@awful.systems 8 points 15 hours ago

Starting this Stubsack off, I found a Substack post titled "Generative AI could have had a place in the arts", which attempts to play devil's advocate for the plagiarism-fueled slop machines.

Pointing to one particular lowlight, the author attempts to conflate AI with actually useful tech to try and make an argument:

While the idea of generative AI “democratizing” art is more or less a meme these days, there are in fact AI tools that do make certain artforms more accessible to low-budget productions. The first thing to come to mind is how computer vision-based motion capture give 3D animators access to clearer motion capture data from a live-action actor using as little as a smartphone camera and without requiring expensive mo-cap suits.


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[-] BlueMonday1984@awful.systems 4 points 23 hours ago

A nice and lengthy sneer against AI in academia came out a couple of days ago - I highly recommend reading it.

CNBC's Squawk Box must have had space to fill, so they invited Ed on. What’s the Saatchi vision of the future of AI in cinema? [YouTube, 3:50 on]

It’s, you know, potentially the end of human creativity.

If a villain in some children's cartoon boasted about trying to destroy human creativity, I'd have thought they were over-the-top and unrealistic. We truly do live in the stupidest timeline.

I was writing this whilst seriously mad about AI's continued harms, so I do not blame you for finding this overdramatic. This bubble is testing my patience.

[-] BlueMonday1984@awful.systems 5 points 2 days ago

Adding mountains of insult to injury, the AI bubble has managed to threaten humanity in practically every way possible other than the sci-fi killbots Yud and co. doomsayed about.

Accelerating the climate crisis, threatening livelihoods, destroying drinkable water supplies, driving people to suicide and psychosis, empowering fascism and bigotry, destroying hard-earned skills, flooding the world with lies and falsehoods, setting up an impending economic disaster - all of it has done horrendous damage to civilization as we know it, in ways we may never recover from.

If AI does end up killing all of humanity, it will be by being the exact opposite of the superintelligent robo-Satan that Yudkowsky ranted and raved about, by being a carbon-belching water-guzzling almighty idiot built through razing the commons to dust, looting the economies of the world and stealing everything there was, everything there is, and everything there ever will be.

To huff a hefty degree of copium, creating AI is one mistake humanity isn't gonna repeat - if humanity manages to survive this, it's all but certain artificial intelligence will die in the incoming AI winter, consigned to the dustbin of history as something which cannot be created, and which should not be created.

[-] BlueMonday1984@awful.systems 6 points 2 days ago

And if you want to avoid that, you have to use a coin with some form of automatic money laundering built in (e.g. Monero), which brings its own legal problems for you and everyone working with you.

[-] BlueMonday1984@awful.systems 9 points 3 days ago

Yet another lawsuit has hit Midjourney, this time coming from Warner Bros Discovery.

[-] BlueMonday1984@awful.systems 11 points 3 days ago

Same here. The world's been forced to deal with these promptfucks ruining everything they touch for literal years at this point, some degree of schadenfreude at their expense was sorely fucking needed.

[-] BlueMonday1984@awful.systems 6 points 4 days ago

I know Elgato do a collapsible greenscreen, but that's the only one coming to mind.

[-] BlueMonday1984@awful.systems 13 points 4 days ago

Heartwarming: There Actually Is Justice In This World

Pure, unfiltered schadenfreude for today - this was a very fun read.


New blog entry from Baldur, comparing the Icelandic banking bubble and its fallout to the current AI bubble and its ongoing effects.


(This is an expanded version of a comment I made, which I've linked above.)

Well, it seems the tech industry’s prepared to pivot to quantum if and when AI finally dies and goes away forever. When the hucksters get around to inflating the quantum bubble, I expect they’re gonna find themselves facing some degree of public resistance - probably not to the extent of what AI received, but still enough to give them some trouble.

The Encryption Issue

One of quantum’s big selling points is its purported ability to break the encryption algorithms in use today - for a couple of examples, Shor’s algorithm can reportedly double-tap public-key cryptography schemes such as RSA, and Grover’s algorithm promises to supercharge brute-force attacks on symmetric-key cryptography.
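To put rough numbers on the Grover claim (a back-of-the-envelope sketch, not a statement about any real machine): Grover's search finds a needle among N possibilities in roughly sqrt(N) quantum queries, which cuts a brute-force attack on a k-bit key from ~2^k operations to ~2^(k/2).

```python
# Back-of-the-envelope: Grover's algorithm searches N candidates in ~sqrt(N)
# quantum queries, so brute-forcing a k-bit symmetric key drops from ~2^k
# operations to ~2^(k/2) -- the effective security level is halved.
def effective_bits_under_grover(key_bits: int) -> int:
    return key_bits // 2

for k in (128, 256):
    print(f"{k}-bit key: ~2^{k} ops classically, ~2^{effective_bits_under_grover(k)} with Grover")
```

(Which is why the usual post-quantum advice for symmetric crypto is just "double the key length" - it's the public-key schemes Shor targets that need wholesale replacement.)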

Given this, I fully expect its supposed encryption-breaking abilities to stoke outcry and resistance from privacy rights groups. Even as a hypothetical, the possibility of such power falling into government hands is one that all-but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy if it comes to pass.

Additionally, I expect post-quantum encryption will earn a lot of attention during the bubble as well, to pre-emptively undermine such attempts at mass surveillance.

Environmental Concerns

Much like with AI, info on how much power quantum computing requires is pretty scarce (though that’s because practical quantum computers more-or-less don’t exist, not because quantum corps are actively hiding/juicing the numbers like the AI corps do).

The only concrete number I could find came from IEEE Spectrum, which puts the power consumption of the D-Wave 2X (from 2015) at “slightly less than 25 kilowatts”, with practically all the power going to the refrigeration unit keeping it within a hair’s breadth of absolute zero, and the processor itself using “a tiny fraction of a microwatt”.
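For a sense of how lopsided that split is, here's some rough arithmetic using the figures above - generously rounding the processor's "tiny fraction of a microwatt" all the way up to a full microwatt:

```python
# Rough ratio of refrigeration draw to processor draw for the D-Wave 2X,
# using IEEE Spectrum's figures: ~25 kW for the fridge vs. under a microwatt
# for the chip itself (rounded UP to 1 uW here, as a generous assumption).
fridge_watts = 25_000.0
processor_watts = 1e-6  # generous upper bound
ratio = fridge_watts / processor_watts
print(f"refrigeration : processor ≈ {ratio:.1e} : 1")  # ~2.5e10 : 1
```

In other words, essentially the machine's entire footprint is the cryogenics - ten orders of magnitude over the computation itself.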

Given the minimal amount of info, and the AI bubble still being fresh in the public’s mind, I expect quantum systems will face resistance from environmental groups. Between the obscene power/water consumption of AI datacentres, the shitload of pollution said datacentres cause in places like Memphis, and the industry’s attempts to increase said consumption whenever possible, any notion that tech cares about the environment is dead in the (polluted) water, and attempts to sell the tech as energy efficient/environmentally friendly will likely fall on deaf ears.


submitted 1 week ago* (last edited 1 week ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.

Mental Health Crises

With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.

On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).

If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. For psychiatrists/therapists, I expect they will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.

Skills Gaps Galore

One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.

With how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and damages its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case. Those who embrace and use AI will be left behind, their skills rotting away as their AI-rejecting peers remain as skilled as before the bubble, if not more so thanks to spending their time and energy on actually useful skills rather than shit like “prompt engineering” or “vibe coding”.

Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.


Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it has got me thinking how to further build on her ideas.

This contains a quick overview of her newsletter to get you up to speed, but I recommend reading it for yourself.

The Problem

As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:

Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.

As for the cause, Iris points to the "teach the mystic incantations" style used in many programming courses, which skips teaching students how to see through an engineer’s eyes (so to speak) and the ethic of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:

This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.

Meredith’s Idea

Meredith’s solution, in brief, is threefold.

First, she recommends starting people off with HTML as their first language, giving students the tools they need to make something they want and care about (a personal website, in this case), and providing a solid bedrock for learning fundamental programming skills.

Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, to provide students an intuitive method of understanding basic concepts such as loops, conditionals, data structures and variables.

(As another awful.systems member points out, static sites provide an easy introduction to performance considerations/profiling by being blazing fast compared to the all-too-common JS monoliths online, and provide a good starting point for introducing modularity as well.)

Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.
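To make the second recommendation concrete, here's a minimal sketch (toy code, not any particular SSG or engine) of the kind of template loop a static site generator hands students - plain data structures in, HTML out:

```python
# A toy "templating" step: render an index page from a list of posts.
# Real engines (Jinja, Liquid, etc.) dress this up in nicer syntax, but the
# underlying ideas are exactly the ones Meredith lists: variables, loops,
# conditionals and data structures.
posts = [
    {"title": "My first post", "draft": False},
    {"title": "Unfinished rant", "draft": True},
]

def render_index(posts):
    items = [
        f"  <li>{p['title']}</li>"
        for p in posts
        if not p["draft"]  # conditional: drafts stay unpublished
    ]
    return "<ul>\n" + "\n".join(items) + "\n</ul>"

print(render_index(posts))
```

A student who can read that loop has already met most of what a first "real" programming course front-loads, and met it while building something they actually wanted to exist.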

A Complementary Idea

[basic idea: teach art alongside coding, to flex students’ creative muscles]

Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.

If I were to expand on this a bit, I think the obvious addition would be an arts education to complement Iris’ proposed webdev-based approach.

An arts education would wonderfully complement the expressive elements of software Meredith wishes to highlight - with the focus on webdev, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.

The skills students learn through the arts would complement what they learn directly in programming, too. The critical eye that art critique grants them will come in handy for code review, the creative muscles they build through art will enhance their problem-solving abilities, and so on.

Beyond that, I expect the complementary arts education will do a good job of attracting creatives to the field, whilst pushing away the “people who find no joy in anything and just want a big salary” that Meredith notes are common in it. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.

A More Outlandish Idea

For a more outlandish idea, the long-defunct yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially alongside the complementary arts education I suggested before.

Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.

Deploying Flash content to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash Player. (Rather helpful, given most platforms don’t accept Flash content these days :P)


Another excellent piece from Iris Meredith - strongly recommend reading if you want an idea of how to un-fuck software as a field.



Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech enjoyed a robust social license to operate - even if they weren’t loved per se (e.g. Apple), they were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.

Whilst it was already starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism - all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance to and revulsion against AI, and the tech industry at large.

For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”

For a while, this has held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ‘10s, the gravy train has managed to keep rolling for tech.

That gravy train is set to slam right into a brick wall, however - between the obscenely high costs of building and running LLMs (both upfront and ongoing) and the virtually nonexistent revenues those LLMs have provided (except for NVidia, which has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product that is practically incapable of making a profit, and heavily embrittled the entire economy in the process.

Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns they’re looking for, or as an absolute money pit of an industry, one trapped deep in a malaise era and capable only of wiping out whatever money you put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)

submitted 3 weeks ago* (last edited 3 weeks ago) by BlueMonday1984@awful.systems to c/techtakes@awful.systems

