[-] BlueMonday1984@awful.systems 5 points 5 days ago

I’m not sure we will ever escape “move fast and break things” this side of a civilisation-toppling catastrophe. Which we might get.

Considering how "vibe coding" has corroded IT infrastructure at all levels, the AI bubble is set to trigger a 2008-style financial crisis upon its burst, and AI itself has been deskilling students and workers at an alarming rate, I can easily see why.

[-] BlueMonday1984@awful.systems 6 points 5 days ago

Cooling in space is an absolute arse. Space is an excellent insulator for heat. That’s why a thermos works. In space, thermal management is job number one. All you can use is radiators. Getting rid of your 200 kilowatts will need about 500 square metres.
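
The ~500 m² number is easy to sanity-check with the Stefan–Boltzmann law. Here's a minimal back-of-the-envelope sketch, assuming a single-sided radiator at roughly room temperature (~300 K), an emissivity of about 0.9, and no absorbed sunlight (those assumptions are mine, not figures from the article):

```python
# Back-of-the-envelope radiator sizing via the Stefan-Boltzmann law.
SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W / (m^2 * K^4)
EMISSIVITY = 0.9     # assumed; typical for radiator coatings
T_RADIATOR = 300.0   # K, assumed radiator surface temperature

watts_per_m2 = EMISSIVITY * SIGMA * T_RADIATOR ** 4   # ~413 W/m^2
heat_load_w = 200_000.0                               # the 200 kW to reject
area_m2 = heat_load_w / watts_per_m2

print(f"{watts_per_m2:.0f} W/m^2 radiated -> {area_m2:.0f} m^2 needed")
# ~484 m^2, i.e. "about 500 square metres"
```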

To drive home how easy this is to work out, the Codex for the Mass Effect series^1^ explicitly points out that radiation is the only way to cool off in space, and goes into detail on how in-universe spaceships (civilian and military) deal with heat buildup.

BioWare did their homework on this shit for a series of sci-fi RPGs which started in the early days of the Xbox 360 and the PS3. That the startup bros, tech companies and billionaire CEOs pushing this have failed or refused to recognise it is goddamn negligence.

So space is a bit hard. A lot of the sci-fi guys suggest oceans! We’ll put the data centres underwater and cooling will be great!

The only way I see that idea working is if humanity works out underwater cities (e.g. Rapture from the original BioShock) first. That'd make the issue of maintenance easier to deal with, even if getting new parts from the surface would remain a PITA.

^1^ Specifically "Starships: Heat Management", under Ships and Vehicles, in the Secondary Codex

[-] BlueMonday1984@awful.systems 8 points 6 days ago

A philosophy professor has warned the deskilling machine is deskilling workers. In other news, water is wet.

[-] BlueMonday1984@awful.systems 4 points 6 days ago

Considering the sorry state of the software industry, plus said industry's adamant refusal to learn from its mistakes, I think society should actively avoid building or adopting new software - if not cut back on software usage where possible - until the industry improves or collapses.

That's probably an extreme position to take, but IT as it stands is a serious liability - one that AI's set to make so much worse.

[-] BlueMonday1984@awful.systems 59 points 2 months ago

Not only that, she introduced mass surveillance to Bluesky and is brainstorming further surveillance measures in response to getting clowned on so hard.

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

New blog entry from Baldur, comparing the Icelandic banking bubble and its fallout to the current AI bubble and its ongoing effects.

(This is an expanded version of a comment I made, which I've linked above.)

Well, it seems the tech industry’s prepared to pivot to quantum if and when AI finally dies and goes away forever. When the hucksters get around to inflating the quantum bubble, I expect they’re gonna find themselves facing some degree of public resistance - probably not to the extent of what AI received, but still enough to give them some trouble.

The Encryption Issue

One of quantum’s big selling points is its purported ability to break the encryption algorithms in use today - to give a couple of examples, Shor’s algorithm can reportedly double-tap public-key cryptography schemes such as RSA, and Grover’s algorithm promises to speed up brute-force attacks on symmetric-key cryptography (roughly halving effective key lengths).
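
To make the scale of that threat concrete, here's a rough sketch of the asymptotics (an illustration of the standard textbook claims, not of any actual attack on real hardware):

```python
# Grover's algorithm searches an n-bit keyspace in ~2^(n/2) oracle queries
# instead of the ~2^n tries classical brute force needs, effectively halving
# symmetric key strength. Shor's algorithm is worse news for RSA and ECC:
# it runs in polynomial time, so longer keys barely help at all.

def classical_steps(key_bits: int) -> float:
    """Classical exhaustive key search: up to 2^n guesses."""
    return 2.0 ** key_bits

def grover_steps(key_bits: int) -> float:
    """Idealised Grover search: roughly 2^(n/2) quantum queries."""
    return 2.0 ** (key_bits / 2)

for bits in (128, 256):
    print(f"AES-{bits}: classical ~{classical_steps(bits):.1e} tries, "
          f"Grover ~{grover_steps(bits):.1e} queries")
# AES-128 drops to ~64-bit effective security; AES-256 stays out of reach,
# which is why post-quantum guidance leans on longer symmetric keys and on
# replacing RSA/ECC outright.
```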

Given this, I fully expect quantum's supposed encryption-breaking abilities to stoke outcry and resistance from privacy rights groups. Even as a hypothetical, such power falling into government hands all but guarantees Nineteen Eighty-Four levels of mass surveillance and invasion of privacy.

Additionally, I expect post-quantum encryption to earn a lot of attention during the bubble, as a pre-emptive means of undermining such attempts at mass surveillance.

Environmental Concerns

Much like with AI, info on how much power quantum computing requires is pretty scarce (though in this case that’s because the machines more-or-less don’t exist yet, not because corps are actively hiding/juicing the numbers the way the AI corps do).

The only concrete number I could find came from IEEE Spectrum, which puts the power consumption of the D-Wave 2X (from 2015) at “slightly less than 25 kilowatts”, with practically all the power going to the refrigeration unit keeping it within a hair’s breadth of absolute zero, and the processor itself using “a tiny fraction of a microwatt”.
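
For a sense of scale, here's a quick annualisation of that figure (the AI-datacentre comparison is a hypothetical round number of my own, roughly the scale reported for large AI campuses, not something from the IEEE piece):

```python
# Annualised energy of a ~25 kW quantum system vs a hypothetical ~100 MW
# AI datacentre campus, assuming both run flat-out all year.
HOURS_PER_YEAR = 24 * 365          # 8,760 h

quantum_kw = 25.0                  # D-Wave 2X figure from IEEE Spectrum
campus_kw = 100_000.0              # assumed ~100 MW AI campus

quantum_mwh = quantum_kw * HOURS_PER_YEAR / 1000   # ~219 MWh/year
campus_mwh = campus_kw * HOURS_PER_YEAR / 1000     # ~876,000 MWh/year

print(f"{quantum_mwh:,.0f} MWh vs {campus_mwh:,.0f} MWh "
      f"(~{campus_mwh / quantum_mwh:,.0f}x)")
```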

Given the minimal amount of info, and the AI bubble still being fresh in the public’s mind, I expect quantum systems will face resistance from environmental groups. Between the obscene power/water consumption of AI datacentres, the shitload of pollution said datacentres cause in places like Memphis, and the industry’s attempts to increase said consumption whenever possible, any notion that tech cares about the environment is dead in the (polluted) water, and attempts to sell the tech as energy efficient/environmentally friendly will likely fall on deaf ears.

submitted 3 months ago* (last edited 3 months ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

It’s been a couple of weeks since my last set of predictions on the AI winter. I’ve found myself making a couple more.

Mental Health Crises

With four known suicides (Adam Raine, Sewell Setzer, Sophie Rottenberg and an unnamed Belgian man), a recent murder-suicide, and involuntary commitments caused by AI psychosis, there’s solid evidence to show that using AI is a fast track to psychological ruin.

On top of that, AI usage is deeply addictive, combining a psychic’s con with a gambling addiction to produce what amounts to digital cocaine, leaving its users hopelessly addicted to it, if not utterly dependent on it to function (such cases often being referred to as “sloppers”).

If and when the chatbots they rely on are shut down, I expect a major outbreak of mental health crises among sloppers and true believers, as they find themselves unable to handle day-to-day life without a personal sycophant/”assistant”/”””therapist””” on hand at all times. Psychiatrists and therapists, I expect, will find a steady supply of new clients during the winter, as the death of the chatbot sends addicted promptfondlers spiralling.

Skills Gaps Galore

One of the more common claims from promptfondlers and boosters when confronted is “you won’t be replaced by AI, but by a human using AI”.

Given how AI prevents juniors from developing their skills, makes seniors worse at their jobs, damages productivity whilst creating a mirage of it, and erodes its users’ critical thinking and mental acuity, all signs point to the exact opposite being the case: those who embrace AI will be left behind, their skills rotting away, whilst their AI-rejecting peers remain as skilled as before the bubble, if not more so, thanks to spending their time and energy on actually useful skills rather than shit like “prompt engineering” or “vibe coding”.

Once the winter sets in and the chatbots disappear, the gulf between these two groups is going to become much wider, as promptfondlers’ crutches are forcibly taken away from them and their “skills” in using the de-skilling machine are rendered useless. As a consequence, I expect promptfondlers will be fired en masse and struggle to find work during the winter, as their inability to work without a money-burning chatbot turns them into a drag on a company’s bottom line.

submitted 3 months ago* (last edited 2 months ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

Recently, I read a short article from Iris Meredith about rethinking how we teach programming. It's a pretty solid piece of work all around, and it's got me thinking about how to build further on her ideas.

This post includes a quick overview of her piece to get you up to speed, but I recommend reading it for yourself.

The Problem

As is rather obvious to most of us, the software industry is in a dire spot - Meredith summed it up better than I can:

Software engineers tend to be detached, demotivated and unwilling to care much about the work they're doing beyond their paycheck. Code quality is poor on the whole, made worse by the current spate of vibe coding and whatever other febrile ideas come out of Sam Altman's brain. Much of the software that we write is either useless or actively hurts people. And the talented, creative people that we most need in the industry are pushed to the margins of it.

As for the cause, Iris points to the "teach the mystic incantations" style used in many programming courses, which neglects to teach students how to see through an engineer’s eyes (so to speak) or the ethics of care necessary to write good code (roughly 90% of what goes into software engineering). As Iris notes:

This tends to lead, as you might expect, to a lot of new engineers being confused, demotivated and struggling to write good code or work effectively in a software environment. [...] It also means, in the end, that a lot of people who'd be brilliant software engineers just bounce off the field completely, and that a lot of people who find no joy in anything and just want a big salary wind up in the field, never realising that they have no liking or aptitude for it.

Meredith’s Idea

Meredith’s solution, in brief, is threefold.

First, she recommends starting students off with HTML as their first language, giving them the tools to make something they want and care about (a personal website, in this case) and providing a solid bedrock for learning fundamental programming skills.

Second, she recommends using “static site generators with templating engines” as an intermediate step between HTML/CSS and full-blown programming, giving students an intuitive way to grasp basic concepts such as loops, conditionals, data structures and variables (a short sketch of what this looks like follows below).

(As another awful.systems member points out, static site generators also provide an easy introduction to performance considerations and profiling, being blazing fast compared to the all-too-common JS monoliths online, and make a good starting point for introducing modularity as well.)

Third, and finally, she recommends having students publish their work online right from the start, to give them reason to care about their work as early as possible and give them the earliest possible opportunity to learn about the software development life cycle.
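
To make that second step concrete, here's a minimal sketch using Jinja2 (the templating engine behind several static site generators) - my choice of tool purely for illustration, not one Meredith prescribes:

```python
# A page template introduces variables, a data structure and a loop,
# all in service of a site the student actually wants to publish.
from jinja2 import Template

page = Template("""
<h1>{{ title }}</h1>
<ul>
{% for post in posts %}
  <li><a href="{{ post.url }}">{{ post.name }}</a></li>
{% endfor %}
</ul>
""")

print(page.render(
    title="My site",
    posts=[
        {"url": "/cats.html", "name": "Why I like cats"},
        {"url": "/flash.html", "name": "Flash games I miss"},
    ],
))
```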

A Complementary Idea

Meredith’s suggested approach to software education is pretty solid on all fronts - it gets students invested in their programming work, and gives them the tools needed to make and maintain high-quality code.

If I were to expand on this a bit, I think the obvious addition would be an arts education to complement Iris’ proposed webdev-based approach.

As an explicit means of self-expression, the arts would wonderfully complement the expressive elements of software Meredith wishes to highlight - with a webdev focus, developing students’ art skills would expand their ability to customise their websites to their liking, letting them make something truly unique to themselves.

The skills that students learn through the arts would also complement what they directly learn in programming. The critical eye that art critique grants them will come in handy for code review. The creative muscles they build through art will enhance their problem-solving abilities, and so on.

Beyond that, I expect the complementary arts education will do a good job of attracting creatives to the field, whilst pushing away the “people who find no joy in anything and just want a big salary” whom Meredith notes are common in it. Historically, “learn to code” types have viewed the arts as a “useless” degree, so they’ll near-certainly turn their noses up at having to learn it alongside something more “useful”, leaving the door open for more creatives to join up.

A More Outlandish Idea

For a more outlandish idea, the long-defunct yet well-beloved multimedia platform Adobe Flash could prove surprisingly useful for a programming education, especially with the complementary arts education I suggested before.

Being effectively an IDE and an animation program combined into one, Flash offers a means of developing and testing a student’s skills in art and programming simultaneously, and provides an easy showcase of how the two can complement each other.

Deploying Flash to a personal website wouldn’t be hard for students either, as the Ruffle emulator allows Flash content to play without having to install Flash player. (Rather helpful, given most platforms don’t accept Flash content these days :P)

Another excellent piece from Iris Meredith - strongly recommend reading if you want an idea of how to un-fuck software as a field.

Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there are murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech companies enjoyed a robust social license to operate - even if a given company wasn’t loved per se (e.g. Apple), it was still pretty widely accepted throughout society, and resistance to the industry was pretty much nonexistent.

Whilst that license was starting to fall apart with the “techlash” of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism - all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance to and revulsion against AI, and the tech industry at large.

For the quantum bubble, I expect similar resistance and mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make and fighting any attempts to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”

For a while, this held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the ’10s, the gravy train managed to keep rolling for tech.

That gravy train is set to slam right into a brick wall, however - between the obscenely high costs of building and running LLMs (both upfront and ongoing) and the virtually nonexistent revenues those LLMs have provided (except for Nvidia, which has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product practically incapable of making a profit, and heavily embrittled the entire economy in the process.

Once the bubble finally bursts, it’ll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech’s hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry which won’t provide the runaway returns it’s looking for, or an absolute money pit of an industry, trapped deep in a malaise era and capable only of wiping out whatever money is put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)

[-] BlueMonday1984@awful.systems 37 points 6 months ago

Musk says: “At times, I think Grok-3 is kind of scary smart.” Grok is just remixing its training data — but a stochastic parrot is still more reality-based than Elon Musk. [Bloomberg, archive]

If someone roasted me with that kind of surgical precision, I'd delete my entire Internet presence out of shame. God damn.

[-] BlueMonday1984@awful.systems 43 points 6 months ago

Baldur Bjarnason's given his thoughts on Bluesky:

My current theory is that the main difference between open source and closed source when it comes to the adoption of “AI” tools is that open source projects generally have to ship working code, whereas closed source only needs to ship code that runs.

I’ve heard so many examples of closed source projects that get shipped but don’t actually work for the business. And too many examples of broken closed source projects that are replacing legacy code that was both working just fine and genuinely secure. Pure novelty-seeking

[-] BlueMonday1984@awful.systems 59 points 10 months ago

To reference a previous sidenote, DeepSeek gives corps and randos a means to shove an LLM into their shit for dirt-cheap, so I expect it's gonna blow up in popularity.

[-] BlueMonday1984@awful.systems 76 points 1 year ago

I've already seen people go absolutely fucking crazy with this - from people posting trans-supportive Muskrat pictures to people making fucked-up images with Nintendo/Disney characters, the utter lack of guardrails has led to predictable chaos.

Between the cost of running an LLM and the potential lawsuits this can unleash, part of me suspects this might end up being what ultimately does in Twitter.
