[-] BlueMonday1984@awful.systems 35 points 1 week ago* (last edited 1 week ago)

Black Ops 7 sold less than half as much as last year’s Battlefield 6.

That's definitely a mistype - the previous CoD game was Black Ops 6, and Battlefield 6 was made by DICE and published by EA. Still, makes a good segue into how Blops 7 (which went all-in on AI) compared against BF6 (which released more-or-less AI-free, going by DICE's own comments).

In terms of sales, Blops 7 sold 63% less on launch than Battlefield 6 did in its opening week, earning Battlefield its first sales victory over CoD in both series' history, and breaking a streak of #1 sales CoD had held since 2006.

On Metacritic, BF6 got "generally favourable" critic scores in the low 80s, whilst Blops 7 hovers around the 67 mark. The user scores diverge far more starkly - whilst BF6 is sitting at 7/10 as of this writing, BO7 is currently at 1.6/10, beating the notoriously maligned Modern Warfare III to become the lowest-rated entry in the series' history.

On OpenCritic, Battlefield 6 earned a "Strong" rating on all fronts (83% top critic average, 90% Critics Recommend, player rating of 90) whilst Blops 7 got a "Weak" rating (65% top critic average, 35% Critics Recommend, player rating of 20).

Overall, signs are pointing to a historic victory for Battlefield over its long-running rival in the FPS genre, and a very public rejection of AI slop in all its forms.

[-] BlueMonday1984@awful.systems 30 points 1 month ago

“[I]ncreasing the adoption of open source software” is such a bullshit argument here anyway, holy shit lol.

Not to mention, actively welcoming Nazis into the open source bar is gonna achieve the exact goddamn opposite of increasing adoption, by driving away marginalised users, introducing unnecessary security risks, actively rotting communities from the inside, and God-only-knows-what-else.


Well, it seems the AI bubble’s nearing its end - the Financial Times has reported a recent dive in tech stocks, the mass media has fully soured on AI, and there’s murmurs that the hucksters are pivoting to quantum.

By my guess, this quantum bubble is going to fail to get off the ground - as I see it, the AI bubble has heavily crippled the tech industry’s ability to create or sustain new bubbles, for two main reasons.

No Social License

For the 2000s and much of the 2010s, tech companies enjoyed a robust social license to operate - even if they weren't loved per se (e.g. Apple), they were still pretty widely accepted throughout society, and resistance to them was pretty much nonexistent.

Whilst it was already starting to fall apart with the "techlash" of the 2020s, the AI bubble has taken what social license tech had left and put it through the shredder.

Environmental catastrophe, art theft and plagiarism, destruction of livelihoods and corporate abuse, misinformation and enabling fascism, all of this (and so much more) has eviscerated acceptance of the tech industry as it currently stands, inspiring widespread resistance and revulsion against AI, and the tech industry at large.

For the quantum bubble, I expect it will face similar resistance/mockery right out of the gate, with the wider public refusing to entertain whatever spurious claims the hucksters make, and fighting any attempts by the hucksters to force quantum into their lives.

(For a more specific prediction, quantum’s alleged encryption-breaking abilities will likely inspire backlash, being taken as evidence the hucksters are fighting against Internet privacy.)

No Hypergrowth Markets

As Baldur Bjarnason has noted about tech industry valuations:

“Over the past few decades, tech companies have been priced based on their unprecedented massive year-on-year growth that has kept relatively steady through crises and bubble pops. As the thinking goes, if you have two companies—one tech, one not—with the same earnings, the tech company should have a higher value because its earnings are likely to grow faster than the not-tech company. In a regular year, the growth has been much faster.”
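To put toy numbers on that logic (all figures here are invented by me, not taken from Bjarnason), here's a quick sketch of why identical earnings can justify wildly different price tags:

```python
# Toy illustration of growth-based pricing - the numbers are invented
# for this example, none come from Bjarnason's post.

def projected_earnings(current: float, growth: float, years: int) -> float:
    """Compound current earnings forward at a fixed annual growth rate."""
    return current * (1 + growth) ** years

earnings = 100_000_000  # both companies earn $100M today

# A "not-tech" company plodding along at 3%/year, versus a tech company
# priced on an expected 25%/year of hypergrowth:
print(f"{projected_earnings(earnings, 0.03, 5):,.0f}")  # 115,927,407
print(f"{projected_earnings(earnings, 0.25, 5):,.0f}")  # 305,175,781
```

Same earnings today, roughly triple the earnings five years out - that projected trajectory is what the tech premium pays for, and it's exactly what evaporates once the growth story dies.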

For a while, this held - even as the hypergrowth markets dried up and tech rapidly enshittified near the end of the '10s, the gravy train managed to keep rolling.

That gravy train is set to slam right into a brick wall, however. Between the obscenely high costs of building and running LLMs, and the virtually nonexistent revenues those LLMs have provided (except for NVidia, who has made a killing in the shovel-selling business), the AI bubble has burned billions upon billions of dollars on a product practically incapable of turning a profit, heavily embrittling the entire economy in the process.

Once the bubble finally bursts, it'll gut the wider economy and much of the tech industry, savaging valuations across the board and killing off tech's hypergrowth story in the process.

For the quantum bubble, this will significantly complicate attempts to raise investor/venture capital, as the finance industry comes to view tech not as an easy and endless source of growth, but as either a mature, stable industry that won't provide the runaway returns they're looking for, or as an absolute money pit - an industry trapped deep in a malaise era, capable only of wiping out whatever money gets put into it.

(As a quick addendum, it's my 25th birthday tomorrow - I finished this over the course of four hours and planned to release it tomorrow, but decided to post it tonight.)

submitted 3 months ago* (last edited 3 months ago) by BlueMonday1984@awful.systems to c/techtakes@awful.systems

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be)

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

submitted 3 months ago* (last edited 3 months ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

(This is a mega-expanded version of a stubsack comment: https://awful.systems/comment/8327535)

Multiple times before on awful.systems, I’ve claimed the AI bubble would provide the humanities some degree of begrudging respect, at the expense of STEM’s public image taking a nosedive.

In the process of writing this mini-essay, it's become clear that I was predicting the humanities would cannibalise tech's public image, rather than STEM's - I had just failed to recognise that tech had made itself utterly synonymous with STEM up until now.

Still, I've made this claim, so I might as well try to back it up.

High Paying, No More?

One of the major things propping up tech/STEM's public image is the notion that it pays better than the humanities - that "learning to code" will earn you a high-paying job and financial stability, whilst taking any kind of "useless" arts degree will end with you working some form of low-wage employment (e.g. as a barista).

Between the complete clusterfuck that is the job market, the Trump administration’s war on American science, the use of AI to kill jobs left and right (whilst enshittifying what remains) and the ongoing layoffs ravaging the entire tech industry, the idea that any degree will earn you a stable job has been pretty thoroughly undermined.

And with coding getting the brunt of all of this, thanks to an oversaturated market and the AI bubble hitting tech particularly hard, any notion of tech being an easy road to riches is pretty much dead and buried.

Not Lookin’ So Smart

Another thing propping up tech/STEM’s image was the view of it being more “logical/rational” than the humanities - that it dealt with “objective” matters, compared to the highly-subjective humanities, that it was “apolitical” compared to the deeply-political humanities, that kinda stuff.

On that front, the AI bubble has become tech’s equivalent to the Sokal hoax, deeply undermining any and all notions of rationality tech had built up over the past few decades.

Artistically speaking, the large-scale art theft committed to create gen-AI, the vapidity and soullessness of the AI slop it produces, the AI bros' failure to recognise this soullessness (Fig. 1, Fig. 2), and their actions regarding the effects of gen-AI (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera) have deeply undermined tech's ability to speak on matters of art, with the industry at large viewed as incapable of understanding art at best, and as hostile to art and artists at worst.

On a more general front, AI’s failures of reasoning (formal and informal, comedic and horrific), plus the tech industry’s refusal to recognise or acknowledge these failures (instead relentlessly hyping up AI’s supposed capabilities, making spurious claims about Incoming Superintelligence™ and doomsaying about how spicy autocomplete might kill us all), have put tech’s “rationality” into serious question, painting the industry at large as out-of-touch with reality and unconcerned with solving actual problems.

For the humanities generally, this bubble is going to make them look relatively grounded and reasonable by comparison, whilst for the arts specifically, they’ll likely be able to point to the slop-nami when their usefulness is questioned.

(Reports of AI usage causing metaphorical and literal brainrot likely aren’t helping, either, as they provide the public an obvious explanation for tech’s disconnection from reality.)

Eau de Fash

Tech has long had a "debate bro both sides free speech libertarianism" stench on it, as Soyweiser has noted, but between Silicon Valley's willing collaboration with the Trump administration and fascists' adoration of AI and AI slop, that stench has evolved into an unignorable Eau de Fash covering the entire industry.

As a consequence, I expect tech at large to be viewed as a Nazi bar writ large, with tech workers as a group seen as willing accomplices to fascism, if not outright fascists themselves. As for tech degrees, I expect they'll be viewed as leaving their holders unequipped to resist fascism, if not outright vulnerable to fascist rhetoric.

Predicting the Job Market

(Disclaimer: This is not financial advice, this is just a shot in the dark from some dipshit with a laptop. I take no credit for whatever financial success my readers earn.)

With tech’s public cachet and “high-paying” reputation going out the window, plus the job market for tech collapsing, I expect a major drop-off in students taking up tech-related degrees, with a smaller drop-off for STEM degrees in general. By my guess, we aren’t gonna see another “learn to code” push for at least a decade. If and when another push starts, it’ll probably take on a completely different form than what we’ve seen before.

Exactly which professions will benefit from the tech crash, I don’t know - I’m not a Superpredictor™, I’m just some dipshit with a laptop. By my guess, professions which can exploit the fallout of AI to their benefit will have the best shot of becoming the next “lucrative cash cows”, so to speak.

For therapists/psychiatrists, the rise of AI psychosis and related mental health crises will likely give them a steady source of clients for the foreseeable future - whether that be because new clients have realised chatbot usage is ruining them, or because people are being involuntarily committed after losing touch with reality.

Those in writing-related jobs may find lucrative work cleaning up attempts to sidestep them with AI slop, squeezing hefty premiums from desperate clients who find themselves with little leverage.

For programmers (most likely senior programmers, juniors are still likely screwed), the rise of “vibe coding” has created mountains of technical debt and unmaintainable code that will need to be torn down - for those who manage to find themselves a job, they’ll probably make good money tearing those mountains down. For cybercriminals, the aforementioned “vibe coding”, plus the inherently insecure nature of chatbots/agents, will likely give them a lot of low-hanging fruit to go after.

As for degrees, those which can fill skills gaps the bubble has created/widened should benefit the most.

English/Creative Writing looks like an obvious winner - ChatGPT has fried a lot of people’s writing skills, so holding one of those degrees (ideally with a writing portfolio) can help convince an employer you don’t need spicy autocomplete to write for you.

Psychology/psychiatry will likely benefit quite a bit as well - both of those can directly assist in landing you a job as a therapist, which I’ve predicted will become much more lucrative in the coming years.

EDIT: Slightly expanded my prediction about programmers.

Sarah Lyons on AI doom crankery (houseofmirrors.substack.com)

Recently, I ended up re-reading James Allen-Robertson’s “Devs and the Culture of Tech”, a five-part deep dive into the sci-fi miniseries Devs, and its critiques of the tech industry on a structural level.

In lieu of anything better to do, I’ve decided to pull out a single concept James has touched on, and give my extended thoughts on it.

So, What is Technological Determinism?

In a basic sense, technological determinism (which I'm calling techno-determinism to be more concise) is a worldview that posits technological development as the primary driving force of humanity, and which treats said development as, heavily paraphrasing James, a product of "rational people pursuing the objectively best outcomes", if not "a process of uncovering [tech] as prior technological discovery begets the next like some inevitable Civ tech-tree".

For Silicon Valley, the techno-determinist worldview provided two main advantages.

First, it provided an easy accountability sink for when new technological developments screw over some portion of the public - it wasn’t Silicon Valley’s fault that they fucked taxi drivers over with their ride-sharing apps, it was the taxi companies’ fault for getting in the way of Progress™.

Second, it obscured SV's role in pushing those developments, and whatever reasons they may have had for doing so - those ride-sharing apps didn't pop up because Silicon Valley wanted to make more money, they popped up because they were The Future™.

These days, techno-determinism has lost a fair bit of its grip on the general public - and I personally believe the NFT bubble is the major cause.

NFTs Killed Techno-Determinism

If you've been on the Internet for any length of time in the past few years, you've definitely heard of NFTs. They exploded into the mainstream in 2021, became completely fucking inescapable for roughly a year, then died an embarrassing death in 2022, prompting an outpouring of schadenfreude from the general public.

During their bubble, they were hyped to the stars by Silicon Valley, with claims that they were The Future™, that they were Inevitable™, and that you needed to Get On Board Now™ or be Left Behind™. (Sound familiar?)

As you already know, NFTs did not become The Future™. They failed, in spectacular fashion, receiving widespread mockery and rejection from the public, before getting consigned to the dustbin of history after the market imploded.

In that loud, spectacular failure, NFTs showed the public that resistance against Silicon Valley was anything but futile - that they didn't need to take Silicon Valley's attacks on them lying down, and that whatever dystopian dreams the Valley had could be strangled in the crib.

On top of that, the failure of NFTs helped to inoculate the public against Silicon Valley’s techno-determinist rhetoric - after witnessing it spectacularly collapse in the face of reality, the public was well-prepared to see through SV’s attempt to recycle their rhetoric when the AI bubble reared its head.


submitted 4 months ago* (last edited 3 months ago) by BlueMonday1984@awful.systems to c/morewrite@awful.systems

It’s pretty much a given that we’re in for an AI winter once this bubble bursts - the only thing we can argue on at this point is exactly how everything will shake out. So, let’s beat this dead horse and make some random predictions before it inevitably gets sent to the glue factory. I’ve hardly got anything better to do.

The Death of “Value-Neutral” AI

Before this bubble, artificial intelligence was generally viewed as value-neutral - as a tool, capable of good or evil, of bringing about a futuristic utopia or a Terminator-style apocalypse.

Between the large-scale art theft/plagiarism committed to build the datasets (through coercion, deception, ignoring the victim's refusal, spamming new scrapers, etcetera), the abused and underpaid workers who classified the datasets, the myriad harms brought by the LLMs themselves (don't get me fucking started), and the utterly ghoulish acts of the CEOs and AI bros involved (defending open theft, mocking their victims, cultural vandalism, denigrating human work, etcetera), that "value-neutral" notion is dead and fucking buried.

Going forward, I expect artificial intelligence to be viewed not as a tool or a technology, but as an enemy (of sorts), built to perpetrate evil, and capable only of evil. As for its users (assuming it still has users), I expect them to be viewed as tech assholes, class traitors, incompetent dipshits, “prompt goblins” craving approval, and generally worthy only of mockery or condemnation.

Confidence: Near-certain. Ali Alkhatib’s “Defining AI” (which called for redefining AI as an ideological project to more effectively resist it) and Matthew Hughes’ “People Are The Point” (a manifesto which opposes AI on principle, calling it “an expression of contempt towards people”) have already provided crystal-clear examples of AI being treated as an evil unto itself, and the links in the previous paragraph already show use of AI being treated as a moral failing as well.

Side-Order of Tech Crash

It’s no secret that the tech industry has put a horrific amount of cash into this AI bubble - every major AI corp burns billions in VC cash with no end in sight, Microsoft performed mass layoffs to throw money at AI (mass layoffs of people making the company money, mind you), NVidia is blowing billions on AI money-burners (to keep making a killing off of selling shovels in this AI gold rush), the fucking works. And all in pursuit of a Hail Mary pass intended to keep the tech industry’s Endless Growth™ going for just a few years more.

(Going by David Gerard, previous AI springs were primarily funded by the Department of Defense, with winter setting in whenever their patience for burning cash ran out.)

With all the billions upon billions thrown into AI, and revenue from said AI being somewhere between Jack and Shit (barring the profits of shovel-sellers like NVidia, as mentioned before), this AI winter will likely kick off with a very wide-ranging tech crash that takes a chunk out of the entire industry, and causes some serious economic woes for good measure.

Confidence: Very high. Ed Zitron’s gone into punishing detail about the utterly fucked economics of basically everyone involved in this bubble, and I’d be here all day if I went over everything he’s written about. Picking just a single article, here’s him talking about OpenAI being a systemic risk to tech.

Scrapers Need Not Apply

Before the AI bubble, scrapers/crawlers were a normal, accepted part of the Internet ecosystem - there was no real incentive to block crawlers by default, since the vast majority were well-behaved and followed robots.txt, and search engine crawlers specifically were something you wanted to welcome, since those earned you traffic from search results.

Come the AI bubble, this status quo was completely undermined, for three main reasons.

First, and most obviously, there's the theft - far from having any benevolent purpose, the crawlers employed by AI corps are built to outright steal data off your blog/website, then use it to create a slop generator that claims your work as its own and/or tries to put you out of business, making AI crawlers a long-term existential threat to whatever endeavours you go into.

Second, AI Summary™ services (like Google’s) created through the aforementioned theft have utterly cratered search engine traffic, taking the main upside to allowing crawlers to scrape your site and turning it into a severe downside.

Last, but not least, are the AI crawlers themselves - thanks to how they DDoS whatever sites or FOSS infrastructure they decide to scrape, and the dirty tricks employed in said scraping (ignoring robots.txt, lying about their user agent, spamming new scrapers, using botnets, etcetera), doing anything short of blocking scrapers on sight is not just a long-term liability to you, but an immediate liability to your website as well.

As a response to these crawlers, a cottage industry of anti-scraping solutions has cropped up, providing a variety of ways to fight back. Between dedicated bot-blockers like Anubis, tarpits like Iocaine and Nepenthes, and media-poisoning tools like Glaze and Nightshade, scrapers of all stripes now face an ever-present risk of being blocked from data (especially high-quality data), or force-fed misleading data intended to waste their time and poison their datasets.
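To illustrate the simplest layer of that defence, here's a minimal sketch of user-agent blocklisting - the tokens listed are real AI crawler user agents as of this writing, but the code itself is my own illustration, not taken from Anubis or any of the tools above:

```python
# Minimal sketch of user-agent blocklisting. The tokens below are real
# AI crawler user-agent strings, but this code is illustrative - it is
# not how Anubis, Iocaine, or Nepenthes actually work.

AI_CRAWLER_TOKENS = ("GPTBot", "CCBot", "ClaudeBot", "Bytespider")

def is_ai_crawler(user_agent: str) -> bool:
    """Return True if a request's User-Agent matches a known AI crawler."""
    ua = user_agent.lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

# A server would check this before serving content:
if is_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.2; +https://openai.com/gptbot)"):
    print("403 Forbidden")  # or route the request to a tarpit instead
```

Of course, as noted above, AI crawlers lie about their user agent - which is why the heavier tools don't trust the header at all, reaching for proof-of-work challenges and tarpits instead.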

As the cherry on top of this anti-scraper shit sundae, the rise of generative AI has flooded the ‘Net with AI slop, which is difficult to identify, near-impossible to avoid, and outright useless (if not dangerous) to scrape. Unless you’re limiting yourself to sources made before 2022 (commonly known as low-background media), chances are you’re gonna have to deal with your dataset getting contaminated.

Given all this, I expect scraper activity in general (malicious or otherwise) to steeply drop during the AI winter, as all scrapers get treated as guilty (of AI fuckery) until proven innocent, and non-malicious scraper activity drops off as developers deem running them to be not worth the hassle.

Confidence: Moderate. I already know of one scraper-based project (wordfreq, to be specific) which shut down as a consequence of the AI bubble - I wouldn’t be shocked to see more cases crop up down the line.

Condemnation and Mockery

For the past two years, the AI bubble has been inescapable for the public at large.

On one front, they’ve spent the past two years being utterly inundated with AI hype of every stripe - AI bros hyping up AI as The Future™, wild and spurious claims of Incoming Superintelligence™, rigged tests and cheated benchmarks made directly by the AI corps, and relentless anthropomorphisation of spicy autocompletes and signal-shaped noise generators.

Especially anthropomorphisation - whether it be painting hallucinations as lies, presenting AI as deceptive or coercive, or pretending they can feel pain, there has been a horrendous amount of time and money spent on trying to deceive the public into believing LLMs are sentient, if not humanlike in their actions.

On another front, the public has borne witness to a wide variety of harms as a direct consequence of AI's creation.

Local environmental catastrophe, global water loss and sky high emissions, widespread job loss, academic misconduct, nonstop hallucinations and misinformation, voice-cloning scams, programming disasters, damaged productivity, psychosis, outright suicide (on multiple occasions), the list goes on and on and on and on and on.

All of this has been thoroughly burned into the public consciousness over these past two or three years, ensuring AI will retain a major (and deeply negative) presence there, and ensuring AI as a concept will face widespread mockery and condemnation from the public, until long after the bubble bursts.


Confidence: Completely certain. I’m basically “predicting” something that’s already happening right now, and has a very good chance of continuing months, if not years, down the road.

Arguably, I’m being a bit conservative with this prediction - given the cultural rehabilitation of the Luddites, and the rise of a new Luddite movement in 2024, I could easily argue that the bubble’s started a full-blown resistance movement against the tech industry as a whole.


Godot Showcase - Dogwalk (godotengine.org)

A crossover between Godot and Blender was not on my bingo card for 2025, but I'm still pretty happy to see it - not just because we got a cool little game out of it, but because interop between the two got a major boost.


Recently, I found myself ruminating on the general lack of AI slop over on Newgrounds (a site I use rather heavily, and have been since I joined in 2023). The only major case I've seen in recent memory was an influx of vibe-coded shovelware I saw last month.

If the title didn't tip you off, I personally believe it to be due to Newgrounds being naturally resistant to contamination with AI slop. Here's a few reasons why I think that is:

An Explicit Stance

First off, I'll get the obvious reason out the way.

Newgrounds has explicitly banned AI slop from being uploaded since September 2022, very early into the bubble. Whilst the guidelines carve out some minor exceptions for using AI to assist human work, simply generating a piece of slop and hitting "Upload" is off the table.

The only real development since then was a site update in early March 2024, which added the option to flag a submission as AI slop.

Both of these moves made the site’s stance loud and clear: AI slop of all stripes is not welcome on NG.

Beyond giving the mods explicit permission to delete slop on sight, the move likely did plenty to deter AI bros from setting up shop - if they weren't gonna get some easy clout from spewing their slop on there, why bother?

DeviantArt provides an obvious point of contrast here - far from taking any concrete stance against AI slop, the site actively welcomed it, launching a slop generator of their own in November 2022 and doing nothing to rein in slop as it flooded the site.

Slop-proof Monetization

A second, and arguably less important factor, is Newgrounds’ general approach to monetisation - both in making money and paying it out.

In terms of making money, Newgrounds has pushed heavily towards running ad-free since the start of the decade - as of this writing, Newgrounds relies near-exclusively on Supporter (a subscription service which started in 2012, for context) for revenue. (Right now, adverts run exclusively on A-rated submissions (i.e. porn), which require an account to view.)

At the same time, the site wound down its previous rev-share system (which directly ran on ad revenue), leaving just the monthly cash prizes for payouts.

The overall effect of this change has been to render NG outright inhospitable to content farms (AI-based or otherwise) - content farms rely on ad revenue to turn a profit off their low-quality Content™, and the near-total lack of adverts makes running one on NG impractical.

(Arguably, this reason isn't a particularly important one - being a niche animation site dwarfed by the likes of YouTube and Instagram, NG likely fell well under the radars of content farms even before its ad-free push.)

DeviantArt, once again, provides an easy point of contrast - as dA itself has proudly paraded, its monetisation features have enabled AI bros to make a quick buck off of flooding it with slop.

Judgment/Scouting

Wrapping this up with something that doesn't have a parallel in dA, I'm gonna look at the judgment and scouting systems used on the site. Though originally intended to maintain a minimum level of quality, these systems have helped prevent AI slop from gaining a foothold as well.

Judgment

For the main Portal (which covers animations and games), a simple voting process called judgment is used - users vote from 0 to 5 on uploaded works, with low-scoring submissions being automatically deleted (referred to as being ‘blammed’).

Whilst rather simple, the process has proven effective in keeping low-effort garbage off of Newgrounds - and with “low-effort garbage” being a perfect description of AI slop, the judgment process has enabled users to get rid of AI slop without moderator intervention, reducing the mods’ workload in the process.
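As a toy sketch of that logic (the threshold and vote minimum below are invented for illustration - I don't know Newgrounds' actual numbers):

```python
# Toy sketch of a judgment-style vote gate. The threshold and minimum
# vote count are invented for illustration; Newgrounds' real numbers
# may well differ.

BLAM_THRESHOLD = 1.6  # hypothetical mean score below which a work is deleted
MIN_VOTES = 100       # hypothetical vote count at which judgment is final

def judge(votes: list[int]) -> str:
    """Each vote is 0-5; submissions scoring too low get 'blammed' (deleted)."""
    if len(votes) < MIN_VOTES:
        return "under judgment"
    mean = sum(votes) / len(votes)
    return "blammed" if mean < BLAM_THRESHOLD else "saved"

print(judge([0, 1, 0] * 40))  # mean ~0.33 -> 'blammed'
print(judge([4, 5, 3] * 40))  # mean 4.0   -> 'saved'
```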

Scouting

For the Audio Portal and the Art Portal, a vetting system (referred to as “scouting”) is used instead.

By default, work by unscouted artists appears in the "Undiscovered Artists" section of the Art/Audio Portals, hidden away from public view unless someone actively opts in to view it or finds it by checking the user's account.

If an already-scouted user or a moderator sees an unscouted user, they have the option to scout them, essentially vouching that their work follows site guidelines and is of sufficiently high quality. The effects of this are twofold:

  • First, the user’s work is placed into the “Approved Artists” section of the appropriate Portal, granting a large boost to its visibility.

  • Second, the user is granted the ability to scout other users, vouching for their work in turn.

Said ability is something users are required to exercise with caution - if the scouted user is later caught breaking site guidelines (or if their work is deemed too poor-quality), they can be de-scouted by an Art/Audio Moderator, and those who scouted them can be stripped of their ability to scout other users.

This system creates an easy method of establishing trust among the userbase (arguably equivalent to a PGP-style web of trust) - simply by knowing someone's been scouted, you can be confident they're posting human-made work, and scouted users can in turn extend that trust by scouting other users.

Additionally, Art/Audio Moderators are equipped to handle any breaches in said trust, whether by de-scouting users for posting slop, or removing scouting abilities from users who can't be trusted with them, enabling trust to be quickly restored.

As a secondary benefit, any slop which does get submitted is effectively hidden from view by default, making it easy for human-made work to drown out the slop, rather than the other way around.
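Modelled in code, the whole system amounts to a revocable vouching graph - here's a rough sketch, with all class and method names invented for illustration:

```python
# Toy model of scouting as a web of trust: scouted users can vouch for
# others, and a moderator can de-scout someone and strip the scouting
# ability of whoever vouched for them. Illustrative only - not how
# Newgrounds actually implements it.

class ScoutGraph:
    def __init__(self) -> None:
        self.scouted_by: dict[str, str] = {}  # user -> who scouted them
        self.can_scout: set[str] = set()      # users allowed to scout others

    def scout(self, scouter: str, user: str) -> None:
        """Scouter vouches for user, lifting them out of 'Undiscovered Artists'."""
        if scouter not in self.can_scout:
            raise PermissionError(f"{scouter} cannot scout")
        self.scouted_by[user] = scouter
        self.can_scout.add(user)  # scouted users may scout others in turn

    def descout(self, user: str, punish_scouter: bool = False) -> None:
        """A moderator de-scouts a user, optionally stripping their scouter's rights."""
        scouter = self.scouted_by.pop(user, None)
        self.can_scout.discard(user)
        if punish_scouter and scouter is not None:
            self.can_scout.discard(scouter)

graph = ScoutGraph()
graph.can_scout.add("moderator")
graph.scout("moderator", "artist_a")   # artist_a is now scouted and can scout
graph.scout("artist_a", "artist_b")    # trust extends one link further
graph.descout("artist_b", punish_scouter=True)  # artist_a loses scouting rights
```

The key design property is that trust is always traceable back to a moderator, and any link in the chain can be revoked.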

Conclusion

The large-scale proliferation of generative AI has been a disaster for the Internet at large, flooding practically every corner of it with AI slop of all stripes.

Given all that, it's kinda miraculous to know that there's any corner of the 'Net which has braved the slop-nami and come out unscathed, let alone one as large (if rather niche) as Newgrounds is.

So, if you're looking for human-made work produced after 2022…well, I don't know where to search for most things, but for art, music, games or animation, you already know where to start :P


[-] BlueMonday1984@awful.systems 31 points 5 months ago

Posted this on a Discord I'm in - one of the near immediate responses was "I'm glad they made a non-invasive procedure to lobotomise people".

Nothing more to add, I just think that's hilarious

[-] BlueMonday1984@awful.systems 35 points 5 months ago

Found a primo response in the replies:

[-] BlueMonday1984@awful.systems 34 points 8 months ago

I've already heard of this - mainly thanks to the nuclear backlash it (and basically anything related to AI) is getting. Pulling out a particular highlight, here's Ashley Lynch tearing the whole thing a new one:

Stuff like this is perfect because it shows how utterly devoid of creativity genAI evangelists are. Great, you recreated a photo that already exists in a drawing style that only has currency because of who you're stealing it from giving the world something with absolutely no value or meaning. I can't tell you how excited I am that we're literally burning the Earth up for this garbage.

Every genAI techbro needs to be sent to the Hague to stand trial for crimes against humanity with how they've traded our future for this absolute bullshit. Straight up capital punishment for every one of these fucking losers responsible for cursing us with this garbage. I have zero chill on this issue anymore. They represent the end of humanity.

[-] BlueMonday1984@awful.systems 33 points 11 months ago

But Apple Intelligence has its good points. “I find the best thing about Apple intelligence is that since I haven’t enabled it, my phone optimized for onboard AI has incredible battery life,” responded another Bluesky user. [Bluesky]

Y'know, if Apple had simply removed the AI altogether and went with that as a marketing point, people would probably buy more iPhones.

At the bare minimum, AI wouldn't be actively driving people away from buying them.

[-] BlueMonday1984@awful.systems 30 points 1 year ago

Here's a better idea - treat anything from ChatGPT as a lie, even if it offers sources

[-] BlueMonday1984@awful.systems 29 points 1 year ago

If this gets CUDA open-sourced, I will be one very happy man.

[-] BlueMonday1984@awful.systems 36 points 1 year ago

On the one hand, Google's still the dominant search engine, having used every dirty trick in the book to reach that position and maintain it. If you aren't on Google, you arguably might as well not exist.

On the other hand, Google's already under heavy scrutiny since being officially declared an illegal monopoly, and the public is pissed with how Google's declined in search quality - and deliberately so.

Part of me says we're about to see some truly wild shit go down.

[-] BlueMonday1984@awful.systems 29 points 1 year ago

AI companies work around this by paying human classifiers in poor but English-speaking countries to generate new data. Since the classifiers are poor but not stupid, they augment their low piecework income by using other AIs to do the training for them. See, AIs can save workers labor after all.

On the one hand, you'd think the AI companies would check to make sure they aren't using AI themselves and damaging their models.

On the other hand, AI companies are being run by some of the laziest and stupidest people alive, and are probably just rubber-stamping everything no matter how blatantly garbage the results are.

[-] BlueMonday1984@awful.systems 31 points 1 year ago

The good news is I barely use Protonmail (or email at all, for that matter).

The bad news is I have a fucking Proton account. Fuck.

[-] BlueMonday1984@awful.systems 33 points 1 year ago

Do you really think "cult" is a useful category/descriptor here?

My view: things identified as "cults" have a bunch of good traits. EA should, where possible, adopt the good traits and reject the bad ones, and ignore whether they're associated with the label "cult" or not.

Yes, this is real

