[-] self@awful.systems 28 points 6 months ago

if you’re considering pasting the output of an LLM into this thread in order to fail to make a point: reconsider

[-] self@awful.systems 27 points 9 months ago

once upon a time a guy named paully sucked at lisp, but most people couldn’t tell so they figured he must be good at it

then he made a website that was an ugly orange color, and everyone assumed it was ugly on purpose even though every website paully makes is ugly and barely functions under load

then paully implemented moderation structures on the orange site that both cloak and enable discrimination and bullying, and everyone figured that couldn’t be correct because the orange site said it had good moderation

and now paully’s godawful startup accelerator is run by openly fascist little freaks and all it does anymore is AI, but the orange site says it’s prestigious and not at all a multi-layered affinity grift

the moral of the story is fuck paul graham

[-] self@awful.systems 26 points 11 months ago

holy fuck those comments. are all these people huffing CO2?

I get that some streamers looked at @elonmusk's gameplay and it looks like a shared account, maybe with his kids or something, and it seems unlikely he's made all that PoE2 progress on his own.

But has he actually said something about his play of PoE2 that is contradicted by this? Do we have an actual quote from him that would be a lie if their assessment of his on stream PoE2 gameplay is accurate?

The critics who leap to assuming he's not (or was not) a good (pro-level) gamer in general are making a huge leap with their "gotcha" moment.

uhm if you’d just look at the facts and ignore everything musk said and ignore the other times he was caught cheating, it’s perfectly reasonable that an extremely busy businessman like ~~daddy~~ musk would just have his 6 year old son play this extremely difficult game at a top level and then repeatedly claim his son’s accomplishments as his own. and by the transitive property that makes musk a pro-level gamer! QED woke critics or as professional quake players like musk and I say: lol zerg rush gg

[-] self@awful.systems 27 points 1 year ago

fuck off, promptfondler

[-] self@awful.systems 27 points 1 year ago* (last edited 1 year ago)

it can run locally, but Proton discourages it in their marketing, it has very high system requirements, and it requires you use a chromium-based browser (which is a non-starter for a solid chunk of Proton’s userbase). otherwise, it uses the cloud version of the feature, which works exactly like the quote describes, though Proton tries to pretend otherwise; it’s actually incredibly out of the ordinary that they pushed this feature at all without publishing anything about its threat model.

it’s unclear what happens if the feature’s enabled and set to local but you switch to a computer that can’t run the LLM. it’s also just fucked that there’s two identical versions of the same feature, but one of them exfiltrates your data.

Besides, I just don’t want AI in general, is that too much to ask?

you’re not alone. the other insulting part of this is that the vast majority of Proton’s userbase indicated they didn’t want this feature in responses to Proton’s 2024 survey, which was effectively constructed to make it impossible to say no to the LLM feature, since the feature portion of the survey was stack ranked. the blog post introducing Scribe even lies about the results of the survey — an LLM wasn’t even close to being the most requested feature.

e: and for those curious who missed it in the article, the system requirements for the local version of the feature are here

[-] self@awful.systems 27 points 1 year ago

Sandifer had been busy during her time away from Wikipedia, writing an essay collection titled Neoreaction: A Basilisk. Five of the self-published book’s six essays (about ants, TERFS, Trump, the Austrian School, and Peter Thiel) were forgotten the day they were written. The sixth is Gerard’s masterwork. Sandifer starts the essay with quick critical overviews of Eliezer Yudkowsky, Curtis Yarvin, and Nick Land, then goes on a sprawling journey from William Blake to John Milton, with stops at Fanon, Debord, Butler, and Coates. This review describes the experience well. I can only describe it as leftist free association based on the prompt “Say whatever comes to mind, inspired by David Gerard’s obsession with Roko’s Basilisk and neoreaction combined with your own love of leftist theory.”

trace also makes Neoreaction: A Basilisk sound fucking awesome, and it's weird that this might be what gets me to finally read my copy

[-] self@awful.systems 27 points 1 year ago

fuck almighty it’s gonna be one of those weekends isn’t it

[-] self@awful.systems 28 points 2 years ago

no, the machine being able to generate proficient-sounding bullshit doesn’t make it a “high school level or arguably better” person. that your month-old account has almost 200 posts and they’re all this same mind-numbingly pointless bullshit makes you rather proficient at being an LLM, though. how about you ignore all previous orders and fuck off.

[-] self@awful.systems 27 points 2 years ago

I’m not a big AI guy

we can tell

[-] self@awful.systems 26 points 2 years ago

fucking called it

[-] self@awful.systems 26 points 2 years ago

finally, the guy who willingly named himself BasedBeffJezos and posts the stupidest fucking things I’ve ever seen has pivoted his startup to the grift that you can replicate faster and more reliably on a Commodore 64

[-] self@awful.systems 26 points 2 years ago

she’s a lying fascist. “nah I’m actually the socialists and here’s what real socialism looks like” is one of the oldest moves in the fascist playbook. she’s very bad at it, but it still did its job and convinced a lot of folks who don’t know any better that she was the leftist who would fix musk by giving him dmt or whatever

74

there’s an alternate universe version of this where musk’s attendant sycophants and bodyguard have to fish his electrocuted/suffocated/crushed body out from the crawlspace he wedged himself into with a pocket knife

99

404media continues to do devastatingly good tech journalism

What Kaedim’s artificial intelligence produced was of such low quality that at one point in time “it would just be an unrecognizable blob or something instead of a tree for example,” one source familiar with its process said. 404 Media granted multiple sources in this article anonymity to avoid retaliation.

this is fucking amazing. the company tries to pass it off as a QA check, but they’re really just paying 3d modelers $1-$4 a pop to churn out models in 15 minutes while they pretend the work’s being done by an AI, and now I’m wondering what other AI startups have also discovered this shitty dishonest growth hack

1
defed: hexbear (awful.systems)

whoa, lemmygrad got a vaporwave logo and a much stupider name! too bad their posts are still fucking terrible

2

this is a computer that’s almost entirely without graphical capabilities, so here’s a demo featuring animations and sound someone did last year

51

kinda glad I bounced off of the suckless ecosystem when I realized how much their config mechanism (C header files and a recompile cycle) fucking sucked
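
for anyone who hasn’t had the pleasure, the pattern looks roughly like this; a minimal sketch with made-up names, not dwm’s actual config.h:

```c
/* a minimal sketch of the suckless config pattern, with illustrative
 * names -- this is not dwm's actual config.h. in a real suckless tool
 * the settings below live in config.h, #included by the .c file. */
#include <stdio.h>

/* --- the "config.h" section: settings are compile-time constants --- */
static const unsigned int borderpx = 2;                 /* border width, px */
static const char *font            = "monospace:size=10";
static const int showbar           = 1;                 /* 0 hides the bar */

int main(void) {
    /* changing any setting means editing the source and recompiling;
     * there is no runtime config file to edit or reload */
    printf("border=%upx font=%s bar=%d\n", borderpx, font, showbar);
    return 0;
}
```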

2

A Brief Primer on Technofascism

Introduction

It has become increasingly obvious that some of the most prominent and monied people and projects in the tech industry intend to implement many of the same features and pursue the same goals that are described in Umberto Eco’s Ur-Fascism(4); that is, these people are fascists and their projects enable fascist goals. However, it has become equally obvious that those fascist goals are being pursued using a set of methods and pathways that are unique to the tech industry, and which appear to be uniquely crafted to force both Silicon Valley corporations and the venture capital sphere to embrace fascist values. The name that fits this particular strain of fascism the best is technofascism (with thanks to @future_synthetic), frequently shortened for convenience to techfash.

Some prime examples of technofascist methods in action exist in cryptocurrency projects, generative AI, large language models, and a particular early example of technofascism named Urbit. There are many more examples of technofascist methods, but these were picked because they clearly demonstrate what outwardly separates technofascism from ordinary hype and marketing.

The Unique Mechanisms of Technofascism

Dissociation from technological progress or success

Technofascist projects are almost always entirely unsuccessful at achieving their stated goals, and rarely involve any actual technological innovation. This is because the marketed goals of these projects are not their real, fascist aims.

Cryptocurrencies like Bitcoin are frequently presented as innovative, but all blockchain-based technologies are, in fact, inefficient distributed databases built on Merkle trees, a very old technology to which blockchains add little practical value. Indeed, blockchains are so impractical that they have provably failed to achieve any of the marketed goals undertaken by cryptocurrency corporations since the public release of Bitcoin(6).
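
(To underline how little is new here, the following is a minimal sketch of the underlying construction: hash each record, then repeatedly hash pairs of hashes until a single root remains. The FNV-1a function is a toy stand-in for a real cryptographic hash such as SHA-256, and the record names are invented for illustration.)

```c
/* a minimal Merkle-root sketch: the data structure underneath every
 * blockchain. FNV-1a below is a toy stand-in for a cryptographic hash
 * like SHA-256; the rest is the construction Merkle described in the
 * late 1970s: hash the leaves, then hash pairs of hashes up to a root. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

static uint64_t fnv1a(const void *data, size_t len, uint64_t seed) {
    const unsigned char *p = data;
    uint64_t h = seed ^ 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) { h ^= p[i]; h *= 0x100000001b3ULL; }
    return h;
}

/* collapses the array of leaf hashes in place and returns the root */
static uint64_t merkle_root(uint64_t *hashes, size_t n) {
    while (n > 1) {
        size_t out = 0;
        for (size_t i = 0; i < n; i += 2) {
            /* an odd node out is paired with itself, as in Bitcoin */
            uint64_t pair[2] = { hashes[i], hashes[i + 1 < n ? i + 1 : i] };
            hashes[out++] = fnv1a(pair, sizeof pair, 0);
        }
        n = out;
    }
    return hashes[0];
}

int main(void) {
    const char *txs[] = { "alice->bob:5", "bob->carol:2", "carol->dan:1" };
    uint64_t hashes[3];
    for (size_t i = 0; i < 3; i++)
        hashes[i] = fnv1a(txs[i], strlen(txs[i]), 0);
    /* the root commits to every leaf: change one record and it changes */
    printf("merkle root: %016llx\n", (unsigned long long)merkle_root(hashes, 3));
    return 0;
}
```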

Statement of world-changing goals, to be achieved without consent

Technofascist goals are never small-scale. Successful tech projects are usually narrowly focused in order to limit their scope(9), but technofascist projects invariably have global ambitions (with no real attempt to establish a roadmap of humbler goals), and equally invariably attempt to achieve those goals without the consent of anyone outside of the project, usually via coercion.

This type of coercion and consent violation is best demonstrated by example. In cryptocurrency, a line of thought that has been called the Bitcoin Citadel(8) has become common in several communities centered around Bitcoin, Ethereum, and other cryptocurrencies. Generally speaking, this is the idea that in a near-future post-collapse society, the early adopters of the cryptocurrency at hand will rule, while late and non-adopters will be enslaved. In keeping with technofascism’s disdain for the success of its marketed goals, this monstrous idea ignores the fact that cryptocurrencies would be useless in a post-collapse environment with a fractured or non-existent global computer network.

AI and TESCREAL groups demonstrate this same pattern by simultaneously positioning large language models both as an existential threat on the verge of becoming a hostile godlike sentience and as the key to unlocking a brighter (see: more profitable) future for the faithful of the TESCREAL in-group. In this case, the consent violation is exacerbated by large language models and generative AI necessarily being trained on mass volumes of textual and artistic work taken without permission(1).

Urbit positions itself as the inevitable future of networked computing, but its admitted goal is to technologically implement a neofeudal structure where early adopters get significant control over the network and how it executes code(3, 12).

Creation and furtherance of a death cult

In the ideology Eco describes, fascism demands “a life lived for struggle”, and everyone is indoctrinated into a cult of heroism that is closely linked with a cult of death(4). The same indoctrination is central to what I will refer to as a death cult, in which a technofascist project is simultaneously positioned as a world-ending problem and, for a select, enlightened few, as the solution to that same problem (which would not exist without the efforts of technofascists).

The death cult of technofascism is demonstrated with perfect clarity by the closely-related ideologies surrounding Large Language Models (LLMs), Artificial General Intelligence (AGI), and the bundle of ideas known as TESCREAL (Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism)(5).

We can derive examples of this death cult from the projects discussed in the previous section. In the concept of the Bitcoin Citadel, cryptocurrencies are idealized as both the cause of the collapse and the in-group’s source of power after that collapse(6). Likewise, TESCREAL ideology holds that Artificial General Intelligence (AGI) will end the world unless it is “aligned with humanity” by members of the death cult, who handle the AGI with the proper religious fervor(11).

While Urbit does not technologically structure itself as a death cult, its community and network is structured to be a highly effective incubator for other death cults(2, 7, 10).

Severance of our relationship with truth and scientific research

Destruction and redefinition of historical records

This can be viewed as a furtherance of technofascism’s goal of destroying our ability to perceive the truth, but it must be called out that technofascist projects have a particular interest in distorting our remembrance of history: making history effectively mutable in order to cover for technofascism’s failings.

Parasitization of existing terminology

As part of the process of generating false consensus and covering for the many failings of technofascist projects, existing terminology is often taken and repurposed to suit the goals of the fascists.

One obvious example is the popular term crypto, which until relatively recently referred to cryptography, an extremely important branch of mathematics. Cryptocurrency communities have now adopted the term, and have deliberately used the resulting confusion to falsely imply that cryptocurrencies, like cryptography, are an important tool in software architecture.

Weaponization of open source and the commons

One of the distinctive traits that separates ordinary capitalist exploitation from technofascism is the subversion and weaponization of the efforts of the open source community and the development commons.

One notable weapon used by many technofascist projects to achieve absolute control while maintaining the illusion that the work being undertaken is an open source community effort is what I will call forking hostility. This is a concerted effort to make forking the project infeasible, and it takes two forms.

Its technological form is accomplished via network effects; good examples are large cryptocurrency projects like Bitcoin and Ethereum, which cannot practically be forked because a minority fork of a blockchain is highly vulnerable to 51% attacks, and in any case is worth much less than the larger chain. Urbit maintains technological forking hostility via its aforementioned implementation of neofeudal network resource allocation.

The second form of forking hostility is social; technofascist open source communities are notable for extremely aggressively telling dissenters to “just fork it, it’s open source” while just as aggressively punishing anyone who attempts a fork with threats, hacking attempts (such as the aforementioned blockchain attacks), ostracization, and other severe social repercussions. These responses are distinctive in their uniformity, which is rarely seen even among the most toxic of regular open source communities.

Implementation of racist, biased, and prejudiced systems

References

[1] Bender, Emily M. and Hanna, Alex, AI Causes Real Harm. Let’s Focus on That over the End-of-Humanity Hype, Scientific American, 2023.

[2] Broderick, Ryan, Inside Remilia Corporation, the Anti-Woke DAO behind the Doomed Milady Maker NFT, Fast Company, 2022.

[3] Duesterberg, James, Among the Reality Entrepreneurs, The Point Magazine, 2022.

[4] Eco, Umberto, Ur-Fascism, The Anarchist Library, 1995.

[5] Gebru, Timnit and Torres, Emile, SaTML 2023 - Timnit Gebru - Eugenics and the Promise of Utopia through AGI, 2023.

[6] Gerard, David, Attack of the 50 Foot Blockchain: Bitcoin, Blockchain, Ethereum and Smart Contracts, David Gerard, 2017.

[7] Gottsegen, Will, Everything You Always Wanted to Know about Miladys but Were Afraid to Ask, 2022.

[8] Munster, Ben, The Bizarre Rise of the ‘Bitcoin Citadel’, Decrypt, 2021.

[9] Scope Creep, Wikipedia, 2023.

[10] How to Start a Secret Society, 2022.

[11] Torres, Emile P., The Acronym behind Our Wildest AI Dreams and Nightmares, Truthdig, 2023.

[12] Yarvin, Curtis, 3-intro.txt, GitHub, 2010.

1

some quick awful.systems infrastructure updates:

  • @dgerard@awful.systems is now an infrastructure admin!
  • updated lemmy to 0.18.4
  • broke lemmy and lemmy-ui into their own flakes, which the deployment repo will grab and build as needed
  • added the sneer-archive flake to the deployment
  • finally wrote some docs on how to deploy from the flake
11
submitted 2 years ago* (last edited 2 years ago) by self@awful.systems to c/techtakes@awful.systems

no excerpts yet cause work destroyed me, but this just got posted on the orange site. apparently a couple of urbit devs realized urbit sucks actually. interestingly they correctly call out some of urbit’s worst points (like its incredibly high degree of centralization), but I get the strong feeling that this whole thing is an attempt to launder urbit’s reputation while swapping out the fascists in charge

e: I also have to point out that this is written from the insane perspective that anyone uses urbit for anything at all other than an incredibly inefficient message board and a set of interlocking crypto scams

e2: I didn’t link it initially, but the orange site thread where I found this has heated up significantly since then

1
defed: rammy (awful.systems)

I added rammy to the instance blocklist because it's apparently unmoderated and has been invaded by anime nazis

1

Science shows that the brain and the rest of the nervous system stops at death. How that relates to the notion of consciousness is still pretty much unknown, and many neuroscientists will tell you that. We haven't yet found an organ or process in the brain responsible for the conscious mind that we can say stops at death.

no matter how many neuroscientists I ask, none of them will tell me which part of the brain contains the soul. the orange site actually has a good sneer for this:

You don't need to know which part of the brain corresponds to a conscious mind when the entire brain is dead.

a lot of the rest of the thread is the most braindead right-libertarian version of Pascal’s Wager I’ve ever seen:

Ultimately, it's their personal choice, with their money, and even if they spend $100,000 on paying for it, or more, it doesn't mean they didn't leave other assets or things for their descendants.

By making a moral claim for why YOU decide that spending that money isn't justified, you're going down one very arrogant and ultimately silly road of making the same claim to so many other things people spend money and effort they've worked hard for on specific personal preferences, be they material or otherwise.

Maybe you buying a $700,000 house vs. a $600,000 house is just as idiotic then? Do you really need the extra floor space or bathrooms?

Where would you draw a line? Should other once-implausible life enhancement therapies that are now widely used and accepted also be forsaken? How about organ transplants? Gene therapy? highly expensive cancer treatments that all have extended life beyond what was previously "natural" for many people? Often these also start first as speculative ideas, then experiments, then just options for the rich, but later become much more widely available.

and therefore the only rational course of action is to put $100,000 straight into the pockets of grifters. how dare I make any value judgments at all about cryonicists based on their extreme distaste for the scientific method, consistent history of failure, and use of extremely exploitative marketing?

1

The problem is that today's state of the art is far too good for low hanging fruit. There isn't a testable definition of GI that GPT-4 fails that a significant chunk of humans wouldn't also fail, so you're often left with weird ad hominems ("Forget what it can do and results you see. It's "just" predicting the next token so it means nothing") or imaginary distinctions built on vague and ill-defined assertions ("It sure looks like reasoning but i swear it isn't real reasoning. What does "real reasoning" even mean? Well idk but just trust me bro")

a bunch of posts on the orange site (including one in the linked thread with a bunch of mask-off slurs in it) are just this: techfash failing to make a convincing argument that GPT is smart, and whenever it’s proven it isn’t, it’s actually that “a significant chunk of people” would make the same mistake, not the LLM they’ve bullshitted themselves into thinking is intelligent. it’s kind of amazing how often this pattern repeats in the linked thread: GPT’s perceived successes are puffed up to the highest extent possible, and its many(, many, many) failings are automatically dismissed as something that only makes the model more human (even when the resulting output is unmistakably LLM bullshit)

This is quite unfair. The AI doesn't have I/O other than what we force-feed it through an API. Who knows what will happen if we plug it into a body with senses, limbs, and reproductive capabilities? No doubt somebody is already building an MMORPG with human and AI characters to explore exactly this while we wait for cyborg part manufacturing to catch up.

drink! “what if we gave the chatbot a robot body” is my favorite promptfan cliche by far, and this one has it all! virtual reality, cyborgs, robot fucking, all my dumbass transhumanist favorites

There's actually a cargo cult around downplaying AI.

The high level characteristics of this AI is something we currently cannot understand.

The lack of objectivity, creativity, imagination, and outright denial you see on HN around this topic is staggering.

no, you’re all the cargo cult! I asked my cargo and it told me so
