[-] corbin@awful.systems 6 points 4 hours ago

I guess. I imagine he'd turn out like Brandon Sanderson and make lots of YouTube videos ranting about his writing techniques. Videos on Timeless Diction Theory, a listicle of ways to make an Evil AI character convincing, an entire playlist on how to write ethical harem relationships…

[-] corbin@awful.systems 2 points 4 hours ago* (last edited 4 hours ago)

Kernel developer's perspective: The kernel is just software. It doesn't have security bugs, just bugs. It doesn't have any opinions on userspace, just contracts for how its API will behave. Its quality control is determined by whether it boots on like five machines owned by three people; it used to be whether it booted Linus' favorite machine. It doesn't have a contract for its contributors aside from GPLv2 and an informal agreement not to take people to court over GPLv2 violations. So, LLM contributions are… just contributions.

It might help to remember that the Linux development experience includes lots of aggressive critique of code. Patches are often rejected. Corporations are heavily scrutinized for ulterior motives. Personal insults are less common than they used to be but still happen, egos clash constantly, and sometimes folks burn out and give up contributing purely because they cannot stand the culture. It's already not a place where contributors are assumed to have good faith.

More cynically, it seems that Linus has recently started using generative tools, so perhaps his reluctance to craft special contributor rules stems in part from his personal preference for those tools. I'd be harsher on that preference if it weren't also paying dividends by e.g. allowing Rust in the kernel.

[-] corbin@awful.systems 6 points 1 day ago

When phrased like that, they can't be disentangled. You'll have to ask the person whether they come from a place of hate or compassion.

content warning: frank discussion of the topic

Male genital mutilation is primarily practiced by Jews and Christians. Female genital mutilation is primarily practiced by Muslims. In Minnesota, female genital mutilation is banned. It's widely understood that the Minnesota statutes are anti-Islamic and that they implicitly allow for the Jewish and Christian status quo. However, bodily autonomy is a relatively fresh legal concept in the USA and we are still not quite in consensus that mutilating infants should be forbidden regardless of which genitals happen to be expressed.

In theory, the Equal Rights Amendment (ERA) has been ratified; Mr. Biden said it's law but Mr. Trump said it's not. If the ERA is law then Minnesota's statutes are unconstitutionally sexist! This analysis requires a sort of critical gender theory: we have to be willing to read a law as sexist even when it doesn't mention sex at all. The equivalent for race, critical race theory, has been a resounding success, and there has been some progress on deconstructing gender as a legal concept too. ERA is a shortcut that would immediately reverberate throughout each state's statutes.

The most vocal opponents of the ERA have historically been women; important figures include Alice Hamilton, Mary Anderson, Eleanor Roosevelt, and Phyllis Schlafly. It's essential to know that these women had little else in common; Schlafly was a truly odious anti-feminist while Roosevelt was an otherwise-upstanding feminist.

The men's-rights advocates will highlight that e.g. Roosevelt was First Lady, married to a pro-labor president who generally supported women's rights; I would point out that her husband didn't support ERA either, as labor unions were anti-ERA during that time due to a desire to protect their wages.

This entanglement is a good example of intersectionality. We generally accept in the USA that a law can be sexist and racist, simultaneously, and similarly I think that the right way to understand the discussion around genital mutilation is that it is both sexist and religiously bigoted.

Chaser: It's also racist. C'mon, how could the USA not be racist? Minnesota's Department of Health explicitly targets Somali refugees when discussing female genital mutilation. The original statute was introduced not merely to target Muslims, but to target Somali-American Muslim refugees.

[-] corbin@awful.systems 2 points 4 days ago

Catching up and I want to leave a Gödel comment. First, correct usage of Gödel's Incompleteness! Indeed, we can't write down a finite set of rules that tells us what is true about the world; we can't even do it for the natural numbers, which is Tarski's Undefinability. These are all instances of the same theorem, Lawvere's Fixed-Point; Cantor's theorem is another instance of it as well. In my framing, previously, on Awful, postmodernism in mathematics was a movement from 1880 to 1970 characterized by finding individual instances of Lawvere's theorem. This all deeply undermines Rand's Objectivism by showing that it must either be uselessly simple, unable to deal with real-world scenarios, or complex enough to harbor incompleteness and paradoxes that cannot be mechanically resolved.
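
For reference, here's the unifying statement itself; this is my paraphrase of Lawvere's fixed-point theorem from memory, so treat it as a sketch rather than a citation:

    In a cartesian closed category, if \phi : A \to B^A is point-surjective,
    then every endomorphism f : B \to B has a fixed point:
    \exists s : 1 \to B \ \text{such that} \ f \circ s = s.

Cantor is the contrapositive with B = 2 and f = negation; Gödel and Tarski come out of choosing A to encode sentences and building the point-surjectivity by diagonalization.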

[-] corbin@awful.systems 6 points 4 days ago

Something useful to know, which I'm not saying over there because it'd be pearls before swine, is that Glyph Lefkowitz and many other folks core to the Twisted ecosystem are extremely Jewish and well-aware of Nazi symbols. Knowing Glyph personally, I'd guess that he wanted to hang a lampshade on this particular symbol; he loves to parody overly-serious folks and he spends most of his blogposts gently provoking the Python community into caring about software and people. This is the same guy who started a PyCon keynote with, "Friends, Romans, countrymen, lend me your ears; I come to bury Python, not to praise it."

[-] corbin@awful.systems 7 points 5 days ago

Yet another Palantir co-founder goes mask-off complaining about "commies or Islamists".

[-] corbin@awful.systems 9 points 5 days ago

Complementing sibling comments: Swift requires an enormous amount of syntactic ceremony in order to get things done, and it lacks a powerful standard library to abbreviate common tasks. The generative tooling does so well here because Swift is designed for an IDE which provides generative tools of the sort invented in the 80s and 90s; when a Swift developer's editor already generates most of their boilerplate, predicts their types, and tab-completes their very long method/class names, they are already on auto-pilot.

The actual underlying algorithm should be a topological sort, using either Kahn's algorithm or Tarjan's depth-first approach. It should take fewer than twenty lines total when ceremony is kept to a minimum; here is the same algorithm for roughly the same purpose in my Monte-in-Monte compiler, sorting modules based on their dependencies in fifteen lines. Also, a good standard library should have a routine or module implementing topological sorting and other common graph algorithms; for example, Python's graphlib.TopologicalSorter was added in 2020 and POSIX tsort dates back to 1979. I would expect students to memorize this algorithm immediately upon grokking it during third-year undergrad, as part of the larger goal of grokking graph-traversal algorithms; Kahn's idea is merely to keep plucking vertices with no incoming edges and error out if none remain while vertices do, and Tarjan's does the same job with a depth-first search. Neither is an easy concept to forget or to fail to rediscover when needed. Congrats, the LLM can do your homework.
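
To make the "fewer than twenty lines" claim concrete, here is a minimal sketch of the Kahn version in Python; this is my own illustration rather than the Monte code above, and the dict-of-sets graph encoding is just a convenient assumption:

    from collections import deque

    def topo_sort(deps):
        # deps maps each module to the set of modules it depends on.
        deps = {m: set(ds) for m, ds in deps.items()}
        for ds in list(deps.values()):
            for d in ds:
                deps.setdefault(d, set())  # leaf dependencies get empty entries
        indegree = {m: len(ds) for m, ds in deps.items()}
        dependents = {m: [] for m in deps}  # reverse edges: who depends on m?
        for m, ds in deps.items():
            for d in ds:
                dependents[d].append(m)
        ready = deque(m for m, n in indegree.items() if n == 0)
        order = []
        while ready:
            m = ready.popleft()
            order.append(m)
            for child in dependents[m]:
                indegree[child] -= 1
                if indegree[child] == 0:
                    ready.append(child)
        if len(order) != len(deps):
            raise ValueError("dependency cycle detected")
        return order

Modules come out only after everything they depend on, which is exactly the order a compiler wants.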

If there's any Swifties here: Hi! I love Taytay; I too was born in the late 80s and have trouble with my love life. Anyway, the nosology here is pretty easy; Swift's standard library doesn't include algorithms in general, only algorithms associated to data structures, which themselves are associated to standardized types. Since Swift descends from Smalltalk, its data structures include Collections, so a reasonable fix here would be to add a Graph collection and make topological sorting a method; see Python's approach for an example. Another possibility is to abuse the builtin sort routine, but this will cost O(n lg n) path lookups and is much more expensive; it's not a long-term solution.
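
For contrast, here's roughly what the stdlib route looks like; the TopologicalSorter call is real Python 3.9+, though the toy graph is my own:

    from graphlib import TopologicalSorter

    # Same convention as above: each module maps to the modules it depends on.
    deps = {"app": {"ui", "net"}, "ui": {"net"}, "net": set()}
    print(list(TopologicalSorter(deps).static_order()))
    # -> ['net', 'ui', 'app']; a cyclic graph raises graphlib.CycleError instead.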

21

Happy Holidays and merry winter solstice! I'm sharing a Nix flake that I've been slowly growing in my homelab for the past few months. It incorporates this systemd feature, switches from CppNix to Lix, and disables a handful of packages. That PR inspired me, and I'm releasing this in turn to inspire you. Paying it forward and all that.

Should you use this? As-is, probably not. It will rebuild systemd at a minimum and you probably don't have enough RAM for that; building from this flake crashed my development laptop and I had to build it on a workstation instead. Also, if you have good taste in packages then this will be a no-op aside from systemd and Lix, and you can do both of those on your own.

Isn't this merely virtue-signalling? I think that the original systemd PR was definitely signalling, since it's unlikely to ever get deployed on the systems of our friends. However, I really do sleep better at night knowing that it's unlikely that jart or suckless have any code running on my machines.

Why not make a proper repository and organization? Mostly the possibility that GitHub might actually take down a repository named nixpkgs-antifa. If there's any interest then I could set up a Codeberg repo. However, up to this point, I've only used it internally and my homelab has its own internal git service.

Mods: You've indicated that you don't like it when people write code to approach our social problems. That's fine; I'm not publishing an application or service and certainly not starting a social movement, just sharing some of my internal code.

13
submitted 2 weeks ago* (last edited 2 weeks ago) by corbin@awful.systems to c/techtakes@awful.systems

Did catgirl Riley cheat at a videogame, or is she just that good? Detective Karl Jobst is on the case. Are the critics from platform One True King (OTK), like Asmongold and Tectone, correct in their analysis of Riley's gameplay? Or are they just haters who can't stand how good she is? Bonus appearance from Tommy Tallarico.

Content warning: Quite a bit of transmisogyny. Asmongold and Tectone are both transphobes who say multiple slurs and constantly misgender Riley, and their Twitch chats also are filled with slurs. Jobst does not endorse anything that they say, but he also quotes their videos and screenshots directly.

too long, didn't watch

This video is a takedown of an AI slop channel, "Call of Shame". As hinted, this is something of a ROBLOX_OOF.mp3 essay, where it's not just about the cryptofascists pushing the culture war by attacking a trans person, but about one specific rabbit hole surrounding one person who has made many misleading claims. Just like how ROBLOX_OOF.mp3 permanently hobbled Tallarico's career, it seems that Call of Shame has pivoted twice and turned to evangelizing Christianity instead as a result of this video's release.

29

A straightforward dismantling of AI fearmongering videos uploaded by Kyle "Science Thor" Hill, Sci "The Fault in our Research" Show, and Kurz "We're Sorry for Summarizing a Pop-Sci Book" Gesagt over the past few months. The author is a computer professional but their take is fully in line with what we normally post here.

I don't have any choice sneers. The author is too busy hunting for whoever is paying SciShow and Kurzgesagt for these videos. I do appreciate that they repeatedly point out that there is allegedly a lot of evidence of people harming themselves or others because of chatbots. Allegedly.

19

A straightforward product review of two AI therapists. Things start bad and quickly get worse. Choice quip:

Oh, so now I'm being gaslit by a frakking Tamagotchi.

12

The answer is no. Seth explains why not, using neuroscience and medical knowledge as a starting point. My heart was warmed when Seth asked whether anybody present believed that current generative systems are conscious and nobody in the room clapped.

Perhaps the most interesting takeaway for me was learning that — at least in terms of what we know about neuroscience — the classic thought experiment of the neuron-replacing parasite, which incrementally replaces a brain with some non-brain substrate without interrupting any computations, is biologically infeasible. This doesn't surprise me but I hadn't heard it explained so directly before.

Seth has been quoted previously, on Awful, for his critique of the current AI hype. This talk is largely in line with his other public statements.

Note that the final ten minutes of the video are an investigation of Seth's position by somebody else. This is merely part of presenting before a group of philosophers; they want to critique and ask questions.

20

A complete dissection of the history of the David Woodard editing scandal as told by an Oregonian Wikipedian. The video is sectioned into multiple miniature documentaries about various bastards and can be watched piece-by-piece. Too long to watch? Read the link above.

too long, didn't watch, didn't read, summarize anyway

David Woodard is an ethnonationalist white supremacist whose artistic career has led to an intersection with a remarkable slice of cult leaders and serial killers throughout the past half-century. Each featured bastard has some sort of relationship to Woodard, revealing an entire facet of American Nazism which runs in parallel to Christian TREACLES, passed down through psychedelia, occult mysticism, and non-Christian cults of capitalism.

14

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

12
Busy Beaver Gauge (bbgauge.info)

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain a community website that has summaries for the known numbers in Busy Beaver research, the Busy Beaver Gauge.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.
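
For anybody wondering what it means for a Turing machine to "implement" the Goldbach conjecture: such a machine just searches for a counterexample and halts if it ever finds one, so the conjecture is equivalent to the machine running forever. Here is the same search sketched in Python rather than as a Turing machine, purely as my own illustration:

    from itertools import count

    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_counterexample():
        # Halts (returning n) only if some even n >= 4 is not a sum of two primes.
        for n in count(4, 2):
            if not any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)):
                return n

Compile that loop down to a small Turing machine, and a sufficiently tight bound on the busy beaver numbers for machines of that size would, in principle, settle Goldbach by brute force.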

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.

15
Bag of words, have mercy on us (www.experimental-history.com)

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

8
System 3 (awful.systems)

This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:

Power Relations

  1. Category of Responsibilities
  2. The Reputation Problem
  3. Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
  4. System 3 & Unified Fuckwittery
  5. Algorithmic Courtesy

This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.

As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.


Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.

When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, prompting the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are engaged by a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, System 2 is also how most ordinary cognitive processes would be classified.

We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.

So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.

I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?

Now, let's learn about System 3. Click, please!

Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.

When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.

An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.

We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.

This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it is "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.

I'm making a second aside for a persistent cultural issue that probably is not going away. About three-quarters of a century ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer was the surprising "ELIZA effect": relatively simple computer programs can not only imitate humans well enough to pass a Turing test, but humans prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution over the prior parts, each of which is associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying; humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
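
To make "memoryless" concrete, here is a toy bigram generator; this is my own illustration of the Markov idea, emphatically not how a modern chatbot is built, but the only-the-context-matters property is the same:

    import random
    from collections import defaultdict

    def train(corpus):
        # Record, for each word, every word observed to immediately follow it.
        table = defaultdict(list)
        words = corpus.split()
        for a, b in zip(words, words[1:]):
            table[a].append(b)
        return table

    def babble(table, start, length=12):
        out = [start]
        for _ in range(length):
            followers = table.get(out[-1])
            if not followers:
                break
            out.append(random.choice(followers))  # only the last word matters
        return " ".join(out)

    table = train("the cat sat on the mat and the dog slept on the rug")
    print(babble(table, "the"))

Everything such a program "knows" is an association table; scale the table up and smooth it with a few billion parameters and you get fluency, but the generation step is still drawing the next token from a distribution over what came before.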

[-] corbin@awful.systems 31 points 5 months ago

The orange site has a thread. Best sneer so far is this post:

So you know when you're playing rocket ship in the living room but then your mom calls out "dinner time" and the rocket ship becomes an Amazon cardboard box again? Well this guy is an adult, and he's playing rocket ship with chatGPT. The only difference is he doesn't know it and there's no mommy calling him for dinner time to help him snap out of it.

38

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode: he appears to seriously believe that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

[-] corbin@awful.systems 25 points 11 months ago

Somebody pointed out, on HN itself, that HN's management is partially to blame for the situation in general. Copying their comment here because it's the sort of thing Dan might blank:

but I don't want to get hellbanned by dang.

Who gives a fuck about HN. Consider the notion that dang is, in fact, partially to blame for this entire fiasco. He runs an easy-to-propagandize platform due how much control of information is exerted by upvotes/downvotes and unchecked flagging. It's caused a very noticeable shift over the past decade among tech/SV/hacker voices -- the dogmatic following of anything that Musk or Thiel shit out or say, this community laps it up without hesitation. Users on HN learn what sentiment on a given topic is rewarded and repeat it in exchange for upvotes.

I look forward to all of it burning down so we can, collectively, learn our lessons and realize that building platforms where discourse itself is gamified (hn, twitter, facebook, and reddit) is exactly what led us down this path today.

[-] corbin@awful.systems 25 points 1 year ago

Meanwhile, actual Pastafarians (hi!) know that the Russian Federation openly persecutes the Church of the Flying Spaghetti Monster for failing to help the government in its authoritarian activities, and also that we're called to be anti-authoritarian. The Fifth Rather:

I'd really rather you didn't challenge the bigoted, misogynist, hateful ideas of others on an empty stomach. Eat, then go after the bastards.

May you never run out of breadsticks, travelers.

[-] corbin@awful.systems 26 points 2 years ago

He's talking like it's 2010. He really must feel like he deserves attention, and it's not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.

[-] corbin@awful.systems 47 points 2 years ago

This is some of the most corporate-brained reasoning I've ever seen. To recap:

  • NYC elects a cop as mayor
  • Cop-mayor decrees that NYC will be great again, because of businesses
  • Cops and other oinkers get extra cash even though they aren't businesses
  • Commercial real estate is still cratering and cops can't find anybody to stop/frisk/arrest/blame for it
  • Folks over in New Jersey are giggling at the cop-mayor, something must be done
  • NYC invites folks to become small-business owners, landlords, realtors, etc.
  • Cop-mayor doesn't understand how to fund it (whaddaya mean, I can't hire cops to give accounting advice!?)
  • Cop-mayor's CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people

It's a fucking pattern, ain't it.
