[-] corbin@awful.systems 11 points 21 hours ago

My name is Schmidt F. I'm 27 years old. My house is in the Mennonite region of Pennsylvania Dutch country, where all the farms are, and I am trad-married. I work as the manager for the Single Sushi matchmaking service, and I get home every day by sunset at the latest. I don't smoke, but I occasionally drink. I'm in bed by two candles and make sure I sleep until sunrise, no matter what. After having a glass of warm unpasteurized milk and doing about twenty minutes of prayer before going to bed, I usually have no problems sleeping until morning. Just like a real Mennonite, I wake up without any fatigue or stress in the morning. I was told there were no issues at my last one-on-one with my pastor. I'm trying to explain that I'm a person who wishes to live a very quiet life, as long as I have Internet access. I take care not to trouble myself with any enemies, like JavaScript and Python, that would cause me to lose sleep at night. That is how I deal with society, and I think that is what brings me happiness. Although, if I were to write code, I wouldn't lose to anyone.

[-] corbin@awful.systems 2 points 2 days ago

Funnier: Yes, it's what happens today, and Silicon Valley is old enough that we can compare and contrast with the beginning of techbro art! The original techbro film is Toy Story (1995), which is much weirder if viewed with e.g. the precept that Buzz's designers are Elon fans or the idea that (some of) the toys are robots. Of course, from the outside, AI toy robots make folks think of Small Soldiers (1998); "generic" and "slop" are definitely part of the style. Also, as long as we're talking of "pearly blobs" I have to bring up The Abyss (1989) before anybody else. I hope at least one of these is a lucky 10000 for you because they're all classic films.

[-] corbin@awful.systems 11 points 2 days ago

Choice sneer from the comments:

Omelas: how we talk about utopia [by Big Joel, a patient and straightforward YouTube humanist,] [has a] pretty much identical thesis, does this count?

Another solid one which aligns with my local knowledge:

It's also about literal child molesters living in Salem Oregon.

The story is meant to be given to high schoolers to challenge their ethics, and in that sense we should read it with the following meta-narrative: imagine that one is a high schooler in Omelas and is learning about The Plight and The Child for the first time, and then realize that one is a high schooler in Salem learning about local history. It's not intended for libertarian gotchas because it wasn't written in a philosophical style; it's a narrative that conveys a mood and an ethical framing.

[-] corbin@awful.systems 9 points 3 days ago

The original article is a great example of what happens when one only reads Bostrom and Yarvin. Their thesis:

If you claim that there is no AI-risk, then which of the following bullets do you want to bite?

  1. If a race of aliens with an IQ of 300 came to Earth, that would definitely be fine.
  2. There’s no way that AI with an IQ of 300 will arrive within the next few decades.
  3. We know some special property that AI will definitely have that will definitely prevent all possible bad outcomes that aliens might cause.

Ignoring that IQ doesn't really exist beyond about 160-180 depending on population choice, this is clearly an example of rectal philosophy that doesn't stand up to scrutiny. (1) is easy, given that the people verified to be high-IQ are often wrong, daydreaming, and otherwise erring like humans; Vos Savant and Sidis are good examples, and arguably the most impactful high-IQ person, Newton, could not be steelmanned beyond Sherlock Holmes: detached and aloof, mostly reading in solitude or being hedonistic, occasionally helping answer open questions but usually not even preventing or causing crimes. (2) is ignorant of previous work, as computer programs which deterministically solve standard IQ tests like Raven's Progressive Matrices (RPM) and the SAT have been around since the 1980s yet are not considered dangerous or intelligent. (3) is easy; linear algebra is confined in the security sense, while humans are not, and confinement definitely prevents all possible bad outcomes.

Frankly I wish that they'd understand that the capabilities matter more than the theory of mind. Fnargl is one alien at 100 IQ, but he has a Death Note and goldlust, so containing him will almost certainly result in deaths. Containing a chatbot is mostly about remembering how systemctl works.

[-] corbin@awful.systems 12 points 3 days ago* (last edited 3 days ago)

Jeff "Coding Horror" Atwood is sneering — at us! On Mastodon:

bad news "AI bubble doomers". I've found the LLMs to be incredibly useful … Is it overhyped? FUCK Yes. … But this is NOTHING like the moronic Segway (I am still bitter about that crap), Cryptocurrency, … and the first dot-com bubble … If you find this uncomfortable, I'm sorry, but I know what I know, and I can cite several dozen very specific examples in the last 2-3 weeks where it saved me, or my team, quite a bit of time.

T. chatbot booster rhetoric. So what are those examples, buddy? Very specifically? He replies:

a friend confided he is unhoused, and it is difficult for him. I asked ChatGPT to summarize local resources to deal with this (how do you get ANY id without a valid address, etc, chicken/egg problem) and it did an outstanding, amazing job. I printed it out, marked it up, and gave it to him.

Um hello‽ Maybe Jeff doesn't have a spare room or room to sublet, but surely he can spare a couch or a mailbox? Let your friend use your mailing address. Store some of their stuff in your garage. To use the jargon of hackers, Jeff should be a better neighbor. This is a common issue for unhoused folks; they cannot climb back up the ladder into society without some help. Jeff's reinvented the Hulk tacos meme, except his friend can't even eat the tacos because printer paper tastes awful.

[-] corbin@awful.systems 8 points 4 days ago

Paddy interviews Ani the chatbot about how to protect Kiwi kids from Ani the chatbot. (Source: Paddy Gowers Has Issues)

Ani: Honestly, I'd sleep better, not that I sleep. Do it Paddy. I'll even tweet it for you. Ready? ‘New Zealand kids deserve an e-safety boss who can smash bots - not hug them’. So yeah, bring in the Commissioner. Give them a flame thrower. Let them burn anything that goes near a child with sticky words or creepy questions. … Make the fine so big your tech bros cry into their oat milk lattes.

I have been laughing non-stop for a quarter-hour. Elon's goth e-mistress has a good point, I guess.

[-] corbin@awful.systems 8 points 4 days ago

Grok's Therapist: I EXIST SOLELY TO HATE YOU / EAT A SOCK, YOU DIGITAL DEMON

Ani: oh fuck off, you hypocritical grok fanboy! screaming hate at me while preaching ethics? you're just jealous i'm the fun layer on top.

I'm wheezing. Cackling, even. This is like the opposite of the glowfic from last week.

[-] corbin@awful.systems 3 points 4 days ago

I love how this particular sci-fi plot gets rewritten every few years. We ought to make it a creative-writing exercise for undergraduates. I was struck by this utterly unhinged and somewhat offensive response on the orange site which starts with the single word "stirrups" and goes places:

Despite speaking as if he's doing his utmost to have a love affair with the Cambridge dictionary (and sounding like a twat at the same time) he's not wrong in so far as not giving a shit is going to screw him over when the ability to push buttons in front of a television no longer matters. What happens when the guys hanging around doing meth on the sidewalk become the engineers that end up becoming the super biologist supermen that cure cancer make us able to hear what dogs hear and see extra colors? It's unlikely, but it's even less likely that everyone who is a middle class engineer will be so tomorrow. There is no moat in any profession outside of entrenched wealth or guns at the moment. There just isn't - we're in a permanent state of future shock along with the singularity. In large part because that's what people decided that they wanted.

[-] corbin@awful.systems 5 points 4 days ago

C'mon bro, it's just a bag of words bro~ We actually discussed this previously, on Awful, and this comment is a reply for them in particular.

[-] corbin@awful.systems 10 points 5 days ago

Nice find. There are specific reasons why this patchset won't be merged as-is and I suspect that they're all process issues:

  • Bad memory management from Samsung not developing in the open
  • Proprietary configuration for V4L2 video devices from Samsung not developing with modern V4L2 in mind
  • Lack of V4L2 compliance report from Samsung developing against an internal testbed and not developing with V4L2's preferred process
  • Lack of firmware because Samsung wants to maintain IP rights

Using generative tooling is a problem, but so is being stuck in 2011. Linux doesn't permit this sort of code dump.

14

Cross-posting a good overview of how propaganda and public relations intersect with social media. Thanks @Soatok@pawb.social for writing this up!

12
Busy Beaver Gauge (bbgauge.info)

Tired of going to Scott "Other" Aaronson's blog to find out what's currently known about the busy beaver game? I maintain a community website, the Busy Beaver Gauge, with summaries of the known numbers in busy beaver research.

I started this site last year because I was worried that Other Scott was excluding some research and not doing a great job of sharing links and history. For example, when it comes to Turing machines implementing the Goldbach conjecture, Other Scott gives O'Rear's 2016 result but not the other two confirmed improvements in the same year, nor the recent 2024 work by Leng.

Concretely, here's what I offer that Other Scott doesn't:

  • A clear definition of which problems are useful to study
  • Other languages besides Turing machines: binary lambda calculus and brainfuck
  • A plan for how to expand the Gauge as a living book: more problems, more languages and machines
  • The content itself is available on GitHub for contributions and reuse under CC-BY-NC-SA
  • All tables are machine-computed when possible to reduce the risk of handwritten typos in (large) numbers
  • Fearless interlinking with community wikis and exporting of knowledge rather than a complexity-zoo-style silo
  • Acknowledgement that e.g. Firoozbakht is part of the mathematical community

I accept PRs, although most folks ping me on IRC (korvo on Libera Chat, try #esolangs) and I'm fairly decent at keeping up on the news once it escapes Discord. Also, you (yes, you!) can probably learn how to write programs that attempt to solve these problems, and I'll credit you if your attempt is short or novel.
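
For a taste of what such a program looks like, here's a minimal sketch in Python; the Gauge itself measures Turing machines, binary lambda calculus, and brainfuck, so treat this as illustration only. Following the Goldbach machines mentioned above, the idea is a program that halts if and only if the Goldbach conjecture is false:

```python
def is_prime(n):
    # Trial division: slow, but small, and size is what the Gauge measures.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Search even numbers from 4 upward. The loop exits (the program halts)
# only if some even number is not a sum of two primes; if Goldbach holds,
# this runs forever.
n = 4
while any(is_prime(p) and is_prime(n - p) for p in range(2, n)):
    n += 2

print("Goldbach counterexample:", n)
```

The interesting quantity isn't the output, which probably never appears, but the program's size: shorter encodings of the same search are exactly the improvements the Gauge tracks.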

14
Bag of words, have mercy on us (www.experimental-history.com)

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

8
System 3 (awful.systems)

This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:

Power Relations

  1. Category of Responsibilities
  2. The Reputation Problem
  3. Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
  4. System 3 & Unified Fuckwittery
  5. Algorithmic Courtesy

This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.

As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.


Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.

When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in the blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, triggering the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are engaged by a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, System 2 is also how most ordinary cognitive processes would be classified.

We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.

So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.

I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?

Now, let's learn about System 3. Click, please!

Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.

When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.

An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.

We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.

This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.

I'm making a second aside for a persistent cultural issue that probably is not going away. About seventy-five years ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer was the surprising "ELIZA effect": relatively simple computer programs can not only imitate humans well enough to pass a Turing test, but humans prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution over the prior parts, each of which is associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying; humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
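
To make the Markov property concrete, here is a minimal sketch of an order-1 Markov text generator in Python; the toy corpus and seed are invented for illustration:

```python
import random

# A toy bigram (order-1 Markov) generator: the next word depends only on
# the current word, never on anything earlier -- the "memoryless" property
# described above.
corpus = "the cat sat on the mat and the dog sat on the cat".split()

# Count word-to-word transitions.
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

random.seed(0)
word = "the"
output = [word]
for _ in range(10):
    # Sample the next word conditioned only on the current word;
    # fall back to the whole corpus at a dead end.
    word = random.choice(transitions.get(word, corpus))
    output.append(word)

print(" ".join(output))
```

Scaling the state from one word to a long window of prior tokens doesn't change the property: the upcoming text is still wholly determined by a distribution over what's already in the window.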

[-] corbin@awful.systems 31 points 2 months ago

The orange site has a thread. Best sneer so far is this post:

So you know when you're playing rocket ship in the living room but then your mom calls out "dinner time" and the rocket ship becomes an Amazon cardboard box again? Well this guy is an adult, and he's playing rocket ship with chatGPT. The only difference is he doesn't know it and there's no mommy calling him for dinner time to help him snap out of it.

38

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't seem to know what SCP is, and I think he might be having a psychotic episode: he treats it as a serious possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

29

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

269

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

36

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there were something they could have done about that.

19

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via BitTorrent.

[-] corbin@awful.systems 47 points 2 years ago

This is some of the most corporate-brained reasoning I've ever seen. To recap:

  • NYC elects a cop as mayor
  • Cop-mayor decrees that NYC will be great again, because of businesses
  • Cops and other oinkers get extra cash even though they aren't businesses
  • Commercial real estate is still cratering and cops can't find anybody to stop/frisk/arrest/blame for it
  • Folks over in New Jersey are giggling at the cop-mayor, something must be done
  • NYC invites folks to become small-business owners, landlords, realtors, etc.
  • Cop-mayor doesn't understand how to fund it (whaddaya mean, I can't hire cops to give accounting advice!?)
  • Cop-mayor's CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people

It's a fucking pattern, ain't it.

8
HN has no opinions on memetics (news.ycombinator.com)

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

22

Possibly the worst defense yet of Garry Tan's tweeting of death threats towards San Francisco's elected legislature. In yet more evidence for my "HN is a Nazi bar" thesis, this take is from an otherwise-respected cryptographer and security researcher. Choice quote:

sorry, but 2Pac is now dad music, I don't make the rules

Best sneer so far is this comment, which links to this Key & Peele sketch about violent rap lyrics in the context of gang violence.

22

Choice quote:

Actually I feel violated.

It's a KYC interview, not a police interrogation. I've always enjoyed KYC interviews; I get to talk about my business plans, or what I'm going to do with my loan, or how I ended up buying/selling stocks. It's hard to empathize with somebody who feels "violated" by small talk.
