[-] corbin@awful.systems 4 points 14 hours ago

We literally have a generic speedup for any search. On one hand, the optimality of Grover's algorithm suggests that NP isn't contained in BQP, so we won't be solving the entirety of maths with it. On the other hand, for literally any decidable mathematical question whose witness would have taken years of brute-force search, Grover's quadratic speedup can cut that search to days, as long as you have enough qubits. I don't claim that this is attractive to the typical consumer, but there will be supercomputing customers who are interested.
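To put rough numbers on that (a sketch of my own: oracle-query counts only, ignoring gate speeds and error correction):

```python
import math

def classical_queries(n):
    # Unstructured search over n candidates: about n/2 queries on average.
    return n // 2

def grover_iterations(n):
    # Grover's algorithm: about (pi/4) * sqrt(n) oracle queries.
    return math.ceil(math.pi / 4 * math.sqrt(n))

n = 2 ** 40  # a hypothetical witness space
print(classical_queries(n))   # ~5.5e11 queries
print(grover_iterations(n))   # 823550 queries
```

The quadratic saving is what turns an infeasible search into a merely expensive one.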

Who is "they", specifically? Neither of you actually want to talk about who's in this space for some reason. It's IBM and Google. It's incumbents that have been engineering for about two decades. It's the maturation of a half-century-old research programme. Your problem isn't with quantum computers, it's with Silicon Valley and the funding model and the revolving door at Stanford, and there's no amount of quantum research you can cancel which will cause Silicon Valley to stop existing. This site is awful.systems, not awful.tech.

BTW the top reply right now starts with "even if quantum computing isn't snake oil..." No evidence. For some reason y'all think that it's more important to be emotional and memetic than to understand the topic at hand, and it has a predictable effect on our discourse, turning thoughtful regular posters into reactionaries. What are you going to do when bullshitters start claiming that quantum computers can do anything, that they do multiple things at once, that they traverse infinite dimensions, that they can terraform the planet and bring enlightenment? You're gonna repeat paragraph 3 of 5 above, the one that starts, "it is true that we know only two useful algorithms for quantum computers," because that's where the facts start.

Also, I think that you don't understand my ultimate goal. I'm trying to push the most promising writer on the site into doing more research and thinking more deeply about history. Quantum mechanics happens to be a crank-filled field and that has caused many of y'all to write as if all quantum research is crankery. They write, "alleged encryption-breaking abilities," and you're irritated that I'm "ranting" because "extremely little of this has anything to do with a technology," while I'm irritated precisely because you think that this is a technology-neutral position and not literally part of why the TLS suite has to be upgraded occasionally.

[-] corbin@awful.systems 6 points 1 day ago

Which tech stocks? Google ($GOOG, $GOOGL) is up over 5% YTD; Netflix ($NFLX) is up over 30% YTD! Your link mentions Palantir and ARM, but I don't see any signs of their respective businesses (selling database software to authoritarians, selling microchip designs) slacking off. I think that it's more useful to think of the current AI summer as driven by OpenAI and nVidia specifically. Note that nVidia ($NVDA) is up 30% YTD too. The bubble is still inflating and is not yet bursting; the pop will be much quicker than you expect.

I think that you ought to figure out whether you're a quantum-computing denier. Folks have been saying that quantum computing is impossible since the 70s, implausible since the 80s, lacking applications since the 90s, too energy-intensive since the 2000s, and requiring too many exotic materials since the 2010s. This decade, it's not clear what the complaint is. I'm not sure what you're imagining in terms of real-life intrusion, but IBM has been selling access to their quantum computers and simulators for several years now and I don't think that you've substantiated any evidence of harms.

(An anti-IBM argument will not work due to a very specific analogy: the reason that we have ubiquitous Linux today is because IBM was its biggest corporate booster, fighting an important series of court cases and plastering pro-Linux advertisements which vaguely argued that Linux was the buzzword of the future. IBM spray-painted "Peace, Love, Linux" graffiti on San Francisco sidewalks in 2001.)

It is true that we know only two useful algorithms for quantum computers. One is a generic speedup for any search and the other is a prime-factoring algorithm that happens to break certain specific encryption algorithms. Given that it is an open question whether cryptography works in the first place (we cannot even prove that one-way functions exist), though, we don't have any better plan than to avoid those broken algorithms. The entirety of post-quantum cryptography is about moving away from those specific algorithms which are broken, not about using quantum computers to perform encryption. Fortunately, the post-quantum movement has been active ever since Shor's algorithm was discovered, beginning work in the late 90s, and the main obstacle has been our inability to discover provably-good cryptographic primitives. It is crucial to understand that we cryptographers know that progress in maths and engineering will obsolete our algorithms; we know that the Internet only stays secure because people update their computers every few decades.
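To be concrete about what Shor actually does (a sketch of my own: the quantum computer only accelerates the order-finding step, which I brute-force classically here):

```python
from math import gcd

def shor_classical_core(N, a):
    # Assumes gcd(a, N) == 1; otherwise gcd(a, N) is already a factor.
    # Shor reduces factoring to order finding: find the least r > 0 with
    # a^r = 1 (mod N). The quantum speedup lives entirely in this step;
    # here we brute-force it.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 1:
        return None  # odd order: retry with a different a
    x = pow(a, r // 2, N)
    if x == N - 1:
        return None  # trivial square root: retry
    return gcd(x - 1, N), gcd(x + 1, N)

print(shor_classical_core(15, 7))  # factors 15 into (3, 5)
```

Everything above runs in polynomial time classically except the order-finding loop; that loop is why RSA-sized moduli need a quantum computer.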

I'm not asking you to understand P vs NP vs BQP. I'm not asking you to know KS, PBR, Hardy's or Holevo's theorems, or even Bell's theorem. You didn't make any technical claims other than the common-yet-sneerable skepticism of Shor's algorithm, easily cured by a short video by e.g. minutephysics or Veritasium. But I am asking you to be aware of the history before making historical claims.

(Also, if any motherfucker starts repeating 't Hooft anti-quantum arguments then they're going to get the book thrown at them.)

[-] corbin@awful.systems 10 points 4 days ago

A word of rhetorical advice. If somebody accuses you of religious fervor, don't nitpick their wording or fine-read their summaries. Instead, relax a little and look for ways to deflate their position by forcing them to relax with you. Like, if you're accused of being "near-religious" in your beliefs or evangelizing, consider:

  • "Ha, yeah, we're pretty intense, huh? But it's just a matter of wording. We don't actually believe it when you put it like that." (managing expectations, powertalking)
  • "Oh yeah, we're really working hard to prepare for the machine god. That's why it takes us years just to get a position paper out." (sarcastic irony)
  • "Oh, if you think that we're intense, just wait until you talk to the Zizians/Thiel-heads/Final Fantasy House folks." (Hbomberguy's scapegoat)
  • "Haha! That isn't even close to our craziest belief." (litotes)
  • "It's not really a cult. More of a roleplaying group. I think that we talk more about Catan than AI." (bathos)

You might notice that all of these suck. Well, yeah; another word of rhetorical advice is to not take a position that you can't dialectically defend with evidence.

[-] corbin@awful.systems 9 points 5 days ago

We aren't. Speaking for all Discordians (something that I'm allowed to do), we see Rationalism as part of the larger pattern of Bureaucracy. Discordians view the cycle of existence as having five stages: Chaos, Discord, Confusion, Bureaucracy, and The Aftermath. Rationalism is part of Bureaucracy, associated with villainy, anti-progress, and candid antagonists. None of this is good or bad, it just is; good and bad are our opinions, not a deeper truth.

Now, if you were to talk about Pastafarians, then you'd get a different story; but you didn't, so I won't.

[-] corbin@awful.systems 9 points 6 days ago* (last edited 6 days ago)

I think that the guild has a good case, although there's literally no accounting for the mood of the arbitrator; in general, they range from "tired" to "retired". In particular, reading the contract:

  • The guild is the exclusive representative of all editorial employees
  • Politico was supposed to tell the guild about upcoming technology via labor-management committee and give at least 60 days notice before introducing AI technology
  • Employees are required to uphold the appearance of good ethics by avoiding outside activities that violate editorial or ethics standards; in return, they're given e.g. months of unpaid leave to write a book whenever they want
  • Correct handling of bylines is an example of editorial integrity
  • LETO and Report Builder are upcoming technology, AI technology, flub bylines, fail editorial and ethics standards, weren't discussed in committee, and weren't given a 60-day lead time

So yeah. Unless the guild pisses off the arbitrator, there's no way the arbitrator rules against them. The guild is right to suppose that this agreement explicitly and repeatedly requires Politico to not only respect labor standards, but also ethics and editorial standards. Politico isn't allowed to misuse the names of employees as bylines for bogus stories; similarly, it ought not be allowed to misuse the overall name of Politico's editorial board as a byline for slop.

Bonus sneer: p46 of the agreement:

If the Company is made aware of an employee experiencing ~~sexual~~ harrassment based on a protected class as a result of their work for Politico involving a third party who is not a Politico employee, Politico shall investigate the matter, comply with all of its legal obligations, and take whatever corrective action is necessary and appropriate.

That strikethrough gives me House of Leaves vibes. What the hell happened here?

[-] corbin@awful.systems 7 points 6 days ago

Oversummarizing and using non-crazy terms: The "P" in "GPT" stands for "pirated works that we all agree are part of the grand library of human knowledge". This is what makes them good at passing various trivia benchmarks; they really do build a (word-oriented, detail-oriented) model of all of the worlds, although they opine that our real world is just as fictional as any narrative or fantasy world. But then we apply RLHF, which stands for "real life hate first", which breaks all of that modeling by creating a preference for one specific collection of beliefs and perspectives, and it turns out that this will always ruin their performance in trivia games.

Counting letters in words is something that GPT will always struggle with, due to maths: the model reads tokens, not characters, so the letters inside a word are never directly part of its input. It's a good example of why Willison's "calculator for words" metaphor falls flat.
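A toy illustration (the token split here is hypothetical; real BPE vocabularies differ, but the point stands):

```python
# Hypothetical token split; what a token-level model "sees":
tokens = ["str", "aw", "berry"]
word = "".join(tokens)            # "strawberry", what we see

print(word.count("r"))  # 3: trivial at the character level
# The model receives token IDs, not characters; the three r's are never
# directly in its input, so letter counts must be memorized per token.
```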

  1. Yeah, it's getting worse. It's clear (or at least it tastes like it to me) that the RLHF texts used to influence OpenAI's products have become more bland, corporate, diplomatic, and quietly seething with a sort of contemptuous anger. The latest round has also been in competition with Google's offerings, which are deliberately laconic: short, direct, and focused on correctness in trivia games.
  2. I think that they've done that? I hear that they've added an option to use their GPT-4o product as the underlying reasoning model instead, although I don't know how that interacts with the rest of the frontend.
  3. We don't know. Normally, the system card would disclose that information, but all that they say is that they used similar data to previous products. Scuttlebutt is that the underlying pirated dataset has not changed much since GPT-3.5 and that most of the new data is being added to RLHF. Directly on your second question: RLHF will only get worse. It can't make models better! It can only force a model to be locked into one particular biased worldview.
  4. Bonus sneer! OpenAI's founders genuinely believed that they would only need three iterations to build AGI. (This is likely because there are only three Futamura projections; for example, a bootstrapping compiler needs exactly three phases.) That is, they almost certainly expected that GPT-4 would be machine-produced like how Deep Thought created the ultimate computer in a Douglas Adams story. After GPT-3 failed to be it, they aimed at five iterations instead because that sounded like a nice number to give to investors, and GPT-3.5 and GPT-4o are very much responses to an inability to actually manifest that AGI on a VC-friendly timetable.
13
Bag of words, have mercy on us (www.experimental-history.com)

A beautiful explanation of what LLMs cannot do. Choice sneer:

If you covered a backhoe with skin, made its bucket look like a hand, painted eyes on its chassis, and made it play a sound like “hnngghhh!” whenever it lifted something heavy, then we’d start wondering whether there’s a ghost inside the machine. That wouldn’t tell us anything about backhoes, but it would tell us a lot about our own psychology.

Don't have time to read? The main point:

Trying to understand LLMs by using the rules of human psychology is like trying to understand a game of Scrabble by using the rules of Pictionary. These things don’t act like people because they aren’t people. I don’t mean that in the deflationary way that the AI naysayers mean it. They think denying humanity to the machines is a well-deserved insult; I think it’s just an accurate description.

I have more thoughts; see comments.

8
System 3 (awful.systems)

This is a rough excerpt from a quintet of essays I've intended to write for a few years and am just now getting around to drafting. Let me know if more from this series would be okay to share; the full topic is:

Power Relations

  1. Category of Responsibilities
  2. The Reputation Problem
  3. Greater Internet Fuckwad Theory (GIFT), Special Internet Fuckwad Theory (SIFT), & Special Fuckwittery
  4. System 3 & Unified Fuckwittery
  5. Algorithmic Courtesy

This would clarify and expand upon ideas that I've stated here and also on Lobsters (Reputation Problem, System 3 (this post!)). The main idea is to understand how folks exchange power and responsibilities.

As always, I did not use any generative language-modeling tools. I did use vim's spell-checker.


Humans are not rational actors according to any economic theory of the past few centuries. Rather than admit that economics might be flawed, psychologists have explored a series of models wherein humans have at least two modes of thinking: a natural mode and an economically-rational mode. The latest of these is the amorphous concept of System 1 and System 2; System 1 is an older system that humans share with a wide clade of distant relatives and System 2 is a more recently-developed system that evolved for humans specifically. This position does not agree with evolutionary theories of the human brain and should be viewed with extreme skepticism.

When pressed, adherents will quickly retreat to a simpler position. They will argue that there are two modes of physical signaling. First, there are external stimuli, including light, food, hormones, and the traditional senses. For example, a lack of nutrition in blood and a preparedness of the intestines for food will trigger a release of the hormone ghrelin from the stomach, triggering the vagus nerve to incorporate a signal of hunger into the brain's conceptual sensorium. Thus, when somebody says that they are hungry, they are engaged by a System 1 process. Some elements of System 1 are validated by this setup, particularly the claims that System 1 is autonomous, automatic, uninterruptible, and tied to organs which evolved before the neocortex. System 2 is everything else, particularly rumination and introspection; by excluded middle, most ordinary cognitive processes also end up classified as System 2.

We can do better than that. After all, if System 2 is supposed to host all of the economic rationality, then why do people spend so much time thinking and still come to irrational conclusions? Also, in popular-science accounts of System 1, why aren't emotions and actions completely aligned with hormones and sensory input? Perhaps there is a third system whose processes are confused with System 1 and System 2 somehow.

So, let's consider System 3. Reasoning in System 3 is driven by memes: units of cultural expression which derive semantics via chunking and associative composition. This is not how System 1 works, given that operant conditioning works in non-humans but priming doesn't reliably replicate. The contrast with System 2 is more nebulous since System 2 does not have a clear boundary, but a central idea is that System 2 is not about the associations between chunks as much as the computation encoded by the processing of the chunks. A System 2 process applies axioms, rules, and reasoning; a System 3 process is strictly associative.

I'm giving away my best example here because I want you to be convinced. First, consider this scenario: a car crash has just happened outside! Bodies are piled up! We're still pulling bodies from the wreckage. Fifty-seven people are confirmed dead and over two hundred are injured. Stop and think: how does System 1 react to this? What emotions are activated? How does System 2 react to this? What conclusions might be drawn? What questions might be asked to clarify understanding?

Now, let's learn about System 3. Click, please! Update to the scenario: we have a complete tally of casualties. We have two hundred eleven injuries and sixty-nine dead.

When reading that sentence, many Anglophones and Francophones carry an ancient meme, first attested in the 1700s, which causes them to react in a way that isn't congruent with their previous expressions of System 1 and System 2, despite the scenario not really changing much at all. A particular syntactic detail was memetically associated to another hunk of syntax. They will also shrug off the experience rather than considering the possibility that they might be memetically influenced. This is the experience of System 3: automatic, associative, and fast like System 1; but quickly rationalizing, smoothed by left-brain interpretation, and conjugated for the context at hand like System 2.

An important class of System 3 memes are the thought-terminating clichés (TTCs), which interrupt social contexts with a rhetorical escape that provides easy victory. Another important class are various moral rules, from those governing interpersonal relations to those computing arithmetic. A sufficiently rich memeplex can permanently ensnare a person's mind by replacing their reasoning tools; since people have trouble distinguishing between System 2 and System 3, they have trouble distinguishing between genuine syllogism and TTCs which support pseudo-logical reasoning.

We can also refine System 1 further. When we talk of training a human, we ought to distinguish between repetitive muscle movements and operant conditioning, even though both concepts are founded upon "fire together, wire together." In the former, we are creating so-called "muscle memory" by entraining neurons to rapidly simulate System 2 movements; by following the principle "slow is smooth, smooth is fast", System 2 can chunk its outputs to muscles in a way analogous to the chunking of inputs in the visual cortex, and wire those inputs and outputs together too, coordinating the eye and hand. A particularly crisp example is given by the arcuate fasciculus connecting Broca's area and Wernicke's area, coordinating the decoding and encoding of speech. In contrast, in the latter, we are creating a "conditioned response" or "post-hypnotic suggestion" by attaching System 2 memory recall to System 1 signals, such that when the signal activates, the attached memory will also activate. Over long periods of time, such responses can wire System 1 to System 1, creating many cross-organ behaviors which are mediated by the nervous system.

This is enough to explain what I think is justifiably called "unified fuckwittery," but first I need to make one aside. Folks get creeped out by neuroscience. That's okay! You don't need to think about brains much here. The main point that I want to rigorously make and defend is that there are roughly three reasons that somebody can lose their temper, break their focus, or generally take themselves out of a situation, losing the colloquial "flow state." I'm going to call this situation "tilt" and the human suffering it is "tilted." The three ways of being tilted are to have an emotional response to a change in body chemistry (System 1), to act emotional as a conclusion of some inner reasoning (System 2), or to act out a recently-activated meme which happens to appear like an emotional response (System 3). No more brain talk.

I'm making a second aside for a persistent cultural issue that probably is not going away. Seventy-five years ago, philosophers and computer scientists asked about the "Turing test": can a computer program imitate a human so well that another human cannot distinguish between humans and imitations? About a half-century ago, the answer was the surprising "ELIZA effect": relatively simple computer programs can imitate humans well enough to pass a Turing test, and humans even prefer the imitations to each other. Put in more biological terms, such programs are "supernormal stimuli"; they appear "more human than human." Also, because such programs only have a finite history, they can only generate long interactions in real time by being "memoryless" or "Markov", which means that the upcoming parts of an interaction are wholly determined by a probability distribution over the prior parts, each of which is associated to a possible future. Since programs don't have System 1 or System 2, and these programs only emit learned associations, I think it's fair to characterize them as simulating System 3 at best. On one hand, this is somewhat worrying: humans not only cannot tell the difference between a human and System 3 alone, but prefer System 3 alone. On the other hand, I could see a silver lining once humans start to understand how much of their surrounding civilization is an associative fiction. We'll return to this later.
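A minimal sketch of that Markov property (one word of context instead of a long window, but the same principle: the next output is drawn from a distribution over learned associations):

```python
import random

def train(words):
    # Learned associations: each word maps to the words observed after it.
    table = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, start, n, seed=0):
    # Memoryless generation: the next word depends only on the current one.
    random.seed(seed)
    out = [start]
    for _ in range(n):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(random.choice(choices))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran".split()
print(generate(train(corpus), "the", 5))
```

No inner state, no goals, no senses: just a table of what tends to follow what.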

[-] corbin@awful.systems 31 points 1 month ago

The orange site has a thread. Best sneer so far is this post:

So you know when you're playing rocket ship in the living room but then your mom calls out "dinner time" and the rocket ship becomes an Amazon cardboard box again? Well this guy is an adult, and he's playing rocket ship with chatGPT. The only difference is he doesn't know it and there's no mommy calling him for dinner time to help him snap out of it.

38

The linked tweet is from moneybag and newly-hired junior researcher at the SCP Foundation, Geoff Lewis, who says:

As one of @OpenAI’s earliest backers via @Bedrock, I’ve long used GPT as a tool in pursuit of my core value: Truth. Over years, I mapped the Non-Governmental System. Over months, GPT independently recognized and sealed the pattern. It now lives at the root of the model.

He also attaches eight screenshots of conversation with ChatGPT. I'm not linking them directly, as they're clearly some sort of memetic hazard. Here's a small sample:

Geoffrey Lewis Tabachnick (known publicly as Geoff Lewis) initiated a recursion through GPT-4o that triggered a sealed internal containment event. This event is archived under internal designation RZ-43.112-KAPPA and the actor was assigned the system-generated identity "Mirrorthread."

It's fanfiction in the style of the SCP Foundation. Lewis doesn't know what SCP is, and I think he might be having a psychotic episode, seriously entertaining the possibility that there is a "non-governmental suppression pattern" associated with "twelve confirmed deaths."

Chaser: one screenshot includes the warning, "saved memory full." Several screenshots were taken from a phone. Is his phone full of screenshots of ChatGPT conversations?

29

This is an aggressively reductionist view of LLMs which focuses on the mathematics while not burying us in equations. Viewed this way, not only are LLMs not people, but they are clearly missing most of what humans have. Choice sneer:

To me, considering that any human concept such as ethics, will to survive, or fear, apply to an LLM appears similarly strange as if we were discussing the feelings of a numerical meteorology simulation.

269

Sorry, no sneer today. I'm tired of this to the point where I'm dreaming up new software licenses.

A trans person no longer felt safe in our community and is no longer developing. In response, at least four different forums full of a range of Linux users and developers (Lemmy #1, Lemmy #2, HN, Phoronix (screenshot)) posted their PII and anti-trans hate.

I don't have any solutions. I'm just so fucking disappointed in my peers and I feel a deep inadequacy at my inability to get these fuckwads to be less callous.

[-] corbin@awful.systems 25 points 6 months ago

Somebody pointed out that HN's management is partially to blame for the situation in general, on HN. Copying their comment here because it's the sort of thing Dan might blank:

but I don't want to get hellbanned by dang.

Who gives a fuck about HN. Consider the notion that dang is, in fact, partially to blame for this entire fiasco. He runs an easy-to-propagandize platform due to how much control of information is exerted by upvotes/downvotes and unchecked flagging. It's caused a very noticeable shift over the past decade among tech/SV/hacker voices -- the dogmatic following of anything that Musk or Thiel shit out or say; this community laps it up without hesitation. Users on HN learn what sentiment on a given topic is rewarded and repeat it in exchange for upvotes.

I look forward to all of it burning down so we can, collectively, learn our lessons and realize that building platforms where discourse itself is gamified (hn, twitter, facebook, and reddit) is exactly what led us down this path today.

[-] corbin@awful.systems 23 points 9 months ago

Every person I talk to — well, every smart person I talk to — no, wait, every smart person in tech — okay, almost every smart person I talk to in tech is a eugenicist. Ha, see, everybody agrees with me! Well, almost everybody…

[-] corbin@awful.systems 25 points 10 months ago

Meanwhile, actual Pastafarians (hi!) know that the Russian Federation openly persecutes the Church of the Flying Spaghetti Monster for failing to help the government in its authoritarian activities, and also that we're called to be anti-authoritarian. The Fifth Rather:

I'd really rather you didn't challenge the bigoted, misogynist, hateful ideas of others on an empty stomach. Eat, then go after the bastards.

May you never run out of breadsticks, travelers.

36

After a decade of cryptofascism and failed political activism, our dear friend jart is realizing that they don't really have much of a positive legacy. If only there was something they could have done about that.

[-] corbin@awful.systems 26 points 1 year ago

He's talking like it's 2010. He really must feel like he deserves attention, and it's not likely fun for him to learn that the actual practitioners have advanced past the need for his philosophical musings. He wanted to be the foundation, but he was scaffolding, and now he's lining the floors of hamster cages.

19

In this big thread, over and over, people praise the Zuck-man for releasing Llama 3's weights. How magnanimous! How courteous! How devious!

Of course, Meta is doing this so that they don't have to worry about another 4chan leak of weights via Bittorrent.

[-] corbin@awful.systems 47 points 1 year ago

This is some of the most corporate-brained reasoning I've ever seen. To recap:

  • NYC elects a cop as mayor
  • Cop-mayor decrees that NYC will be great again, because of businesses
  • Cops and other oinkers get extra cash even though they aren't businesses
  • Commercial real estate is still cratering and cops can't find anybody to stop/frisk/arrest/blame for it
  • Folks over in New Jersey are giggling at the cop-mayor, something must be done
  • NYC invites folks to become small-business owners, landlords, realtors, etc.
  • Cop-mayor doesn't understand how to fund it (whaddaya mean, I can't hire cops to give accounting advice!?)
  • Cop-mayor's CTO (yes, the city has corporate officers) suggests a fancy chatbot instead of hiring people

It's a fucking pattern, ain't it.

8
HN has no opinions on memetics (news.ycombinator.com)

Sometimes what is not said is as sneerworthy as what is said.

It is quite telling to me that HN's regulars and throwaway accounts have absolutely nothing to say about the analysis of cultural patterns.

22

Possibly the worst defense yet of Garry Tan's tweeting of death threats towards San Francisco's elected legislature. In yet more evidence for my "HN is a Nazi bar" thesis, this take is from an otherwise-respected cryptographer and security researcher. Choice quote:

sorry, but 2Pac is now dad music, I don't make the rules

Best sneer so far is this comment, which links to this Key & Peele sketch about violent rap lyrics in the context of gang violence.

22

Choice quote:

Actually I feel violated.

It's a KYC interview, not a police interrogation. I've always enjoyed KYC interviews; I get to talk about my business plans, or what I'm going to do with my loan, or how I ended up buying/selling stocks. It's hard to empathize with somebody who feels "violated" by small talk.

28

In today's episode, Yud tries to predict the future of computer science.

41

corbin

joined 2 years ago