Is that the guy who's always trying to use LessWrong as preemptive conversion therapy to cure him of having trans thoughts, and they're actually having none of it?
I mean it's so cut and dried you had to invent a disadvantage for pushing the red button.
Maybe the catch is that picking red means you're basically OK with offing, en masse, people who don't think like you do, even though it's posited as a dilemma between securing your family's lives and giving a chance to hypothetical people with a heavy OCD preference for blue buttons.
If this isn't pure engagement bait, what's the real-world situation this is supposed to map to? Pressing red means you always live, and if everyone pushes red everyone lives, so...
I mean if blue is supposed to be a proxy for altruism, that usually doesn't come with a certain death conditional.
Apparently, you buy some currency-type thing called AI Units, and this is the rate at which the different LLMs consume them. The multipliers used to represent requests, I think, i.e. the times you triggered inference, but AI Units are a proxy for token burn in a somewhat vague way, which makes me think there will be rate-limit controversies similar to what's now happening with Anthropic.
Existing enterprise users will get double the AIUs for three months to ease them into the new pricing model, so autumn (when the enterprise AIU pools get effectively halved) is gonna be fun.
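If I'm reading the multiplier scheme right, the metering math would be something like the sketch below. This is a guess at the mechanism from the announcement; the model names and multiplier values are made-up placeholders, not the vendor's actual rates.

```python
# Hypothetical AIU metering: each model has a multiplier, and every
# inference request burns (multiplier x 1) units from your pool.
# All names and numbers here are illustrative, not real pricing.
MODEL_MULTIPLIERS = {
    "cheap-model": 0.25,
    "default-model": 1.0,
    "premium-model": 10.0,
}

def aiu_cost(model: str, requests: int) -> float:
    """AIUs consumed: inference requests times the model's multiplier."""
    return requests * MODEL_MULTIPLIERS[model]

# 50 requests to a 10x model drain as many AIUs as 500 default requests,
# regardless of how many tokens each request actually burned.
print(aiu_cost("premium-model", 50))  # 500.0
```

That last comment is the vague part: a flat per-request multiplier only tracks token burn on average, which is exactly where rate-limit fights tend to start.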
New Copilot pricing just dropped, takes effect after June 1:
Some highlights for Copilot Pro and Pro+:

Most of them were not able to share their favourite brainworm without being infected by the others that were being passed around.
The good old cultic milieu.
This makes so much sense, and also explains why Siskind's readers are fine with him being ~~openly disingenuous~~ sorry, I meant amenable to Straussian readings.
I use AI sparingly to make sure the company-paid subscription is a net loss for the AI vendor.
Hey, it could happen.
Overall, I think it was a bit cookie-cutter for an article of this type, but maybe it's just the preaching-to-the-choir effect. Even the fact that he ostensibly quit his job over this stuff doesn't hit as hard as it should; it comes off as if he could have done so at any time, but this way he gets to grandstand about it.
Also stuff like this:
> It wasn’t a bad job, not by most metrics. It ticked the boxes a job is supposed to tick: good pay. Health insurance. Remote work. Time off. Nice coworkers.
sounds like it should be in a "how do you do, fellow workers" copypasta.
The poster goes on to whine about how it sucks that it's not the same as solving puzzles and that they're just QAing all day now, and calls the LLM a slot machine, so at least they're not boosting.
Still, they don't go so far as to say they were forced to work this way, so their not even looking at the code means either that they're a lazy bum or that the code is too far past incoherent to be worth it.
Saw a remarkable take in the pro-AI parts of bsky: since DeepSeek420.69 can offer the model at like 15% of Claude's pricing, that must mean Anthropic is operating at a margin of at least 80% on inference, so things will work out.
In the same thread they complained about Zitron's math being dodgy.
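For the record, the arithmetic they seem to be running looks like the sketch below. This is their implied reasoning, not an endorsement; it only works if you assume DeepSeek's price is a true cost floor and that Anthropic's inference costs are the same. Prices are normalized placeholders.

```python
# The implied margin math behind the take, with normalized prices.
claude_price = 1.00                    # normalize Claude's price to 1
deepseek_price = 0.15 * claude_price   # "like 15% of Claude's pricing"

# If (big if) DeepSeek's price equals Anthropic's actual inference cost:
implied_margin = 1 - deepseek_price / claude_price
print(f"{implied_margin:.0%}")  # 85%, hence "at least 80% positive margin"
```

If either assumption is off (subsidized pricing, different serving costs or hardware), the whole inference collapses, which is presumably why it reads as such a remarkable take.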

Good find; this is never-take-me-seriously stupid, and it also does the beigeness thing of gradually working around an accepted definition in order to almost make a point at the last minute. Here it's that since (we have apparently concluded, because of, uh, hypothetical brain surgery and stuff, that) accountability = improvability + punishability and nothing else, of course software can be held "accountable" in all the ways that matter.
His big mistake is not doing it at novel length, so it's really obvious that he's being willfully stupid about it.