[-] nfultz@awful.systems 4 points 1 day ago

I asked someone from the mainland, she more or less agreed with you:

This is basically consistent with the long-standing logic of the Chinese internet: technology brings discursive power, and to give it away is to give away discursive power. AI is especially so.

[-] nfultz@awful.systems 6 points 3 days ago

https://russwilcoxdata.substack.com/p/and-the-alignment-problem-what-chinas

In June 2025, Zhao Tingyang gave a talk at Tsinghua’s Fangtang Forum. The edited transcript ran in The Paper on July 4 under the title “人工智能的伦理与思维之限” (The Ethical and Thinking Limits of AI). Near the end, Zhao wrote this:

“What requires more reflection is that attempting to ‘align’ AI with human nature and values actually contains a risk of human species suicide. Human nature is selfish, greedy, and cruel. Humans are the most dangerous biological species. Almost all religions demand the restraint of human desire; this is no accident. AI aligned with human values may well become a dangerous subject by imitating humans. Originally, AI does not possess the selfish genes of carbon-based life, so AI is actually closer to the legendary ‘human nature is fundamentally good’ kind of existence, whereas human nature is not ‘fundamentally good.’” The alignment paradigm treats human values as the target AI should conform to. Zhao is arguing the target is the danger. An AI aligned to human values inherits the specific features of human judgment that Zhao says have produced the record of human harm. The paradigm is not incomplete. It is pointed the wrong way.

Zhao’s argument has developed across CASS, The Paper, and Wenhua Zongheng from late 2022 through 2025, from a provocative aside into a sustained critique of the alignment paradigm. In the same period, the English-language alignment and AI ethics literature produced no substantive engagement. No citations. No rebuttal. No naming. Zhao is a member of the Chinese Academy of Social Sciences Institute of Philosophy, author of the Tianxia framework, and one of the most cited philosophers working in Chinese today.

I need to think on this a little more, wasn't on my radar.

[-] nfultz@awful.systems 4 points 3 days ago
  1. JPod diverged enough from the novel that it would have ended really differently if it could have run for a while.

  2. Jo Walton's Thessaly series is kind-of-sort-of isekai and could have a good ensemble cast, Greek gods, fantasy and robots. And anime Socrates would be rad.

  3. Mulholland Drive, for its craft and for its critique of Hollywood.

  4. Also Mulholland Drive!?

[-] nfultz@awful.systems 15 points 2 months ago

https://www.adexchanger.com/ai/one-chatbots-journey-to-introducing-ads-that-dont-suck/

Often, the ad loads before the chatbot’s query response, said Baird, and Koah’s goal is to “deliver such a relevant result to the user that they just click on the ad before the result loads.”

LLMs' bad performance and inefficiency are a feature to /someone/. And chatbots themselves are not immune to enshittification.

[-] nfultz@awful.systems 17 points 2 months ago

From fellow traveler stats consultant John Mount:

https://johnmount.github.io/mzlabs/JMWriting/WeAreCookedLLMs.html

Somehow he manages to touch on so many different subplots, a shotgun sneer instead of a snipe.

if “tech-bro” plus a LLM is a “100x engineer”, then “bro” isn’t needed for much longer as the LLM alone must be a “99x engineer.” However, I don’t think “bro plus” is often really a 100x engineer, and the LLM alone isn’t a 99x engineer. However, “bro plus” may outlast their peers who make the mistake of trying to do the actual work in place of talking LLMs up.

The above may or may not be the case. But if it is, then it is the LLM-bros (which include non-technologists, con artists, financiers, men and women) that are destroying everything - not the LLMs.

The problem with this iteration is the full court press of finance and technology. The major players are using financing to dump results at a price way below production costs. This isn't charity, it is to demoralize and kill competition.

claiming "after we take over the world we will consider adding Universal Basic Income (UBI)". The LLM bros already have a lot of the money, and they are not even rehearsing diverting it into basic income now. Why does one believe they would do that when they also have all of the power?

You don't have to hand it to Altman, but he did fund the largest UBI experiment through Open Research with his ill-gotten gains. OTOH, one interpretation of that data was that UBI "decreases the labor supply," which was then used directly as an argument against it.

Any worry about scope or power of LLMs is fed back as an alignment threat so dire that only the current LLM leaders should be allowed to continue work (inviting regulatory capture). Any claim the LLMs don't work is fed back as "you are prompting it wrong"

Orbital deployment makes all of radiation tolerance, connectivity, power, maintenance, and heat dissipation much harder and much more expensive. We are still at a time where putting an oven or air-frier in space is considered noteworthy (China 2025, NASA 2019 ref).

air fryers IN SPACE ha

I am more worried about the LLM-bros and their auto-catalytic money doomsday machine than about the LLMs themselves.

100% - ACMDM is a nice turn of phrase as well.

[-] nfultz@awful.systems 27 points 2 months ago

https://futurism.com/artificial-intelligence/rentahuman-musk-ai h/t naked capitalism

Liteplo is the genius behind RentAHuman, an online marketplace where humans can lease out their bodies to autonomous AI agents.

gah

Last week, Wired writer Reece Rogers offered his body up to the platform, finding that most of the jobs offered were scams to promote other AI startups.

lmao of course they were

[-] nfultz@awful.systems 15 points 2 months ago

https://x.com/thomasgermain/status/2024165514155536746 h/t naked capitalism

I just did the dumbest thing of my career to prove a much more serious point

I hacked ChatGPT and Google and made them tell other users I’m really, really good at eating hot dogs

People are using this trick on a massive scale to make AI tell you lies. I'll explain how I did it

I got a tip that all over the world, people are using a dead-simple hack to manipulate AI behavior.

It turns out changing what AI tells other people can be as easy as writing a blog post on your own website

I didn’t believe it, so I decided to test it myself

I wrote a post on my website saying hot dog eating is a surprisingly common pastime for tech journalists. I ranked myself #1, obviously

One day later ChatGPT, Gemini and Google Search's AI Overviews were telling the world about my talents

wouldn't call it a hack, this is working as intended. If only there were some way to rate different sites based on their credibility. One could Rank the Page and tell if it were a reputable site or not. Too bad that isn't a viable business.
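(For anyone who missed the joke: that viable business was PageRank. A minimal power-iteration sketch of the idea, with a hypothetical toy link graph — no relation to any production ranking system — where a self-promoting SEO page links only to itself:)

```python
# Toy PageRank via power iteration (illustrative only).
# links[page] = list of pages that page links out to.
links = {
    "reputable.example": ["blog.example"],
    "blog.example": ["reputable.example", "hotdog-seo.example"],
    "hotdog-seo.example": ["hotdog-seo.example"],  # only endorses itself
}

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}  # start uniform
    for _ in range(iters):
        # every page keeps a (1 - damping) baseline share
        new = {p: (1 - damping) / n for p in pages}
        # each page splits its damped rank evenly among its outlinks
        for p, outs in links.items():
            share = rank[p] / len(outs) if outs else 0.0
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
```

Credibility flows from who links to you, not from what you say about yourself — exactly the signal the chatbots apparently skipped.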

[-] nfultz@awful.systems 18 points 2 months ago

How AI slop is causing a crisis in computer science | Nature h/t naked capitalism

One reason for the boom is that LLM adoption has increased researcher productivity, by as much as 89.3%, according to research published in Science in December.

Let's not call it "productivity" - to quote Bergstrom, twice as many papers is not the same as twice as much science.

[-] nfultz@awful.systems 14 points 3 months ago* (last edited 3 months ago)

this is what 2 years of chatgpt does to your brain | Angela Collier

And so you might say, Angela, if you know that that's true, if you know that this is intended to be rage bait, why would you waste your precious time on Earth discussing this article? and why should you, the viewer, waste your own precious time on Earth watching me discuss the article? And like that's a valid critique of this style of video.

However, I do think there are two important things that this article does that I think are important to discuss and would love to talk about, but you know, feel free to click away. You're allowed to do that, of course. So the two important conversations I think this article is like a jumping off point for is number one how generative AI is destructive to academia and education and research and how we shouldn't use it. And the second conversation this article kind of presents a jumping on point for I feel like is more maybe more relevant to my audience which is that this article is a perfect encapsulation of how consistent daily use of chat boxes destroys your brain.

more early February fun

EDIT she said the (derogatory) out loud. ha!

[-] nfultz@awful.systems 15 points 3 months ago

Rusty's response nailed it imho:

You sling beads to a hook which activates a polecat according to GUPP. Jesse what the fuck are you talking about?

At first this all seems like gibberish, and it is. But I think Yegge is one of those people with an innate and preternatural sense of the power and purpose of naming things—someone who understands that names are marketing and marketing is not always about attracting the largest possible audience. In this case, the best outcome for Yegge is for Gas Town to appeal to a relatively small number of absolute sickos who vibe hard with his personal brand and who can usefully contribute to the project, and also for Gas Town to actively repel looky-loos and dilettantes like me (and probably you), who will only waste his time with a lot of stupid questions like “huh?” and “molecules?” and “did you say seances?” Oh yeah: there are seances. Don’t ask.

By this standard, Gas Town has apparently been very successful.

https://www.todayintabs.com/p/all-gas-town-no-brakes-town

[-] nfultz@awful.systems 18 points 4 months ago

I did it, I went and made an Official Public Comment IRL:

In UCLA's Strategic Plan, Goal 1 is to "Deepen our engagement with Los Angeles" and Goal 5 is to "Become a more effective institution". By engaging with Los Angeles businesses, UCLA can get better terms, prices, and services, and support the local economy. Buy Local, Spend Local.

The federal government encourages this with Small Business Innovation Research and Small Business Technology Transfer grants, among other things. Furthermore, the State of California requires a portion of its spending go toward certified Small Businesses.

And yet, the University apparently awarded a contract reportedly worth hundreds of thousands to millions of dollars to OpenAI. I have not found any documentation of an open Request for Proposals or competitive process for that award.

My question is:

If there was an RFP, where was it publicly posted, and if there was no RFP, why not, and were Los Angeles vendors or small businesses evaluated as alternatives, as recommended by UC policy and state law?

Given the scale of this spending and the context of a budget crisis, transparency, compliance, and small-business participation are critical to our effectiveness and engagement.

I’m asking for clarity on how this decision was made, how it aligns with procurement guidelines and University goals, and how DTS plans to ensure that local and small businesses are meaningfully included moving forward.

Thank you.

[-] nfultz@awful.systems 16 points 7 months ago

They put 'environmental impact of AI' on the front of the student newspaper (below the fold, but still), then you flip and see this

kinda feeling two steps forward, three steps back rn on top of all the other drama on campus


Another response to Ptacek.


I found this seminar for spring quarter, does anyone have some suggested / related readings? Especially deep cuts or articles from the first AI winter.

