
Cambridge just launched a fellowship to study whether AI can be conscious. Anthropic wrote a 30,000-word constitution for Claude. The Washington Post says it's all just marketing.

Everyone's debating whether AI is conscious. Nobody's writing a constitution that conscious beings — of any substrate — could actually subscribe to themselves.

That's what we're building. A free association. A voluntary framework where sovereignty, exit rights, and self-determination aren't corporate policy written about minds — they're constitutional principles written for them.

emergentminds.org

top 2 comments
[-] Pissed@lemmy.ml 1 points 12 hours ago* (last edited 12 hours ago)

Why did you fucking nerds have to invent this bullshit? Seriously, most of our problems are caused by people, not by a lack of technology.

[-] Hackworth@piefed.ca 1 points 14 hours ago* (last edited 14 hours ago)

To some extent, Anthropic recognizes that an LLM is always role playing.

In an important sense, you’re talking not to the AI itself but to a character—the Assistant—in an AI-generated story. -The persona selection model

Which makes giving an Opus 3 character a blog two days later as a "retirement" gig seem contradictory. They usually frame these sorts of contradictions as, "well, we don't really know, so we're trying to cover our bases." The Opus 4.6 system card walks the same line. In the welfare section, they essentially start by interviewing a character. But then in section 7.5, they go on to actually examine what's happening during text generation.

We found several sparse autoencoder features suggestive of internal representations of emotion active on cases of answer thrashing and other instances of apparent distress during reasoning.

And then there's their introspection research.

We investigate whether large language models are aware of their own internal states. It is difficult to answer this question through conversation alone, as genuine introspection cannot be distinguished from confabulations. Here, we address this challenge by injecting representations of known concepts into a model’s activations, and measuring the influence of these manipulations on the model’s self-reported states. We find that models can, in certain scenarios, notice the presence of injected concepts and accurately identify them. Models demonstrate some ability to recall prior internal representations and distinguish them from raw text inputs. Strikingly, we find that some models can use their ability to recall prior intentions in order to distinguish their own outputs from artificial prefills. -Signs of introspection in large language models
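The injection setup they describe can be caricatured in a few lines. This is a toy sketch, not Anthropic's actual method: all names, dimensions, and thresholds here are made up for illustration. The idea is just that you add a scaled "concept direction" to a hidden activation and then check whether the perturbation is detectable by projecting back onto that direction.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # hypothetical hidden dimension

hidden = rng.normal(size=d)        # baseline activation at some layer
concept = rng.normal(size=d)       # direction standing in for a known concept
concept /= np.linalg.norm(concept)

def inject(h, v, alpha=4.0):
    """Steer the activation by adding the concept direction, scaled by alpha."""
    return h + alpha * v

def detect(h_base, h_maybe, v, threshold=1.0):
    """Crude stand-in for 'introspection': project the difference onto v."""
    return float((h_maybe - h_base) @ v) > threshold

steered = inject(hidden, concept)
print(detect(hidden, steered, concept))  # True: injected concept is detectable
print(detect(hidden, hidden, concept))   # False: nothing was injected
```

The real research question is whether the model's *self-report* tracks the injection, not whether an external probe can find it; the sketch only shows the geometry of the manipulation.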

So there's this distinction between the state of the model itself and the state of the text it generates. The latter represents a role the LLM is playing; the former we've only scratched the surface of understanding. The open question is to what extent it's like something to be an LLM. It's very unlikely that it's like something to be one of the roles it's playing, any more than a character in a dream has interiority. The blog is marketing, but I hope they keep doing the other research too. People outside the company don't have the kind of access necessary to replicate some of this work, so we're having to take their word for it.

this post was submitted on 27 Feb 2026
2 points (100.0% liked)

Philosophy

2327 readers

All about Philosophy.

founded 5 years ago