Ghazi
A community for progressive issues, social justice and LGBT+ causes in media, gaming, entertainment and tech.
Official replacement for Reddit's r/GamerGhazi
Content should be articles, video essays, or podcasts about topics relevant to the forum. No memes, single images or tweets/toots/... please!
Community rules:
Be respectful and civil with each other. Don't be a jerk. There is a real human being on the other side of your screen. See also the Blahaj.Zone Community Rules
No bigotry of any kind allowed. Making racist, sexist, trans-/homo-/queerphobic, or otherwise demeaning and hateful comments is not ok. Disabilities and mental illnesses are not to be used as insults and should not be part of your comment unless speaking of your own or absolutely relevant.
No gatekeeping and no being rude to people who don't agree with you. Leave “gamer” stereotypes out of your comment (e.g. sexless, neck-bearded, teenaged, basement-dwelling, etc.). Don't compare people to animals, or otherwise deny their humanity. Even if you think someone is the worst human on the planet, do not wish death or harm upon them.
No "justice porn". Posts regarding legal action and similar is allowed, but celebrating someone being harmed is not.
Contrarianism for its own sake is unnecessary and not welcome.
No planning operations, no brigading, no doxxing or similar activities allowed.
Absolutely no defense of GamerGate or other right-wing harassment campaigns, no TERFs or transphobia, no racism, no dismissal of war crimes, and no praise of fascists. This includes “JAQing off”: intentionally asking leading questions while pretending to be a neutral party. This also applies to other forms of authoritarianism and to authoritarian or criminal actions by liberal or leftist governments.
NSFW threads, such as ones discussing erotic art, pornography and sex work, must be tagged as such.
Moderators can take action even if none of the rules above are broken.
Ethics of training AI models on people’s work and likeness aside, the fact that the current models refuse to work with NSFW text and images, and are not trained on them, is a massive risk.
We know they are used for content moderation, but even abliterated versions of Qwen3-VL are not able to accurately describe anything involving sexual acts. Instead they go “an intimate photograph with a brick wall background and natural lighting”.
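For reference, this is roughly how these models get wired into a moderation flow: you just ask the VLM for a detailed caption and act on the text. A rough sketch using the Hugging Face image-text-to-text pipeline; the checkpoint name, image URL and prompt are placeholders I made up, not anyone’s actual setup:

```python
# Rough sketch of a VLM used as a moderation captioner.
# The model ID, image URL and prompt are placeholders, not a real pipeline.
from transformers import pipeline

captioner = pipeline("image-text-to-text", model="Qwen/Qwen3-VL-8B-Instruct")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/uploaded_image.jpg"},
            {"type": "text", "text": "Describe this image in detail for a content-moderation review."},
        ],
    }
]

# The moderation system then parses or classifies whatever caption comes back.
result = captioner(text=messages, max_new_tokens=128)
print(result)
```

And when the vocabulary isn’t there, the caption that comes back is exactly that vague “intimate photograph” kind of description, which is useless for an actual policy decision.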
It’s a huge hole in the models, and it’s going to lead to trouble one way or the other.
Interesting, I thought they were just censored? So the AI porn is made with other models? I thought they were all basically based on the big ones (by Google, Meta, "Open"AI, etc.) but were somehow "hacked" / "uncensored" to allow adult content generation.
Well, sort of. There is a difference between models that eat text and output images (diffusion models like DALL-E and Stable Diffusion) and models that eat images and text and output text (vision LLMs like Qwen3-VL), but the way both of them know what things look like comes from contrastive learning, going back to an older model called CLIP and its descendants.
Basically you feed a model both images and descriptions of those images and train it to produce the same output vectors in both cases. Essentially it learns what a car looks like, and what an image of a car is called in whatever languages it’s trained on. If you only train a model on “acceptable” image/description pairs, it literally never learns the words for “unacceptable” things and acts.
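A toy PyTorch version of that training objective, just to make it concrete. The encoders and tensors here are made-up stand-ins for real image/caption batches; this is the shape of the CLIP-style loss, not an actual reimplementation:

```python
# Minimal sketch of CLIP-style contrastive training (toy encoders, random data).
import torch
import torch.nn as nn
import torch.nn.functional as F

batch, embed_dim = 8, 512

# Stand-ins for the real encoders (a vision transformer and a text transformer in CLIP).
image_encoder = nn.Linear(2048, embed_dim)   # pretend 2048-d image features
text_encoder = nn.Linear(768, embed_dim)     # pretend 768-d caption features

image_feats = torch.randn(batch, 2048)       # a batch of images
text_feats = torch.randn(batch, 768)         # the matching captions, same order

# Project both modalities into the same space and L2-normalise.
img = F.normalize(image_encoder(image_feats), dim=-1)
txt = F.normalize(text_encoder(text_feats), dim=-1)

# Similarity matrix: entry [i, j] is how well image i matches caption j.
logit_scale = nn.Parameter(torch.tensor(2.659))   # learnable temperature, roughly CLIP's init
logits = logit_scale.exp() * (img @ txt.T)

# The "right answer" for image i is caption i, so the labels are just 0..batch-1.
labels = torch.arange(batch)
loss = (F.cross_entropy(logits, labels) +          # image -> text direction
        F.cross_entropy(logits.T, labels)) / 2     # text -> image direction
loss.backward()
```

Every concept the model ends up knowing has to show up somewhere in those caption batches. Scrub the NSFW pairs out of the training data and there is simply no row in that similarity matrix that ever ties those words to the matching images.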
Diffusion models are often fine-tuned on specific types of porn (either full-parameter or with QLoRA), often to great effect. The same is much more work for LLMs, though. Even if you remove the censorship (e.g. through abliteration, modifying the weights to suppress outright refusals), the model that’s left still won’t know the words it needs to express the concepts in the images.
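For the fine-tuning side, the usual setup looks something like this sketch with Hugging Face transformers + peft; the checkpoint name, rank and target module names are placeholder values that depend on the base model:

```python
# Rough sketch of a QLoRA fine-tune. Model ID and hyperparameters are placeholders.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # the "Q" in QLoRA: 4-bit quantised base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "some-org/some-base-model",            # placeholder checkpoint
    quantization_config=bnb_config,
)

lora_config = LoraConfig(
    r=16,                                  # rank of the low-rank adapter matrices
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections; depends on the model
    task_type="CAUSAL_LM",
)

# Only the small adapter matrices get trained; the 4-bit base stays frozen.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

The adapters are cheap enough to train on a consumer GPU, which is why the diffusion community does this constantly; doing the equivalent for a big vision LLM is the “much more work” part.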
Ahhhh, ok. Thanks for the detailed explanation, really appreciate it!