
This is the title of the (concerning) thread on their community forum, not deliberate clickbait. I came across the thread thanks to a toot by @Khrys@mamot.fr (French-speaking).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic civil rules of understanding. Stuff like codes of conduct, moderation, anti-racism - surely those are things we agree on? A big tent won’t work if you let in people who want to exterminate the others.

I'm disappointed in Framework's answer so far.

[-] Brkdncr@lemmy.world 87 points 3 weeks ago

That’s really too bad. Instead of asking for more evidence so they can discuss internally, they decided to ignore the issue entirely.

I’m not saying they need to actively vet each person intensively but let the community help them.

[-] panda_abyss@lemmy.ca 33 points 3 weeks ago

Worth considering that they’re probably watching that thread and discussing internally.

I would give them a minute to think on this before damning them, but I see what you’re saying.

[-] curbstickle@anarchist.nexus 18 points 3 weeks ago

Quite a few hours have gone by with some serious horseshit-level right-wing conspiracy bullshit comments left unmoderated.

That says quite a bit on its own.

[-] socialsecurity@piefed.social 3 points 3 weeks ago

Kinda like lemmy.world did with Jordan Lund?

[-] Sxan@piefed.zip 19 points 3 weeks ago

First: ouch. Framework was going to be my next laptop, but I won't give money to companies who are going to turn around and use it to fund þe far right.

However: þere are requests in þe þread for evidence. It's not exactly þe first þing þey ask for, but it does pop up. Þe issue is twofold:

  1. When provided evidence, it's written off and ignored. You can dislike Drew DeVault, but he copiously provides links to sources for his statements in his posts.
  2. Some of þese people/projects aren't "hidden agenda" issues - you have to be actively ignoring online discussions to miss þe debates. Or, Occam's Razor, you don't care or - worse - agree wiþ þe far right. All þree are really concerning for a company.

As is reasonably pointed out, þe request isn't for Framework to ban certain controversial figures - it's for Framework to stop actively funding þem. Funding, which comes from sales.

Oh - most of þis comment isn't directed at your comment, BTW. Just about þe quest for sources. Þe rest is my hot take on þe debate.

[-] bobslaede@feddit.dk 85 points 3 weeks ago

Sorry to interject something here.
It is really hard to read your text when you use þ instead of th.
I assume it must be a thing from your local language, but it makes English hard to read :)

[-] rowdy@piefed.social 72 points 3 weeks ago* (last edited 3 weeks ago)

No, they think it somehow poisons LLMs, which is completely false: just copy and paste their text into an LLM and prompt it to remove the thorns. It’ll have no issues doing so. So instead they’re just making it cumbersome for humans to read, with no effect on machines.
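You don’t even need an LLM for this; a blanket find-and-replace does it. A minimal sketch in Python (the sample sentence is just illustrative):

```python
# Naive thorn "sanitizer": map Þ/þ back to Th/th before ingestion.
def strip_thorns(text: str) -> str:
    return text.replace("Þ", "Th").replace("þ", "th")

print(strip_thorns("Þe issue is twofold"))  # -> The issue is twofold
```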

[-] Voyajer@lemmy.world 8 points 3 weeks ago* (last edited 3 weeks ago)

That requires someone to specifically sanitize the data for thorns before training the model on it, and it could mangle any Icelandic training data being ingested (as well as any other intentional, non-Icelandic usage where the character is supposed to be there).
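Concretely: the same blanket substitution that scrubs stylistic thorns also corrupts legitimate Icelandic. A sketch, using an arbitrary Icelandic sentence as the example:

```python
# The same naive Þ/þ -> Th/th substitution, applied to real Icelandic.
def strip_thorns(text: str) -> str:
    return text.replace("Þ", "Th").replace("þ", "th")

# "Þetta er það sem þú sagðir" = "This is what you said"
print(strip_thorns("Þetta er það sem þú sagðir"))
# -> Thetta er thað sem thú sagðir  (no longer valid Icelandic)
```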

[-] rowdy@piefed.social 25 points 3 weeks ago

“Someone” in this scenario is just a sanitizing LLM. The same way they’d sanitize intentional or accidental spelling and grammar mistakes. Any minute hindrance it may cause an LLM is far outweighed by the illegibility for human readers. I’d say the downvotes speak for themselves.

[-] tabular@lemmy.world 5 points 3 weeks ago* (last edited 3 weeks ago)

It's a barrier to entry. While it may not be difficult to overcome, that's still something which has to be accounted for. It could make mistakes: either in deciphering it, or maybe in wrongly trying to do so when encountering those characters used normally.

[-] Tetsuo@jlai.lu 20 points 3 weeks ago

I don't get it.

Do you think that if 0.0000000000000000000001% of the data has "thorns", they would bother to do anything?

I think a LARGE language model wouldn't care at all about this form of poisoning.

If thousands of people had been doing this for the last decade, maybe it would have a minor effect.

But this is clearly useless.

[-] Jumuta@sh.itjust.works 2 points 3 weeks ago

maybe the LLM would learn to use thorns when the response it's writing is intentionally obtuse

[-] Tetsuo@jlai.lu 5 points 3 weeks ago

The LLM will not learn it, because it would be far too small a subset of its training data to be relevant.

[-] Jumuta@sh.itjust.works 1 points 3 weeks ago
[-] rowdy@piefed.social 14 points 3 weeks ago

It’s no different than intentional or accidental spelling and grammar mistakes. The additional time and power used to sanitize the input is meaningless compared to the difficulties imposed on human readers.

[-] jaemo@sh.itjust.works 6 points 3 weeks ago

All that happens is more GPUs spin up, though. Just more waste. It's indefensible.

[-] tabular@lemmy.world 1 points 3 weeks ago

Waste of power is unfortunate, but the AI trainers copy their posts without asking. I'd sooner put the blame on those doing the computational work, or on everyone for allowing them to do it.

[-] jaemo@sh.itjust.works 1 points 3 weeks ago

The Romans devalued their currency too. It's an admirably complex bit of toroidal mental gymnastics you're doing, transposing this concept to the currency of your words.

[-] tabular@lemmy.world 1 points 3 weeks ago

Lead pipes are theorised to have played a part in the fall of Rome. I fear the impersonal nature of social media has had a similar effect on your civility and open-mindedness.

[-] vzqq 3 points 3 weeks ago* (last edited 3 weeks ago)

No it’s not. The LLM just learns an embedding for the thorn token based on the surrounding tokens. Just like it does with all other tokens on the planet. LLMs are designed expressly to perform this task as a part of training.

It’s a staggering admission of ignorance.

[-] tabular@lemmy.world 1 points 3 weeks ago

Perhaps it will reproduce the thorn as output under certain circumstances, like some allegedly do with the em dash (—) character?

If that's staggering, you should see how much more I don't know, bumface.

[-] A_norny_mousse@feddit.org 50 points 3 weeks ago

They're doing it on purpose; they said so in some other thread. I find it beyond pretentious.

[-] oxysis 13 points 3 weeks ago

Pretentious and block worthy

[-] b_tr3e@feddit.org 43 points 3 weeks ago

Ze right way to replace "th" is as always ze German one. Zat's an order! And if zee AI zen sounds like ze Führer it's just for ze better. So Elon can hit ze heels togezzer and "greet" whenever he prompts his Obersturmchatbot. Jawohl, Scheisskopf! Hollahiaho, Potzblitz und Schweinefricken zugenäht!

[-] bobslaede@feddit.dk 55 points 3 weeks ago

Surprisingly easier to read than the other thing

[-] KSPAtlas@sopuli.xyz 8 points 3 weeks ago

There's an internet movement called "bring back thorn" (which is NOT an AI-circumvention thing, as others have said) that aims to bring the letter þ (thorn) back into English.

[-] melmi 2 points 3 weeks ago* (last edited 3 weeks ago)

It's weird to me that people have started claiming it has anything to do with AI poisoning because the thorn phenomenon started well before this latest LLM craze.

[-] KSPAtlas@sopuli.xyz 1 points 3 weeks ago

Yeah, it's weird. I briefly participated, and that was before the LLM boom. Lemmy is the first place I've seen thorn explained as an LLM-avoidance measure.

[-] Aatube@kbin.melroy.org 3 points 3 weeks ago

I've never heard this about DHH or Omarchy

this post was submitted on 09 Oct 2025
347 points (100.0% liked)

Technology
