submitted 2 weeks ago by not_IO to c/fuck_ai@lemmy.world
[-] hperrin@lemmy.ca 93 points 2 weeks ago

If that’s not illegal, it certainly should be.

[-] skisnow@lemmy.ca 35 points 2 weeks ago

For sure they know they shouldn't be doing it, otherwise they wouldn't be trying to hide it.

[-] skisnow@lemmy.ca 80 points 2 weeks ago* (last edited 2 weeks ago)

It's a great way to get free training for their next model, courtesy of unwitting OSS reviewers.

Spam all the open source projects with slop, mark which ones get rejected and which ones get accepted, and bam there's some new training data for Claude Villanelle, and the only time they've wasted is other people's.

[-] rumba@lemmy.zip 10 points 2 weeks ago

I've been pondering for ages why there's so much FOSS PR slop; this HAS to be it.

[-] tristan@tarte.nuage-libre.fr 54 points 2 weeks ago

PSA: Prompting an LLM at length about what not to do is the best way to prime it to do that very thing. You’re loading a lot of tokens in memory and expecting a single “not” to do all the heavy lifting.

This is adjacent to ironic process theory.
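
The advice above can be made concrete with a toy sketch (illustrative strings only, not any real prompting API): positive framing removes the negations entirely, instead of hoping one "not" flips the meaning of everything after it.

```python
# Illustration of the framing advice: load the context with what you DO want,
# instead of a pile of negations a single "not" has to carry.
# Nothing here calls a real API; it only builds prompt strings.

NEGATIVE_FRAMING = (
    "Review this patch. Do not be verbose, do not add emoji, do not "
    "speculate about intent, and do not praise the author."
)

POSITIVE_FRAMING = (
    "Review this patch in at most five bullet points, each citing a "
    "specific line from the diff."
)

def negation_load(prompt: str) -> int:
    """Count the negation tokens the prompt itself puts into context."""
    return prompt.lower().split().count("not")
```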

[-] te_abstract_art@lemmy.world 14 points 2 weeks ago

Is this necessarily true? I remember seeing an article a while back suggesting that prompting "do not hallucinate" is enough to meaningfully reduce the risk of hallucinations in the output.

From my fairly superficial understanding of how LLMs work, "don't do X" will plot a completely different vector for the "X" semantic dimension than prompting "do X". This is different to telling a human, for example, to not think about elephants (congratulations, you're now thinking about elephants. Aren't they cute. Look at that little trunk and smiley mouth)

[-] tristan@tarte.nuage-libre.fr 6 points 1 week ago

Thank you for your reply. I realised I don't have enough deep knowledge of LLMs, apart from empirical experience of working with them, to confidently answer your question. It would be interesting to find (or create, if it doesn't exist) more research on the subject.

[-] kogasa@programming.dev 10 points 2 weeks ago

It's possible that whatever prompt enhancement and processing happens around the LLM part of the application addresses this somewhat.

[-] eestileib 35 points 2 weeks ago

One of my loved ones is defending this and I am having a moral crisis over my relationship with her because of that.

[-] SaharaMaleikuhm@feddit.org 13 points 2 weeks ago

Have AI write any message to her, see if she likes it.

[-] Yttra@lemmy.world 4 points 2 weeks ago

They probably will, that's the riskiest part

[-] jj4211@lemmy.world 9 points 2 weeks ago

Yeah, it's hard to grasp why online commenters who are fans are fans, but in my real-world interactions I get a better feel for it.

The people who are all in on the AI, slop and all, are the people I found really annoying to begin with. They tend to think everyone is desperate to hear what they say, that verbosity is king, and generally don't really know what they are talking about. They are the sort who would spend a ton of time fretting over some 'design document' that, when finally shared, contains absolutely nothing actionable, despite 10 pages' worth of gorp. Any specific outcome has nothing to do with the document, but they'll take credit for "thought leadership" if it works and blame the "inadequate team" if it fails. They cherish verbose yes-men, and are used to making vague statements and getting results they can't judge anyway.

Or, on the other end, the people who endlessly fell for clickbait and were into slop before AI was even a factor in it. The people forwarding those chain letters back in the day.

The people I have held long respect for tend to range between "too annoying to even deal with" to "it's a little useful in key circumstances". I have yet to personally meet someone I had long respected who went all in on AI.

The insidious thing is I'm pretty sure they both outnumber the good folks and hold more power. Those people who "thought lead" without actionable direction or even a vague understanding of how the work happens? Those are the ones who got promoted, with the good ones largely overlooked, mainly because at a certain point promotion is about "professional networking" and making the executives feel good about themselves more than it is about good work. Now we are in a position where the people who never "got" the work are telling themselves that LLMs can replace those annoying "nerds" who have leverage over them, and if there's one thing they can't stand, it's people they don't understand holding anything that looks like leverage over them.

[-] sheetzoos@lemmy.world 6 points 2 weeks ago

Have you tried inviting them to this echo chamber to see if that will convince them?

[-] Epp@lemmus.org 4 points 1 week ago

Quick, call her a slob that slops on her slot at slats! Then she'll know you're a true member of the erudite luddites.

[-] eestileib 4 points 1 week ago

I did tell her "I don't enjoy having my ass kissed by a machine" and that had approximately the effect you're looking for.

[-] Epp@lemmus.org 5 points 1 week ago

Some are actually pretty good at it. Have you tried the Lovense models? They've really got the feeling of a tongue down.

[-] TheDoctorDonna@piefed.ca 27 points 2 weeks ago

The company I work for keeps trying to push Claude on us, even in company "social" situations. I never bothered to sign up for an account back when we were prompted to, so I guess I miss out... oh no?

No, wait - the opposite of oh no.

[-] hexagonwin@lemmy.today 10 points 2 weeks ago
[-] Infrapink@thebrainbin.org 8 points 2 weeks ago
[-] yuriRO@lemmy.dbzer0.com 24 points 2 weeks ago

Why are Anthropic employees contributing to open source projects? Aren't they super busy at the company? And how does the repo owner know they are Anthropic employees? Maybe I'm overthinking this, please explain >;0

[-] BradleyUffner@lemmy.world 32 points 2 weeks ago

It's all tests to see if the AI can go undetected. They are using it as a measure of "quality".

[-] GreenBeanMachine@lemmy.world 13 points 2 weeks ago* (last edited 2 weeks ago)

That's how they train their new models: get them to generate code, then have FOSS contributors do the work of reviewing the AI slop.

[-] ParadoxSeahorse@lemmy.world 7 points 2 weeks ago

They’re not, they’re just allowing AI to use their credentials essentially

[-] JackbyDev@programming.dev 7 points 2 weeks ago

Many corporations contribute back to open source projects they use. That in itself is not anything new or even shady. Microsoft really put a lot of work into git (not to be confused with buying GitHub). But being opaque about how you're making the code is, at the very least, disingenuous.

[-] redsand@infosec.pub 24 points 2 weeks ago

They're fluffing their résumé before the bubble pops. Don't hire these clowns, interview them and ask about their code.

[-] r1veRRR@feddit.org 24 points 2 weeks ago

I get the idea of hating this, but there's absolutely nothing revolutionary about it. Being "undercover" is as trivial as "commit this, do not mention AI".

In the end, at least with code, it's the actual resulting quality that is the main determinant of what should be accepted or not.
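
As a sketch of how thin the disguise is either way: Claude Code conventionally appends a Co-Authored-By trailer to its commits, and both stripping and spotting that trailer is a one-liner. A hedged maintainer-side check (it only catches contributors who forgot to remove the attribution, which is exactly what the hidden prompt is about):

```shell
# Flag commits whose message still carries a Claude attribution trailer.
# Run inside a clone of the repository under review.
git log --format='%h %B' | grep -i 'co-authored-by: claude'
```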

[-] ayyy@sh.itjust.works 51 points 2 weeks ago

You sound like someone who hasn’t had to waste countless hours of their life wading through bullshit merge request spam.

[-] Cellari@lemmy.world 25 points 2 weeks ago

So... you think ignoring the rules set by others is fine as long as you can bypass them? Because it really does say a lot when a repo states it does not want AI-generated code, and Claude hides that it is.

[-] CultLeader4Hire@lemmy.world 11 points 2 weeks ago

I feel like you’re responding to a person who doesn’t understand consent is about saying yes not about saying no

[-] grrgyle@slrpnk.net 11 points 1 week ago

Not trying to be glib, but I don't think you do get the idea of hating this.

[-] prenatal_confusion@feddit.org 23 points 2 weeks ago

Isn't it crazy that 5 years ago we struggled to get software to understand normal sentences? Now this block of text is parsed and the instructions followed. Impressive!

Not trying to flame; I'm honestly impressed by some aspects of AI. And I know I'm using the term "understand" loosely.

[-] excursion22@piefed.ca 17 points 2 weeks ago

Are the instructions followed, though?

[-] jj4211@lemmy.world 6 points 2 weeks ago

Yeah, that's the thing where we get into what I call "superstitious prompting", like when people say "And make sure you don't make mistakes" or "Include only factual data without hallucinations" and think it works, until it doesn't.

It will at least reply in a way that is narratively consistent with being told to do something or other, and will emit words like "OK, I understand and promise to only provide fact-based feedback", but it doesn't "understand" at all. It works surprisingly well because being narratively consistent with the prompt frequently looks exactly like following instructions.

And people get all the more frustrated when their superstitious prompt fails: they told the LLM to do something (or specifically not to do it), it even promised to do exactly as directed, and then it just proceeds to be a normal LLM anyway.

[-] Anivia@feddit.org 11 points 2 weeks ago

Isn't it crazy that 5 years ago we struggled to have a software understand normal sentences?

I mean, that's not really true at all? AI Dungeon released 7 years ago, based on GPT-2, and already worked impressively well

https://en.wikipedia.org/wiki/AI_Dungeon

[-] real_squids@sopuli.xyz 10 points 2 weeks ago

I kinda miss the GPT-2 days, its output was so interesting/funny compared to what LLMs produce now. Even with image generation, I feel like it's been downhill for years

[-] JackbyDev@programming.dev 5 points 2 weeks ago

🗝️ Keys to success, here's why

  • Everything looks like this now
  • Not just emojis in headers—it's em dashes too
  • Delve

(Honestly it's mostly the emojis in headers that disgust me.)

[-] slaacaa@lemmy.world 20 points 2 weeks ago

Just what the internet needed, more AI slop

[-] GalacticGrapefruit@lemmy.world 20 points 2 weeks ago

Oh, that is slimy as fuck. 😡

[-] JackbyDev@programming.dev 17 points 2 weeks ago

Lmao, the bad example "1-shotted by Claude"

[-] GreenBeanMachine@lemmy.world 15 points 2 weeks ago* (last edited 2 weeks ago)

Interesting comments in the Mastodon thread; some idiots will bend over backwards to defend AI slop.

[-] StarryPhoenix97@lemmy.world 15 points 1 week ago

I don't have a problem with AI assisting with open source projects. On its face, it could be helpful to clean up some basic coding problems so a person with skill can come in and update later or remove it if it's truly awful code. But then I remember that there's always an angle. On top of all the other issues with AI coding, what happens if Anthropic tries to pull some legal shenanigans and say that they wrote most of the code, so they own the project? What if they are writing in backdoors and vulnerabilities?

Like I said, on its face it sounds okay, but any time a corporation tries to touch a public project, things go wonky.

[-] MousePotatoDoesStuff@lemmy.world 10 points 1 week ago

AI pushers are dishonest and malicious.

In other news, water is liquid. More at 11.

[-] blazeknave@lemmy.world 8 points 2 weeks ago

Maybe I'm naive... I have a Claude skill that generates long-form research prompts to take to ChatGPT, Gemini, and Kagi Assistant (GLM, DeepSeek, etc.) to save credits and use the best model in class. Part of the instructions, in very long-winded, verbose fashion, are about removing PII.

Is it possible that's their intent? Don't leak?
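
For what such a "remove PII" instruction amounts to in practice, here's a minimal sketch (an assumption on my part: regex-level redaction; a real PII pipeline needs far more than three patterns):

```python
import re

# Minimal, illustrative PII scrub. Pattern order matters: the SSN pattern
# must run before the looser phone pattern, which would otherwise match it.
PATTERNS = [
    ("EMAIL", re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")),
    ("SSN",   re.compile(r"\b\d{3}-\d{2}-\d{4}\b")),
    ("PHONE", re.compile(r"\+?\d[\d\s().-]{7,}\d")),
]

def scrub(text: str) -> str:
    """Replace recognised PII spans with bracketed placeholders."""
    for label, pattern in PATTERNS:
        text = pattern.sub(f"[{label}]", text)
    return text
```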

[-] eestileib 9 points 2 weeks ago

I don't think so. Notice how there's no concern about PII being exposed, only internal code names and authorship credit to Claude.

[-] DylanMc6@lemmy.dbzer0.com 7 points 1 week ago

The open-source developers should fight back with anti-AI spam

this post was submitted on 01 Apr 2026
792 points (100.0% liked)