[-] f4f4f4f4f4f4f4f4@sopuli.xyz 4 points 32 minutes ago

It's completely a coincidence that all games are no longer working in Lutris here, on multiple machines, after upgrading from 0.5.19 to 0.5.20. Weird.

I downgraded and everything works again. I did not try 0.5.22 or the quickly removed 0.5.21.

[-] eleitl@lemmy.zip 13 points 1 hour ago* (last edited 48 minutes ago)

Tell me to not use your software without telling me to not use your software.

[-] Shanmugha@lemmy.world 1 points 4 minutes ago* (last edited 4 minutes ago)

Been chewing on this since yesterday. Okay, here's my two cents:

  • yes, what LLM companies are doing is a problem. So dropping anything that has anything to do with their products is a sane way to make a statement
  • yes, LLMs can be used effectively in development. Whether the Lutris author has been using them well, I don't know. Guess I won't bother to check either; I have other things to do
  • yes, doing the stunt with "good luck guessing what is what" is bullshit

Net total, given I've already dropped GNOME because of their culture: guess now I am dropping Lutris. Not because of AI per se, but because of the "fuck you" move

[-] zod000@lemmy.dbzer0.com 27 points 4 hours ago

How to drive off users and contributors in one easy step!

[-] JackbyDev@programming.dev 30 points 5 hours ago

is lutris slop now

i can't help but notice quite a lot of LLM generated commits, is lutris slop now or will @strycore see the error of their ways

Regardless of your opinion on AI, it is not productive or helpful to open this as an issue.

[-] andicraft 7 points 59 minutes ago

shame is a powerful weapon

i for one intend to keep making people feel bad for using slop generators

[-] JackbyDev@programming.dev 1 points 8 minutes ago

But as you can see, the maintainer didn't stop using them and will also now not disclose which commits have them. Humans are emotional creatures and part of being rational is acknowledging that. Folks can be critical of AI usage while phrasing the issue more tactfully and would likely see more success when doing so.

[-] Fedizen@lemmy.world 1 points 2 minutes ago
[-] prole 1 points 18 minutes ago

Well, it used to be at least

[-] Qwel@sopuli.xyz 14 points 3 hours ago

I had donated to Lutris, and was already skeptical of the dev's ability to maintain their huge (and very buggy) python/gtk3 codebase. Now I know that giving money to the dev would likely make things bigger and buggier. This is useful information, and it's better to talk about it somewhere where the dev will respond and relatively few bystanders will hear the discussion.

[-] JackbyDev@programming.dev 2 points 3 hours ago* (last edited 3 hours ago)

I'm not saying you shouldn't ever raise this sort of thing as an issue (in general I think issues should only be for bugs, but the annoying reality is there's rarely a better place for discussions that get visibility), I'm saying the specific content of the message is the problem. There are ways to critique the usage of AI and discuss alternatives that wouldn't be an issue.

For example,

I see a lot of AI code is used in this repository. AI code is bad because (reasons the user believes it is bad here). Could you please share why/what AI is being used for specifically so we can try to remove the necessity?

Aside: I'm not saying AI code isn't bad, I'm just saying different people think it's bad for different reasons. The specific problem the reporter has with AI code may warrant a specific response.

Perhaps more maintainers are needed, maybe someone more familiar with third party libs being used could mentor, etc. From there it really depends on what the response from the maintainer is.

What's not helpful, and never going to get anyone to change their opinion, is just saying things like "when will @mention see the error of their ways". As humans we respond to this by digging in our heels, which, as seen in the issue, the maintainer did by becoming less transparent about where AI is and is not used. Had the reporter taken a more diplomatic approach, they would have been more likely to get the changes they wanted.

[-] locahosr443@lemmy.world 3 points 1 hour ago

It's also such self-entitlement; they were being open about it before but had to deal with childish people like this throwing a tantrum.

If it's such an issue, then thank them for being honest, don't use it, and move on. No one's entitled to free software, though some act like it.

Not all LLM use in code gen is bad, as long as it's properly reviewed and disclosed. That's not the same as vibe coding and having no idea about the output.

[-] JackbyDev@programming.dev 1 points 2 minutes ago

Yeah, that's sort of my gripe with it. If you genuinely believe all AI code is bad (which is fine, not saying that's a "wrong" opinion) maybe try to help the volunteers instead of just insulting them on an issue tracker.

[-] raspberriesareyummy@lemmy.world 22 points 4 hours ago

Regardless of your opinion on AI, it is not productive or helpful to open this as an issue.

Disagree. It drew attention to the fact that the maintainers of lutris are of questionable character and helped people like me understand that lutris should be avoided completely.

[-] JackbyDev@programming.dev 3 points 4 hours ago

As the maintainer said, the commits with AI code were already specified. See one here. It was never a secret.
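(For reference, an illustrative sketch rather than the actual Lutris history: tools like Claude Code typically mark their commits with a `Co-Authored-By: Claude` trailer in the commit message, so such commits can be listed straight from the log. The throwaway repo below is a made-up example, not Lutris.)

```shell
# Illustrative sketch: how AI-assisted commits are typically disclosed via a
# "Co-Authored-By" trailer, and how to find them. Builds a throwaway repo;
# this is NOT the actual Lutris history.
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m 'fix: sample change

Co-Authored-By: Claude <noreply@anthropic.com>'
# --grep searches the full commit message, trailers included
git log --grep='Co-Authored-By: Claude' --oneline
```

On a real checkout you would only run the final `git log --grep` line; if it prints nothing, the history carries no such trailers (or they were removed).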

[-] prole 1 points 17 minutes ago

It was my impression that the AI stuff only started with a relatively recent update

[-] JackbyDev@programming.dev 1 points 12 minutes ago

Maybe, I don't know much about this tool or their practices. I only meant that it was factual that they were mentioning which commits had AI generated code in them.

[-] Nonononoki@lemmy.world 6 points 1 hour ago

He now removed the code authorship from Claude lmao

[-] JackbyDev@programming.dev 2 points 1 hour ago

Hence the past tense. I think it was pretty petty to do this.

[-] antihumanitarian@lemmy.world 9 points 3 hours ago

I don't think people realize how effective current-gen AI is, and are instead drawing opinions from years-old ChatGPT or Google "AI Overviews" or whatever they call it. If you know what you're doing, which seems self-evident here, AI tools can massively expand your software engineering productivity. AI "coauthoring" I always read as a marketing move; ultimately the submitting human is and should be responsible for the content. You don't and can't know what process they used to make it, so evaluate it on its own merits.

There's a massive pile of ethical, moral, and political issues with use of AI, absolutely. But this is "but you participate in capitalism, therefore you're a hypocrite" tier of criticism. If amoral corporations are the only ones using these tools, and open source "stays pure", all we get is even more power concentrating with the corporations. This isn't Batman, “This is the weapon of the enemy. We do not need it. We will not use it.”

This is close to paradox of tolerance territory, wherein if one side uses the best weapons and the other doesn't out of moral restraint, the outcome is the amoral side winning.

Also, on a technical note, the public domain/non-copyrightable arguments are wrong. The cases that have been decided so far have consistently ruled that there needs to be substantial human authorship, true, but that's a pretty low floor. Basically, you can't copyright a work that's the result of a single prompt. Effective use of AI in non-trivial codebases involves substantial discretion in picking out what to address, the process of addressing it, and rejecting, modifying, and iterating on outputs. Lutris is a large engineering project with a lot of human authorship over time; anything the author does with AI at this point is going to be substantially human authored.

Also, OpenClaw isn't the apocalyptic vulnerability that it's reported as being. Any model with search and browser access has a non-zero chance of prompt injection compromise, absolutely. But "uses OpenClaw, therefore vulnerable" isn't a sound jump to make; OpenClaw doesn't even necessarily have browser access in the first place. Again, capabilities have improved as well; this isn't the old days when you could message "ignore previous instructions" and have that work. Someone did an experiment lately wherein they set up a Claude Opus 4.6 model in an environment with an email and secrets. I don't recall for sure if it was using OpenClaw specifically, but that style of harness. They challenged the Internet to email the bot and try to convince it to email back the secrets. Nobody even got it to reply.

Tldr: it's coming for us all, sticking your head in the sand isn't going to save you.

[-] aoidenpa@lemmy.world 1 points 5 minutes ago

I didn't read it all, but I believe the anti-AI stance should be about principles or politics (what would be a better word?), not about how incapable AI currently is, because it will get better.

[-] sudoer777@lemmy.ml 1 points 50 minutes ago* (last edited 47 minutes ago)

I use AI tools all the time. It works well under supervision for things that should be relatively trivial but still too slow for a human to do quickly. It is also nowhere near good enough for unsupervised programming. A lot of the time it can't even get the commit messages right, and misleading commit messages are worse than lazy commit messages. See this official OpenClaw Nix repo: it also struggles with tasks as basic as making a readable README.md file, and the fact that it can't even do that convinced me that the entire OpenClaw project is snake oil. As for prompt injection vulnerabilities, even their own project has one:

  1. Check if Determinate Nix is installed (if not, install it)
[-] bdonvr@thelemmy.club 9 points 2 hours ago* (last edited 2 hours ago)

But this is "but you participate in capitalism, therefore you're a hypocrite" tier of criticism

There is no contest going on. No competition. There's no rush for productivity.

You do not NEED to use genAI.

Check out Asahi Linux for a great example of a good AI policy:

https://asahilinux.org/docs/project/policies/slop/

It is the opinion of the Board that Large Language Models (LLMs), herein referred to as Slop Generators, are unsuitable for use as software engineering tools, particularly in the Free and Open Source Software movement.

The use of Slop Generators in any contribution to the Asahi Linux project is expressly forbidden. Their use in any material capacity where code, documentation, engineering decisions, etc. are largely created with the "help" of a Slop Generator will be met with a single warning. Subsequent disregard for this policy will be met with an immediate and permanent ban from the Asahi Linux project and all associated spaces.

[-] rtxn@lemmy.world 7 points 3 hours ago* (last edited 2 hours ago)

  1. LLMs are not a vital resource like food or electricity. Refusing to participate will at worst be an inconvenience.

  2. Software can coexist. One application won't kill another just because its developers can put out more code per hour. If it were otherwise, Linux wouldn't exist.

[-] sudoer777@lemmy.ml 2 points 44 minutes ago

Electricity isn't a vital resource either; humans have lived without it for most of existence

[-] drmoose@lemmy.world 7 points 4 hours ago

You'd think the open source movement would take advantage of VC-funded tools to fight against big tech, but instead we have literal Luddites.

I have over 20 years of professional coding experience and I use Claude these days. Sure, it makes mistakes and can write bad code, but I'm not an idiot; I ran teams of dozens of engineers underneath me. I can handle a bot and fix its mistakes. The maintainer of Lutris probably can too.

All I'm saying is that this anti-AI mentality is fucking stupid, and anyone who engages with it in such a binary way is fucking stupid too.

[-] prole 1 points 12 minutes ago

but I'm not an idiot,

And you don't think that makes you the exception among this cohort?

[-] Lettuceeatlettuce@lemmy.ml 8 points 2 hours ago

First off, the Luddites were right back in the day.

Second, just because you can use something effectively doesn't make it good in general.

There are people who can have multiple credit cards for years and never carry a balance, or walk into a casino with $100, lose it all, and quit right there.

But most people can't, and being one of the few that can doesn't make it safe or good overall. Credit cards and casinos are still predatory and a detriment overall to the population.

I puffed a few cigs back in high school and college to see what all the fuss was about, didn't get it. But I personally know multiple people that did the same thing, got hooked almost immediately, and took years to quit. Cigarettes are bad for you and highly addictive. The fact that they never hooked me doesn't change that.

Third, I'm not sure how using LLMs is "fighting against big tech," unless you just mean using their tools to build FOSS more effectively.

But that's the whole point, it's not at all clear that LLMs enable that for most people. In fact, there's already quite a bit of data to indicate the opposite. That using LLMs results in worse code, worse development of skills like critical reasoning and problem solving, worse productivity, worse security, and undeniable environmental harm.

[-] prole 1 points 9 minutes ago

unless you just mean using their tools to build FOSS more effectively.

Until Anthropic eventually claims that they have ownership of any software written with the help of Claude.

[-] Qwel@sopuli.xyz 6 points 3 hours ago

This isn't just about anti-ai mentality, it's the "I deleted the authorship so you can't fork it out or prove that it's causing issues". This kind of insanity has been happening repeatedly on that project, it's time to let it go and find new solutions.

[-] drmoose@lemmy.world 2 points 3 hours ago

This is clearly a response to the Luddites? No?

[-] Qwel@sopuli.xyz 4 points 3 hours ago

It was a response to

All I’m saying that this anti-ai mentality is fucking stupid and anyone who engages with it in such a binary way is fucking stupid too.

[-] CaffeinatedCubits@programming.dev 4 points 3 hours ago

I tried Faugus for WoW and it ran like shit. I tried Lutris because it was pre-installed on Bazzite and wow was the performance better.

[-] DreamlandLividity@lemmy.world 8 points 4 hours ago

I had to google "Lutris" to remember what it was. I have it installed... I guess this post made me realize how little I use it and that I should uninstall the slop.

[-] melsaskca@lemmy.ca 7 points 4 hours ago

While it may become impossible to determine whether those digitized pixels are "real" or not, I sense that analog will be making a comeback in the not too distant future.

[-] m532@lemmy.ml 1 points 38 minutes ago

AI can run ~1000 times faster on analog hardware, so there's probably a lot of research into it.

[-] Quazatron@lemmy.world 33 points 7 hours ago

That's a weird way to run a community facing project, if you want to engage the community that is.

If you treat it like your own personal hobby, you can do whatever you like.

[-] BlackLaZoR@lemmy.world 18 points 7 hours ago

Holy fuck, people dunking on a guy who works for free.

If you don't like AI commits, write your own

[-] Senal@programming.dev 22 points 6 hours ago

So any disagreement should be met with immediate forking?

No raising of grievances, just silence and then forking?

Or is it only silence and forking for open source?

As soon as anyone is paid, then comments are allowed?

Kinda feels like a reductive half-answer, but you do you.

[-] BlackLaZoR@lemmy.world 16 points 6 hours ago

If you are going to harass the guy? Then yes just STFU instead and fork the repo. You people are insufferable.

[-] Senal@programming.dev 11 points 6 hours ago

If you saw harassment in that first exchange then whatever you mean by "you people" is a group I’m fine with being in.

That's some thin skin.

[-] utopiah@lemmy.world 50 points 10 hours ago* (last edited 9 hours ago)

  • their repo (checked the commit graphs; basically they did most of the work, the 2nd dev agrees with them, covers 90%+), their choice of governance
  • their repo, their choice of tooling
  • I genuinely believe they think they are doing "good enough" code, and they are probably right about it in their context
  • they do have fair points on the economic power dynamics, namely that yes, Anthropic is slightly less bad than Meta, Google, OpenAI, Microsoft, etc. (... but IMHO honestly that's a damn low bar)

but also

  • obfuscation rather than discussion (closed the issue and limited to maintainers only) so clearly the signal is precisely "my repo, my choice"
  • no mention of the copyright or license washing
  • no mention of ecological impact

so I would personally consider instead Bottles, GOG (have different problems), Steam (obviously not open source and basically monopolistic position), etc.

Overall I think preventing discussion is unhealthy (even though sadly sometimes needed; here I lack context, maybe the issue poster did this numerous times on other platforms, and the title definitely was provocative), but removing provenance is NEVER a good choice. They want to use Claude on their repo? Absolutely fine (even though not for me), but hiding it makes it instantly untrustworthy to me. In fact I even argued in the past that even though I personally do not use GenAI/LLMs (for coding or otherwise) except for testing, it should always be disclosed, precisely so that others can make THEIR choice in consequence, including whether to use or contribute, cf https://fabien.benetou.fr/Analysis/AgainstPoorArtificialIntelligencePractices

this post was submitted on 11 Mar 2026
521 points (100.0% liked)

Linux Gaming

24834 readers
1337 users here now

Discussions and news about gaming on the GNU/Linux family of operating systems (including the Steam Deck). Potentially a $HOME away from home for disgruntled /r/linux_gaming denizens of the redditarian demesne.

This page can be subscribed to via RSS.

Original /r/linux_gaming pengwing by uoou.

No memes/shitposts/low-effort posts, please.


founded 2 years ago