Lutris is AI slop now
(lemmy.nz)
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.
Tangent: crediting Claude with co-authorship? wtf?
I can totally see the mega-techs trying to push that in EULAs, but for an individual to do it seems strange, even though there's a kernel of honesty behind it. It also seems risky as far as OpenMetaMicroogleAI finding future loopholes to steal your shit.
This dev talks like they're doing everything else the right way, as far as reviewing and understanding the code regardless of its source. In that situation I'd look at blocks of LLM-generated code the same way as ones copy/pasted from Stack Overflow or third-party example code. At BEST you have "here's something that might work," which is nowhere close to actually being done if you're any good. (insert joke about "it compiles, ship it!")
Cory Doctorow explores this in his most recent column, describing the concept of "Centaurs" and "Reverse Centaurs".
A centaur is a human piloting a machine body. They're faster and stronger because of the machine, but the human is in charge. A reverse centaur is a machine piloting a human body; a brainless head on a largely inferior body.
In the context of generative AI, centaurs are people who use AI tools carefully, intentionally, and of their own volition. They have the knowledge necessary to assess when the output of the machine tools is good or bad, and the machine simply becomes, like any other tool, a way of leveraging their abilities more efficiently.
A reverse centaur is what you get with a "human in the loop." An intern told to write, in only a few days, a stack of columns that would take ten experienced writers a week, but don't worry, you can just use ChatGPT, it'll be so fast. That person really only exists for two purposes: to push the buttons that make the machine go, and, far more importantly, to eat the blame when the machine fucks up. They were the "human in the loop," so they were supposed to catch the bad output, but they were never given the time or the expertise to do so, and they were placed in a scenario where using genAI was the only possible way to get the outcome that was demanded of them.
I don't see the use of AI tools, especially in areas they're well suited to like coding, as automatically deserving the "AI slop" descriptor. GenAI can be extremely effective as a coding assistant when used with care, by someone with enough knowledge to read the output and understand it completely. As you say, a huge amount of normal everyday coding has, for decades, been copying and pasting code blocks, because why the fuck would you repeat work that someone else has already done??? And for decades bad coders have screwed themselves over by copy-pasting code they don't understand or didn't bother to properly read and parse. That's nothing new.
Now, it's also completely reasonable for people to hold ethical objections to genAI that are entirely separate from any practical concerns. If someone's position is "I do not care how good the output is, because I believe it comes from a fundamentally immoral technology," I think that's a completely cogent moral stance. I have no argument against that. I'd just ask that people not use the term "AI slop" when describing that objection, because it muddies the waters and makes it extremely unclear what you're actually objecting to. If your problem is one of ethics, say that. Don't just reuse a term you heard elsewhere that's tangentially related.