submitted 1 day ago* (last edited 1 day ago) by diz@awful.systems to c/techtakes@awful.systems

There's a very long history of extremely effective labor saving tools in software.

Writing in C rather than Assembly, especially for more than 1 platform.

Standard libraries. Unix itself. More recently, developing games in Unity or Unreal instead of rolling your own engine.

And what happened when any of these tools came on the scene was a mad gold rush to develop products that weren't feasible before. Not layoffs, not "we don't need to hire junior developers any more".

Rank and file vibe coders seem to perceive Claude Code (for some reason, mostly just Claude Code) as something akin to the advantage of using C rather than Assembly. They are legit excited to code new things they couldn't code before.

Boiling the rivers to give them an occasional morale boost with "You are absolutely right!" is completely fucked up and I dread the day I'll have to deal with AI-contaminated codebases, but apart from that, they have something positive going for them, at least in this brief moment. They seem to be sincerely enthusiastic. I almost don't want to shit on their parade.

The AI enthusiast bigwigs, on the other hand, are firing people, closing projects, talking about not hiring juniors any more, and they got the media to report on it as AI layoffs. They just gleefully go on about how being 30% more productive means they can fire a bunch of people.

The standard answer is that they hate having employees. But they always hated having employees. And there were always labor saving technologies.

So I have a thesis here, or a synthesis perhaps.

The bigwigs who tout AI (while acknowledging that it needs humans for now) don't see AI as ultimately useful, in the way the C compiler was useful. Even when it's useful in some context, they still don't believe it can be. They see it as more powerfully useless. Each new version is meant to be a bit more like AM, or (clearly AM-inspired, but more familiar) GLaDOS, something that will get rid of all the employees once and for all.

[-] jackalope@lemmy.ml 2 points 1 day ago

From what I've read, a month's worth of Claude queries through GitHub Copilot is estimated to have the same carbon footprint as driving 12 miles.

I do not care about IP law. My greater concern is how this stuff furthers consolidation in the tech industry.

[-] mawhrin@awful.systems 2 points 3 hours ago

you also clearly don't care about labour (as in labour costs being driven down, people's labour being used without consent, people's labour being stolen and used for the enrichment of a few).

[-] self@awful.systems 13 points 1 day ago

ah right, you only care about vague consolidation in the tech industry, but will take the industry’s word at their self-reported energy usage (while they build massive datacenters and construct or reopen polluting energy sources, all specifically to scale out LLMs) and don’t care about the models being fed massive amounts of plagiarized work at great cost to independent website operators, both of which are mechanisms by which LLMs are being used as a weapon with which to consolidate the tech industry under the rule of a handful of ethically bankrupt billionaires. but it’s ok, Claude Code is a massive improvement over the garbage that came before it — and it’s still a steaming pile of shit! but I’m sure going to bat for this absolute bullshit won’t have any negative consequences at all.

how about you fuck off, bootlicker.

[-] diz@awful.systems 12 points 1 day ago* (last edited 1 day ago)

In the case of code, what I find most infuriating is that they didn't even need to plagiarize. Much of open source code is permissively enough licensed, requiring only attribution.

Anthropic plagiarizes it when they prompt their tool to claim that it wrote the code from some sort of general knowledge, that it just learned from all the implementations, blah blah blah, all to make their tool look more impressive.

I don't need that; in fact it would be vastly superior to just "steal" from one particularly good implementation that has a compatible license you can simply comply with. (And better yet, to avoid copying the code at all and pull in a library if possible.) Why in the fuck even do the copyright laundering on code that is under the MIT or a similar license? The authors literally tell you that you can just use it.
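
To be concrete about how low the bar is: complying with an MIT-style license is basically just keeping a comment block. A minimal sketch, where libsomeshit, the URL, and the function are all made up for illustration:

```python
# Adapted from libsomeshit 6.23 (https://example.com/libsomeshit)
# Copyright (c) 2024 The libsomeshit authors
# MIT License: permission is hereby granted, free of charge, to any person
# obtaining a copy of this software... (full notice kept intact; retaining
# this comment block is the entire cost of complying)

def clamp(value: float, lo: float, hi: float) -> float:
    """Clamp value into the closed interval [lo, hi]."""
    return max(lo, min(hi, value))
```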

[-] BlueMonday1984@awful.systems 10 points 1 day ago

I don't need that; in fact it would be vastly superior to just "steal" from one particularly good implementation that has a compatible license you can simply comply with. (And better yet, to avoid copying the code at all and pull in a library if possible.) Why in the fuck even do the copyright laundering on code that is under the MIT or a similar license? The authors literally tell you that you can just use it.

I'd say it's a combo of them feeling entitled to plagiarise people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).

On a wider front, I expect this AI bubble's gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major aspect of the current FOSS ecosystem, and that expectation has been kneecapped by the automated plagiarism machines, and programmers are likely gonna be much stingier with sharing their work because of it.

[-] diz@awful.systems 8 points 22 hours ago* (last edited 22 hours ago)

I'd say it's a combo of them feeling entitled to plagiarise people's work and fundamentally not respecting the work of others (a point OpenAI's Studio Ghibli abomination machine demonstrated at humanity's expense).

It's fucking disgusting how they denigrate the very work they built their fucking business on. I think it's a mixture of the two, though: they want it plagiarized so that it looks like their bot is doing more of the coding than it is actually capable of.

On a wider front, I expect this AI bubble’s gonna cripple the popularity of FOSS licenses - the expectation of properly credited work was a major aspect of the current FOSS ecosystem, and that expectation has been kneecapped by the automated plagiarism machines, and programmers are likely gonna be much stingier with sharing their work because of it.

Oh absolutely. My current project is sitting in a private git repo, hosted on a VPS. And no fucking way will I share it under anything less than GPLv3.

We need a license with specific AI verbiage. Forbidding training outright won't work (they just claim fair use).

I was thinking of adding a requirement that the license header must not be removed unless a specific string ("This code was adapted from libsomeshit_6.23") is included in the comments by the tool, for the purpose of propagating security fixes and supporting a consulting market for the authors. In the US they do own the judges, but in the rest of the world the minuscule alleged benefit of not attributing would be weighed against harm to their customers (security fixes not propagated) and harm to the authors (missing out on consulting gigs).

edit: perhaps even an explainer that authors see non-attribution as fundamentally fraudulent against the user of the coding tool: the authors of libsomeshit routinely publish security fixes, and the user of the coding tool, who has been defrauded into believing that the code was created de novo by the coding tool, is likely to suffer harm from hackers misusing published security fixes (misuse which wouldn't be possible if the code had in fact been created de novo).
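
For illustration, compliant tool output might look something like this. A sketch only: the attribution string matches the hypothetical clause above, and the library and function are invented:

```python
import os.path

# "This code was adapted from libsomeshit_6.23"
#  ^ the exact string the hypothetical license clause would require the tool
#    to leave in place; downstream scanners can grep for it and flag this
#    file whenever libsomeshit publishes a security fix.

def sanitize_path(path: str) -> str:
    """Reject path-traversal attempts (behaviour adapted from libsomeshit)."""
    norm = os.path.normpath(path)
    if os.path.isabs(norm) or norm.split(os.sep)[0] == "..":
        raise ValueError(f"unsafe path: {path!r}")
    return norm
```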

[-] flizzo@awful.systems 7 points 1 day ago

I've been thinking about this topic along similar lines. There's a race to the bottom, and someone's fully licit corpus is just going to end up being worse (in user perception, not in whatever gamed metrics/benchmarks are de rigueur) than someone else's illicit corpus, even if they are broadly similar at a glance.

There are also likely other, smaller factors that make this difficult. I expect my small body of public work with permissive licensing has been an input to one or more of these things at some point, but I would not be surprised if the fidelity of provenance were not maintained consistently, because that is not easy to do, and it doesn't buy you much if you don't intend to provide attribution.

[-] diz@awful.systems 10 points 1 day ago

I think provenance has value outside copyright... here's a hypothetical scenario:

libsomeshit is licensed under MIT-0. It does not even need attribution. Version 3.0 introduced a security exploit. It was fixed in version 6.23 and widely reported.

A plagiaristic LLM with a training cutoff before 6.23 can just shit out the exploit in question, even though it has already been fixed.

A less plagiaristic LLM could RAG in the current version of libsomeshit, perhaps avoid introducing the exploit, and update the BOM with a reference to "libsomeshit 6.23", so that when version 6.934 fixes some other big bad exploit, an automated tool could raise an alarm.

Better yet, it could actually add a proper dependency instead of cut and pasting things.

And it would not need to store libsomeshit inside its weights at the same fidelity (which is extremely expensive). It just needs to be able to shit out a vector database key.
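
The "automated tool" part is nearly trivial once provenance is actually recorded. A sketch, with a made-up BOM format and advisory table (all names invented for illustration):

```python
# Scan recorded provenance entries and flag any whose version predates
# a known security fix.

# provenance entries a non-plagiaristic tool might have written into the BOM
bom = [
    {"adapted_from": "libsomeshit", "version": (6, 23)},
    {"adapted_from": "libotherthing", "version": (1, 4)},
]

# library -> first version that contains the fix for a published exploit
advisories = {
    "libsomeshit": (6, 934),  # "some other big bad exploit" fixed here
}

for entry in bom:
    fixed_in = advisories.get(entry["adapted_from"])
    if fixed_in and entry["version"] < fixed_in:
        old = ".".join(map(str, entry["version"]))
        new = ".".join(map(str, fixed_in))
        print(f"ALARM: code adapted from {entry['adapted_from']} {old} "
              f"predates the security fix in {new}; re-sync or patch.")
```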

I think the market right now is far too distorted by idiots with money trying to build the robot god. Code plagiarism is an integral part of it, because it makes the LLM appear closer to singularity (it can write code for itself! it is gonna recursively self-improve!).
