[-] pezhore@infosec.pub 172 points 1 week ago

I was just commenting on how shit the Internet has become as a direct result of LLMs. Case in point - I wanted to look at how to set up a router table so I could do some woodworking. The first result started out halfway decent, but the second section switched abruptly to something about routers having wifi and Ethernet ports - confusing network routers with the power tool. Any human/editor would catch that mistake, but here it is.

I can only see this get worse.

[-] null_dot@lemmy.dbzer0.com 108 points 1 week ago

It's not just the internet.

Professionals (using the term loosely) are using LLMs to draft emails and reports, and then other professionals (?) are using LLMs to summarise those emails and reports.

I genuinely believe that the general effectiveness of written communication has regressed.

[-] MisterFrog@lemmy.world 1 points 4 days ago* (last edited 4 days ago)

I honestly wonder what these sorts of jobs are. I feel like I have barely any reason to use AI ever in my job.

But this may be because I'm rarely, if ever, summarising anything.

AI can't think, and how long are the emails people write that asking the AI to draft them is ever worth the effort?

By the time you've told it everything you wanted included, you could have just written the damn email.

[-] pezhore@infosec.pub 51 points 1 week ago

I've tried using an LLM for coding - specifically Copilot for vscode. About 4 out of 10 times it will accurately generate code - which means I spend more time troubleshooting, correcting, and validating what it generates instead of actually writing code.

[-] kurwa@lemmy.world 26 points 1 week ago

I feel like it's not that bad if you use it for small things, like single lines instead of blocks of code - like a glorified autocomplete.

Sometimes it's nice to not use it though because it can feel distracting.

[-] Swedneck@discuss.tchncs.de 33 points 1 week ago

truly who could have predicted that a glorified autocomplete program is best at performing autocompletion

seriously the world needs to stop calling it "AI", it IS just autocomplete!

[-] Phen@lemmy.eco.br 17 points 1 week ago

I find it most useful as a means of getting answers for stuff that has poor documentation. A couple weeks ago chatgpt gave me an answer whose keyword had no matches on Google at all. No idea where it got that from (probably some private codebase), but it worked.

[-] sem 4 points 1 week ago

I'm glad you had some independent way to verify that it was correct. Because I've asked it stuff Google doesn't know, and it just invents plausible but wrong answers.

[-] TheBrideWoreCrimson@sopuli.xyz 6 points 1 week ago

I use it to construct regexes which, for my use cases, can get quite complicated. It's pretty good at doing that.
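The commenter doesn't share their patterns, but here's a hypothetical example of the kind of regex meant - and of why it's worth testing whatever the model hands back against sample inputs before trusting it:

```python
import re

# Hypothetical pattern of the sort one might ask an LLM for: pull the
# timestamp, severity level, and message out of a log line in one pass.
LOG = re.compile(
    r"^(?P<ts>\d{4}-\d{2}-\d{2}[T ]\d{2}:\d{2}:\d{2})"  # ISO-ish timestamp
    r"\s+(?P<level>DEBUG|INFO|WARN|ERROR)"              # severity keyword
    r"\s+(?P<msg>.*)$"                                  # everything after
)

m = LOG.match("2025-03-02 14:07:33 ERROR disk quota exceeded")
print(m.group("level"), "-", m.group("msg"))  # → ERROR - disk quota exceeded
```

Whatever the LLM produces, a couple of quick `match` checks like this catch most of the hallucinated syntax.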

[-] DogWater@lemmy.world 5 points 1 week ago* (last edited 1 week ago)

Apparently Claude sonnet 3.7 is the best one for coding

[-] piccolo@sh.itjust.works 5 points 1 week ago

I like using gpt to generate powershell scripts; surprisingly, it's pretty good at that. It's a small task, so it's unlikely to go off into the deep end.

[-] FauxLiving@lemmy.world 6 points 1 week ago

Like all tools, it is good for some things and not others.

"Make me an OS to replace Windows" is going to fail; "Tell me the terminal command to rename a file" will succeed.

It's up to the user to apply the tool in a way that it is useful. A person simply saying 'My hammer is terrible at making screw holes' doesn't mean that the hammer is a bad tool, it tells you the user is an idiot.

[-] FauxLiving@lemmy.world 6 points 1 week ago

The Internet was shit before LLMs

[-] ICastFist@programming.dev 12 points 1 week ago

It had its fair share of shit and that gradually increased with time, but LLMs are like a whole new level of flooding everything with zero effort

[-] taiyang@lemmy.world 108 points 1 week ago

I'm the type to be in favor of new tech but this really is a downgrade after seeing it available for a few years. Midterms hit my classes this week and I'll be grading them next week. I'm already seeing people try to pass off GPT as their own, but the quality of answers has really dropped in the past year.

Just this last week, I was grading a quiz on persuasion and for fun, I have students pick an advertisement to analyze. You know, to personalize the experience, this was after the super bowl so we're swimming in examples. Can even be audio, like a podcast ad, or a fucking bus bench or literally anything else.

60% of them used the Nike Just Do It campaign, not even a specific commercial. I knew something was amiss, so I asked GPT what example it would probably use if asked. Sure enough: Nike, Just Do It.

Why even cheat on that? The universe has a billion ad examples. You could even feed GPT one and have it analyze it for you. It'd be wrong, because you have to reference the book, but at least it'd not be as blatant.

I didn't unilaterally give them 0s, but they usually got it wrong anyway, so I didn't really have to. I did warn them that using it this way on the midterm will likely get them in trouble, though, as it is against the rules. I don't even care that much, because again, it's usually worse quality anyway; but I have to grade this stuff, and I don't want to suffer like the sci-fi magazines getting thousands of LLM submissions trying to win prizes.

[-] Shou@lemmy.world 31 points 1 week ago

As someone who has been a teenager: cheating is easy, and class wasn't as fun as video games. Plus, what teenager understands the importance of an assignment? Or of the skill it is supposed to make them practice?

That said, I unlearned copying summaries when I heard I had to talk about the books I "read" as part of the final exams in high school. The examiner would ask very specific plot questions, often not included in the online summaries people posted... unless those summaries were too long to read anyway. We had no other option but to take it seriously.

As long as GPT can do the work for them, they won't learn how to write or do the assignment themselves.

Perhaps use GPT to fail assignments? If GPT comes up with the same subject and writing style/quality, subtract points/give 0s.

[-] taiyang@lemmy.world 16 points 1 week ago

I have a similar background and no surprise, it's mostly a problem in my asynchronous class. The ones who have my in person lectures are much more engaged, since it is a fun topic and I don't enjoy teaching unless I'm also making them laugh. No dice with asynchronous.

And yeah, I'm also kind of doing that with my essay questions, requiring stuff you sort of can't just summarize. It's important to demand critical thinking, even if you're not just trying to detect GPT.

I remember reading that GPT detection isn't really foolproof, and I'm not willing to fail anyone over it unless I have to. False positives and all that. Hell, I just used GPT as a sounding board for a few new questions I'm writing, and its advice wasn't bad. There are good ways to use it, just... you know, not so stupidly.

[-] ICastFist@programming.dev 9 points 1 week ago

Last November, I gave some volunteer drawing classes at a school. Since I had limited space, I had to pick and choose a small number of 9-10yo kids, and asked the students interested to do a drawing and answer "Why would you like to participate in the drawing classes?"

One of the kids used chatgpt or some other AI. One of the parts that gave it away was that, while everyone else wrote something like "I want because", he went on with "By participating, you can learn new things and make friends". I called him out in private and he tried to bullshit me, but it wasn't hard to make him contradict himself or admit to "using help". I then told him that it was blatantly obvious that he used AI to answer for him and what really annoyed me wasn't so much the fact he used it, but that he managed to write all of that without reading, and thought that I would be too dumb or lazy to bother reading or to notice any problems.

[-] msage@programming.dev 79 points 1 week ago

I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.

Just like there are bots on social media, pushing a narrative, humans are being alienated from every aspect of modern society.

What is a society for, when you can't be a part of it?

[-] Schadrach@lemmy.sdf.org 20 points 1 week ago

I just want to point out that there were text generators before ChatGPT, and they were ruining the internet for years.

Hey now, King James Programming was pretty funny.

For those unfamiliar, King James Programming is a Markov chain trained on the King James Bible and the Structure and Interpretation of Computer Programs, with quotes posted at https://kingjamesprogramming.tumblr.com/

4:24 For the LORD will work for each type of data it is applied to.

In APL all data are represented as arrays, and there shall they see the Son of man, in whose sight I brought them out

3:23 And these three men, Noah, Daniel, and Job were in it, and all the abominations that be done in (log n) steps.

I was first introduced to it when I started reading UNSONG.
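For anyone who hasn't seen one, a Markov chain text generator of this kind is only a few lines. This is a toy sketch - the two-sentence "corpus" is made up for illustration, whereas the real blog trained on the full KJV and SICP texts:

```python
import random
from collections import defaultdict

def build_chain(words, order=2):
    """Map each `order`-word prefix to the list of words that follow it."""
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=20, seed=0):
    """Walk the chain from a random prefix, emitting one word at a time."""
    rng = random.Random(seed)
    prefix = rng.choice(list(chain))
    out = list(prefix)
    for _ in range(length - len(prefix)):
        followers = chain.get(tuple(out[-len(prefix):]))
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Mixing two corpora is what produces the mashed-up style: shared prefixes
# like "will work" can be continued by either source.
corpus = ("for the LORD will work a great work among you "
          "the procedure will work for each type of data it is applied to").split()
print(generate(build_chain(corpus)))
```

Because the chain only looks at the last two words, it hops between registers whenever the corpora happen to share a phrase - which is exactly where the funny quotes come from.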

[-] Fedop@slrpnk.net 5 points 1 week ago

This was such a good idea, so many of these are fire.

then shall they call upon me, but I will not cause any information to be accumulated on the stack.

How much more are ye better than the ordered-list representation

evaluating the operator might modify env, which will be the hope of unjust men

[-] kameecoding@lemmy.world 7 points 1 week ago

I feel like the term "touch grass" applies to this comment more than anything.

[-] merc@sh.itjust.works 55 points 1 week ago

All this really does is show areas where the writing requirements are already bullshit and should be fixed.

Like, consumer financial complaints. People feel they have to use LLMs because when they write in using plain language they feel they're ignored, and they're probably right. It suggests that these financial companies are under-regulated and overly powerful. If they weren't, they wouldn't be able to ignore complaints that aren't written in lawyerly language.

Press releases: we already know they're bullshit. No surprise that now they're using LLMs to generate them. These shouldn't exist at all. If you have something to say, don't say it in a stilted press-release way. Don't invent quotes from the CEO. If something is genuinely good and exciting news, make a blog post about it by someone who actually understands it and can communicate their excitement.

Job postings. Another bullshit piece of writing. An honest job posting would probably be something like: "Our sysadmin needs help because he's overworked, he says some of the key skills he'd need in a helper are X, Y and Z. But, even if you don't have those skills, you might be useful in other ways. It's a stressful job, and it doesn't pay that well, but it's steady work. Please don't apply if you're fresh out of school and don't have any hands-on experience." Instead, job postings have evolved into some weird cargo-culted style of writing involving stupid phrases like "the ideal candidate will..." and lies about something being a "fast paced environment" rather than simply "disorganized and stressful". You already basically need a "secret decoder ring" to understand a job posting, so yeah, why not just feed a realistic job posting to an LLM and make it come up with some bullshit.

[-] laserm@lemmy.world 1 points 4 days ago

I mean there are court documents written with the help of AI.

[-] merc@sh.itjust.works 3 points 4 days ago

And there are lawyers who have been raked over the coals by judges when the lawyers have submitted AI-generated documents where the LLM "hallucinated" cases that didn't exist which were used as precedents.

[-] ilovepiracy@lemmy.dbzer0.com 17 points 1 week ago

Exactly. LLMs assisting people in writing soul-sucking corporate drivel is a good thing; I hope this changes the public perception of the whole umbrella of 'formal office writing' (including internal emails, job applications, etc.). So much time-wasting bullshit that produces nothing.

[-] merc@sh.itjust.works 7 points 1 week ago

LLM's assisting people in writing soul-sucking corporate drivel is a good thing

I don't think so, not if the alternative is simply getting rid of that soul-sucking corporate drivel.

[-] JackbyDev@programming.dev 12 points 1 week ago

Reminds me of the one about

  1. See? The AI expands the bullet point into a full email.
  2. See? The AI summarizes the email into a single bullet point.
[-] JackbyDev@programming.dev 5 points 1 week ago

Job postings are wild. Like, "Java Spring Boot developer with 8+ years experience" would be fine 90% of the time.

[-] T156@lemmy.world 38 points 1 week ago

How did they estimate whether an LLM was used to write the text or not? Did they do it by hand, or using a detector?

Detectors are notorious for flagging ESL writers, and even professionally written text, as AI-generated.

[-] sober_monk@lemmy.world 25 points 1 week ago* (last edited 1 week ago)

They developed their own detector, described in another paper. Basically, it reverse-engineers texts based on their vocabulary to estimate how much of them was written by ChatGPT.
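The paper's actual method is more statistically careful, but the vocabulary idea can be sketched crudely: model the observed frequency of each marker word as a mixture of a human baseline rate and an LLM rate, then solve for the mixture weight. Everything below - the rates, the word list, and the `estimate_ai_fraction` helper - is made up for illustration, not the paper's code:

```python
def estimate_ai_fraction(observed, human_rates, ai_rates):
    """Corpus-level estimate: for each marker word, model the observed rate q
    as a mixture q = (1 - a) * human + a * ai, solve for a, and average."""
    estimates = []
    for word, q in observed.items():
        h, a = human_rates[word], ai_rates[word]
        if a == h:
            continue  # word carries no signal
        alpha = (q - h) / (a - h)
        estimates.append(min(1.0, max(0.0, alpha)))  # clamp to [0, 1]
    return sum(estimates) / len(estimates)

# Hypothetical per-1000-word rates for words LLMs are said to overuse:
human = {"delve": 0.01, "intricate": 0.02}
ai    = {"delve": 0.80, "intricate": 0.50}
obs   = {"delve": 0.20, "intricate": 0.15}
print(estimate_ai_fraction(obs, human, ai))  # roughly 0.26
```

Note this estimates a fraction over a whole corpus, which dodges the per-document false-positive problem the parent comment raises - it can't (and doesn't try to) flag any individual author.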

[-] Bob_Robertson_IX@lemmy.world 18 points 1 week ago

They just asked a few people if they thought it was written by an LLM. /s

I mean, you can tell when something is written from ChatGPT, especially if the person isn't using it for editing, but is just asking it to write a complaint or request. It is likely they are only counting the most obvious, so the actual count is higher.

[-] hypna@lemmy.world 12 points 1 week ago

I don't know of any reason that the proportion of ESL writers would have started trending up in 2022.

[-] ayyy@sh.itjust.works 25 points 1 week ago

Llm detectors are always snake oil 100% of the time. Anyone claiming otherwise is lying for personal gain.

[-] TropicalDingdong@lemmy.world 20 points 1 week ago

This is the top result on DuckDuckGo for "how tall does a soursop tree get":

https://livetoplant.com/soursop-plant-size-get-the-right-size-for-you/

Gee thanks, I'm cured.

Btw does any one know if Soursops have an aggressive root system?

[-] Vespair@lemm.ee 19 points 1 week ago

I am not saying the two are equally comparable, but I wonder if the same "most rapid change in human written communication" could also have been said with the proliferation of computer-based word processors equipped with spelling and grammar checks.

[-] slartibartfast@lemm.ee 8 points 1 week ago

Who wants to be licked by an emo?

[-] surph_ninja@lemmy.world 7 points 1 week ago

BREAKING NEWS: Since the invention of calculators, less people using abacus!

[-] taiyang@lemmy.world 23 points 1 week ago

Not a good analogy, except there is one interesting parallel. My students who overuse a calculator in stats tend to do fine on basic arithmetic, but it does them a disservice when trying anything more elaborate. Granted, it should be able to follow PEMDAS, but for whatever weird reason it sometimes doesn't. And when there's a function that requires a sum and maybe multiple steps? Forget about it.

Similarly, GPT can make cliché copywriting, but good luck getting it to spit out anything complex. Trust me, I'm grading that drivel. So in that case, the analogy works.

[-] Lucky_777@lemmy.world 4 points 1 week ago

This. It's a tool, embrace it and learn the limitations....or get left behind and become obsolete. You won't be able to keep up with people that do use it.

[-] ggtdbz@lemmy.dbzer0.com 15 points 1 week ago* (last edited 1 week ago)

The invention of the torque wrench didn’t severely impede my ability to retrieve stored information, and everyone else’s, affecting me by proxy.

The tech was impressive four years ago, but for me it's only done two things since becoming widely available: thinned the soup of Internet fun things, and made some people, disproportionately executives at my work, abandon a solid third of their critical thinking skills.

I use AI models locally, to turn around little jokes for friends; you could say I've put more effort into machine learning tools than many daily AI users. And I'll still be the first to call the article OP described a true, shameful indictment of us as a species.

[-] Swedneck@discuss.tchncs.de 10 points 1 week ago

dude you figuring out how to make the AI shit out something half-passable isn't making you clever and superior, it's just sad

[-] IAmVeraGoodAtThis 4 points 1 week ago

What a dumb comparison. Calculators are just tools to do the same mechanical action as abaci, which were also just tools to speed up human mechanical actions of calculation.

Writing, drawing, research are creative, not mechanical, and offloading them to a tool is very different from offloading calculations to integrated circuits

[-] HeyThisIsntTheYMCA@lemmy.world 6 points 1 week ago

Well if your books start talking back you should get help. The computer just started getting good (I remember Dr Sbaitso)

this post was submitted on 02 Mar 2025
746 points (100.0% liked)

Science Memes

Welcome to c/science_memes @ Mander.xyz!