top 50 comments
[-] hiramfromthechi@lemmy.world 2 points 2 days ago

Added to idcaboutprivacy (which is open source). If there are any other similar links, feel free to add them or send them my way.

[-] MagicShel@lemmy.zip 249 points 5 days ago

There's no guarantee anyone on there (or here) is a real person or genuine. I'll bet this experiment has been conducted a dozen times or more but without the reveal at the end.

[-] RustyShackleford@literature.cafe 66 points 5 days ago

I've worked on quite a few DARPA projects, and I can almost 100% guarantee you are correct.

[-] Forester@pawb.social 27 points 5 days ago

Some of us have known the internet has been dead since 2014

[-] Donkter@lemmy.world 44 points 4 days ago

This is a really interesting paragraph to me, because I definitely think these results shouldn't be published or we'll only get more of these "whoopsie" experiments.

At the same time though, I think it is desperately important to research the ability of LLMs to persuade people sooner rather than later when they become even more persuasive and natural-sounding. The article mentions that in studies humans already have trouble telling the difference between AI written sentences and human ones.

[-] FourWaveforms@lemm.ee 13 points 4 days ago

This is certainly not the first time this has happened. There's nothing to stop people from asking ChatGPT et al to help them argue. I've done it myself, not letting it argue for me but rather asking it to find holes in my reasoning and that of my opponent. I never just pasted what it said.

I also had a guy post a ChatGPT response at me (he said that's what it was) and although it had little to do with the point I was making, I reasoned that people must surely be doing this thousands of times a day and just not saying it's AI.

To say nothing of state actors, "think tanks," influence-for-hire operations, etc.

The description of the research in the article already conveys enough to replicate the experiment, at least approximately. Can anyone doubt this is commonplace, or that it has been for the last year or so?

[-] justdoitlater@lemmy.world 58 points 4 days ago

Reddit: Ban the Russian/Chinese/Israeli/American bots? Nope. Ban the Swiss researchers that are trying to study useful things? Yep

[-] Ilandar@lemm.ee 48 points 4 days ago

Bots attempting to manipulate humans by impersonating trauma counselors or rape survivors isn't useful. It's dangerous.

[-] endeavor@sopuli.xyz 21 points 4 days ago

Humans pretend to be experts in front of each other and constantly lie on the internet every day.

Say what you want about 4chan, but the disclaimer it had on top of its page should be common sense to everyone on social media.

[-] acosmichippo@lemmy.world 17 points 4 days ago

That doesn't mean we should exacerbate the issue with AI.

[-] Ledericas@lemm.ee 20 points 3 days ago

As opposed to the thousands of bots used by Russia every day on politics-related subs.

[-] VampirePenguin@midwest.social 44 points 4 days ago

AI is a fucking curse upon humanity. The tiny morsels of good it can do are FAR outweighed by the destruction it causes. Fuck anyone involved with perpetuating this nightmare.

[-] 13igTyme@lemmy.world 18 points 4 days ago* (last edited 4 days ago)

Todays "AI" is just machine learning code. It's been around for decades and does a lot of good. It's most often used for predictive analytics and used to facilitate patient flow in healthcare and understand volumes of data fast to provide assistance to providers, case manager, and social workers. Also used in other industries that receive little attention.

Even some language learning machines can do good, it's the shitty people that use it for shitty purposes that ruin it.

[-] VampirePenguin@midwest.social 13 points 4 days ago

Sure, I know what it is and what it's good for, I just don't think the juice is worth the squeeze. The companies developing AI HAVE to shove it everywhere to make it feasible, and doing that is destructive to our entire civilization. The theft of folks' work, the scamming, the deepfakes, the social media propaganda bots, the climate-raping energy consumption, the loss of skill and knowledge, the enshittification of writing and the arts: the list goes on and on. It's a dead end that humanity will regret pursuing if we survive this century. The fact that we get a paltry handful of positives is cold comfort for our ruin.

[-] LovingHippieCat@lemmy.world 132 points 5 days ago* (last edited 5 days ago)

If anyone wants to know what subreddit, it's r/changemyview. I remember seeing a ton of similar posts about controversial opinions, and even now people are questioning Am I Overreacting and AITAH a lot. AI posts in those kinds of subs are seemingly pretty frequent. I'm not surprised to see it was part of a fucking experiment.

[-] jonne@infosec.pub 47 points 5 days ago

AI posts or just creative writing assignments.

[-] paraphrand@lemmy.world 40 points 5 days ago

Right. Subs like these are great fodder for people who just like to make shit up.

[-] eRac@lemmings.world 25 points 5 days ago

This was comments, not posts. They were using a model to approximate the demographics of a poster, then using an LLM to generate a response counter to the posted view, tailored to those demographics.
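
For the curious: mechanically, what's described there is just a two-stage prompt chain. A minimal sketch (assuming a generic `complete()` text-generation call; the function names and prompts here are hypothetical illustrations, not the researchers' actual code):

```python
# Hypothetical sketch of the two-stage setup described above.
# complete() is a stand-in for any LLM text-generation API call.

def complete(prompt: str) -> str:
    raise NotImplementedError("stand-in for an LLM API call")

def infer_demographics(post_history: list[str]) -> str:
    """Stage 1: one model guesses the poster's gender, age range,
    and political leaning from their recent posts."""
    prompt = (
        "Estimate the author's gender, age range, and political "
        "leaning from these posts:\n\n" + "\n---\n".join(post_history)
    )
    return complete(prompt)

def tailored_counterargument(posted_view: str, demographics: str) -> str:
    """Stage 2: a second call writes a rebuttal pitched at that profile."""
    prompt = (
        f"Reader profile: {demographics}\n"
        f"The reader posted this view: {posted_view}\n"
        "Write a persuasive comment arguing against it, phrased to "
        "appeal specifically to this reader."
    )
    return complete(prompt)
```

The unsettling part is how little scaffolding that takes: two prompts per target, and anyone with API access can run it at scale.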

[-] TheObviousSolution@lemm.ee 68 points 4 days ago

The reason this is "The Worst Internet-Research Ethics Violation" is that it has exposed what Cambridge Analytica's successors already realized and are actively exploiting. Just a few months ago it was literally Meta itself running AI accounts trying to pass as normal users, and not an f-ing peep. Why do people think they, the ones who enabled Cambridge Analytica, were trying this shit to begin with? The only difference now is that everyone doing it knows to do it as an "unaffiliated" anonymous third party.

[-] TwinTitans@lemmy.world 104 points 5 days ago* (last edited 4 days ago)

Like the 90s/2000s - don’t put personal information on the internet, don’t believe a damned thing on it either.

Yeah, it’s amazing how quickly the “don’t trust anyone on the internet” mindset changed. The same boomers who cautioned us against playing online games with friends are now sharing blatantly AI-generated slop from strangers on Facebook as if it were gospel.

[-] Serinus@lemmy.world 42 points 4 days ago

Back then it was just old people trying to groom 16-year-olds. Now it's a nation's intelligence apparatus turning our citizens against each other and convincing them to destroy our country.

I wholeheartedly believe they're here, too. Their main function here is to discourage the left from voting, chiefly by focusing on the (very real) failures of the Democrats while the other party is extremely literally the Nazi party.

[-] ImplyingImplications@lemmy.ca 83 points 4 days ago

The ethics violation is definitely bad, but their results are also concerning. They claim their AI accounts were six times more likely to persuade people to change their minds than a real person was. AI has become an overpowered tool in the hands of propagandists.

[-] jbloggs777@discuss.tchncs.de 20 points 4 days ago

It would be naive to think this isn't already in widespread use.

[-] nodiratime@lemmy.world 37 points 4 days ago

Reddit’s chief legal officer, Ben Lee, wrote that the company intends to “ensure that the researchers are held accountable for their misdeeds.”

What are they going to do? Ban the last humans on there having a differing opinion?

Next step for those fucks is verification that you are an AI when signing up.

[-] conicalscientist@lemmy.world 47 points 4 days ago

This is probably the most ethical you'll ever see it. There are definitely organizations running far worse experiments.

Over the years I've noticed replies that are far too on the nose, probing just the right pressure points, as if they'd dropped exactly the right breadcrumbs for me to respond to. I've learned to disengage at that point. Either they scrolled through my profile, or, as we now know, it's a literal psy-op bot. Even in the first case, it's not worth engaging with someone more invested than I am.

[-] skisnow@lemmy.ca 18 points 4 days ago

Yeah I was thinking exactly this.

It's easy to point to reasons why this study was unethical, but the ugly truth is that bad actors all over the world are performing trials exactly like this all the time - do we really want the only people who know how this kind of manipulation works to be state psyop agencies, SEO bros, and astroturfing agencies working for oil/arms/religion lobbyists?

Seems like it's much better long term to have all these tricks out in the open so we know what we're dealing with, because they're happening whether it gets published or not.

[-] SolNine@lemmy.ml 38 points 4 days ago

Not remotely surprised.

I dabble in conversational AI for work, and am currently studying its capabilities for thankfully (imo at least) positive and beneficial interactions with a customer base.

I've been telling friends and family recently that, for a fairly small investment of money and time, I'm fairly certain a highly motivated individual could influence at minimum a local election. Given that, I imagine it would be very easy for nations or political parties to manipulate individuals on a much larger scale. IMO nearly everything on the internet should be suspect at this point, and Reddit is at the top of that list.

[-] aceshigh@lemmy.world 30 points 4 days ago

This isn't even a theoretical question. We saw it live in the last US election. Fox News, TikTok, WaPo, etc. are owned by right-wing interests and sanewashed Trump. It was a group effort. You need to be suspicious not only of the internet but of TV and newspapers too. Old-school media isn't safe either. It never really was.

But I think the root cause is that people don't have the time to really dig deep to get to the truth, and they want entertainment, not to be told about the doom and gloom of the actual future (like climate change, loss of the middle class, etc.).

[-] deathbird@mander.xyz 28 points 4 days ago

Personally I love how they found the AI could be very persuasive by lying.

[-] acosmichippo@lemmy.world 33 points 4 days ago

Why wouldn't that be the case? All the most persuasive humans are liars too. Fantasy sells better than the truth.

[-] teamevil@lemmy.world 60 points 4 days ago* (last edited 4 days ago)

Holy shit... This kind of shit is what ultimately broke Tim (very closely related to Ted) Kaczynski... He was part of MKULTRA research while a student at Harvard, but instead of drugging him, they had a debater, a prosecutor pretending to be a student, who would just argue against any point he made to see when he would break...

And that's how you get the Unabomber, folks.

[-] Knock_Knock_Lemmy_In@lemmy.world 38 points 4 days ago

The key result

When researchers asked the AI to personalize its arguments to a Redditor’s biographical details, including gender, age, and political leanings (inferred, courtesy of another AI model, through the Redditor’s post history), a surprising number of minds indeed appear to have been changed. Those personalized AI arguments received, on average, far higher scores in the subreddit’s point system than nearly all human commenters

[-] MonkderVierte@lemmy.ml 28 points 4 days ago* (last edited 4 days ago)

When Reddit rebranded itself as “the heart of the internet” a couple of years ago, the slogan was meant to evoke the site’s organic character. In an age of social media dominated by algorithms, Reddit took pride in being curated by a community that expressed its feelings in the form of upvotes and downvotes—in other words, being shaped by actual people.

Not since the APIcalypse at least.

Aside from that, this is just reheated news (for clicks, I assume) from a week or two ago.

[-] paraphrand@lemmy.world 49 points 5 days ago* (last edited 5 days ago)

I'm sure there are individuals doing worse one-off shit, or people targeting individuals.

I’m sure Facebook has run multiple algorithm experiments that are worse.

I'm sure YouTube has caused worse real-world outcomes with the rabbit holes its algorithm used to promote. (And they have never found a way to completely fix the rabbit-hole problem without destroying the usefulness of the algorithm entirely.)

The actions described in this article are upsetting and disappointing, but this has been going on for a long time. All in the name of making money.

[-] flango@lemmy.eco.br 25 points 4 days ago

[...] I read through dozens of the AI comments, and although they weren’t all brilliant, most of them seemed reasonable and genuine enough. They made a lot of good points, and I found myself nodding along more than once. As the Zurich researchers warn, without more robust detection tools, AI bots might “seamlessly blend into online communities”—that is, assuming they haven’t already.

[-] FatTony@lemmy.world 11 points 4 days ago

You know what Pac stands for? PAC. Program and Control. He’s Program and Control Man. The whole thing’s a metaphor. All he can do is consume. He’s pursued by demons that are probably just in his own head. And even if he does manage to escape by slipping out one side of the maze, what happens? He comes right back in the other side. People think it’s a happy game. It’s not a happy game. It’s a fucking nightmare world. And the worst thing is? It’s real and we live in it.

[-] thedruid@lemmy.world 20 points 4 days ago

Fucking AI and their apologist script kiddies. Worse than fucking Facebook in its disinformation.

[-] VintageGenious@sh.itjust.works 20 points 4 days ago

Using mainstream social media is literally agreeing to be constantly used as an advertisement-optimization research subject.

[-] TronBronson@lemmy.world 15 points 4 days ago

Wow you mean reddit is banning real users and replacing them with bots?????

[-] perestroika@lemm.ee 16 points 4 days ago* (last edited 4 days ago)

The University of Zurich’s ethics board—which can offer researchers advice but, according to the university, lacks the power to reject studies that fall short of its standards—told the researchers before they began posting that “the participants should be informed as much as possible,” according to the university statement I received. But the researchers seem to believe that doing so would have ruined the experiment. “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary,” because it more realistically mimics how people would respond to unidentified bad actors in real-world settings, the researchers wrote in one of their Reddit comments.

This seems to be the kind of situation where, if the researchers truly believe their study is necessary, they have to:

  • accept that negative publicity will result
  • accept that people may stop cooperating with them on this work
  • accept that their reputation will suffer as a result
  • ensure that they won't do anything illegal

After that, if they still feel their study is necessary, maybe they should run it and publish the results.

If some eager redditors then start sending death threats, that's unfortunate. I would catalogue them, but not report them anywhere unless something actually happens.

As for the question of whether a tailor-made response considering someone's background can sway opinions better - that's been obvious through ages of diplomacy. (If you approach an influential person with a weighty proposal, it has always been worthwhile to know their background, think of several ways of how they might perceive the proposal, and advance your explanation in a way that relates better with their viewpoint.)

AI bots which take into consideration a person's background will - if implemented right - indeed be more powerful at swaying opinions.

As to whether secrecy was really needed - the article points to other studies which apparently managed to prove the persuasive capability of AI bots without deception and secrecy. So maybe it wasn't needed after all.

this post was submitted on 03 May 2025
934 points (100.0% liked)
