379 points · submitted 2 days ago* (last edited 2 days ago) by Jankatarch@lemmy.world to c/fuck_ai@lemmy.world
37 comments
[-] merc@sh.itjust.works 15 points 1 day ago

Axios updated the story:

Editor's note: This story has been updated to note that Aaru is an AI simulation research firm.

But it still stands by the claim:

New findings by Aaru, an AI simulation research firm, for Heartland Forward show that a majority of people trust their own doctors and nurses

What kind of bullshit "fact checking" is this?

"New findings by Smegma, an Xbox chatroom research firm, show that your mother is a woman of loose morals who has had sexual intercourse with dozens of Xbox gamers."

[-] DragonTypeWyvern@midwest.social 55 points 2 days ago

"the idea is tantalizing"

No the fuck it isn't, and that's not even a Fuck AI type opinion, just basic fucking scientific principles.

[-] WhatAmLemmy@lemmy.world 8 points 2 days ago

Lying, cheating, stealing, exploitation and propaganda all sound "tantalizing" when you're a criminally corrupt sociopath.

We're just lucky capitalism doesn't reward sociopaths with wealth and power /s

[-] Clent@lemmy.dbzer0.com 64 points 2 days ago

In related news, a recent study concluded that I am not just the smartest person in the universe but also the smartest that has ever been or ever will be.

[-] Thunderbird4@lemmy.world 4 points 1 day ago

You’re absolutely right!

✅ Here’s why it matters:

[-] kittenzrulz123@lemmy.dbzer0.com 24 points 2 days ago

I recently did an AI study that concluded that I am not only the cutest catgirl on Lemmy but also deserve free unlimited hrt :3

[-] Tiresia@slrpnk.net 7 points 2 days ago

Well, a broken clock is right twice a day.

[-] s38b35M5@lemmy.world 5 points 2 days ago

Your typos and use of commas betray you, fake study

[-] Clent@lemmy.dbzer0.com 5 points 2 days ago
[-] FederatedFreedom1981@lemmy.ca 3 points 2 days ago

ALL HAIL DONALD TRUMP

[-] ICastFist@programming.dev 16 points 2 days ago

It's ironic that the survey companies, which I thought wanted to avoid noise and bullshit, would pay for noise and bullshit that any RNG could generate.

[-] hansolo@lemmy.today 10 points 2 days ago

Yes, but how much of the training data is synthetic data? Because I expect this startup has no idea. Microsoft uses ML to crawl files on OneDrive to build aggregate models of document types, then uses that for LLM training.

It's just all slop all the way down, huh? Just a fuzzy picture of a fuzzy picture hit with the "sharpen" filter 20 times?

[-] Jankatarch@lemmy.world 26 points 2 days ago* (last edited 2 days ago)

Alt text.

A recent Axios story on maternal health policy referenced "findings" that a majority of people trusted their doctors and nurses. On the surface, there's nothing unusual about that. What wasn't originally mentioned, however, was that these findings were made up.

Clicking through the links revealed (as did a subsequent editor's note and clarification by Axios) that the public opinion poll was a computer simulation run by the artificial intelligence start-up Aaru. No people were involved in the creation of these opinions.

The practice Aaru used is called silicon sampling, and it's suddenly everywhere. The idea behind silicon sampling is simple and tantalizing. Because large language models can generate responses that emulate human answers, polling companies see an opportunity to use A.I. agents to simulate survey responses at a small fraction of the cost and time required for traditional polling.
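For a sense of the mechanics, here is a minimal sketch of the generation step. To be clear, the persona, question, model name, and use of the OpenAI Python client below are illustrative assumptions, not anything known about Aaru's actual pipeline:

```python
# Minimal "silicon sampling" sketch: condition an LLM on a demographic
# persona, then ask it the survey question as if it were a respondent.
# Everything below (persona, question, model) is an invented example.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

persona = "a 42-year-old mother of two from rural Arkansas"  # invented
question = (
    "How much do you trust your own doctor's medical advice: "
    "a lot, somewhat, or not at all?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # stand-in model name
    messages=[
        {"role": "system", "content": f"Answer as {persona} taking a survey."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
# Repeat this over thousands of sampled personas and you have a "poll"
# in which no actual person was asked anything.
```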

[-] WorldsDumbestMan@lemmy.today 8 points 2 days ago

I instantly thought "fuck no, this can't be true", then read the AI part.

[-] Tarogar@feddit.org 22 points 2 days ago

They were so busy thinking about the fact that they could that they didn't stop to think if they should. How much of an idiot can you be?

[-] Burninator05@lemmy.world 1 points 1 day ago

I don't know that Axios was ever the most trustworthy source out there, but if they're doing this, then less trustworthy sources are also doing it.

[-] ech@lemmy.ca 15 points 2 days ago
[-] dadarobot@lemmy.ml 11 points 2 days ago

Wasn't it Axios that had that controversy recently where some GitHub admin ended up in a flame war with an AI, and Axios made up quotes?

Or was that someone else?

[-] BluesF@lemmy.world 6 points 2 days ago* (last edited 2 days ago)

I was interested in this idea, because although LLMs are not good at many things, what they absolutely are good at is taking large data sets of writing and finding a kind of "average" of that data. I can understand why this would make sense. I think it's a situation where the further you go from the training set the less reliable your "silicon sample" will be, because it has less and less relevant information to draw from, but I can also kind of see it working in some circumstances.

So, anyway, I have done a little research into this and the concept does show some definite promise. I think this is the study that kicked off the concept, and their results are quite impressive. GPT-3 manages to be close to human respondents on a variety of topics and in a variety of contexts (guessing preferences, tone, word choices, etc).

There are some issues I don't see addressed:

  • The evaluation is necessarily on data that is available, and it's unclear whether they've determined if that data existed in GPT-3's training set. Obviously if it did, this would somewhat poison the results as it would "know" the answers ahead of time.
  • The evaluation is limited to the US and to "public opinion" topics; outside those I can't find further evidence that this works at all. While the paper does include methods they used to correct for default biases in GPT-3, it all remains within this fairly narrow context.
  • Because much of the data is qualitative, some of the methods used to evaluate the fidelity of the model are somewhat unreliable (e.g. surveying humans and having them gauge the model's output). To be fair, this is in many cases inherent to the nature of psychological research rather than LLMs, but it makes trusting the results more difficult.

One important part from the article:

These studies suggest that after establishing algorithmic fidelity in a given model for a given topic/domain, researchers can leverage the insights gained from simulated, silicon samples to pilot different question wording, triage different types of measures, identify key relationships to evaluate more closely, and come up with analysis plans prior to collecting any data with human participants.

"Algorithmic fidelity" is a term that I think they have coined in this paper, it refers to how accurately the model reflects the population you are sampling. Roughly what they suggest is - take a known dataset of the population you want to assess, in the general area you are researching, and compare the real results of that with the LLM results. If this is successful you have an indication that the model can predict the population/area of interest, and you can adjust your questions to your specific topic. They don't really highlight enough that without this your results could just be completely bogus. Who knows what this company Aaru are doing.

I do think this is quite an interesting and potentially promising use of the technology. Despite the fact that it might on the surface seem to be just "inventing" data, in a way the LLM has already surveyed many more heads than any "real" survey could ever hope to. I would like to see more research before being sure of any of this, though; I'm certainly going to continue reading about it to see what limitations there are beyond my first assumptions. GPT-3 is not the latest model, and I wonder about how much AI-generated content is out there now... Are the later generations of models starting to eat their own tails? There's obvious manipulation of online conversations through bots; could someone poison the well in this way and cause these "surveys" to produce skewed results?

[-] jaredwhite@humansare.social 8 points 2 days ago

No, even in the absolute best case scenario, the LLM analysis is a trailing indicator. There's no way that it indicates current views, just possibly an indication of past views.

Personally I think this entire line of thinking ("silicon sampling") is dangerous af.

[-] BluesF@lemmy.world 2 points 1 day ago

That's a good point, although I imagine a dedicated company could refine a model using more recently sampled general data to keep it closer to current views.

[-] jaredwhite@humansare.social 2 points 1 day ago

Yeah, I'm not saying a tool akin to LLMs can't be used as part of a suite of software workflows for parsing through and analyzing large datasets (seems rather obvious to say that), but forgoing the real work of live data gathering and statistics evaluation in order to do a sort of "vibe polling" sounds extremely off to me.

[-] BluesF@lemmy.world 1 points 8 hours ago

I agree, which is why I find the results they got interesting. The fact that the initial study was able to predict real results, arguably quite correctly (well, it's debatable whether it was correct; as I pointed out, their results are not the easiest to evaluate), is pretty impressive.

[-] okamiueru@lemmy.world 5 points 1 day ago* (last edited 1 day ago)

I'm eagerly waiting more studies on AI psychosis. Make sure to participate if you get the chance.

[-] BluesF@lemmy.world 1 points 1 day ago

I think I was overall pretty critical of the idea? I just find it interesting.

[-] FearMeAndDecay@literature.cafe 3 points 2 days ago

It seems like the kind of thing that could eventually be useful for helping survey companies figure out how to word surveys and which surveys are even worth doing for a given group, rather than replacing the surveys themselves. Unfortunately, it seems like the companies currently just want to replace the actually useful product with AI slop, as per usual.

[-] BluesF@lemmy.world 2 points 1 day ago

Yes, it can obviously never entirely replace real surveys. I would assume that survey results forming part of the training set is a big part of why they're able to get good results in the first place, and as I said, I think there's a significant risk that when the evaluation is done, the model performs well because the data being evaluated against are (unbeknownst to the researcher) present in the training set.

[-] nieceandtows@programming.dev 4 points 2 days ago
[-] JackbyDev@programming.dev 2 points 2 days ago

What an interesting but absolutely horrible idea.

[-] Retail4068@lemmy.world 4 points 2 days ago* (last edited 2 days ago)

I'm still convinced Axios made up the "truck owners don't use their shit right" story back in 2018, and it caused 75% of the internet's hate for trucks. To this day, after asking repeatedly, I still have not found a single lick of evidence outside one of their hit pieces.

[-] michaelmrose@lemmy.world 2 points 1 day ago

Most people never use the hauling capabilities and use their trucks as a worse car

[-] Retail4068@lemmy.world 1 points 1 day ago

Cite one source. I bet when you Google whatever random website to support your already-made view, it leads back to nowhere or to Axios.

[-] michaelmrose@lemmy.world 1 points 1 day ago

I see very little hauling, TONS of trucks in the city, and trucks parked at people's office jobs. It's kind of painfully obvious.

[-] Retail4068@lemmy.world 1 points 9 hours ago

I thought today was the day somebody might have an ounce of data instead of regurgitating retarded observations with biases and not a metric in sight. Not today I guess.

[-] michaelmrose@lemmy.world 1 points 52 minutes ago

It is literally evident by looking at the people all around you. My mom, who never hauled anything, had one. Come on, it's painful.

https://www.powernationtv.com/post/most-pickup-truck-owners-use-them

[-] atopi@piefed.blahaj.zone 1 points 1 day ago* (last edited 1 day ago)