You know what else costs a fraction of traditional polling and takes a fraction of the time?
Lying, making shit up. Which conveniently is basically what AI slop does, and having a person lie is even cheaper than licensing some random AI to do it.
Ah, but licensing the AI lets you blame the AI when it inevitably blows up
That's exactly it. The AI gives them an excuse to blame someone else, even though they had every reason to know, and they did know. We all know they know, but the courts pretend they didn't because of the Federalist Society.
That bullshit line of thinking only remotely works if you let AI make the decisions from the beginning. Somebody still made the decision to ask AI for bullshit stats. That's the problem here, the human decisions, not necessarily the AI output.
Ah, yes, lies.
Technically, these are damned lies because they've been summarized.
Is there a community for cataloging tech bullshit like "silicon sampling"? If not I'll make one. First thought c/techbrobabble
EDIT: 'tis done !techbrobabble@piefed.blahaj.zone
I hope you're ready to be the only poster until the community catches on, and then that you're ready to play moderator. best of luck
Thanks! This was a spur-of-the-moment decision, but I've thought about moderating a comm for a while. This seems like a good candidate to start with, since I have a decades-long back catalogue of tech nonsense to draw from. I'll probably sit down and write out a list sometime this week and plan some posts. At one post a day I bet I could keep it going myself for a few months, before even taking into account the eternal firehose of techbrobabble I drink from every day lol.
That's fucking stupid.
You beat me to it. I was going to comment that this is literally the stupidest shit I have ever heard in my goddamn life.

I could go even cheaper by just thinking about it really hard and guessing
honestly it'd probably be better unless you're actively hallucinating bullshit
“opinions” formed from a mix of stolen books and movie scripts, terminally online shutins and fanfic writers, and politics comment sections cannot be considered a holistic look at humanity.
We’re absolutely going extinct. I’m out of hope at this point.
I have an environment friendly alternative to this method. It involves tea leaves…
It costs less to make shit up that "mimics" the real information. Who would have thought
Misleading title. Axios did not do this; rather, they referenced a study that they later discovered did this.
It's on them for not catching this sooner, but let's not act like they're the ones who set it up to try and manipulate political reporting.
Cool, so everything is just fucking made up now. Why even bother with the AI at that point? Just make up stats that say what you want right there on the spot. It's the same fucking difference. Bullshit from humans or bullshit from AI, it's all still bullshit.
What the actual fuck??

Holy fuck. It can simulate large samplings or it can just hallucinate some nonsensical BS that completely misinterprets the data it gathers in order to agree with the phrasing of the person who created the prompt.
Do the majority of people trust their doctors and nurses? Maybe. Or, maybe it depends on the context of the question.
Do I trust my doctors and nurses are a better source of information than random internet advice and AI generated slop? I would hope so.
Do I trust that the American healthcare system is set up to prioritize the health and well-being of the patient over maximizing profits and forcing healthcare workers to adhere to standardized time allotments of 10 to 15 minutes for every patient interaction regardless of the individual case? Absolutely not.
Un-fucking-believable that a legitimate media outlet would do such a thing. That's some Breitbart shit.
Sure, our polls mean nothing, but think about the money you can save!!!
Sadly, I'm not too surprised. Check this shit out, published back in November 2025: https://arxiv.org/pdf/2510.25137.
"We simulated 151 million American workers [using LLMs] to see what proportion of tasks they do that can also be done by AI".
Much more recently, Esquire couldn't get ahold of an actor for an interview and so decided to generate the actor's responses using Claude: https://esquiresg.com/mackenyu-one-piece-roronoa-zoro-interview/.
We had the photospread, but nothing directly uttered by the 29-year-old. With a driving need for a feature, we had to be inventive. Harnessing our creative license, we pulled his verbatim from previous interviews and fed them through an AI programme to formulate new responses.
Are these the words we expect from Mackenyu? Or are they just replies from an echo chamber of celebrity-hood that we want to believe is from him?
With the absence of information, can new insights be gained?
Nature abhors a vacuum, and in its place, a story fills the hollow.
Somehow it is currently accepted by a certain portion of people that LLM-based systems can be used to replace actually existing human beings.
Doing an interview with an LLM trained on a real person feels like libel.
What the absolute fuck. Even if I was pro-AI, I'd find this to be incredibly unethical.
Lately, I've had some coworkers empowered by AI in really cool ways, building mockups using code they can't personally write to present to me, an actual engineer. Not with the expectations of a final product, but to express their thoughts and ideas and how they would envision a project moving forward. I think that's a really cool and exciting use of AI, allowing non-technical people to better communicate with technical people.
Then I see crap like this and think "we need to burn this shit down immediately."
Ever notice that they're just doing what Clavicular or whatever his name is doing? They're inventing lingo to make it sound like it's not bullshit.
It's not hitting yourself with a hammer, it's looksmaxxing. It's not standing around being a dork, it's mogging. It's not the chat scrollback, it's a "context window". It's not asking ChatGPT, it's "silicon sampling".
They're making it seem legit by giving it its own terminology and in-group lingo.
Making shit up, but with extra steps.
The ideal would be that clients who actually want useful information will stop paying the pollsters for their useless crap.
The reality will be that slack will be more than picked up by people who want sham poll results to back up their agenda.
Polls have always been leveraged as a form of propaganda.
We had Push Polling from back during the early Bush Era, where the ostensible polling cold call was just a marketing tool. We had "Unskewed Polls" during the Obama/Romney election, wherein Republicans tried to insist they were far more popular in order to influence everyone else through bandwagon appeal. Polling about Transgender Athletes was used as an excuse to dismantle civil protections for the LGBTQ community. Polls online are used to gather information on the public through responses and attendant metadata. Call-in shows are a form of engagement bait.
You can talk about the useful information gleaned from a public survey. But by and large, we only take polls when we want to change people's opinions. It's the first step in market research that ends with a blizzard of advertisements.
This might be the dumbest fucking thing I've ever heard.

You wouldn’t know my survey results, she goes to another school
Kinda sounds like numbers pulled out of your ass…
The thing is, the distribution of opinions (or of the individual situations and beliefs that lead to those opinions) was baked into the model when its training data was captured. So at best, even if the entire principle of the thing works (which hasn't been mathematically proven in any way, shape, or form), they're still only getting poll results for the past, results that won't change beyond some random noise until the next time data is captured and the model is retrained.
It's like repeatedly using an old picture of a street to make realtime claims about the traffic there.
It's worse than that, really, as the training data has been shown to be heavily siloed (think a subreddit with heavy moderation, or a Facebook group). So it's like using an old, biased propaganda photo of a street to make realtime claims about the traffic there.
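The frozen-distribution point above can be sketched as a toy simulation. Everything here is hypothetical: the "model" is just a fixed answer distribution standing in for whatever opinions were baked in at training time, and the numbers are made up for illustration.

```python
import random
from collections import Counter

random.seed(0)

# A "model" frozen at training time is, for polling purposes, just a fixed
# distribution over answers, captured when the training data was.
training_distribution = {"approve": 0.60, "disapprove": 0.40}  # snapshot at cutoff

def silicon_poll(n=1000):
    """Sample n 'respondents' from the frozen model."""
    answers = random.choices(
        list(training_distribution),
        weights=list(training_distribution.values()),
        k=n,
    )
    return Counter(answers)

# Meanwhile, real opinion can drift after the cutoff; the model never sees it.
real_distribution_today = {"approve": 0.45, "disapprove": 0.55}

poll = silicon_poll()
print(poll["approve"] / 1000)  # hovers near 0.60 (the past), not 0.45 (the present)
```

However many times you "re-poll", you only ever get the training-time snapshot plus sampling noise; the present-day shift is invisible until the model is retrained on new data.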
So, use AI to just make shit up, then report that as information? I wouldn't expect anything less in our post-truth era. However, come on Axios, I thought you were better than that.
I've read a lot of fucked up shit in the last few years - that's the first time I've thrown my phone in response!
(My phone's fine - I just threw it onto the sofa beside me, but still..)
Here, have one of these:

Sorry, wrong pic:

Well, this might go a ways toward explaining why polling has been so off in the last few years.
What could possibly go wrong?