submitted 7 months ago* (last edited 7 months ago) by Five@slrpnk.net to c/science@slrpnk.net

Among those who shared any political content on Twitter during the election, fewer than 5% of people on the left or in the center ever shared any fake news content, yet 11% and 21% of people on the right and extreme right, respectively, did.

Grinberg, N., Joseph, K., Friedland, L., Swire-Thompson, B., & Lazer, D. (2019). Fake news on Twitter during the 2016 U.S. presidential election. Science, 363(6425), 374–378. doi:10.1126/science.aau2706

top 18 comments
[-] Jesusaurus@lemmy.world 37 points 7 months ago

Also worth noting that the X axis is growing by orders of magnitude and not linearly.

[-] Honytawk@lemmy.zip 8 points 7 months ago

Did my best with the information I had. Which was basically only the graph itself.

[-] grue@lemmy.world 7 points 7 months ago

Ironically, the misleadingly biased visualization makes this tantamount to fake news.

[-] Five@slrpnk.net 17 points 7 months ago* (last edited 7 months ago)

It's not even close to fake news. Logarithmic scales are standard in this kind of visualization. The thrust of the result is that right-wing people share more fake news, and if you look at the graph, this is clear. If you mistake the X-axis as a linear scale, the result makes the effect less pronounced, not more.

So if anything, the graph undersells the thesis in the name of creating a more compact and readable visualization. There is no deception here.

[-] grue@lemmy.world 5 points 7 months ago

> If you mistake the X-axis as a linear scale, the result makes the effect less pronounced, not more.

Exactly, and that's the problem! When the chart makes it look like the right "only" shares maybe twice as much fake news when it's actually 10x-100x more, it makes the right look way less bad than it actually is.
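
The point about log axes can be made concrete with a few lines of Python. The exposure counts below are made up for illustration, not taken from the paper: on a log10 axis, equal on-screen distances correspond to equal *ratios*, so a gap that looks modest can hide a 100x difference.

```python
import math

# Hypothetical exposure counts, for illustration only (not from the paper).
left_exposure = 10
right_exposure = 1000

# On-screen distance along a log10 axis is proportional to the
# difference of the logs, i.e. the log of the ratio:
visual_gap = math.log10(right_exposure) - math.log10(left_exposure)

# The underlying multiplicative difference:
actual_ratio = right_exposure / left_exposure

print(visual_gap)    # 2.0 "decades" of on-screen distance
print(actual_ratio)  # a 100x difference in reality
```

A reader who assumes the axis is linear sees the 2-decade gap as "about twice as much," when the data actually differ by a factor of 100.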

[-] fogstormberry 5 points 7 months ago

there's also superconsumer and supersharer on the "political right" side of the chart causing a visual bias

[-] grue@lemmy.world 2 points 7 months ago

I'm less upset about those, but I agree that it would be nice to have a vertical gap between them and the ideological clusters above to make it clearer that they're orthogonal categories of grouping.

[-] Dippy@beehaw.org 26 points 7 months ago

There is a lot happening on that graph with not nearly enough metrics to tell you what it's presenting

[-] pendulous@lemmy.world 14 points 7 months ago

Isn't this "Who is exposed to fake news sources", not "who shared fake news sources"?

[-] Iceblade02@lemmy.world 1 point 7 months ago

Yes, the irony of mislabelling data about misinformation is fun.

[-] LibertyLizard@slrpnk.net 9 points 7 months ago

This is just referring to completely fabricated stories right? I assume very biased stories are a lot more common.

[-] ChicoSuave@lemmy.world 9 points 7 months ago

Sure does look like gullibility is a factor in politics.

[-] homesweethomeMrL@lemmy.world 4 points 7 months ago

George Soros told you to say that didn’t he?!

[-] OlPatchy2Eyes@slrpnk.net 7 points 7 months ago

What are "superconsumers" and "supersharers?" Are those politically neutral terms, or are they further extensions to the right, as the graphs seem to imply?

[-] Five@slrpnk.net 4 points 7 months ago

Yes, they are suspected right-wing bots, separated from the dataset based on a set of criteria that marks them as outliers.

> The “supersharers” and “superconsumers” of fake news sources—those accountable for 80% of fake news sharing or exposure—dwarfed typical users in their affinity for fake news sources and, furthermore, in most measures of activity. For example, on average per day, the median supersharer of fake news (SS-F) tweeted 71.0 times, whereas the median panel member tweeted only 0.1 times. The median SS-F also shared an average of 7.6 political URLs per day, of which 1.7 were from fake news sources. Similarly, the median superconsumer of fake news sources had almost 4700 daily exposures to political URLs, as compared with only 49 for the median panel member (additional statistics in SM S.9). The SS-F members even stood out among the overall supersharers and superconsumers, the most politically active accounts in the panel (Fig. 2). Given the high volume of posts shared or consumed by superspreaders of fake news, as well as indicators that some tweets were authored by apps, we find it likely that many of these accounts were cyborgs: partially automated accounts controlled by humans (15) (SM S.8 and S.9). Their tweets included some self-authored content, such as personal commentary or photos, but also a large volume of political re-tweets. For subsequent analyses, we set aside the supersharer and superconsumer outlier accounts and focused on the remaining 99% of the panel.

[-] ssm@lemmy.sdf.org 6 points 7 months ago* (last edited 7 months ago)

Who is determining what is and isn't fake news?

I'd check the paper, but it's paywalled

[-] Matriks404@lemmy.world 2 points 7 months ago

Sounds like fake news to me. /s

this post was submitted on 24 Jun 2024
114 points (100.0% liked)