If your instincts say a lot of images on Facebook are misleading, you’re right. AP Photo/Jenny Kane

Visual Misinformation Is Widespread on Facebook, Often Undercounted

How much misinformation is on Facebook? Several studies have found that the amount is low or that the problem has declined over time.

This previous work, though, missed most of the story.

We are a communications researcher, a media and public affairs researcher and a founder of a digital intelligence company. We conducted a study that shows that other studies have overlooked massive amounts of misinformation. The biggest source of misinformation on Facebook is not links to fake-news sites but something more basic: images. And a large portion of posted pictures are misleading.

For instance, on the eve of the 2020 election, nearly one out of every four political image posts on Facebook contained misinformation. Widely shared falsehoods included QAnon conspiracy theories, misleading statements about the Black Lives Matter movement and unfounded claims about Joe Biden’s son Hunter Biden.

Visual Misinformation by the Numbers

Our study is the first large-scale effort, on any social-media platform, to measure the prevalence of image-based misinformation about U.S. politics. Image posts are important to study, in part because they are the most common type of post on Facebook, accounting for roughly 40% of all posts.

Previous research suggests that images may be especially potent. Adding images to news stories can shift attitudes, and posts with images are more likely to be reshared. Images have also been a longtime component of state-sponsored disinformation campaigns, like those of Russia’s Internet Research Agency.

We went big, collecting more than 13 million Facebook image posts from August 2020 through October 2020, from 25,000 pages and public groups. Audiences on Facebook are so concentrated that these pages and groups account for at least 94% of all engagement—likes, shares, reactions—for political-image posts. We used facial recognition to identify public figures, and we tracked reposted images. We then classified large, random draws of images in our sample, as well as the most frequently reposted images.

Overall, our findings are grim: 23% of image posts in our data contained misinformation. Consistent with previous work, we found that misinformation was unequally distributed along partisan lines. While only 5% of left-leaning posts contained misinformation, 39% of right-leaning posts did.

The misinformation we found on Facebook was highly repetitive and often simple. While plenty of images had been doctored in misleading ways, altered images were outnumbered by memes with misleading text, screenshots of fake posts from other platforms, and unaltered images presented in misleading contexts.

For example, a picture was repeatedly posted as “proof” that now-former Fox News anchor Chris Wallace was a close associate of sexual predator Jeffrey Epstein. In reality, the gray-haired man in the image is not Epstein but actor George Clooney.

There was one piece of good news. Some previous research had found that misinformation posts generated more engagement than true posts. We did not find that. Controlling for page subscribers and group size, we found no relationship between engagement and the presence of misinformation. Misinformation didn’t guarantee virality, but it also didn’t diminish the chances that a post would go viral.

But image posts on Facebook were toxic in ways that went beyond simple misinformation. We found countless images that were abusive, misogynistic or simply racist. Nancy Pelosi, Hillary Clinton, Maxine Waters, Kamala Harris and Michelle Obama were the most frequent targets of abuse. For example, one frequently reposted image labeled Kamala Harris a “‘high-end’ call girl.” In another, a photo of Michelle Obama was altered to make it appear that she had a penis.

Yawning Gap in Knowledge

Much more work remains to be done in understanding the role visual misinformation plays in the digital political landscape. While Facebook remains the most used social-media platform, more than a billion images a day are posted on Facebook’s sister platform, Instagram, and billions more on rival Snapchat. Videos posted on YouTube, or on the more recent arrival TikTok, may also be an important vector of political misinformation about which researchers still know too little.

Perhaps the most disturbing finding of our study, then, is that it highlights the breadth of collective ignorance about misinformation on social media. Hundreds of studies have been published on the subject, but until now researchers have not understood the biggest source of misinformation on the largest social media platform. What else are we missing?

Yunkang Yang is an assistant professor of communication at Texas A&M University; Matthew Hindman is a professor of media and public affairs at George Washington University; and Trevor Davis is a fellow at the Tow Center for Digital Journalism at Columbia University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.
