How much misinformation is on Facebook? Several studies have found that the amount of misinformation on Facebook is low or that the problem has declined over time.

This previous work, though, missed most of the story.

We are a communications researcher, a media and public affairs researcher and a founder of a digital intelligence company. We conducted a study that shows that other studies have overlooked massive amounts of misinformation. The biggest source of misinformation on Facebook is not links to fake-news sites but something more basic: images. And a large portion of posted pictures are misleading.

For instance, on the eve of the 2020 election, nearly one out of every four political image posts on Facebook contained misinformation. Widely shared falsehoods included QAnon conspiracy theories, misleading statements about the Black Lives Matter movement and unfounded claims about Joe Biden’s son Hunter Biden.

Visual Misinformation by the Numbers

Our study is the first large-scale effort, on any social-media platform, to measure the prevalence of image-based misinformation about U.S. politics. Image posts are important to study, in part because they are the most common type of post on Facebook, making up roughly 40% of all posts.

Previous research suggests that images may be especially potent. Adding images to news stories can shift attitudes, and posts with images are more likely to be reshared. Images have also been a longtime component of state-sponsored disinformation campaigns, like those of Russia’s Internet Research Agency.

We went big, collecting more than 13 million Facebook image posts from August 2020 through October 2020, from 25,000 pages and public groups. Audiences on Facebook are so concentrated that these pages and groups account for at least 94% of all engagement—likes, shares, reactions—for political-image posts. We used facial recognition to identify public figures, and we tracked reposted images. We then classified large, random draws of images in our sample, as well as the most frequently reposted images.

Overall, our findings are grim: 23% of image posts in our data contained misinformation. Consistent with previous work, we found that misinformation was unequally distributed along partisan lines. While only 5% of left-leaning posts contained misinformation, 39% of right-leaning posts did.

The misinformation we found on Facebook was highly repetitive and often simple. Plenty of images had been doctored in misleading ways, but posts using altered images were outnumbered by memes with misleading text, screenshots of fake posts from other platforms, and unaltered images that were simply misrepresented.

For example, a picture was repeatedly posted as “proof” that now-former Fox News anchor Chris Wallace was a close associate of sexual predator Jeffrey Epstein. In reality, the gray-haired man in the image is not Epstein but actor George Clooney.

There was one piece of good news. Some previous research had found that misinformation posts generated more engagement than true posts. We did not find that. Controlling for page subscribers and group size, we found no relationship between engagement and the presence of misinformation. Misinformation didn’t guarantee virality, but it also didn’t diminish the chances that a post would go viral.

But image posts on Facebook were toxic in ways that went beyond simple misinformation. We found countless images that were abusive, misogynistic or simply racist. Nancy Pelosi, Hillary Clinton, Maxine Waters, Kamala Harris and Michelle Obama were the most frequent targets of abuse. For example, one frequently reposted image labeled Kamala Harris a “‘high-end’ call girl.” In another, a photo of Michelle Obama was altered to make it appear that she had a penis.

Yawning Gap in Knowledge

Much more work remains to be done in understanding the role visual misinformation plays in the digital political landscape. While Facebook remains the most used social-media platform, more than a billion images a day are posted on Facebook’s sister platform, Instagram, and billions more on rival Snapchat. Videos posted on YouTube, or on the more recent arrival TikTok, may also be an important vector of political misinformation about which researchers still know too little.

Perhaps the most disturbing finding of our study, then, is that it highlights the breadth of collective ignorance about misinformation on social media. Hundreds of studies have been published on the subject, but until now researchers have not understood the biggest source of misinformation on the largest social-media platform. What else are we missing?

Yunkang Yang is an assistant professor of communication at Texas A&M University; Matthew Hindman is a professor of media and public affairs at George Washington University; and Trevor Davis is a fellow at the Tow Center for Digital Journalism at Columbia University.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Yunkang Yang (Ph.D., University of Washington, 2020) is Assistant Professor of Communication at Texas A&M University. He is the author of Weapons of Mass Deception: How Right-wing Media Wage Political Warfare and Undermine American Democracy (forthcoming). Dr. Yang is known for his research regarding U.S. right-wing media, disinformation, and social media. In 2022, he provided his research and expert opinions to the U.S. House Select Committee on the January 6th Attack as statements of record. He has been quoted by various media outlets including the New Yorker, The Hill, NPR, and AFP.
Matthew Hindman is a professor of media and public affairs at George Washington University. His latest book is The Internet Trap: How the Digital Economy Builds Monopolies and Undermines Democracy (Princeton University Press, 2018).
Trevor Davis is the founder and CEO of CounterAction, a New Media Ventures portfolio company. His focus is on understanding how to measure and counter online disinformation. His work has been featured in major publications around the world.