In the months after Covid-19 was first identified, the World Health Organization flagged another health threat. The WHO warned Silicon Valley firms of an “infodemic,” with false information spreading faster than the virus itself.
Even as sites like Facebook and Twitter flagged pandemic-related posts and directed users to get the facts from sources like the Centers for Disease Control and Prevention, bots continued to circulate misinformation. According to a study published in JAMA Internal Medicine, these automated accounts shared Covid-19 misinformation at a far greater rate than human users.
“There’s been tremendous change in how social media, legislators and health policy leaders have been addressing this information. They’re taking a more proactive stance,” said John Ayers, the study’s lead author and co-founder of the Center for Data Driven Health at the University of California, San Diego. “That assumes that misinformation is sourced from the ordinary public and we’re trying to have this dynamic exchange with real people, when the reality is — and we should have learned this in 2016 with the presidential election — bots are the potential purveyors of misinformation.”
Researchers identified Facebook groups that were more likely to be affected by bots by measuring how quickly different profiles posted identical links. Bots, for example, tended to share the same link within four seconds of one another. These accounts are less obvious than people might think, especially on Facebook, where a poster’s profile often can’t be viewed when they post to a group.
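The study’s actual detection pipeline isn’t published here, but the timing heuristic the article describes is straightforward to sketch. Below is a minimal Python illustration, using hypothetical record fields (group_id, profile_id, url, timestamp) and the four-second window mentioned above, that flags groups where distinct profiles post an identical link nearly simultaneously. It is an assumption-laden sketch, not the researchers’ method.

```python
from collections import defaultdict

# Hypothetical records: (group_id, profile_id, url, unix_timestamp).
# The 4-second window mirrors the behavior described in the article;
# everything else is illustrative, not the study's actual pipeline.
SECONDS_THRESHOLD = 4.0

def flag_bot_heavy_groups(posts, threshold=SECONDS_THRESHOLD):
    """Return group_ids where different profiles shared an identical
    link within `threshold` seconds of each other."""
    # Bucket share times by (group, link).
    shares = defaultdict(list)  # (group_id, url) -> [(timestamp, profile_id)]
    for group_id, profile_id, url, ts in posts:
        shares[(group_id, url)].append((ts, profile_id))

    flagged = set()
    for (group_id, url), events in shares.items():
        events.sort()  # order shares of this link by timestamp
        # Compare consecutive shares: a near-instant repost of the same
        # link by a *different* profile suggests coordinated automation.
        for (t1, p1), (t2, p2) in zip(events, events[1:]):
            if p1 != p2 and (t2 - t1) <= threshold:
                flagged.add(group_id)
                break
    return flagged

# Example: two profiles post the same link 2 seconds apart in group "g1".
posts = [
    ("g1", "botA", "https://example.org/mask-study", 1000.0),
    ("g1", "botB", "https://example.org/mask-study", 1002.0),
    ("g2", "alice", "https://example.org/mask-study", 1000.0),
    ("g2", "bob", "https://example.org/mask-study", 1900.0),
]
print(flag_bot_heavy_groups(posts))  # {'g1'}
```

A real system would also have to discount coincidental fast re-shares in very large groups, so a fixed threshold like this works best as one signal among several rather than a definitive test.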
Specifically, the researchers tracked the sharing of an oft-misquoted Danish study on how well masks protect the wearer from Covid-19. Of the 712 posts that linked to the study, about 40% were shared in the Facebook groups most affected by automation.
According to the study, posts shared in groups where bots predominated were more than twice as likely as posts in other groups to claim falsely that masks harm the wearer or to push conspiracy theories.
For example, half of the posts in bot-affected groups that shared the link promoted conspiracies, such as, “corporate fact checkers are lying to you! All this to serve their Dystopian #Agenda2030 propaganda.”
The researchers did not know who was behind the bots or what their motives were, but most of the posts were designed to be “clickable” and inflammatory.
“It’s clearly malicious, but we don’t know the underlying motivation,” Ayers said in an interview. “What they’re really doing is undermining evidence-based medicine generally. … That’s a dangerous place to be in and a universal issue.”
While this study focused on misinformation about masks, the problem doesn’t end there. Dr. Davey Smith, co-author of the study and chief of infectious diseases at UC San Diego, postulated that automated misinformation could also foster vaccine hesitancy or amplify anti-Asian discrimination.
A separate study, published in the American Journal of Public Health in 2018, found that Twitter bots and known Russian troll accounts shared more polarizing tweets about vaccines.
These bots distort social norms: people feel more comfortable saying something when they see other people saying it too, Ayers said.
“That’s how bots work, they violate that principle,” he said. “People who reshare this content or engage with this content may themselves be skeptical of it, but because of the bots, become more free to start to verbalize it or even start to believe it.”
The good news is that publicly available tools can identify bots, and social media companies are able to detect these campaigns. From Ayers’ perspective, doing so would be easier than evaluating and removing posts one by one.
“If you want to change the infodemic, you’ve got to start with bots,” he said. “It has a spillover effect everywhere.”