On Thursday morning, Facebook announced several new policies to wrangle misinformation on its platforms ahead of the November election. Among them: limiting the number of people or groups you can forward a message to at one time on Messenger. For a glimpse of whether that might work—and how well—you needn’t look further than another Facebook-owned company: WhatsApp.
Restricting Messenger forwards is just one of several tools that Facebook has rolled out to combat misinformation, and it barely made an appearance in the company’s press release. But it’s also one of the only measures with an established track record, albeit an opaque one. More important, it’s one of the few steps Facebook can take without sparking accusations of political bias from either side.
In 2018, misinformation ran rampant on WhatsApp, and it was linked to deadly consequences in countries like India, where the messaging app is the de facto means of online communication. Because WhatsApp is end-to-end encrypted by default, the platform can’t know the contents of messages as they propagate throughout its ecosystem. But it could at least slow the spread. That July, WhatsApp reduced the number of accounts that you could forward a message to, from 256 to 20 for most people. In January 2019, it trimmed that number again, to 5.
That’s the playbook Facebook is emulating with Messenger, lopping the maximum number of forward recipients from 150 down to 5. “We’ve already implemented this in WhatsApp during sensitive periods,” Mark Zuckerberg wrote in a Facebook post outlining Thursday’s changes, “and have found it to be an effective method of preventing misinformation from spreading in many countries.”
Which is probably the case! WhatsApp did manage to cut the total number of forwarded messages on its platform globally by 25 percent after that first round of changes. And stricter limits, instituted in April, on “highly forwarded messages”—anything that has been routed through five or more people before it reaches you—have curtailed those nuclear-grade viral chains by 70 percent. “The limits we have put in place at WhatsApp over the last two years have certainly reduced the spread of forwarded messages,” says WhatsApp spokesperson Carl Woog. “It would be difficult for us to say with certainty it reduces misinformation ‘only’—the user feedback we’ve gotten is that it also reduces sharing of harmless memes like ‘good morning’ messages.”
In other words, limiting forwards is a blunt instrument. “Measuring the impact of misinformation and disinformation on messaging apps with accuracy is close to impossible at the moment,” says Irene Pasquetto, cofounder of the Harvard Kennedy School Misinformation Review. “Especially on WhatsApp, given that all content is encrypted and we have no access to the data.”
That encryption has unquestionable, and essential, benefits for the privacy and security of billions of people. It also contributes to what Rutgers professor Britt Paris has dubbed “hidden virality”: content that gets passed around in private groups and messages, outside of the public eye. “The little data we have on misinformation is what we get from publicly available and open source intelligence,” says Cristina Lopez, a senior research analyst at the nonprofit Data & Society who focuses on disinformation. “When you think about ‘Plandemic,’ and the way that was amplified so quickly publicly, it makes me shudder to think what that looked like privately. We were not able to measure that scale; there’s a chance that privately the spread started way before we were able to notice.”
Limiting Messenger forwards won’t shed any more light on what kind of content traverses those corridors, or how it spreads. It’s just playing the odds that it’ll slow the process down. At least one recent study indicates that it’ll work. Last fall, researchers from the Federal University of Minas Gerais in Brazil used data sets comprising posts from public WhatsApp groups in India, Indonesia, and Brazil to track the spread of messages and images—and to model what impact forwarding limits have on their spread.
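The intuition behind that kind of model can be sketched as a simple branching process: each recipient re-forwards a message with some probability, and the forwarding cap bounds how many new people each forward can reach. The snippet below is a toy illustration only—the share probability, hop count, and seed size are made-up assumptions for demonstration, not figures from the Minas Gerais study. The key quantity is the per-hop reproduction number, the share probability times the forwarding cap, which determines whether a cascade fizzles or explodes.

```python
def expected_reach(forward_limit, share_prob=0.1, max_hops=6,
                   seed_recipients=10.0):
    """Expected audience of a message under a forwarding cap.

    Toy branching-process model: every recipient re-forwards with
    probability `share_prob`, reaching `forward_limit` new people per
    forward. All parameter values are illustrative assumptions, not
    numbers from the study.
    """
    current = seed_recipients
    total = current
    for _ in range(max_hops):
        # Each hop multiplies the audience by the reproduction number,
        # share_prob * forward_limit.
        current *= share_prob * forward_limit
        total += current
    return total
```

Under these assumed parameters, a 256-recipient cap gives a reproduction number of 25.6 and explosive growth, while a cap of 5 gives 0.5, so each hop reaches fewer people than the last and the cascade dies out—which is the mechanism the forwarding limits bet on.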