Our work is based on science and trust, on our ability to communicate reliable information—and on having people believe that information is real.
We help Tanzanians change their behaviors to protect themselves from malaria by laying out the advantages of sleeping under mosquito nets each night. We help Nigerians adopt modern family planning by underscoring the health benefits of spacing their children. We help Zambians protect themselves from cholera by explaining the importance of clean water and oral vaccines.
Here in the US we know about the spread of misinformation, with bots sharing untruths, claims of “fake news” from the highest levels and an unhealthy skepticism of scientific concepts such as climate change. Around the world, however, the spread of misinformation has started to turn deadly.
False rumors about child kidnappers have gone viral on WhatsApp in India, prompting fearful mobs to kill two dozen innocent people since April, according to The New York Times. In Brazil, messages on WhatsApp falsely claimed a government-mandated yellow-fever vaccine was dangerous, leading people to avoid it. Violence toward Muslims in Sri Lanka earlier this year was a direct result of social media postings.
What is fascinating (and horrifying) is that those pushing this misinformation are using some of the same techniques we do. We want our health information—vetted, accurate, current health information—to go viral, to get into trusted closed networks like WhatsApp so that people can influence their friends and families in their decisions to adopt healthy behaviors. We spend time and money thinking about how to do just that—conducting formative research, strategizing, doing network analysis.
Making matters worse, falsehood consistently dominates the truth on Twitter, according to a study by MIT researchers published this year in the journal Science. Fake news and false rumors reach more people, penetrate deeper into the social network and spread much faster than accurate stories.
What we are seeing here is that the same tools that we use for good can just as easily be used for harm.
This puts those of us in the communication business in a bind: Truth isn’t as sticky as fiction. In many cases, it goes against what people want to believe. And often it competes with misinformation, which is typically far more interesting than anything the truth has to offer.
So what can we do?
Technology companies are trying to come up with solutions—vetting news, shutting down bots and even creating WhatsApp hotlines in some countries where people can forward questionable content to be debunked.
On our end, we need to both change and stay the same. We need to understand and harness the lightning-fast spread of information, just as people spreading false information do. At the same time, we should take a breath and remember that we are seeing a new face of an old problem: Misinformation has always been a challenge, and we have theories to deal with it. What is new is the technology and its speed and reach.
What does our experience tell us? Refute bad information without repeating it; repetition does nothing but reinforce the original lie. People believe the people they trust, so continue to enlist trusted community members (imams, pastors, mothers) as ambassadors of truth. In-person interactions—and repeated interactions—will always be powerful.
This is our new normal. We know what works to influence behavior, and we must adapt our tools to this fast-changing world.
Susan Krenn is executive director of the Johns Hopkins Center for Communication Programs.