October 15, 2020

World Health Organization Director-General Dr. Tedros Adhanom Ghebreyesus warned of an “infodemic” in February 2020. (Fabrice Coffrini/REUTERS)

At least one million people have died from coronavirus disease 2019 (COVID-19) since January 2020. Many more will die before we find better ways to manage or eradicate this disease, although eradication seems increasingly unlikely. This is a tragedy of almost incomprehensible proportions.

That tragedy has combined with another potentially life-threatening problem — an alarming quantity of poor-quality and often directly harmful information about the pandemic, including the promotion of quack cures. World Health Organization (WHO) Director-General Dr. Tedros Adhanom Ghebreyesus warned back in February about this “infodemic.” His warning came a few weeks before the WHO declared in mid-March that COVID-19 was a pandemic, meaning a global epidemic.

The infodemic, too, appears to have had real health consequences: one study, published in August 2020, estimated that alcohol poisoning had killed almost 800 people around the world who apparently believed an online rumour that drinking highly concentrated alcohol would prevent COVID-19.

The word infodemic is not the first term describing our online world to have been borrowed from health. During the pandemic, researchers have talked about how social media companies should “flatten the curve” of misinformation. We talk about computer viruses and, in fact, that terminology emerged in the 1980s from an analogy to the HIV/AIDS pandemic, as Cait McKinney and Dylan Mulvin have shown. McKinney and Mulvin argue that “HIV/AIDS discourses indelibly marked the domestication of computing, computer networks, and nested, digitized infrastructures.”

Too often, platforms appropriate medical language. Twitter has talked for several years about wanting to promote “healthy conversations.” But what does healthy mean online?

Content on social media platforms is, moreover, only one part of the ecosystem, which also includes questions around tax policy, surveillance, antitrust, company culture and so on.

One way to grapple with this problem is to draw inspiration from other concepts in public health. One potentially helpful concept is “One Health.” One Health has multiple definitions, but at its most basic it is “an approach to ensure the well-being of people, animals and the environment through collaborative problem solving — locally, nationally, and globally” (this is the definition from the One Health Institute at the University of California, Davis). The One Health idea sounds similar to many Indigenous and other ways of knowing that see humans as inextricably linked to the animals and environment around them — a new name for very old concepts.

However, very little discussion of COVID-19 beyond scholarly circles has invoked the One Health idea. This is strange in many ways, because the term has existed for several decades and was crucial to much of the drive to create global influenza response systems. The concept matters because COVID-19 can only really be understood as part of a broader ecosystem of human interactions with nature that has accelerated the emergence of zoonotic diseases.

COVID-19 is neither unprecedented nor a one-off. Seventy-five percent of the pathogens that have emerged over the past few decades spread from animals to humans, including those behind epidemics such as Ebola and Zika virus disease. One Health encompasses more than just novel pathogens emerging from human encroachment upon new environments. Researchers have also considered how global food practices often create deadly foodborne diseases: more than 400,000 people die annually from contaminated food. The stakes of addressing public health problems through One Health are incredibly high.

If we continue a long-standing trend of using health concepts and language to think about the online world, what can the One Health concept contribute to platform governance?

If we follow the One Health concept, we see that COVID-19 is only one example of how human interactions with animals and our planet accelerate pandemics. Rather than portray COVID-19 as an incident to be overcome and forgotten like the 1918 flu pandemic, One Health reminds us to understand COVID-19 within the context of both climate change and increasing human encroachment into animal and plant habitats. COVID-19 is the latest in a series of such problems, like the earlier coronavirus outbreaks (SARS in 2003 and MERS in 2012) and Ebola. Governments, companies and citizens can muddle through this pandemic and just hope (almost certainly in vain) that another does not arise. Or they can reckon with the One Health approach that demands systemic change to prevent such pandemics.

Applying the One Health logic to platform governance means looking beyond the individual examples of content that so often rise to the top. One platform may seem to cause an infodemic. But the reality is far more complex and intertwined. A One Health approach to platform governance means seeing platforms as part of our social, political and economic ecosystems. Dealing with Facebook alone would be like addressing COVID-19 but nothing else — no regard for economic recovery, improved infrastructure or the changing relationship between humans and animals. It might seem to work for a time, but the impact of the virus, or other viruses like it, would surface again. Just as a systemic understanding of viruses remains fundamental to our response to COVID-19, a systemic understanding of the platform ecosystem — and how it interacts with the offline world — remains fundamental to addressing our problems of platform governance.

Platforms

In May, the video Plandemic was released online and viewed by more than eight million people within seven days. The video featured a discredited doctor, Judy Mikovits, and was packed full of problematic assertions that underplayed the gravity of COVID-19 as a disease and could have serious consequences for anyone who believed them. When a sequel to Plandemic was released in August, platforms like YouTube were ready, and the video attracted far less attention.

The management of the Plandemic videos might seem like a success story: platforms seemed to learn from the first outing. But these videos comprise just one very specific example of problematic content. Celebrating individual wins — removing posts or successfully combating a single conspiracy video — is akin to focusing on how, say, one group of people has isolated to stop COVID-19 while creating policies, such as in-person university teaching, that enable many more cases. One online conspiracy may be gone, but then another pops up, such as the recent campaign on Facebook and Instagram urging people to burn face masks on September 15. The online focus on individual cases often comes at the expense of attending to the algorithmic and other systemic reasons why such cases keep occurring.

Moreover, content like the Plandemic videos constantly places the pressure on actors outside platforms — in this case, scientists to debunk the claims in the video, or journalists to detect problems in the first place. Such examples resemble a phenomenon in sports: the NBA or the US Open can afford to create a successful bubble, but such measures work only for high-profile cases that attract millions of viewers. Pushing actors with fewer resources to address each problem serially only obscures the question of when, not if, such problems will recur.

The One Health approach reminds us to look holistically. Instead of focusing on individual posts, policymakers must consider the ecosystem, risk tolerance and infrastructure that allow harmful or untrue information to proliferate. Facebook recently noted that it has removed one million groups in the past year for violating its policies. Quarterly and annual reports from many platforms similarly detail how much hate speech or other types of forbidden speech they’ve removed. The larger questions, then, would go beyond whether platforms have succeeded in removing all the hate speech posts; they would instead ask what it is about the ecosystem that enables so many of these posts to exist in the first place, and examine where this content has cropped up most virulently around the globe.

One Health means that a health problem anywhere in the world can become a problem everywhere in the world. Similarly, many of the problems now coming home to roost in North America and Europe have already happened elsewhere. Generals in Myanmar used Facebook to push for mass expulsions of the Rohingya in 2017. Hatred spread quickly in Myanmar over Facebook for many reasons, including that the company did not have content moderators with the language skills or local contextual knowledge to understand what was happening.