Evaluating COVID-19 Vaccine Policies on Social Media Platforms

This work reflects the collective position of the Virality Project. We would especially like to thank Carly Miller, Chase Small, Koko Koltai, Isabella Garcia-Camargo, and Renée DiResta for their contributions to this post.


In 2019 the World Health Organization declared vaccine hesitancy a top-ten threat to global health; number three on that list was a global influenza pandemic. Two years later, we are challenged by an ongoing global pandemic, COVID-19, and a vaccine rollout to counter it has just begun. In today’s environment, hesitancy is not unexpected: this is a new vaccine. However, public perception of the vaccine’s safety and efficacy is critical to determining public uptake, and misinformation has the potential to cloud accurate understanding of both facts and risks. At the onset of the US vaccine rollout, social media platforms are playing a key role in shaping public perceptions and behaviors.

Anti-vaccine sentiment has been around for as long as we’ve had vaccines, and vaccine misinformation on social platforms has been a longstanding public health communication challenge that predates COVID-19. Beginning in December 2020, however, platforms such as Twitter, Facebook, and YouTube updated their policies on harmful COVID-19 misinformation to cover misinformation specific to the COVID-19 vaccine. On February 8, 2021, at the recommendation of the Facebook Oversight Board, Facebook announced it would expand the list of false vaccine-related claims it would remove from its platform; on February 10, 2021, several high-profile anti-vaccine activist accounts were taken down. This update takes a stricter approach to health misinformation than Facebook has taken in the past, and it has sparked a discussion about the best approach to addressing vaccine-related content. Facebook’s policy updates are the most recent step in a long, slow evolution toward more stringent platform policies on vaccine misinformation.

This first blog post from the team at the Virality Project is a review of vaccine-related policies across nine platforms. It looks at both general and COVID-19-specific vaccine policies to understand the current policy landscape that will ultimately shape the narratives that reach social media audiences. We evaluate these policies against four core categories of vaccine-related content: 1) safety of vaccines; 2) vaccine efficacy and necessity; 3) vaccine development and distribution; and 4) conspiracy claims about the vaccine. We also address Facebook’s recent policy update and discuss the tradeoffs and nuances surrounding policy interventions on false and misleading narratives related to public health.

Key Takeaways 

  • Policy Assessment. Our team evaluated the policies of nine platforms, chosen for the important role they play as sources of information, including on the topic of vaccines. Our research found that Facebook, YouTube, and Twitter have the most comprehensive policies; Pinterest, TikTok, and Nextdoor have more generic vaccine-related policies; and Reddit, Bing Search, and Google Search do not have articulated vaccine-specific policies, though Google Search applies its “Your Money or Your Life” guidelines to elevate authoritative search results for health searches.

  • Interventions and Counter-Narratives. Because many vaccine-related stories are shared as first-person experiences, fact-checking and countering anti-vaccine content are difficult. Moreover, because experiences involving health are sensitive and can be emotionally charged, platform interventions such as removing content may backfire by fueling claims of censorship or creating a “forbidden knowledge” effect.

  • Policy Clarity and Transparency. Platform policies related to vaccines are scattered across blog posts, Help Center posts, and community guidelines. For example, Facebook’s Community Standards post did not include many of the specific claims that the company said it would remove in its Help Center post. Facebook should centralize and clarify its platform policies.

  • Policy Recommendations. Platforms need to be more transparent about how they apply their policies, specifically in cases in which the same accounts habitually violate them (repeat offenders). Some platforms have developed a transparent strike system, such as Twitter under its Civic Integrity Policy; while that policy does not currently apply to vaccine content, it could serve as a model in this space.

Categories of COVID-19-related Misinformation and Disinformation

We designed a framework to evaluate platforms’ COVID-19 vaccine-related policies that breaks down the core facets of narratives commonly associated with vaccine hesitancy or refusal. This framework is informed by the work of several researchers and medical professionals, including Tara Smith, a professor of epidemiology at Kent State University, First Draft and its COVID-19 misinformation research, and others.

This body of work has repeatedly identified six core anti-vaccine narratives: toxicity, religiosity, liberty, distrust of industry, safety, and conspiracy. Several of these are related specifically to the vaccines themselves and their impact on the body. However, others can be viewed more as political statements, concerned not with falsifiable claims but rather with notions of personal or religious liberty. In the “Liberty” narrative, for example, vaccine opposition is often couched in political identity and avowed resistance to the prospect of requirements or mandates to take a vaccine at the behest of the state (including in service to communal public health, such as routine school immunizations). This narrative is a political position as opposed to a health claim; as such it remains largely outside of health misinformation moderation policies, and may increase in prominence as misinformation infrastructure previously leveraged by partisans for election-related content changes focus to vaccine-related content.

These categories inform the scope of the Virality Project. For our analysis, we chose to focus on four core categories: those largely concerned with health, safety, and efficacy claims rather than political opinion. Platform policies should take into account claims that are false, misleading, taken out of context, exaggerated, or used in coordination to degrade confidence in the COVID-19 vaccine or the integrity of efforts to vaccinate individuals.

The four core categories of vaccine-related content against which we evaluated these policies are as follows:

  • Safety: Claims that the COVID-19 vaccines cause harm to recipients. 

    • Narratives may sensationalize vaccine side effects or question the health and safety of vaccine ingredients.

  • Efficacy and Necessity: Claims about the effectiveness of the vaccine or whether it is necessary to receive the vaccine. 

    • Common narratives question the vaccine’s ability to prevent the virus (efficacy), or claim that COVID-19 itself is not ‘bad enough’ to necessitate a vaccine (necessity).

  • Vaccine Development and Government Distribution: Claims that misrepresent vaccine production and distribution plans, as well as vaccine mandates. 

    • Narratives may claim that the vaccine’s development was rushed to market; its distribution is politically motivated; distribution will favor elites over at-risk populations; or that governments are mandating or forcing vaccination.

  • Conspiracy: Claims, fueled by distrust of authorities, that falsely implicate individuals or government institutions by claiming or suggesting they have malicious intent behind creating or administering the vaccine.

    • Already-prevalent conspiracy theories include claims that the vaccine injection contains a secret government microchip or other control device. These theories often follow familiar tropes about covert operations to expand the government’s power or to control certain populations.

Platform Policy Evaluation

Methodology

Our team analyzed the platform policies of nine companies, seven social media platforms and two search platforms: Facebook, Twitter, YouTube, Pinterest, TikTok, Nextdoor, Reddit, Bing Search, and Google Search. These nine were chosen because they are popular in the US, where anti-vaccine content has the potential to spread widely. The two search engines play an important role in information seeking for those looking for COVID-19 vaccine information and could potentially be used to steer users toward misinformation or conspiracy theories. Past work has shown that Google and Bing return very different results for queries related to vaccine misinformation, and that misinformation can thrive in data voids.

The platforms’ vaccine-related policies vary widely in their level of detail and enforceability. Some policies were too generic to grade. Others were evaluated for their inclusion of the specific categories listed above. A detailed explanation of the ratings and policies used in our assessment can be found in the PDF attached to this blog post. 

Policy Evaluation Rubric

  • None: The platform’s policy does not address this type of vaccine-related content. 

  • Generic: The policy is too generic to grade against specific narratives.

  • Non-Comprehensive: The policy as it relates to a specific category is vague enough that researchers and users are unsure whether a platform would act on it or whether specific types of content fall under it. The policy may also be missing key narratives under the category.

  • Comprehensive: A policy in this category uses direct language and is clear about what type of COVID-19 vaccine misinformation falls under the category. Comprehensive policies cite a breadth of violating cases and clearly state what action will be taken on such content.
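To make the rubric concrete, the sketch below shows one way its levels and this post’s topline findings could be encoded for analysis. It is a minimal, hypothetical illustration in Python (the names and structure are ours, not part of the Virality Project’s actual tooling), and it records only overall tendencies rather than the per-category grades in Table 1.

```python
from enum import Enum

class Rating(Enum):
    """Policy Evaluation Rubric levels, as defined above."""
    NONE = "None"                            # policy does not address this content
    GENERIC = "Generic"                      # too generic to grade against narratives
    NON_COMPREHENSIVE = "Non-Comprehensive"  # vague or missing key narratives
    COMPREHENSIVE = "Comprehensive"          # direct language, clear scope and actions

# The four core content categories each platform policy is graded against.
CATEGORIES = ("safety", "efficacy_and_necessity",
              "development_and_distribution", "conspiracy")

# Topline findings from this review (overall, not per-category grades).
OVERALL_RATINGS = {
    "Facebook": Rating.COMPREHENSIVE,
    "Twitter": Rating.COMPREHENSIVE,
    "YouTube": Rating.COMPREHENSIVE,
    "Pinterest": Rating.GENERIC,
    "TikTok": Rating.GENERIC,
    "Nextdoor": Rating.GENERIC,
    "Reddit": Rating.NONE,
    "Bing Search": Rating.NONE,
    "Google Search": Rating.NONE,
}

if __name__ == "__main__":
    for platform, rating in OVERALL_RATINGS.items():
        print(f"{platform}: {rating.value}")
```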

Table of COVID-19 and Vaccine Related Policies by Platform

Of the nine platforms, Reddit, Bing Search, and Google Search do not have vaccine-related policies. Some platforms look to an external public health authority — such as the WHO or CDC — to determine what information is false and to identify a list of known conspiracy theories. We have included this information when available, for further context. 

Table 1: The evaluation of each platform’s policies was informed by its community guidelines and standards, linked here: Facebook, Twitter, YouTube, Nextdoor, Pinterest, TikTok. *While YouTube’s policies are largely rated comprehensive, it is important to note that they are written with many specific examples under each category. While these examples are informative, it is unclear whether they are meant to be exhaustive, or whether related content that does not fall into these hyper-specific examples would also be covered. The wording therefore risks being overly prescriptive, without flexibility for new types of narratives.

As the table shows, Facebook, Twitter, and YouTube have the most comprehensive policies for vaccine-related content. Nextdoor, Pinterest, and TikTok have broader policies that don’t clearly specify which vaccine-related narratives or claims may be subject to platform action.

The benefit of comprehensive policies is that users and researchers gain greater transparency into the policy decisions platforms are making and can offer important feedback. Transparency and clarity also allow users to hold platforms with comprehensive policies accountable to their commitments. However, a comprehensive policy does not mean a platform is rid of harmful vaccine content; some platforms may be less comprehensive in their policies but better at content moderation than those with comprehensive ratings. Clarity of a policy and success of its implementation are two different things; this survey evaluates the presence or absence of policies per platform across key facets of the overall vaccine conversation.

Platform Intervention: A discussion of Facebook’s new policies

Platforms’ actions to address violative content usually fall into three categories: removing content entirely, reducing its distribution, or applying labels to it. In determining what action to take in the context of health misinformation, most platforms consider the “imminent physical harm” risk of a given post, and all consider the longer-term impact on public health. However, there is an additional challenge in balancing content moderation with preserving free expression, and in avoiding fact-check labels that backfire by alienating users and making them resistant to accurate information.

A screenshot illustrating how Facebook tries to connect Pages and Groups that share vaccine misinformation to authoritative sources such as the CDC. 

Platforms have also leaned into moderation strategies beyond the leave-up-or-take-down binary: TikTok makes anti-vaccine content harder to find by redirecting searches associated with vaccine or COVID-19 disinformation to its Community Guidelines and by not autocompleting anti-vaccine hashtags in its search; Twitter provides a prompt suggesting credible health resources when a user searches keywords associated with vaccines; and Pinterest surfaces “reliable results” from public health organizations such as the WHO and CDC that provide vaccine safety information in various languages.

Facebook’s recent decision to expand the types of claims it will remove breaks with its previous moderation strategy on vaccine content. Facebook first began to address vaccine-related content in 2019, focusing on reducing discoverability and downranking content from Pages and Groups that “spread misinformation about vaccines” rather than removing anti-vaccine content outright. Facebook’s policy at the time also separated health misinformation about vaccines from political positions, such as those surrounding state-level legislation on vaccinations. This division is becoming less clean-cut as anti-vaccine groups increasingly overlap with QAnon and anti-lockdown content. Facebook’s new policies do not specify whether they will wade into false claims about government mandates, and it will be interesting to see how the policies play out against claims rejecting the vaccine on “health freedom” grounds.

While Facebook’s updated removal policies provide important clarity into what constitutes violating content, they may have negative side effects. One potential outcome of takedowns is a “forbidden knowledge” effect, in which the removal of a post or account is reframed as a signal that authorities are trying to keep the information from the public, which in turn increases curiosity about the content. Another concern is that removing content may stifle legitimate discussion of the vaccine; as researchers such as Zeynep Tufekci have flagged, there are still unknowns about the vaccine, and discussion of certain health or safety topics may get swept up in this policy. For example, Tufekci points out that there are ongoing clinical trials of specific vaccines with no placebo. Under Facebook’s new policy, this conversation may not be allowed because adjacent claims (which are misinformation in the context of other vaccines) have been earmarked by Facebook as false.

What Tufekci’s and others’ comments illustrate is that there will be challenges in applying this policy. Vaccine skepticism and hesitancy can manifest in statements far more subtle than the straightforward examples Facebook gives, such as “the COVID-19 vaccine causes COVID-19!” Similarly, a defining aspect of the anti-vaccine information space is that many vaccine-related narratives are shared via first-person accounts, which will make it difficult for platforms’ third-party fact-checking organizations to verify certain information. According to its policies, Facebook will take additional context into account in determining whether or not to remove a claim. However, we do not know what this additional context looks like, or what the threshold for removal is.

Lastly, this update fails to provide more transparency on what will happen with repeat offenders. While Facebook’s new rules specifically mention “Pages, Groups, profiles, and Instagram accounts that repeatedly post misinformation related to COVID-19, vaccines, and health,” it isn’t clear what action will be taken against those accounts or how many infringements of the policy are needed for an account to be taken down; many accounts are actively in violation of this policy.

Overarching Challenges For Platforms and Researchers

The few platform policies around vaccine misinformation that existed prior to COVID-19 were written to address claims about decades-old vaccines with established distribution. As platforms adapt their vaccine-content moderation strategies to the rollout of the new COVID-19 vaccines, they face a number of potential challenges. First, while Facebook, Twitter, and YouTube have committed to addressing conspiracy theories about the COVID-19 vaccine, they are relying on the WHO and CDC to define those conspiracy theories. This information loop is critical for getting authoritative information, but it could slow platforms’ ability to respond to new conspiracy theories at a pace that matches the speed of virality online. The CDC outlines known vaccine-related hoaxes (in much more detail than the WHO) in an attempt to counter them; however, communicating counter-narratives rapidly has not been an organizational strength. Researchers have also raised concerns about relying on the CDC and WHO as the authoritative sources underpinning these policies: there are still unknowns about the COVID-19 vaccine, and medical and scientific research may lead to changes in findings, and messages, over time.

Second, users in vaccine-opposed digital spaces are often hyper-aware of potential moderation by these major platforms and constantly adapt their information behaviors and practices to platform changes. Tactics include using non-standard spellings of the word “vaccine,” adopting labels like “medical freedom” instead of “anti-vaccine,” duplicating content and migrating to other platforms, hijacking hashtags, and evolving how content is shared, for instance by hinting at claims rather than stating them outright. Platforms will need to be agile to adapt to these changing tactics.
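As a toy illustration of why such lexical variation is hard to police, the hypothetical sketch below implements a naive keyword filter for a handful of “vaccine” spellings. The pattern and example posts are ours, not any platform’s actual system; each new variant a community invents requires another change to the rule, which is why static keyword matching alone keeps falling behind, and why coded language like “medical freedom” evades it entirely.

```python
import re

# A naive, hypothetical filter for "vaccine" and a few common lexical variants
# (e.g. "v@ccine", "vaxx"). Real moderation systems are far more sophisticated;
# this only shows how quickly exact keyword rules break down.
VARIANT_PATTERN = re.compile(r"\bv[a@4][cx]{1,2}[i1!]?n?[e3]?s?\b", re.IGNORECASE)

posts = [
    "Read the official vaccine guidance.",  # matched: standard spelling
    "They can't stop the v@xx talk here.",  # matched: simple substitution
    "No vaxxines for my family.",           # matched: doubled consonant
    "Protect your medical freedom!",        # NOT matched: coded language evades
]

for post in posts:
    flagged = bool(VARIANT_PATTERN.search(post))
    print(f"{flagged!s:>5}  {post}")
```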
