A MOOving Target
Herd immunity has become a familiar phrase, passed around in conversation and in the media as the pandemic’s endgame: an attainable numeric threshold of immunity to be conquered. Until recently, that is. The dynamic nature of all things COVID-19 (vaccines, variants, the distribution of immunity in local communities, and the time of year, to name a few) muddies the certainty of predicting a single percentage that ends this pandemic and brings life back to normal.
Herd immunity occurs when a sufficient proportion of a population has immunity to a particular contagious disease, preventing pathogens from finding and infecting susceptible hosts. Immunity can be achieved through recovery from natural infection or through vaccination, and once the threshold is reached, those unable to receive the vaccine are largely protected from disease. The herd immunity threshold differs for every contagious disease. For example, protection against measles and polio requires 95% and 80% of a population, respectively, to be vaccinated.
For COVID-19, modeling performed at the pandemic’s onset projected that 60-70% of the global population would need to be immune to stop the pandemic. More recently, however, with the increased transmissibility of variant strains and young children still ineligible for vaccination, the estimated threshold has been bumped even higher, to 90-100%. While determining a herd immunity estimate through computational modeling is useful to policymakers, these thresholds are really “guesstimates” based on oversimplified assumptions, and they are moving targets.
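Why the threshold moves can be seen in the classic epidemiological relationship behind these estimates: for a pathogen with basic reproduction number R0, the simplest models put the herd immunity threshold at roughly 1 - 1/R0, so a more transmissible variant (higher R0) raises the bar. A minimal Python sketch, using illustrative R0 values that are assumptions for this example rather than measurements:

```python
# Herd immunity threshold under the simplest homogeneous-mixing model:
# once a fraction 1 - 1/R0 of the population is immune, each case
# infects fewer than one new person on average and spread dies out.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune to halt spread."""
    if r0 <= 1:
        return 0.0  # an outbreak with R0 <= 1 fades on its own
    return 1 - 1 / r0

# Illustrative R0 values (assumed for this sketch, not measured)
examples = [
    ("measles", 15),
    ("early SARS-CoV-2 estimate", 3),
    ("a more transmissible variant", 6),
]
for disease, r0 in examples:
    print(f"{disease}: R0 = {r0} -> threshold ~ {herd_immunity_threshold(r0):.0%}")
```

With R0 around 3, the formula lands near the 60-70% figure cited early in the pandemic; doubling R0 pushes the threshold above 80%, which is why variant transmissibility shifts the target.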
The origins of the phrase
An article published in The Lancet traced the first reference to herd immunity back to veterinary medicine in 1910, when “contagious abortion,” or brucellosis, became the leading threat to cattle and sheep in the United States. Farmers would cull or sell affected livestock until 1916, when George Potter, a Kansas veterinarian, recognized that by keeping surviving animals, raising their calves, and avoiding the introduction of foreign cattle, the herd would no longer succumb to disease. Potter compared the disease to a fire, noting that if “new fuel is not constantly added, it will eventually die down” and that with an increase in surviving animals, “herd immunity” could be created.
Several years later, in 1919, “herd immunity” was demonstrated in the laboratory setting while mapping the spread of disease through mouse colonies, before making its way to human medicine in 1923. Outbreaks of diphtheria were occurring among schoolboys at the Royal Hospital School in Greenwich, a school in which students lived in crowded dormitories for an average of three years. After recognizing that the outbreaks coincided with the admission of new students each term, Professor Sheldon Dudley divided the students into groups based on their arrival date. He then administered the Schick test to differentiate those immune from those susceptible to diphtheria. The study was repeated each term for over 10 years. Ultimately, Dudley concluded that immunity against diphtheria, as measured by the Schick test, was most profound among student groups who had experienced repeated outbreaks and repeated active immunization, demonstrating “herd immunity.”
While herd immunity is possible without the use of vaccines, as was observed with cattle and mice, nowadays vaccination is almost always included. The World Health Organization considers it “scientifically problematic and unethical” to pursue herd immunity through natural exposure without vaccines, since this would lead to a significant amount of unnecessary suffering and death. To illustrate that point, we can look at natural infection levels and vaccine coverage. Serosurveys indicate that as of November 15, around 14% of the US population had been infected with COVID-19 (less than 10% of the world population has been infected), and around 38% of the US population is fully vaccinated. That is a far cry from the estimated herd immunity threshold of 90-100%, but the gap would be even larger without vaccines.
Vaccines are the endgame
At this point in the COVID-19 pandemic, we really don’t know for certain what proportion of the population needs to be vaccinated to induce herd immunity. Predicting herd immunity in the human population at large is not as straightforward as it is for animals housed in pens or schoolboys living in dormitories. Immunity in a given population is not static but constantly in flux, depending on a confluence of factors including human behavior and movement. Unvaccinated individuals in the US, including those considered “vaccine hesitant” (around 28 million people) and those considered “vaccine amenable” (around 30 million people), have been a pressing challenge in the quest to even out the immunity distribution.
Immunity levels vary by area, and this matters. Movement of people from low-vaccination areas can spill over into areas with higher vaccination rates, unleashing more infections. Professor Meyers at the University of Texas at Austin describes the confusion in an NPR interview: “you may hear numbers like 50% of the population are immunized. But is that really 50% in every single neighborhood?” Likely not. The duration of immunity is also in question and will likely wane with time. While a recent study found natural immunity from a previous COVID-19 infection may last up to 8 months, it is unknown how long the vaccines will remain protective.
While computing herd immunity thresholds is useful for epidemiologists and policymakers, most public health experts agree herd immunity is not something the public needs to fixate on. Instead, the most ethical and most effective way to end local viral transmission, reduce infections, hospitalizations, and deaths, and get us on the path back to normal life is to focus on vaccination. According to Professor Mordecai from Stanford University, “more infections are bad, and the way to stop them is to get vaccinated. It’s that simple.”
The New York Times