A Centers for Disease Control scientist in a protective airtight suit, handling influenza virus specimens, in a biosafety level 4 laboratory, in Atlanta, Georgia, in 2005. | Douglas Jordan/Centers for Disease Control/Getty Images
New rules around gain-of-function research make progress in striking a balance between scientific reward and catastrophic risk.
Imagine you’re a virologist researching monkeypox. In an effort to better understand which genes make the virus deadly, you take genetic components from a clade of monkeypox that is more deadly and components from a clade that is less deadly but more transmissible. (Because you’re a virologist, you know that a clade is a group of organisms descended from a common ancestor.) You combine them to make a new monkeypox variant with traits from both the deadly version and the contagious version.
Would this work be covered by US guidelines that require heightened safety scrutiny for research that could potentially spark a deadly pandemic?
Under the current guidelines, this actually isn’t clear. When researchers with the National Institute of Allergy and Infectious Diseases (NIAID) planned such an experiment, a safety panel concluded they were exempt from review. Monkeypox, after all, isn’t a “potential pandemic pathogen,” one of the exceptionally risky viruses, like certain influenza viruses and coronaviruses, at which the guidelines are aimed.
And while the current guidelines also target work on any virus that is “enhanced” to be more dangerous, the NIAID researchers said they didn’t expect their new hybrid virus to be more deadly than the deadlier of the starter strains or more transmissible than the more transmissible one.
This may seem like a bizarre way to decide when heightened safety standards are appropriate for virology research. Surely, what we care about is not how a virus is classified but how much damage would be done if the end result infected people, as happens with worrying frequency in lab accidents around the world.
Fortunately, a new set of proposed guidelines released last week by the National Science Advisory Board for Biosecurity (NSABB) would change how we evaluate research with the potential to cause a pandemic — hopefully making the process more transparent and more reasonable while keeping the public safer from potential catastrophe.
It represents “a number of important steps forward,” Tom Inglesby, director of the Johns Hopkins Center for Health Security, told me.
Defining pandemic potential by results
Here’s a simple way to define whether research should be subject to additional safety oversight: Is the final result of the work, or any intermediate results, a virus that could spark a pandemic? If so, additional safety oversight is probably appropriate!
That’s approximately the standard put forward in the new proposed guidelines, but it largely wasn’t the case previously.
The NSABB found substantial shortcomings in the current standards. “The current definitions of a PPP [potential pandemic pathogen] and enhanced PPP (ePPP) are too narrow,” the board writes in its report. “Overemphasis on pathogens that are both likely ‘highly’ transmissible and likely ‘highly’ virulent could result in overlooking some research involving the creation, transfer, or use of pathogens with enhanced potential to cause a pandemic.”
Say a pathogen is incredibly contagious but only mildly deadly. That may not sound so bad, but you’ve just described Covid-19, which has killed millions of people worldwide. As we should all know by now, a pathogen only as deadly as SARS-CoV-2 is still catastrophic if it’s contagious enough to go global.
Furthermore, under the current standards, if a virus is made more transmissible or more deadly by swapping in components from a different variant that already has those traits, the result doesn’t count as an “enhanced” virus either. But in the sense public policy should care about (the odds that millions of people will die), the change is obviously an enhancement.
“What matters isn’t the starting pathogen but the resulting pathogen,” said Inglesby. “If it results in a novel pathogen or a novel variant that has novel high transmissibility or novel high lethality, then that’s subject to oversight.”
The NSABB proposed this revised language:
“Amend USG P3CO policy to clarify that federal department-level review is required for research that is reasonably anticipated to enhance the transmissibility and/or virulence of any pathogen (i.e., PPPs and non-PPPs) such that the resulting pathogen is reasonably anticipated to exhibit the following characteristics that meet the definition of a PPP:
Likely moderately or highly transmissible and likely capable of wide and uncontrollable spread in human populations; and/or
Likely moderately or highly virulent and likely to cause significant morbidity and/or mortality in humans”
Pandemics can be nightmarish. Our policy needs to reflect that.
Biologists have made huge advances in their ability to understand and manipulate DNA and RNA over the last few decades. That has been enormously advantageous for humanity, and no one wants to bring the research that leads to those advances to a halt.
But it’s not that rare for pathogens to escape the lab. That’s not the stuff of conspiracy theories: as a recent investigative series in the Intercept uncovered, lab accidents are far more frequent than we might think, and they rarely result in serious policy change. And given how much damage a pandemic can do, research into creating new pathogens with pandemic potential needs a level of oversight that the government has, up to this point, struggled to spell out clearly.
Part of the confusion stems from the scaling challenge around catastrophic risks. Most workplace safety rules exist to protect the lives of employees, perhaps dozens of people. Engineering reliability rules for bridges and skyscrapers are meant to protect the lives of hundreds of people, maybe thousands.
If something goes wrong, if those rules aren’t tough enough or aren’t enforced, it’s a very bad day for the people involved, but the harm largely stops with them. Pandemic prevention rules, though, are needed to protect the lives of literally millions of people. A mistake in a lab that unleashes something like Covid, or something worse, doesn’t just endanger those working in that lab but potentially all of us. The degree of caution required for stakes this astronomical is simply different from anything else.
Looming over all of this is the question of the true origins of the Covid-19 pandemic. While there’s no smoking gun that indicates the SARS-CoV-2 virus began life in a lab — and there likely never will be — the very fact that it’s difficult to know precisely what may have happened at the Wuhan Institute of Virology should give us pause. The number of biolabs dedicated to work on the world’s most virulent pathogens is growing, as is our ability to fiddle with the genetics of a virus. That’s a dangerous combination.
The new guidelines aren’t perfect. For one thing, they’re still focused on the US, and any effective effort to prevent human-caused pandemics and ensure risky research happens safely needs to be global, just as the pandemics such work could spark would be. But they’re a huge step toward making the rules more consistent, more reasonable, and more focused on where the stakes are highest.
A version of this story was initially published in the Future Perfect newsletter.