Crowds of shoppers in Edinburgh, 2 March 2020: ‘Herd immunity was nothing more than a dressed-up version of the “just do nothing” approach.’ Photograph: Murdo MacLeod/The Guardian

The UK's coronavirus policy may sound scientific. It isn't


Dominic Cummings loves to theorise about complexity, but he’s getting it all wrong

When, together with the applied systems scientist Dr Joe Norman, we first reacted to the coronavirus on 25 January with the publication of an academic note urging caution, the virus had reportedly infected fewer than 2,000 people worldwide and killed fewer than 60.

At the time of writing, those numbers are 351,000 and 15,000 respectively; they need not have grown so high. Our research did not use any complicated model with a vast number of variables, any more than someone watching an avalanche heading in their direction needs a statistical model to tell them to get out of the way.

We called for a simple exercise of the precautionary principle in a domain where it mattered: interconnected complex systems have attributes that allow certain events to cascade out of control, delivering extreme outcomes. Enact robust measures that would have been, at the time, of small cost: constrain mobility. Immediately. Later, we urged rapid investment in preparedness: tests, hospital capacity, means to treat patients. Just in case, you know. Things can happen.

The error in the UK is on two levels: modelling and policymaking.

First, at the modelling level, the government relied at all stages on epidemiological models designed to show roughly what happens when a preselected set of actions is taken, not what we should make happen, and how.

The modellers feed hypotheses and assumptions into their models, then use the outputs to draw conclusions and make policy recommendations. Critically, the models do not come with an error rate. What if the assumptions are wrong? Have they been tested? The answer is often no. For academic papers, this is fine: flawed theories can provoke discussion. Risk management, like wisdom, requires robustness in models.
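To see why untested assumptions matter, consider a rough illustration (the numbers here are ours, chosen only to make the point): epidemics grow multiplicatively, so errors in inputs compound rather than average out. Suppose a model assumes each case infects 2.5 others per generation when the true figure is 3. After 10 generations the model expects about

2.5^10 ≈ 9,500 cases, while reality delivers 3^10 ≈ 59,000

That is a sixfold understatement arising from a 20% error in a single input.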

But if we base our pandemic response plans on flawed academic models, people die. And they will.

This was the case with the disastrous “herd immunity” thesis. The idea behind herd immunity was that the outbreak would stop if enough people got sick and gained immunity. Once a critical mass of young people gained immunity, so the epidemiological modellers told us, vulnerable populations (old and sick people) would be protected. Of course, this idea was nothing more than a dressed-up version of the “just do nothing” approach.

Individuals and scientists around the world immediately pointed out the obvious flaws: there’s no way to ensure only young people get infected; you need 60-70% of the population to be infected and recover to have a shot at herd immunity, and there aren’t that many young and healthy people in the UK, or anywhere. Moreover, many young people have severe cases of the disease, overloading healthcare systems, and a not-so-small number of them die. It is not a free ride.
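Where does the 60-70% figure come from? It follows from the standard herd immunity threshold. If R0 is the average number of people each case infects, the outbreak stops growing once the immune fraction p of the population satisfies

p ≥ 1 − 1/R0

With R0 for this coronavirus estimated at the time at roughly 2.5 to 3, the threshold works out to 60-67% of the population: tens of millions of people in the UK alone.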

This doesn’t even include the possibility, already suspected in some cases, of recurrence of the disease. Immunity may not even be reliable for this virus.

Worse, the thesis did not take into account that hospital stays can be longer than the modellers assume, or that hospital beds can run short.

Second, and graver, is the policymaking. No 10 appears to be enamoured with “scientism”: things that have the cosmetic attributes of science but lack its rigour. This manifests itself in the “nudge” unit that experiments on UK citizens, applying methods from behavioural economics that fail to work outside the university and patronising citizens in a way that insults their ancestral wisdom and risk-perception apparatus. Social science is in a “replication crisis”: fewer than half of published results replicate under the exact same conditions, fewer than a tenth can be taken seriously, and fewer than a hundredth translate into the real world.

So what are called “evidence-based” methods have a dire track record and are pretty much evidence-free. This scientism also manifests itself in Boris Johnson’s chief adviser Dominic Cummings’s love of complexity and complex systems (our speciality), which he appears to apply incorrectly. And letting a segment of the population die for the sake of the economy is a false dichotomy, aside from the moral repugnance of the idea.

As we said, when one deals with deep uncertainty, both governance and precaution require us to hedge for the worst. While risk-taking is a business left to individuals, collective safety and systemic risk are the business of the state. Failing that mandate of prudence by gambling with the lives of citizens is a professional wrongdoing that extends beyond an academic mistake; it is a violation of the ethics of governing.

The obvious policy left now is a lockdown, with aggressive testing and contact tracing: follow the evidence from China and South Korea rather than thousands of lines of error-prone computer code. So we have wasted weeks, and weeks matter when the threat is multiplicative.
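What do wasted weeks cost under a multiplicative threat? A back-of-the-envelope calculation (the doubling time is a rough contemporary estimate, used here only for illustration): with cases doubling roughly every three days, a two-week delay multiplies the caseload by about

2^(14/3) ≈ 25

so every fortnight of hesitation makes the eventual problem some 25 times larger before any intervention takes effect.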

Nassim Nicholas Taleb is distinguished professor of risk engineering at New York University’s Tandon School of Engineering and author of The Black Swan. Yaneer Bar-Yam is president of the New England Complex Systems Institute


