July 11, 2020

As data accrue on both a national and state-by-state basis, the parameters of COVID-19’s lethality are firming up. Two new papers from Dr. John Ioannidis point to the widening gap between the apocalyptic pandemic predictions and reality, and to the vastly more destructive policies implemented in observance of those predictions.

The first, entitled “Population-level COVID-19 mortality risk for non-elderly individuals overall and for non-elderly individuals without underlying diseases in pandemic epicenters,” offers more evidence supporting the assertion that the government reaction to the virus has been vastly overwrought.

Using data from 11 European countries, 12 US states, and Canada, Ioannidis and his team show that the infection rate is much higher than previously thought, which implies both that the incidence of asymptomatic and mildly symptomatic cases is higher than assumed and that the infection fatality rate is much lower than previously estimated.

As regards the age of victims,

People [under] 65 years old have very small risks of COVID-19 death even in pandemic epicenters and deaths for people [under] 65 years without underlying predisposing conditions are remarkably uncommon. Strategies focusing specifically on protecting high-risk elderly individuals should be considered in managing the pandemic. 

In the other paper, “Forecasting for COVID-19 has failed,” Ioannidis and his co-authors take aim at the reasons the predictions proved so wildly inaccurate. Early predictions held that New York would need up to 140,000 hospital beds for stricken COVID-19 victims; the total number of individuals hospitalized turned out to be 18,569.

In California on March 17th, 2020, it was predicted that “at least 1.2 million people over the age of 18 [would] need hospitalization from the disease,” which would require 50,000 additional hospital beds. In fact, “COVID-19 patients [ultimately] occupied fewer than two in 10 beds.”

On March 27th, 2020, Ezekiel Emanuel, Vice Provost for Global Initiatives at the University of Pennsylvania, predicted that there would be 100,000,000 COVID-19 cases in the United States within the four subsequent weeks (slightly less than one in three Americans). Unsurprisingly, this prediction has since been taken down.


Divination, accurate or not, is harmless in and of itself: that much is obvious. But when made by scientific dignitaries, particularly in the process of informing politicians amid crisis circumstances, it often leads to knee-jerk reactions at all levels. The causative factors cited are, or should be, well known to economists: poor data or the misuse of high-quality data; improper or incorrect assumptions; high sensitivity of estimates; misinterpreted past results or evidence; problems of dimensionality; and groupthink/bandwagon effects.

From a high level, epidemiological forecasts failed for the very reason that econometric predictions often flounder: the uncritical importation of modeling techniques from physics and applied mathematics into social-science realms. This should not be especially revelatory. In “The Counter-Revolution of Science” (1952), F. A. Hayek noted the pernicious effects of applying rigidly quantitative concepts where human action is at work, attributing them to “an ambition to imitate science in its methods rather than its spirit.”

Using Ioannidis’ guidelines, a subset of the elements which lead to predictive failures in epidemiology can be not only examined but also analogized directly to their economic and econometric counterparts.

Data Problems

The issue of data quality and application in economics is one which arose from the growing quantification of the social sciences. Data which is either erroneously recorded, speciously accurate, or completely fabricated has been a problem of legendary proportions in econometrics and in the crafting of economic policy.

Although Oskar Morgenstern identified it as a serious issue 70 years ago, in “On the Accuracy of Economic Observations” (1950), the mathematization of economics has proceeded apace with virtually no embrace of his cautions. (Without waxing conspiratorial, it bears mentioning that low-quality data can be as much a political tool as a source of imprecision in both epidemiology and economics.)

Similarly, there is growing evidence that some COVID-19-related data has been problematic: erroneous or miscalculated. Where testing is concerned, even a 1% error rate across the tens of millions of coronavirus tests being conducted would amount to hundreds of thousands of misdiagnoses, with all the knock-on effects to which such results give rise.
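To put rough magnitudes on that claim, consider the minimal back-of-the-envelope sketch below. Every input (test volume, error rate, prevalence, sensitivity, specificity) is an illustrative assumption, not a reported figure.

```python
# Back-of-the-envelope: how a small per-test error rate scales into
# misdiagnoses at national testing volumes. All figures are assumed
# for illustration, not reported statistics.

tests = 40_000_000        # assumed: "tens of millions" of tests
error_rate = 0.01         # assumed: 1% of results are simply wrong

print(f"misclassified results: {tests * error_rate:,.0f}")  # 400,000

# The same arithmetic via test characteristics: at low prevalence,
# even a highly specific test produces false positives on the same
# order as true positives.
prevalence = 0.02         # assumed share of those tested who are infected
sensitivity = 0.95        # assumed probability of catching a true case
specificity = 0.99        # assumed probability of clearing a non-case

true_pos = tests * prevalence * sensitivity
false_pos = tests * (1 - prevalence) * (1 - specificity)
print(f"true positives:  {true_pos:,.0f}")   # 760,000
print(f"false positives: {false_pos:,.0f}")  # 392,000
```

The second computation is the more instructive: the lower the prevalence in the tested population, the larger the share of positive results that are spurious, a point that holds regardless of the particular numbers assumed here.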

Erroneous Assumptions

Untenable and oversimplified assumptions in economic formulations are often defended as pragmatic or unavoidable. They are problematic even when the methodologies are appropriate, the data sound, and the calculations correct. As Ioannidis and his co-authors note:

Many [epidemiological] models assume homogeneity, i.e., all people having equal chances of mixing with each other and infecting each other. This is an untenable assumption and, in reality, tremendous heterogeneity of exposures and mixing is likely to be the norm. Unless this heterogeneity is recognized, estimates of the proportion of people eventually infected before reaching herd immunity can be markedly inflated.

Epidemiologically, the homogeneity oversight is seen at its starkest and most tragic in the simultaneous failure to sufficiently protect the most vulnerable populations while closing schools and excoriating teenagers and college students (among the least affected groups) for their social inclinations.
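The quoted concern about inflated herd immunity estimates is straightforward to illustrate. The sketch below is a toy calculation, not any group’s published model: it compares the textbook homogeneous-mixing threshold, 1 - 1/R0, against an approximation derived for gamma-distributed individual susceptibility in a 2020 preprint by Gomes et al.; the R0 and coefficient-of-variation values are assumptions chosen for illustration.

```python
# Sketch: heterogeneity deflates the herd immunity threshold (HIT).
# Homogeneous mixing gives HIT = 1 - 1/R0. For gamma-distributed
# susceptibility with coefficient of variation cv, Gomes et al.
# (2020, preprint) derive HIT = 1 - (1/R0)**(1/(1 + cv**2)).
# R0 and cv below are illustrative assumptions.

R0 = 2.5  # assumed basic reproduction number

def hit_homogeneous(r0: float) -> float:
    return 1 - 1 / r0

def hit_heterogeneous(r0: float, cv: float) -> float:
    return 1 - (1 / r0) ** (1 / (1 + cv**2))

print(f"homogeneous HIT:            {hit_homogeneous(R0):.0%}")  # 60%
for cv in (1.0, 2.0):
    print(f"heterogeneous HIT (cv={cv}): {hit_heterogeneous(R0, cv):.0%}")
# cv=1.0 -> ~37%, cv=2.0 -> ~17%: the more heterogeneous the mixing,
# the more a homogeneous model overstates the eventual share infected.
```

Whatever the true parameter values, the direction of the effect is the point: assuming homogeneity biases the projected epidemic size upward.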

Sensitivity of Estimates

Determining how independent variables, singly or in groups, affect dependent variables is the focus of sensitivity analysis. Depending upon the regression (or other operation) being performed, and in particular upon the presence of exponents, a small error in the independent factors can lead to huge variances in outcomes. (This is one of the characteristics of a chaotic system: the so-called butterfly effect refers to systems where ultimate outcomes or states show a tremendous degree of sensitivity to initial conditions.)
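A minimal sketch of that sensitivity, using a bare exponential projection rather than any specific epidemiological model (the initial count, rates, and horizon below are assumptions):

```python
# Sketch: a small error in an exponentiated growth-rate estimate
# compounds into a large forecasting miss. Toy numbers throughout.
import math

def projected_cases(initial: float, daily_growth: float, days: int) -> float:
    """Bare exponential projection: cases = initial * exp(rate * days)."""
    return initial * math.exp(daily_growth * days)

days = 60
true_rate = 0.10        # assumed "true" daily growth rate
estimated_rate = 0.12   # the same rate, misestimated by 2 points

actual = projected_cases(100, true_rate, days)
forecast = projected_cases(100, estimated_rate, days)
print(f"actual:    {actual:,.0f}")             # ~40,300
print(f"forecast:  {forecast:,.0f}")           # ~134,000
print(f"overshoot: {forecast / actual:.1f}x")  # ~3.3x after 60 days
```

Two percentage points of error in the input more than triples the 60-day output; inside richer models with several exponentiated terms, the same mechanism operates with greater force.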

There are techniques which can be used to determine where, when, and to what degree estimates have a disproportionate impact on the results of simulations or calculations, whether in the form of wildly overblown or unrealistically diminished outcomes. Often, though, sensitivity is seen not in models, but in the real-world events they are designed to approximate.

Ioannidis cites the “inherent impossibility” of fixing such models, given the ubiquity of “exponentiated variables” in which “small errors . . . result in major deviations from reality.” Morgenstern evinced similar concerns in 1950 regarding the curve-fitting propensities of the new wave of economic practice. His target was production functions, but the criticism is certainly extendable:

Consider, for example, the important problem of whether linear or nonlinear production functions should be considered in economic models. Non-linearity is a great complication and is, therefore, best avoided as much as possible. True non-linearity in the strict mathematical sense is avoided in physics as far as possible. Even quantum mechanics is treated as linear on a higher level. Many apparently nonlinear phenomena, upon closer investigation, can well be treated as linear . . . The distinction is largely a matter of the precision of measurement, which is exactly where the weakness is strongest in economics. It is astonishing that economists seem to hesitate far less to introduce non-linearity than physical scientists, where the tradition of mathematical exploration is so much older and the experience with observation and measurement so much firmer.

I would not presume to correct such a luminary as Dr. Morgenstern, but I would add that the weakness is strongest not in economics alone, but in all undertakings which quantitatively rigidify human action, whether individual or en masse.
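Morgenstern’s point about nonlinearity and the precision of measurement can be demonstrated in a few lines. The sketch below uses synthetic data (the slope, intercept, and noise scale are assumptions): a line and an overflexible polynomial are fit to noisy but truly linear observations, then both are asked to extrapolate.

```python
# Sketch: flexible nonlinear fits reward in-sample precision that the
# data cannot support. Synthetic data; slope, intercept, and noise
# scale are all assumptions.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 12)
y = 1.5 * x + 4 + rng.normal(scale=3.0, size=x.size)  # truly linear + noise

linear_fit = np.polyfit(x, y, deg=1)
flexible_fit = np.polyfit(x, y, deg=5)   # an overflexible polynomial

x_future = 14.0                          # extrapolate beyond the data
true_value = 1.5 * x_future + 4
print(f"true value:          {true_value:8.1f}")
print(f"linear prediction:   {np.polyval(linear_fit, x_future):8.1f}")
print(f"degree-5 prediction: {np.polyval(flexible_fit, x_future):8.1f}")
# The degree-5 fit hugs the sample more closely, yet its extrapolation
# can miss by a large multiple of the noise scale.
```

In-sample the flexible fit always looks better; out of sample it can miss badly. That asymmetry is precisely the hazard of introducing nonlinearity where measurement is imprecise.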

Poor Past Evidence on Effects of Available Interventions

Unbeknownst to the vast majority of people who are suffering or will suffer from the effects of the lockdowns, the “flatten-the-curve” efforts were informed by data from the Spanish Flu of 1918. Thus data of questionable quality (from a pandemic which occurred over a century ago, involving a different pathogen, as a major world war was ending, and when living conditions, longevity, the state of medical science, the tenor of social interactions, and countless other variables were immeasurably different) were applied to sculpt the government response to the outbreak of the novel coronavirus.

 Ioannidis and his co-authors comment that “[w]hile some interventions . . . are likely to have beneficial effects, assuming huge benefits is incongruent with the past (weak) evidence and should be avoided. Large benefits may be feasible from precise, focused measures.” 

The idea that a single study (or even a small handful of them) might be used to buttress indefensible arguments or to support questionable plans is occasionally seen in economic policymaking as well.

Dimensionality

“Almost all models that had a prominent role in [pandemic] decision-making,” Ioannidis continues, “focused on COVID-19 outcomes, often just a single outcome or a few outcomes (e.g., deaths or hospital needs). Models prime for decision making need to take into account the impact on multiple fronts (e.g. other aspects of healthcare, other diseases, dimensions of the economy, etc.).” Some remedies to this include interdisciplinary scrutiny of model outcomes and a look at past implementations in the face of pandemics — including those to which there was no response at all. 

While dimensionality as a specific problem afflicts economic modeling as well, general comments in this regard closely echo the battered-but-unmoved screeds against one of the earliest fixtures of economic education: ceteris paribus, by which one considers causal or empirical relations while holding all other influences equal. A useful tool for educational purposes, it can produce costly results when it creeps into the crafting of policy.

(At times, the ceteris paribus approach is defended by econometricians who liken it to the practice of ignoring air resistance in gravity experiments. It’s a shamefully underhanded argument, one that conflates physical with social-science phenomena.)
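What treating “other influences” as fixed costs when those influences refuse to stay fixed can also be shown in a few lines. The simulation below is a toy omitted-variable-bias example with synthetic data and assumed coefficients, not a model of any actual policy question:

```python
# Sketch: "ceteris paribus" estimation when ceteris is not paribus.
# A confounder z moves together with x; a regression of y on x alone
# misattributes z's influence to x. All coefficients are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
z = rng.normal(size=n)                       # the influence held "equal"
x = 0.8 * z + rng.normal(size=n)             # x co-moves with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true effect of x on y is 2.0

# Naive "all else equal" regression: y on x alone.
naive_slope = np.polyfit(x, y, 1)[0]

# Regression that actually controls for z.
X = np.column_stack([x, z, np.ones(n)])
controlled_slope = np.linalg.lstsq(X, y, rcond=None)[0][0]

print(f"naive estimate:      {naive_slope:.2f}")       # ~3.5, badly biased
print(f"controlled estimate: {controlled_slope:.2f}")  # ~2.0, near truth
```

A policy calibrated to the naive estimate would overstate the effect of x by roughly 75 percent, and little in the naive regression’s own diagnostics would flag the problem.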

Groupthink and Bandwagon Effects

Ioannidis cites groupthink among epidemiologists as a source of forecasting error. When a doomsday prediction is made, especially by celebrity scientists, introducing a more measured prognosis may carry substantial career risk, and thus be suppressed. Alternatively, the published or broadcast results of thought leaders may act as a form of anchoring. As Ioannidis and his team write,

Models can be tuned to get desirable results and predictions, e.g. by changing the input of what are deemed to be the plausible values for key variables. This is true for models that depend upon theory and speculation, but even data-driven forecasting can do the same, depending upon how the modeling is performed. In the presence of strong groupthink and bandwagon effects, modelers may consciously fit their predictions to what the dominant thinking and expectations are – or they may be forced to do so. 

The economics profession is certainly not immune to this. It manifests in several ways, one of which is mainstream economists’ unwillingness to admit their errors (as the continued use of flawed models and bad data attests). Many economists instinctively refrain from criticizing theory or practice within their own institution or school of thought out of political expediency. The highly ‘siloed’ nature of journals and conferences testifies to it, as do the veritable echo chambers of social media. This is not merely a personal observation; the phenomenon and its effects have been cited elsewhere, in no less prominent a place than the International Monetary Fund:

Analytical weaknesses were at the core of some of the IMF’s most evident shortcomings in surveillance … [as a result of] … the tendency among homogeneous, cohesive groups to consider issues only within a certain paradigm and not challenge its basic premises.

Cognitive and confirmation biases are noted as well. 

The Media Amplifier

Farcical predictions, whether owing to one or all of the above elements, would nevertheless be innocuous if limited to circulating among small groups of scientists or within the rarefied pages of peer-reviewed journals. But whether one views the dominant media outlets as vital democratic institutions, as propagandistic organs of political parties, or as something in between, it is far from a conspiracy theory to note that they are massive businesses which compete for revenue fundamentally on the basis of attention. As with politicians, the loudest and scariest messages and interpretations garner the most attention, and they carry the added perk of defensibility in the name of “vigilance.”

And in the same manner in which tremendously negative predictions permit self-aggrandizing assessments of policy outcomes (such as Neil Ferguson’s claim that the lockdowns saved lives), doomy economic projections are almost always paired with unprovable claims of optimal policy outcomes.

An example of that is found in President Obama’s assertion that without the bailouts and Fed programs administered in the wake of the 2008 financial crisis, the world might have fallen into a “permanent recession.” (The idea that a “permanent recession” would have been a recession which simply lapsed into a new, permanent low level of economic activity went predictably unchallenged.) The best (and least common) unprovable counterfactuals are good guesses; the majority are deceptive. 

Where Economists Can Help Epidemiologists

Having said all of that, the paper concludes on a redemptive note, commending the efforts of epidemiology teams and warning that it would be “horrifically retrograde if this [modeling] debate ushers in a return to an era where predictions, on which huge decisions are made, are kept under lock and key (e.g. by the government – as is the case in Australia).”

The mundanity of letting individuals or localities assess risks and act in accordance with their own risk appetites must, on some level, be frustrating when compared with creating vast artificial populations of agents or using big-data methods to sift through colossal repositories. It would no doubt seem a massive waste of time to expend energy writing code and poring over results only to recommend that citizens exercise their best judgment.

Simply building and running computational models is not, of course, harmful in and of itself: it is in the leap from output to implementation that hazards emerge. Here’s Hayek again, in “The Counter-Revolution of Science” (1952):

The universal demand for conscious control or direction of social processes is one of the most characteristic features of our generation. It expresses perhaps more clearly than any of its other cliches the peculiar spirit of the age. That anything is not consciously directed as a whole is regarded as itself a blemish, a proof of its irrationality and of the need completely to replace it by a deliberately designed mechanism . . . The belief that processes which are consciously directed are necessarily superior to any spontaneous process is an unfounded superstition. It would be truer to say [as Whitehead did] that on the contrary “civilization advances by extending the number of important operations we can perform without thinking about them.”

Hysterical, wildly off-the-mark forecasts about COVID-19 will ultimately cause more harm than good, and they find their origins in the same set of snags which regularly trip up econometric forecasts. In the epidemiological version, instead of predicting a new Great Depression, they brought an artificial depression, a growing spate of coercive masking initiatives, school closures, and the lockdowns, which quite possibly filled the powder keg ignited by the killing of George Floyd. And that is only what we can see directly in front of us: the ultimate costs of surgeries foregone, rising rates of drug abuse, alcoholism, and suicide, and other knock-on effects of the ridiculous government responses to the novel coronavirus outbreak will be unfolding for a generation.

What can economists teach epidemiologists? When it comes to forecasting, humility is key and discretion is the better part of valor. If in a position of power or influence, don’t be afraid to bore politicians to death. Be aware, and remain aware, of the utter unpredictability of human action. And always, above all, remain mindful that the presence of even one human being (and more realistically, millions) introduces complexities which are difficult to predict and virtually impossible to simulate. 


Peter C. Earle

Peter C. Earle, Ph.D., is a Senior Research Fellow who joined AIER in 2018. He holds a Ph.D. in Economics from l’Université d’Angers, an MA in Applied Economics from American University, an MBA (Finance), and a BS in Engineering from the United States Military Academy at West Point.

Prior to joining AIER, Dr. Earle spent over 20 years as a trader and analyst at a number of securities firms and hedge funds in the New York metropolitan area as well as engaging in extensive consulting within the cryptocurrency and gaming sectors. His research focuses on financial markets, monetary policy, macroeconomic forecasting, and problems in economic measurement. He has been quoted by the Wall Street Journal, the Financial Times, Barron’s, Bloomberg, Reuters, CNBC, Grant’s Interest Rate Observer, NPR, and in numerous other media outlets and publications.
