Good decisions require good experiments

Hydroxychloroquine provides a case study

Oleg Urminsky | Apr 09, 2020

It’s a common enough type of scene: the CEO of a major company announces the decision to leverage a new promising technology, backed by scientific evidence. The tech visionary who heads up the company’s strategic partner explains how big data methods will provide new insights as the technology is rolled out. The company’s chief engineer uncomfortably tries to lower expectations, pointing out that the technology is not fully proven yet, while also attempting to not contradict the CEO. But the press, zeroing in on the fact that the public is more interested in solutions than in caveats and caution, breathlessly promotes the new advance. Providers around the world react to the demand, offering this unproven new product or service to their customers, causing shortages and another round of press coverage. A highly successful rollout—but will customers benefit?

We have seen exactly this scenario take place in the past few weeks, but this time on a national stage and in a matter of life and death: treatment of the coronavirus-induced COVID-19 illness. A team of researchers in France, led by Philippe Gautret of the Mediterranean University Hospital Institute for Infectious Diseases, published a hastily assembled research paper on March 17, in which they suggest that the antimalarial drug hydroxychloroquine is effective in treating coronavirus, particularly when used in conjunction with another drug, azithromycin. US president Donald Trump, in press conferences and on Twitter, touted this breakthrough as a “game changer.” Oracle chairman Larry Ellison announced an initiative to collect data on the drugs’ efficacy faster than a US Food and Drug Administration clinical trial would.

However, Anthony S. Fauci, director of the National Institute of Allergy and Infectious Diseases, in a joint press conference with President Trump, responded to a reporter’s question about using the treatment as a preventative by referring to the evidence as “anecdotal” and stating, “It was not done in a controlled clinical trial, so you really can’t make any definitive statement about it.” The next day, Fauci elaborated, citing the need for randomized controlled trials to establish safety and efficacy. “Many of the things that you hear out there are what I had called anecdotal reports. They may be true, but they are anecdotal,” Fauci said. “So the only thing that I was saying is that if you really want definitively to know if something works, that you’ve got to do the kind of trial that you get the good information.”

Why was Fauci skeptical, insisting that the evidence was only anecdotal? It turns out you don’t need to be an expert in infectious disease, or even a medical doctor, to see what Fauci saw. With a working understanding of what makes experiments informative, a quick read of that initial French study is all you need to identify multiple reasons to pause before drawing a conclusion.

First of all, the foundation of what makes experiments informative is randomization: comparing people randomly assigned to receive a treatment with an otherwise equivalent control group randomly assigned not to receive it. But the Gautret study was not randomized. Instead, it compared 20 COVID-19 patients at one hospital who received the hydroxychloroquine treatment with a “control” group of 16 patients, some at that same hospital who did not qualify for or refused the treatment, and others at an entirely different hospital. Since the treatment was not randomly assigned, there is no guarantee that differences in outcomes were caused by the treatment rather than by preexisting differences between the patients that had nothing to do with the treatment.
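A back-of-the-envelope simulation can show why this matters. The sketch below uses entirely hypothetical numbers (the severity model, group sizes, and clearance probabilities are invented for illustration, not taken from the study): when assignment tracks how sick patients are rather than a coin flip, a drug that does nothing can appear to work.

```python
import random

def clearance_rate(group):
    # Chance of testing virus-free by day six falls with baseline severity.
    return sum(1 for severity in group if random.random() > severity) / len(group)

def simulate(randomized, n=200):
    """Toy model with made-up numbers (not data from the study): each
    patient has a baseline severity in [0, 1); sicker patients clear the
    virus more slowly; the drug itself does nothing in this model."""
    patients = [random.random() for _ in range(n)]
    if randomized:
        random.shuffle(patients)  # assignment unrelated to severity
    else:
        # Non-random assignment: healthier patients end up in the treated
        # arm, sicker ones in the "control" group (as when one hospital's
        # patients get the drug and another's serve as the comparison).
        patients.sort()
    treated, control = patients[:n // 2], patients[n // 2:]
    return clearance_rate(treated), clearance_rate(control)

random.seed(42)
print(simulate(randomized=False))  # inert drug appears to work
print(simulate(randomized=True))   # comparable groups show no effect
```

Randomization works precisely because it makes severity (and every other patient characteristic, measured or not) unrelated to which arm a patient lands in, so any remaining difference in outcomes can be attributed to the treatment.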

Second, an experiment needs to take all relevant outcomes into account in order to identify the treatment’s net effect and support a decision about whether to use it. The Gautret study reported results from six days of testing for the presence of the virus but did not report length of hospital stay or mortality, which are the policy-relevant outcomes.

Third, a valid experiment needs to account for all of its data, including the possibility that missing data distort the result. The Gautret study initially included 42 patients, but six hydroxychloroquine-treated patients were dropped for not completing the six-day trial, including one who died on day three of the study and three who were transferred to intensive care. This is a classic example of a mortality confound, also known as attrition, which invalidates an experiment: after all, a treatment that appears effective only when you ignore the patients who got worse is not much of a treatment.
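The arithmetic of attrition can be illustrated with a hypothetical example. The counts below are invented; only the structure (26 patients enrolled in the treated arm, 6 exiting early) echoes the study. Comparing a rate computed on completers alone with one that counts early exits as failures shows how dropping the sickest patients flatters a treatment.

```python
def success_rate(outcomes, drop_missing):
    """Fraction of an arm counted as successes. True = virus cleared by
    day six; False = not cleared; None = left the trial early (death,
    ICU transfer, etc.)."""
    if drop_missing:
        # Completers-only accounting: early exits simply vanish.
        completed = [o for o in outcomes if o is not None]
        return sum(completed) / len(completed)
    # Conservative accounting: count early exits as non-successes.
    return sum(o for o in outcomes if o) / len(outcomes)

# Hypothetical treated arm: 26 enrolled, 14 clear the virus, 6 do not,
# and 6 exit the trial early. (Illustrative numbers, not study data.)
treated = [True] * 14 + [False] * 6 + [None] * 6

print(success_rate(treated, drop_missing=True))   # 0.7
print(success_rate(treated, drop_missing=False))  # ~0.538
```

The patients most likely to go missing are exactly those doing worst, so the completers-only rate is biased upward; this is why clinical trials report intention-to-treat analyses that keep every enrolled patient in the denominator.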

An understanding of the simple properties that make experiments informative reveals how uninformative the Gautret study was and illustrates why Fauci insisted that a controlled clinical trial was needed before making any decisions. Of course, these limitations of the Gautret study don’t mean that hydroxychloroquine is not an effective treatment for COVID-19, only that we can’t know from the study. But given the drug’s potentially serious side effects, and the fact that patients with other conditions depend on it as a proven treatment, promoting it for COVID-19 on the basis of anecdotal evidence may do real harm. An inability to assess evidence relative to the standards of a scientific experiment, whether in medicine or business, can result in unfounded decisions with serious and perhaps even deadly consequences.

Oleg Urminsky is professor of marketing at Chicago Booth.