In May 2007, the US Food and Drug Administration issued a safety warning for rosiglitazone after studies linked the approved diabetes drug to an increased risk of heart problems. Use of the drug plummeted 78 percent in 15 months; annual sales dropped from $3 billion to $183 million.
In 2013, following additional studies of the drug’s safety, the FDA reversed course and removed restrictions on rosiglitazone. But it was too late to undo the damage caused by its initial warning—sales never recovered, and patients had to resort to taking potentially less suitable medications.
The FDA could have prevented this six-year roller-coaster ride if it had taken a more robust, data-driven approach to its postmarket drug surveillance process, suggests research by Southern Methodist University’s Vishal Ahuja, Texas Tech’s Carlos Alvarez, and Chicago Booth’s John R. Birge and Chad Syverson.
Using rosiglitazone as a retrospective case study, the researchers propose a new empirical method for monitoring and evaluating the safety of drugs already on the market. Their approach uses large, relevant, and reliable longitudinal databases and established econometric methods to assess the relationships between approved drugs and potentially related adverse health events.
This evaluation method could help prevent unwarranted drug recalls and warnings, which impose financial costs on drugmakers, sow confusion among doctors, and potentially harm patients’ health, the researchers argue.
“We should be doing these sorts of independent verifications when there’s a potential adverse event because a lot of people were taken off the drug and were possibly hurt because they weren’t being treated when they could’ve been,” Birge says.
To demonstrate their approach, the researchers analyzed Veterans Health Administration data from more than 320,000 diabetes patients over eight years. Comparing the health outcomes of rosiglitazone users and nonusers, they found no link between the drug and increased risk of heart conditions. “We find that, if anything, rosiglitazone is associated with lower coronary events than not having any treatment at all,” says Birge.
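The paper’s exact econometric specification isn’t reproduced here, but the core comparison—event rates among drug users versus nonusers—can be illustrated with a simple risk-ratio calculation. The counts below are hypothetical, not the study’s actual data:

```python
import math

def risk_ratio_ci(events_a, n_a, events_b, n_b, z=1.96):
    """Risk ratio of group A vs. group B, with an approximate 95%
    confidence interval on the log scale (standard epidemiological formula)."""
    rate_a, rate_b = events_a / n_a, events_b / n_b
    rr = rate_a / rate_b
    # Standard error of log(RR)
    se = math.sqrt((1 - rate_a) / events_a + (1 - rate_b) / events_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical cohort: 40 coronary events among 10,000 drug users,
# 45 among 10,000 untreated patients.
rr, lo, hi = risk_ratio_ci(40, 10_000, 45, 10_000)
print(f"RR = {rr:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

If the confidence interval contains 1, the data show no detectable excess risk among users—the kind of result the researchers report for rosiglitazone. The actual study controls for patient characteristics over time, which a raw two-group comparison like this does not.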
To further demonstrate the generalizability of the approach, they also retroactively assessed two additional FDA postmarket drug warnings—one for statins and another for atenolol—and determined that those warnings were warranted.
The researchers say their proposed method improves upon the FDA’s postmarket drug review process, which they argue is subject to bias, relies on incomplete and underreported information, and mistakes correlation for causation. “The shortcomings of existing approaches can produce high error rates, resulting in an untimely, or, worse, incorrect regulatory decision with serious ramifications for patients, providers, and firms,” they write.
The FDA’s recent pause and restart of the Johnson & Johnson COVID-19 vaccine following six reports of blood clots out of nearly 7 million administered doses exemplifies some of those ramifications, though Birge notes that this is an imperfect example because the FDA has so far approved the vaccine for emergency use only.
Even so, the situation illustrates the impact of the FDA’s decisions on public opinion. As the country strives for herd immunity, a Washington Post-ABC poll finds that less than 25 percent of Americans who have not yet been vaccinated would be willing to get the Johnson & Johnson shot.
The new approach could improve the FDA’s response to spontaneously reported adverse events while also enabling proactive monitoring of approved drugs, by studying their relationships with a wide swath of harmful side effects, the researchers write.
“We know that these types of adverse events are going to be observed randomly, but what we’d like to do is ensure that we’re not being too aggressive in terms of removing the drug from the market when that drug could actually be helping people,” Birge says.