Last year, you published a paper documenting how an algorithm used by health-care organizations generated racially biased results. What takeaways did that offer in terms of how algorithmic bias differs from human bias?

That paper might be, by some measures, among the strangest papers I’ve ever worked on. It’s a reminder of the sheer scale that algorithms can reach.

Exact numbers are hard to get, but about 80 million Americans are evaluated through this algorithm. And it’s not for some inconsequential thing: it is an algorithm used by many health-care systems to decide which patients should get put into what are called care-management programs. Care-management programs are for people who are going to be at the hospital a lot. If you have many conditions, you’re going to be in the system frequently, so you shouldn’t have to go through the normal front door, and maybe you should have a concierge who works just with you. You get additional resources to manage this complex care.

It costs a lot of money to put somebody in a care-management program. You really want to target these programs. So the question is, who should be in them?

Over the past five years, there have been algorithms developed using people’s health records to figure out who is at highest risk of using health care a lot. These algorithms produce a risk score, and my coresearchers and I wanted to know if there was any racial bias in these scores.

The way we looked for it was to take two people given the same score by the algorithm—one white and one Black. Then we looked at those two people and asked whether, on average, the white person had the same level of sickness as the Black person. What we found is that they didn’t: when the algorithm gives two people the same score, the white person tends to be much healthier than the Black person. And I mean much healthier, extremely so. If you said, “How many white people would I have to remove from the program, and how many Black people would I have to put in, until their sickness levels were roughly equalized?” you would have to double the number of Black patients. It is an enormous gap.
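To make the audit concrete, here is a minimal sketch in Python on entirely synthetic data, not the study’s records; the column names, the tenfold score binning, and the built-in spending gap are illustrative assumptions, not the paper’s actual variables or method.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Purely synthetic illustration of the audit. We build in the pattern the
# interview describes: at the same level of sickness, less is spent on Black
# patients, and the deployed risk score tracks spending.
race = rng.choice(["Black", "White"], size=n)
chronic_conditions = rng.poisson(3.0, size=n)                          # stand-in for sickness
spending = 1_000.0 * chronic_conditions * np.where(race == "Black", 0.8, 1.0) \
           + rng.normal(0, 300, size=n)
risk_score = spending + rng.normal(0, 300, size=n)                     # stand-in for the deployed score

patients = pd.DataFrame({
    "race": race,
    "chronic_conditions": chronic_conditions,
    "spending": spending,
    "risk_score": risk_score,
})

# The audit: bin patients by the score they were given, then compare the average
# sickness of Black and white patients within the same score bin.
patients["score_bin"] = pd.qcut(patients["risk_score"], q=10, labels=False)
sickness_audit = (patients
                  .groupby(["score_bin", "race"])["chronic_conditions"]
                  .mean()
                  .unstack("race"))
print(sickness_audit)   # within each score bin, the Black column is higher
```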

I say it’s one of the craziest projects I’ve worked on in part because of the sheer scale of this thing. But there are a lot of social injustices that happen at a large scale. What made it really weird was when we said, “Let’s figure out what’s causing it.” In the literature on algorithmic bias, everyone acts like algorithms are people, like they’re biased [in the sense that people are]. But an algorithm is just a little piece of code. What went wrong in the code?

What we found is something that we’re finding again and again in all of our A.I. work: every time you see that an algorithm has done something really bad, there’s no engineering error. That’s very, very different from the traditional bugs in code that you’re used to: when your computer crashes, some engineering bug has shown up. I’ve never seen an engineering bug in A.I. The bug is in what people asked the algorithm to do. They just made a mistake in how they asked the question.

In this case, we said, “OK, look, it’s way off. How do we figure out what it’s doing wrong? Well, let’s figure out what people wanted it to optimize.” They wanted to find the sick people, but how did they measure sickness? They measured it using the data they had: claims.

So sickness was measured by how many dollars patients generated, which is very subtly different. Sickness doesn’t equal dollars. They’re highly correlated, but they’re not exactly the same thing. And it turns out that if you look at total dollars spent, you don’t actually see any racial bias in the algorithm. At the same risk score, the Black patients chosen and the white patients chosen have the same average dollars spent.

Again, costs are highly correlated with health, but not across races. At the same level of health, we spend less on African Americans. So when the algorithm went to predict cost, it obviously did not find the sickest African Americans to be as appealing as the sickest white patients.
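Continuing the synthetic sketch above (and reusing its `patients` frame), the same audit run on dollars rather than sickness shows roughly no gap, which is how a score optimized for the wrong label can look unbiased on its own terms:

```python
# Audit the quantity the algorithm was actually optimized for (dollars)
# rather than the outcome the health systems wanted (health).
spending_audit = (patients
                  .groupby(["score_bin", "race"])["spending"]
                  .mean()
                  .unstack("race"))
print(spending_audit)   # roughly equal columns: judged on dollars, the score looks unbiased
```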

I should note, this is not a dumb thing to do. There were about five or six such algorithms built, and they all had this bug. Some were built by private companies, some were built by nonprofits, some were built by academics, but this bug was pernicious and it was everywhere because of the product-management side of it.

The way these algorithms are built is that a bunch of data scientists go in and tell these health systems, “We can build a risk score. What is the thing that you want risk on?” The health systems say, “Well, we want to find the sickest patients,” and they provide what data they have.

The health systems don’t realize the mistake. The data scientists don’t know much about the health-care domain. So between the people who know a lot about the context and the people who know a lot about the coding, something falls through the cracks. The translation from the domain to the coding is where we see problem after problem after problem.

The fact that computer code scales, however, also means that solutions scale—this is the great thing about it. Once we realized this was the problem, we built an algorithm trained to predict health. And now that is being scaled: by the end of this year, we’ll have this thing fixed for 50 million people. We’ll probably have the whole problem fixed by next year.

I’ve never worked on any social science like this, where you find such inequality, some problem at this scale, and suddenly you can then fix it. This project tells us what should frighten us about algorithms, but also what should give us enormous hope for them.

How can regulation help address problems of algorithmic bias?

We should separate out two kinds of bugs. The bug that I just described was bad for the business and bad for society. So there, instead of a regulator, there’s probably just a pretty good arbitrage opportunity for people with the human capital to find these problems and properly formulate solutions. For the bugs that are privately bad, there’s a huge moneymaking opportunity or career-making opportunity.

Let’s come to a different kind of bug, a bug that is privately not that bad or maybe even slightly good, but socially very bad. Should there be a regulator that will look at these algorithms and audit them? I think the answer is yes.

Think about the case of employment. The US Equal Employment Opportunity Commission is almost never able to get lawsuits through the door because it’s very hard to prove that one person discriminated. Even when we have statistical data that says the whole system is discriminatory, producing evidence that one action by one person was discriminatory is difficult to do. They’ll just say, “Sure, I didn’t hire that person, but that’s because there was this other person who was better.” But who’s to say who’s better? It’s a very complicated thing.

The advantage that regulators have for regulating an algorithm is that unlike human beings, algorithms are remarkably auditable. Notice even in the paper I described, we just took the algorithm and said, “Great, show me what you would do for this group of patients, show me what you would do for that group of patients, and we’ll compare.” I can’t go to a human-resources manager and say, “I’m going to show you a million résumés, and I’m going to see what you did with them.” Algorithms are beautifully auditable.

One of the changes we’ll see in the next decade is, hopefully, smart regulation of all algorithms. What I mean by smart is not getting into the nitty-gritty of how they should be designed, etc., but putting in safeguards saying there are some properties they should have. If we care, as we should, about there not being racial inequities, that’s a property that any regulator can check for in any employment algorithm or any other algorithm. It’s very checkable.

What would a regulator need in order to check for bias in an algorithm?

Let’s take a concrete category of algorithms that is getting more common: algorithms that help employers screen whom to hire. These algorithms take résumés and rank them. And you’re worried: could these algorithms have some sort of bias?

A regulator needs two things. The simplest thing she needs is access to the code of that algorithm. And what she’s going to do is say, “I’m going to run a million résumés through this algorithm. I’m going to do things like take the same résumé and change it from a male to a female name.” She’s just going to run a bunch of what you might call in coding “unit tests,” but in this case, tests for whether there are differences on dimensions that you don’t want. That’s the first thing that a regulator is going to check.
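A hedged sketch of what such a “unit test” might look like in Python. Here `score_resume` is a hypothetical stand-in for a vendor’s scoring function and the name pairs are invented examples; nothing below is a real regulator’s tooling.

```python
from copy import deepcopy

def name_swap_test(score_resume, resumes, name_pairs):
    """Average score change when a name is swapped for its counterpart on an otherwise identical resume."""
    gaps = []
    for resume in resumes:                      # each resume is a dict of fields, e.g. {"name": ..., "education": ...}
        for name_a, name_b in name_pairs:
            if resume["name"] != name_a:
                continue
            counterfactual = deepcopy(resume)
            counterfactual["name"] = name_b     # identical resume, different name
            gaps.append(score_resume(resume) - score_resume(counterfactual))
    return sum(gaps) / len(gaps) if gaps else 0.0

# Invented example pairs that differ only in what the name signals.
name_pairs = [("Gregory Miller", "Gretchen Miller"), ("Brendan Walsh", "Brenda Walsh")]
# Usage (hypothetical): gap = name_swap_test(vendor_model_score, million_resumes, name_pairs)
# A compliant model should show an average gap near zero across a large batch.
```

The point is that the auditor only needs to call the model on counterfactual inputs; she never needs to read or understand its internals.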

The second thing a regulator will want to check is, “What was the training data used to build this algorithm? I would like that, and I would like to know if you built an algorithm that has some disparate impact for the groups I care about.”

Suppose that someone built a shoddy hiring algorithm, and all it looked at was the college you went to. It doesn’t look at whether a name is ethnic or not, so the designer may say, “Look, my algorithm is not discriminatory.” But the regulator needs to be able to look at the designer’s data and say, “No, you only looked at this. We went through your data. Here’s a better algorithm.” And that better algorithm is no longer using just this one proxy. Why does that matter? Because if I really want to discriminate against disadvantaged groups, I would use proxies like, “Did you go to a good school?”
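One way a regulator might run that comparison, sketched on synthetic data with hypothetical variable names (`school_rank` as the proxy, `skill` as the real signal); this is an illustration of the idea, not a prescribed audit procedure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000

# Synthetic training data of the kind a regulator might demand. "protected"
# marks a disadvantaged group, "skill" is what actually predicts performance,
# and "school_rank" is a proxy that is depressed for the protected group.
protected = rng.integers(0, 2, size=n)
skill = rng.normal(0, 1, size=n)
school_rank = skill + rng.normal(0, 1, size=n) - 0.8 * protected
performed_well = (skill + rng.normal(0, 1, size=n) > 0).astype(int)

def selection_rates(model, X, top_frac=0.2):
    """Share of each group hired when the top `top_frac` of scores is selected."""
    scores = model.predict_proba(X)[:, 1]
    selected = scores >= np.quantile(scores, 1 - top_frac)
    return selected[protected == 1].mean(), selected[protected == 0].mean()

# The "shoddy" algorithm: leans entirely on the school proxy.
X_proxy = school_rank.reshape(-1, 1)
proxy_model = LogisticRegression().fit(X_proxy, performed_well)

# A better algorithm built from the fuller data the designer already had.
X_full = np.column_stack([school_rank, skill])
full_model = LogisticRegression().fit(X_full, performed_well)

print("proxy-only model, selection rate (protected, other):", selection_rates(proxy_model, X_proxy))
print("fuller model,     selection rate (protected, other):", selection_rates(full_model, X_full))
# The proxy-only model hires the protected group at a visibly lower rate even
# though it predicts performance no better; that is the gap a regulator can point to.
```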

It’s actually shockingly easy to check for discrimination as long as the law says these things need to be stored. That would be consistent with other regulatory spaces. Take finance: you have to keep information for a long time. Auditors have to be able to come in. The Internal Revenue Service has to be able to come in.

What can you envision going wrong with algorithmic regulation?

The biggest danger here is regulatory capture. We have sectors where we’ve avoided it, but this is a sector very prone to regulatory capture. There are committed interests with dollars—the producers of algorithms—who will have the incentives to capture these regulatory agencies. Who’s on the other side? It needs to be us consumers, but what are we going to do? Transparency for algorithms is not going to bring many people out to march.

So that would be my biggest fear: this is a place where the harm that algorithms can produce is diffuse and large, but the gains from a bad algorithm are concentrated in a few actors. That sets up a bad regulatory situation.

The other place where there’s some real danger is something financial regulators in some countries have found a way to work around, which is that you want to prevent the overregulation of innovative financial products. Complicated hedges and other complex products can create large systemic risks that are hidden. But frankly, the knowledge of finance is really good on the money side, so there are a lot of people innovating things. You want them to innovate because most of it is good, but the regulators need to somehow keep out the bad ones. And how do you keep out the bad ones without keeping out the good ones?

This gets to why I was saying if regulators find themselves getting into the nitty-gritty, something has gone wrong. If, on the other hand, they have simple, transparent tests, it wouldn’t be very different from the US Food and Drug Administration. It wouldn’t bother you if the FDA commissioner and his staff didn’t know that much about biologics. The only thing they need to know about is how a drug trial is run and how we make sure it’s not run badly. They need to be expert at that activity.

We need a transparent way to test these algorithms, hopefully at much lower cost than a drug trial, but where we all agree if they’re not passing this test, that’s not good innovation at all. Unfortunately, when you see the regulatory proposals put forward by lawmakers, they’re vague statements about hope. They say, “We want to outlaw algorithms that discriminate.” What does that even mean? Once you’ve done regulation like that, who gets to define what that means in practice? The vested interest gets to define that.

In your research, you have found that there is a trade-off between how interpretable an algorithm is to people and how fair its output is. Is that a concern for regulating them?

A lot of people are familiar with the idea that algorithms are “uninterpretable.” That word is overloaded. People use it to mean at least two things that have nothing to do with each other.

One is: as a human being, how well can I understand what the algorithm is accomplishing? That’s a question as much about my cognitive limitations as it is about the algorithm. For example, it’d be easy to write down a mathematical formula that’s transparent but that almost everyone would stare at and say, “What is this thing doing?” But that’s on us. The formula is completely transparent. It’s all there. You have everything you need. So this first definition of interpretability is common, but it is entirely a statement about human cognitive limitations.

Now let’s get to the second kind of interpretability problem: inscrutability, where we can’t even audit the algorithm. It would be a bad algorithm if that were the case. Every algorithm I know of is interpretable in the sense that you can run cases through it and see exactly what happens. And you can do that again and again and again. Algorithms are entirely predictable. They’re consistent. You can work with them. It’s hard for us to understand because with human beings, these two go hand in hand: someone who’s understandable to us is also predictable to us.

For auditing purposes, you don’t need the auditor as a person to understand the algorithm. You want the auditor to be able to run a bunch of things to test it. And that level of technical auditability is something all algorithms share.

Things go wrong when we ask algorithms to be interpretable to us. You can see naturally what’s going to happen: they really start taking on our biases, because that’s the way they’re going to be interpretable to us. Simplicity is easy for the human mind to understand, but simplicity inherently leads to inequity.


Sendhil Mullainathan is the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth. This transcript is an edited excerpt. The original conversation took place June 10 as part of the Thought Leadership on Crises event series hosted by Chicago Booth’s Executive MBA Program.
