Executives and others are increasingly using data when assessing business policies, comparing marketing strategies, and making other decisions. In particular, they use machine learning to analyze data—and the results to make decisions.
But how much can an executive trust a recommendation generated by machine learning? Recognizing that such recommendations involve uncertainty that could lead to expensive mistakes, Chicago Booth’s Max Farrell, Tengyuan Liang, and Sanjog Misra have sought to quantify that uncertainty so that decision makers can take it into account.
In the past few years, machine-learning methods have come to dominate data analysis in academia and industry. One type of learning in particular—deep learning, in which computers learn through iterations to recognize important features—has become a mainstay of modern business practice. It underpins many applications, from digital image recognition to language processing and virtual assistants such as Apple’s Siri and Amazon’s Alexa.
Many people treat deep-learning models as though they learn unassailable truths, but Farrell, Liang, and Misra point out that those supposed truths are actually uncertain. A deep-learning model might produce the right answer to a question, or it might not. If a business is using such a model to guide decision-making, and some potentially large investments, that’s an important distinction. It is therefore crucial to understand how close deep-learning models can come to finding the truth, and how quickly they can do so.
The researchers studied the effectiveness of—and uncertainty involved in—deep-learning models theoretically and conceptually. For a simple prediction task, it’s easy enough to evaluate a model’s performance: when training an algorithm to separate pictures of cats and dogs, people can simply look at the pictures and check whether the model got them right. It’s trickier to determine how a model performs at predicting how consumers react to advertisements or discounts. A shopper may see an ad and make a purchase, but it’s hard to say the ad caused the purchase. Who knows what the shopper was thinking? In these cases, neither the model’s accuracy nor the amount of data required for a trustworthy result can be known for sure. The researchers derived explicit bounds for the uncertainty, answering the question of how close deep-learning methods can get to the best-possible model given the data at hand.
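The idea that a model’s estimate comes with quantifiable uncertainty can be made concrete with a small sketch. The code below is not the researchers’ method (they derive theoretical bounds); it is a generic bootstrap illustration, using hypothetical spending data and a deliberately simple “model” (a difference in average spend), to show how refitting on resampled data produces an interval around an estimate rather than a single unqualified number.

```python
# A minimal sketch of uncertainty quantification via the bootstrap.
# All data here are HYPOTHETICAL; the "model" is a simple difference
# in means, standing in for a far more complex deep-learning estimate.
import random
import statistics

random.seed(0)

# Hypothetical spend by customers who did / did not see an ad.
saw_ad = [random.gauss(120, 40) for _ in range(500)]
no_ad = [random.gauss(110, 40) for _ in range(500)]

def estimate(treated, control):
    """The 'model': estimated lift in average spend from seeing the ad."""
    return statistics.mean(treated) - statistics.mean(control)

# Refit the model on many resampled datasets; the spread of the
# resulting estimates is a measure of the estimate's uncertainty.
boot = []
for _ in range(1000):
    t = random.choices(saw_ad, k=len(saw_ad))
    c = random.choices(no_ad, k=len(no_ad))
    boot.append(estimate(t, c))

boot.sort()
lo, hi = boot[25], boot[974]  # central 95% of bootstrap estimates
print(f"point estimate: {estimate(saw_ad, no_ad):.1f}")
print(f"95% bootstrap interval: ({lo:.1f}, {hi:.1f})")
```

The width of the interval, not just the point estimate, is what a decision maker would weigh before committing money to the ad.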
Farrell, Liang, and Misra illustrated their findings empirically by describing an experiment conducted with a large US consumer products company. The company, which the researchers don’t name, sells its products directly to consumers and sends out catalogs to boost sales. The experiment involved data on about 300,000 customers, two-thirds of whom received a catalog.
The researchers compared the results (6 percent of people made a purchase within three months of the catalog campaign, spending an average of $118) with those of eight deep-learning predictive models, evaluating the success of each. They computed the level of uncertainty resulting from each model, showing how this uncertainty would factor into decision-making. Their results suggest that deep learning can perform very well when properly used.
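To see how uncertainty enters a decision like this one, consider a stripped-down version of the catalog test. The sketch below uses HYPOTHETICAL purchase rates and group sizes (only loosely shaped like the experiment described above, and not the researchers’ actual numbers or models) to compute a standard confidence interval for the campaign’s lift in purchase rate.

```python
# Hedged sketch: a 95% confidence interval for the lift in purchase
# rate from a catalog campaign. Counts and rates are HYPOTHETICAL.
import math

def diff_in_rates_ci(p_treat, n_treat, p_ctrl, n_ctrl, z=1.96):
    """Normal-approximation CI for a difference in two proportions."""
    diff = p_treat - p_ctrl
    se = math.sqrt(p_treat * (1 - p_treat) / n_treat
                   + p_ctrl * (1 - p_ctrl) / n_ctrl)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical: 200,000 catalog recipients, 100,000 holdouts,
# purchasing at 6.5% and 4.8% respectively.
diff, (lo, hi) = diff_in_rates_ci(0.065, 200_000, 0.048, 100_000)
print(f"estimated lift: {diff:.4f}")
print(f"95% CI: ({lo:.4f}, {hi:.4f})")
```

If the interval excludes zero, the measured lift is unlikely to be noise; if it is wide enough to include zero, the campaign’s payoff is genuinely in doubt, and that is exactly the kind of information a decision maker needs alongside the model’s prediction.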
As people and businesses increasingly rely on data to guide decisions, these findings about deep learning can have broad applications. For example, say a doctor turns to similar data analysis when trying to assess whether to treat a patient with a particular drug. Knowing the level of uncertainty in the analysis could help the doctor make a potentially life-or-death call. “The end goal,” says Misra, “is making robust decisions.”