Which is the best car for your family, a Toyota Camry or a Honda Accord? How likely is it that a particular stock will increase in price, and should you invest? When making such choices, people often consult multiple experts for advice. But research by Johns Hopkins’s Robert Mislavsky and Chicago Booth’s Celia Gaertig suggests that the way expert forecasts are presented to us can change how we combine them.
The research explores two ways people combine probability forecasts. If the forecasts are numeric, people tend to average the probabilities. For example, if two experts say that the likelihood of a stock price increasing is 60 percent, you’ll average this advice and also assume the probability is about 60 percent.
However, if the experts express the probabilities in words, people tend to “count” the predictions, leading them to feel that the likelihood is more certain than either expert said. For example, if two experts say it’s “likely” a stock price will increase, you may add up the advice and wind up believing it’s “very likely.”
Mislavsky and Gaertig examined this effect with more than 7,000 participants over eight studies. In one study, they asked participants to predict how likely it was that a stock’s price would be higher one year later. Before making their own forecasts, participants saw forecasts from two (fictional) financial advisers, both of whom said it was “rather likely” that the stock price would increase. In this case, “rather likely” equated to a 7 on the 10-point scale participants used to make their estimate, with the scale ranging from 1 (“nearly impossible”) to 10 (“nearly certain”).
More than 29 percent of people who heard from the two advisers that a price rise was “rather likely” chose an 8, 9, or 10 for their own forecast. The researchers call this an “extreme forecast” because these participants took two forecasts that each equated to a 7 on the scale and concluded that the actual probability must be higher. In contrast, almost 90 percent of participants who saw numeric forecasts did not make extreme forecasts, instead averaging the probabilities.
These results held in another study, in which participants predicted the outcome of one of 10 upcoming Major League Baseball games. More people who saw multiple verbal forecasts made extreme predictions than did people who saw multiple numeric forecasts. The effect also held in studies in which participants saw forecasts sequentially rather than simultaneously, in which experts predicted negative outcomes, in which participants saw more than two expert forecasts, and in which participants were incentivized to guess accurately (because they bet money on who would win a football game).
Although many participants counted verbal probabilities and therefore made a more confident estimate than the advisers did, it’s unlikely that participants believed an additional verbal forecast gave them more new information. The researchers tested several other possible explanations for the effect but did not find strong evidence supporting any one reason.
Regardless of why people combine verbal versus numeric probabilities in different ways, anyone gathering or providing advice from multiple sources would do well to remember that they do.