Why the power of TV advertising has been overstated

Brian Wallheimer | Jun 18, 2019


Television advertising may be considerably less effective than published studies suggest, according to Chicago Booth’s Bradley Shapiro and Günter J. Hitsch and Northwestern’s Anna E. Tuchman. Their findings may disappoint marketing executives responsible for spending billions of dollars a year on TV ads in the United States.

Drawing on the Nielsen Datasets at the Kilts Center for Marketing, the researchers matched advertising data with sales data for the top 500 products sold at more than 12,000 stores from 2010 to 2014. They find that television advertising's effect on sales over the 52 weeks following the ads was about one-fifth the size implied by considering only statistically significant positive results—the only type of result typically included in published studies of advertising efficacy. The research suggests that a typical brand in the sample, were it to double its television advertising, should expect about a 1 percent increase in sales, compared with the 5 percent it might expect if it analyzed only successful campaigns.

“Even though advertising may have a small effect, it may still be profitable when you consider each company’s particular circumstances,” Shapiro says. “But television advertising is considerably less effective than the established research would have you believe.” 

Shapiro, Hitsch, and Tuchman argue that while most brands in their sample would benefit from advertising less, many are still better off with their current advertising expenditure than if they weren’t advertising at all. However, for almost all brands, shutting off some advertising would increase profit.

So why has the established literature found larger average ad effects than Shapiro, Hitsch, and Tuchman find? The researchers suggest publication bias may be a big part of the answer. Academic journals are interested in studies that show statistically significant results, specifically increases in sales or returns on investment related to advertising. So when researchers find little or no effect, it’s likely that journal reviewers will reject their studies. “In particular, editors or reviewers may reject advertising-effect estimates that are not statistically significant or judged as small or ‘implausible,’ i.e. negative. Thus, false positives get published while true negatives get discarded,” the researchers write. 

And that’s if the researchers even try to publish in the first place. The field has a documented “file drawer problem,” Shapiro, Hitsch, and Tuchman say, wherein research sometimes goes unfinished or unsubmitted to journals because the researchers recognize the findings are unlikely to be published. Including all the data, they argue, reveals that TV ads have a much smaller effect on sales than assumed. Indeed, when the researchers restricted their estimates to only brands that saw a positive and significant effect of advertising on sales, the average advertising effectiveness was much closer to the estimates in the established literature.
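The selection mechanism the researchers describe can be illustrated with a small simulation. The sketch below is not from the study; the true effect, noise level, and significance cutoff are hypothetical numbers chosen only to show how filtering on statistically significant positive estimates inflates the average reported effect.

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 0.01   # hypothetical true ad effect for every brand (1% lift)
NOISE_SD = 0.02      # hypothetical standard error of each brand's estimate
N_BRANDS = 10_000

# One noisy effect estimate per brand, centered on the small true effect.
estimates = [random.gauss(TRUE_EFFECT, NOISE_SD) for _ in range(N_BRANDS)]

# "Publication filter": keep only estimates that are positive and would
# look statistically significant (roughly, estimate > 1.96 standard errors).
published = [e for e in estimates if e > 1.96 * NOISE_SD]

print(f"Mean of all estimates:      {statistics.mean(estimates):.4f}")
print(f"Mean of 'published' subset: {statistics.mean(published):.4f}")
```

With these illustrative parameters, averaging all estimates recovers the small true effect, while averaging only the "publishable" subset yields a figure several times larger — the same qualitative gap the researchers find between the full sample and the established literature.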

The researchers caution that all products are different, and ad campaigns can yield different results. They suggest their findings can provide a baseline that companies might use to determine whether television advertising will be worth the cost. Anyone starting from a baseline that omits small or negative results is making decisions based on bad information, Shapiro says.