From the introduction of the Michelin Man in the 19th century to today’s GEICO gecko, marketing campaigns have long sought to capitalize on the human tendency to attribute human traits to nonhuman things. The subject has attracted researchers in recent years as it grows more complicated, with technologies such as Apple’s Siri, Google’s self-driving cars, and Amazon’s Alexa that are both anthropomorphized and interactive. Now we’re invited not simply to buy products from talking objects but to have conversations with them as well.
How we respond to this artificially humanized technology, and whether we trust it, depends at least partly on our financial status, according to Chicago Booth’s Ann L. McGill and Booth PhD candidate Hye-Young Kim. Their findings have important implications beyond marketing, for health care, transportation, securities, and countless other fields that increasingly deploy interactive technologies.
In one of a series of experiments, Kim and McGill manipulated participants’ feelings of wealth by asking them to imagine they’d won money in the lottery and to think about what they’d spend it on. The researchers then magnified participants’ feelings of wealth or deprivation by immediately asking them to report their actual financial status on a specially designed scale, which guided people to feel richer or poorer than they might otherwise report.
After that, participants were asked to imagine being a passenger in one of two versions of a driverless car. One was anthropomorphized and referred to itself in the first person, as in, “Hi, I’m Jasper!” The other vehicle wasn’t given human qualities. The participants had to predict how the cars would respond in a moral dilemma similar to the famous trolley problem: faced with hitting 10 people in its path or veering to avoid them but killing the passenger, what would the autonomous car do?
Those who’d been manipulated to feel wealthy more frequently predicted that the anthropomorphized car would act to spare them, the passengers, while participants manipulated to feel deprived said the same car would be more likely to sacrifice them for the lives of the 10 pedestrians.
Follow-up experiments demonstrated similar trends. Participants who felt wealthier evaluated anthropomorphized products more positively and generally believed that these humanized entities would be more likely to act in the participants’ own interests.
The researchers observed the same phenomenon when using participants’ actual financial situations, without manipulating anyone’s sense of wealth or deprivation. This suggests that a link between affluence and response to humanized objects may well exist in real life, the researchers write. Just as high-income individuals usually expect to be treated well by salespeople (a notion supported by data the researchers collected in a survey, as well as by prior studies on preferential treatment), these high earners may assume the same positive reaction from anthropomorphized objects. “Affluent people might be more willing to anthropomorphize product offerings in a commercial setting because they expect them to serve as helpful minions,” write Kim and McGill.
By contrast, “Those who do not feel well-off . . . might resist marketers’ efforts to anthropomorphize products, because these products could then become yet more people who are able to make their lives difficult. Less-affluent consumers would be less likely to perceive the possibility of a positive interaction with these highly humanized products.”
The challenge for marketers and other professionals lies in figuring out the right level of anthropomorphism to help people embrace new technologies and valuable services, the researchers suggest.
“The wise usage of anthropomorphism strategies could contribute to closing the technology gap between the rich and poor by increasing trust and reducing unnecessary anxiety around new technology adoption among lower-income individuals,” the researchers conclude.