New study of robot faces helps resolve debate over “Uncanny Valley”

Figure: Uncanny Valley in wild-type robot faces. Fitted curves are shown for likability (panel A; Experiment 1B) and trust-motivated behavior (panel B; Experiment 1C). Solid curves represent the best-fitting polynomial regression models, and shaded regions show 95% confidence intervals. Dashed curves represent models fit to data from only the 50 low-emotion faces. Panel C shows median rating times in Experiment 1A as a function of mechano-humanness (MH) score; vertical lines mark the MH scores associated with maximum rating time and minimum likability.
Roboticists are building some amazingly human-like robots these days, but many of us have had the unsettling experience of glancing at one of these marvels of modern technology only to feel our skin crawl with revulsion. In fact, it’s often the adorably rudimentary robots (the likes of WALL-E and Furby) that we’d rather have in our homes.
But what makes some humanoid robots so creepy? Since it was proposed in 1970, the concept of the “Uncanny Valley” has been used to explain the phenomenon. The theory holds that robots that appear nearly human, but that have some defects – say, twitchy eye movements or strange skin texture – can be deeply unsettling, perhaps because they seem to reside in an unclassifiable twilight zone between the inanimate and the animate. The idea of the Uncanny Valley is intuitively appealing and has spread from the world of robot design into popular culture, for example, to explain the eeriness of “The Polar Express” characters. Yet attempts to demonstrate an Uncanny Valley effect experimentally have produced inconsistent results, and the very existence of the Uncanny Valley has been controversial.
The present study set out to test whether the Uncanny Valley is real, using two complementary experiments designed to overcome important limitations of past studies. First, to avoid comparing only a handful of hand-picked robots, or relying on digitally created morphs of humans and robots that could never exist, the authors collected a diverse sample of 80 real-world humanoid robot faces, spanning the full range from completely mechanical to indistinguishably human, and asked subjects to rate their likability. The data resembled the predicted Uncanny Valley: up to a point, people liked robots better the more they resembled humans rather than messes of circuitry, but faces that came too close to human were strongly disliked. Only robots nearly indistinguishable from humans escaped the valley to become likable again. In a second, complementary experiment, the authors confirmed the Uncanny Valley effect in a tightly controlled series of six faces that differed only in their degree of mechanical versus human appearance.
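The fitted curves in the figure above come from polynomial regression of likability on each face’s mechano-humanness (MH) score. As a rough illustration of how such a valley can be located on a fitted curve, here is a minimal sketch in Python; the data points, the 0–100 MH scale, and the choice of a cubic are illustrative assumptions, not the study’s actual data or model.

```python
import numpy as np

# Illustrative data only: hypothetical mechano-humanness (MH) scores
# on an assumed 0-100 scale and hypothetical mean likability ratings.
mh = np.array([5, 15, 25, 35, 45, 55, 65, 75, 85, 95], dtype=float)
likability = np.array([-0.2, 0.1, 0.4, 0.5, 0.2, -0.4, -0.9, -0.6, 0.3, 1.1])

# A cubic is the simplest polynomial that can rise, dip into a
# "valley," and rise again.
curve = np.poly1d(np.polyfit(mh, likability, deg=3))

# The valley bottom is a local minimum of the fitted curve: a root of
# the first derivative where the second derivative is positive.
first, second = np.polyder(curve, 1), np.polyder(curve, 2)
minima = [r.real for r in first.roots
          if abs(r.imag) < 1e-9 and second(r.real) > 0]

for x in minima:
    print(f"valley bottom near MH = {x:.1f}, "
          f"predicted likability = {curve(x):.2f}")
```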

Figure: Uncanny Valley in a controlled series of faces (panel A), shown for likability (panel B) and trust-motivated wagering (panel C). Error bars represent 95% confidence intervals. Faces sharing the same letter annotation did not differ significantly from each other on the outcome (based on Tukey-adjusted t-tests of least-squares means).
People may say they think a robot is likable, but would they actually trust it in a situation with real consequences? To test how much trust people were willing to place in the robots, this study adapted methods from the science of game theory to devise a wagering game. For each robot face, a subject had a fixed endowment of imaginary money. They were told they could choose how much of this money to send to the actual robot in its laboratory; the experimenter would triple the gift, and the robot would then decide how much, if any, to return to the subject. The subjects were told that if they performed well at this game, they would receive a real financial bonus. The researchers could then infer that robots that received more money from subjects were seen as more trustworthy. It turned out that, as with likability, people did not want to trust robots in the Uncanny Valley. But this effect was less pronounced and more nuanced than for likability, showing sensitivity to factors like a robot’s displayed emotion and facial expression.
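To make the mechanics of this wagering game concrete, here is a minimal sketch in Python. The $100 endowment and the robot’s “return rule” are illustrative assumptions; the study as described here specifies only that the gift was tripled and that the robot chose how much, if any, to return.

```python
# A minimal sketch of one round of the trust-game wager described
# above. The endowment size and the robot's return rule are assumed
# for illustration.
ENDOWMENT = 100  # imaginary dollars available per robot face (assumed)

def play_round(amount_sent: float, return_fraction: float) -> dict:
    """Simulate one round: the subject wagers `amount_sent` on the robot,
    the experimenter triples it, and the robot returns a share of the
    tripled gift."""
    assert 0 <= amount_sent <= ENDOWMENT and 0 <= return_fraction <= 1
    kept = ENDOWMENT - amount_sent
    tripled = 3 * amount_sent              # the experimenter triples the gift
    returned = return_fraction * tripled   # the robot's (hidden) decision
    return {"subject_payoff": kept + returned,
            "robot_payoff": tripled - returned}

# Example: a subject who wagers half the endowment on a robot that
# returns half the tripled gift keeps 50 and gets 75 back.
print(play_round(amount_sent=50, return_fraction=0.5))
# {'subject_payoff': 125.0, 'robot_payoff': 75.0}
```

In this setup, the amount a subject chooses to send is the behavioral measure of trust: wagering more only pays off if the subject believes the robot will reciprocate.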
What does this mean for our social future with robots? How do we ensure that robots serving important roles in our lives, like those providing home medical care, remote surgery, or search-and-rescue efforts, earn our social attention and trust at the times when it really matters? These are very much open questions, but this study suggests that the Uncanny Valley can be a real problem, and that we will have to take its effects seriously as robots become our social partners.
Publication
Mathur MB, Reichling DB. Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition. 2015 Sep 21.