Your New Robot Colleague Has Been Programmed to Put You At Ease

As robots move into more and more workplaces in the coming decades—not just high-tech manufacturing but eventually everything from hospitals to supermarkets—one of the big challenges employers will face is making their carbon-based workforce comfortable with the new arrivals. That's the topic of an interesting story in The Economist (h/t Kevin Drum) that focuses not just on the technology but on how the robots make us feel, and what must be done to keep people from freaking out when they find out their new partner is made of metal and plastic. It seems that the psychology of human-robot interaction is going to be a burgeoning field in the next few years:

To keep human workers at ease, collaborative robots should also have an appropriate size and appearance. Takayuki Kanda of the ATR Intelligent Robotics and Communication Laboratories in Kyoto says that collaborative, humanoid robots should generally be no larger than a six-year-old, a size most adults reckon they could overpower if necessary. Large eyes make robots seem friendlier and, crucially, more aware of their surroundings. But overly humanoid features can lead to problematically unrealistic expectations, says Ulrich Reiser of Fraunhofer IPA, a manufacturing research institute in Stuttgart that makes a €250,000 home-assistant robot called Care-O-bot. He notes that people tend to distrust robots with protruding sensors, “Terminator”-like exposed cables, or a jerry-rigged, student-project look.

To interact smoothly with people, robots will also need “social intelligence”. It turns out, for example, that people are more trusting of robots that use metaphors rather than abstract language, says Bilge Mutlu, the head of the robotics laboratory at the University of Wisconsin-Madison. He has found that robots are more persuasive when they refer to the opinions of humans and limit pauses to about a third of a second to avoid appearing confused. Robots’ gazes must also be carefully programmed lest a stare make someone uncomfortable. Timing eye contact for “intimacy regulation” is tricky, Dr Mutlu says, in part because gazes are also used in dialogue to seize and yield the floor.

I love the part about limiting pauses. I could be wrong, but my interpretation is that people are comfortable with robots that answer either very quickly or very slowly, because then they resemble computers. But if a robot takes about as long as a human to respond to a query, it seems like it's thinking. And it's a short hop from thinking to plotting.

Here's the other issue this raises. In science-fiction movies and television shows, robots who look like human beings are ubiquitous. It's usually when they start talking that you can tell there's something mechanical about them—they speak in a monotone, or with overly formal language. The reason, of course, is that these characters are played by humans, who tend to look human. The easiest way to indicate their difference is to have them talk like you think a machine would talk.

But in reality, it's exactly the opposite: Constructing a robot that talks like a human may not be a piece of cake, but it's doable, and we're getting closer all the time. Getting one that looks and moves like a human, enough to make you initially unsure of whether it's a real person or not, is going to be much harder. I suspect that within a decade or so you'll be able to carry on conversations with your robot in which it sounds just like another person in almost all its inflections, language choices, and so on. But it'll be many more years before we have androids that fool us into thinking they're people.
