Imagine walking in the park when, all of a sudden, you see a little cardboard box with cute little wheels and a hand-drawn face, sitting in the street. It’s stuck! Would you free it so it can continue its journey?
Tweenbots
The “Tweenbot” experiment says yes.
Every time a Tweenbot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, “You can’t go that way, it’s toward the road.” — Tweenbots.com
Blabdroids
If you were to see a small box on wheels that asked you a question, would you answer?
The Blabdroid experiment says yes. The droids were sent to parks, festivals and other crowded places. Interactions ranged from 8 to 30 minutes, and people revealed very personal stories and things that you would not normally tell a stranger, even when the robot told its interviewees that it was recording them for a documentary to be shown later. It asked questions like: “What’s the worst thing you have ever done to someone?”
The 1944 Heider and Simmel Experiment on Anthropomorphism
We attribute human traits to objects. It’s called anthropomorphism, and the classic demonstration dates back to 1944, when two psychologists (Heider and Simmel) ran a now-famous experiment in which people watched triangles and other geometric shapes move about. Take a look at it and try to describe what you saw.
Some of the original subjects described the shapes as birds and a cage. Did you?
Anthropomorphism in robots serves a distinct purpose. The goal is not to build a humanoid; somehow we find these “gemini” too spooky. A person’s response to a humanlike robot shifts quickly to revulsion when its appearance only approaches, but never quite reaches, the truly lifelike (the uncanny valley).
Instead, robots are built in ways that take advantage of this mechanism to enable better interaction and acceptance. It shows in the shape, motion, and interaction of the robot.
Wall-e and Wirecutters
For shapes, many roboticists work with toy- or child-like appearances that use rudimentary tools to convey expressions. These robots convey the Gestalt of an emotion, and observers fill in the blanks. Think of Disney’s Wall-e, or take a look at the 9-minute movie “Wirecutters” (love that film).
Jibo, the Family Robot
In the case of motion, facial expressions are amplified by a face that moves and eyes that follow you. Simple cues, like a rule to follow the beige spot (a face) with its head, give us the impression that the robot is focused on us. The forerunner in this field is Jibo.
In the field of interaction, the latest technology is called deep learning. It is a family of machine-learning algorithms that lets computers learn by looking for patterns and similarities. You “feed” an algorithm a library of examples of the things you want it to recognize. When confronted with new information, the robot quickly references what it has learned, recognizes it, and replies with associations. Here, Google, Microsoft, and Facebook do the heavy lifting, and they have published their code for others to use.
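To make that “feed it a library of examples” idea concrete, here is a minimal sketch using Google’s open-source TensorFlow/Keras library. The handwritten-digit dataset and the tiny network are illustrative assumptions on my part, not something any of the robots above necessarily uses.

```python
# A minimal sketch of "feed it a library, then let it recognize new examples",
# using the open-source TensorFlow/Keras library. The digit dataset and the
# tiny network are illustrative choices, not any robot's actual setup.
import tensorflow as tf

# The "library" of examples: 60,000 labeled images of handwritten digits.
(train_images, train_labels), (test_images, test_labels) = \
    tf.keras.datasets.mnist.load_data()
train_images, test_images = train_images / 255.0, test_images / 255.0

# A small neural network that learns which pixel patterns go with which digit.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# "Feeding" the library: the network looks for similarities across examples.
model.fit(train_images, train_labels, epochs=3)

# Confronted with new information, it references what it learned and replies.
prediction = model.predict(test_images[:1])
print("I think this is a", prediction.argmax())
```

The point is not the particular network but the workflow: a library of labeled examples goes in, and an associative response to something the robot has never seen before comes out.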
Impact on Jobs
It’s expected that all kinds of information-gathering tasks will soon be automated, but matching human social intelligence will take at least a couple of decades more.
Take a look at the information your company gathers: from customer service and librarians to business intelligence developers. Parts of their jobs could be done better by robots we open up to, particularly when robots can match our stories to cases of solved problems. It’s a question of when the algorithm becomes associative enough, not if. Are you ready to say yes?
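For a feel of what “matching our stories to cases of solved problems” could look like, here is a minimal sketch. The tiny case library and the TF-IDF / cosine-similarity approach are my own illustrative assumptions, not a description of any specific product.

```python
# A minimal sketch of matching a new customer story to a library of solved
# cases. The cases and the TF-IDF similarity approach are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

solved_cases = [
    "Customer could not log in after a password reset; clearing the cache fixed it.",
    "Invoice totals were wrong because of a stale currency table.",
    "Shipment stuck at customs; resolved by re-sending the declaration form.",
]

new_story = "I reset my password but the site still won't let me log in."

# "Feed" the library of solved cases, then find the one most similar
# to the new story.
vectorizer = TfidfVectorizer()
vectors = vectorizer.fit_transform(solved_cases + [new_story])
similarities = cosine_similarity(vectors[-1], vectors[:-1])

best_match = similarities.argmax()
print("Closest solved case:", solved_cases[best_match])
```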