The secret life of robots

At what point does a machine seem alive, or even like a “person”?
Perspective // Posted by: Alexander Reben / 4 May 2016

As a species, we are excellent at imbuing life into the lifeless—just as we are proficient in giving meaning to the meaningless. One could argue that the ability of our brains to recognize patterns quickly is part of what gives us our humanity. Seeing faces on Mars, yelling at our cars for breaking down and giving animals more agency than they may possess are all results of our psyche.

Our penchant for gestalt matters in the fast-growing world of social robots and machines. When it comes to technology and social robotics, the whole is often seen as more meaningful than the sum of its parts. The field of social robotics includes machines that use social behaviors and cues to interact with people. These machines are specifically designed to seem alive in one way or another, ranging from behaving like a non-intelligent animal to acting like another person. Over the past few years, several products have entered the social robotics market. Our increasing interactions with machines lead to an important question: What qualities are essential in a machine for a good social interaction? When does a machine seem alive or, even further, like a “person”?

Let’s start with a simple example. Take a look at these images below:
Face Face

Even a child can see that both of these are “faces,” even though in actuality they are not. They are merely images which, in our minds, match the concept of “face” closely enough that we classify them as such. They have indicators of “face-ness,” which activate the pattern-matching system in our minds.

These two seemingly trivial examples are the crux of the reason why we are able to see and comprehend machines as alive. We can make the argument that, for our brains to interpret something as “alive,” the thing in question simply needs to have enough indicators to activate that classification system.

What does it mean to be alive? Some definitions of life include the ability to reproduce, grow, change and be active. “Alive,” in turn, tends to be defined as “a living thing which is not dead.” I conjecture that being alive is more granular than that, and perhaps more of a mental construct than one might think.

Let’s do a thought experiment, and consider a fictional thing we will call a “Vrad.” All a Vrad does is destroy others and reproduce. It uses its tail, along with a sharp protrusion that grasps onto its prey, to inject itself inside, where it can grow and multiply. After it gestates, it bursts out from the host, destroying it in the process. Its offspring then proliferate in great numbers, and continue the cycle.

What I have just described is a virus. A virus is not considered a living thing. However, I’d wager that most of you, while reading my description, formed an image of something that is indeed living. Ask yourself, “how would I describe the personality of the Ebola virus?” As mean? Selfish? Evil?

A virus is a natural construct. Even simple machines, however, can display behavior that reads as complex. Take for instance the simple machines described in the book Vehicles by Valentino Braitenberg. One such machine connects a light-detecting sensor to a motorized wheel: when the sensor detects light, it turns on the motor. This seemingly simple machine creates animal-like motions. The diagram below shows two such machines.

2a 2b

Machine 2a has the left light sensor connected to the left wheel, and the right sensor connected to the right wheel. The result is that when the right sensor “sees” light, that motor turns on—thereby steering the machine left, away from the light, eventually stopping in a dark area. This can be seen as “light-avoiding behavior,” and has been described as “nocturnal” or “sun hating.”

By simply crisscrossing the connections, as seen in 2b, the machine now moves toward the light. Once facing the light head-on, the machine will accelerate straight toward it. These two behaviors, seemingly intelligent, are nothing more than simple sense-and-response.
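Both wiring schemes can be sketched in a few lines of code. The toy simulation below is mine, not from the book: the light position, starting pose, sensor geometry and gain values are all illustrative assumptions. It shows that merely swapping which wheel each sensor drives flips the behavior from light-avoiding to light-seeking:

```python
import math

LIGHT = (0.0, 0.0)  # light source at the origin

def intensity(px, py):
    # Brightness falls off with squared distance (the +1 avoids division by zero).
    dx, dy = px - LIGHT[0], py - LIGHT[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def simulate(crossed, steps=1000):
    """crossed=False is vehicle 2a (same-side wiring, flees the light);
    crossed=True is vehicle 2b (crisscrossed wiring, drives into the light).
    Returns the vehicle's final distance from the light."""
    x, y, heading = 4.0, 1.0, math.pi / 2  # arbitrary starting pose
    for _ in range(steps):
        # Two sensors mounted roughly 30 degrees left and right of the heading.
        left = intensity(x + math.cos(heading + 0.5), y + math.sin(heading + 0.5))
        right = intensity(x + math.cos(heading - 0.5), y + math.sin(heading - 0.5))
        if crossed:
            left_wheel, right_wheel = right, left  # sensor drives the opposite wheel
        else:
            left_wheel, right_wheel = left, right  # sensor drives the same-side wheel
        # Differential drive: a faster right wheel turns the vehicle left.
        turn = max(-0.5, min(0.5, (right_wheel - left_wheel) * 4.0))
        heading += turn
        speed = (left_wheel + right_wheel) / 2.0
        x += math.cos(heading) * speed * 0.5
        y += math.sin(heading) * speed * 0.5
    return math.hypot(x - LIGHT[0], y - LIGHT[1])

print(simulate(crossed=False))  # 2a ends up far from the light
print(simulate(crossed=True))   # 2b ends up close to it
```

Nothing in that loop plans, remembers or decides; the “personality” of each vehicle is entirely a side effect of the wiring.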

Okay: Suppose the light source is a fire. Machine 2a will stay away from the danger, while machine 2b will turn and accelerate into it. Does this behavior make machine 2b suicidal? With no AI, not even any computer processing, we might well give that machine a character. We would see that machine not only as alive, but as having a personality disorder. If this system were placed inside a human-shaped dummy, one might well see it as a “person” running into a fire.

Simple interactions, then, can give a machine “life” and perhaps even personhood. When you add on complex programming, anthropomorphic features, simulated speech and such, we quickly see machines as more alive than they actually are.

For the past decade, I have been building machines which study some of the questions and issues surrounding our evolving relationship with technology. Some of my findings have been surprising and eye-opening, not only showing us what may happen with social technology in the future, but also enlightening us about what makes us human.

While doing my graduate studies at MIT in 2008, I studied human-robot symbiosis: how to design social robotic systems in which people do what they do best, and robots do what they do best. That’s where I developed Boxie, a cardboard robot created using some of the principles described above, and engineered to probe the boundaries of human-robot relationships. Boxie was designed to be cute and likeable, with behavior that seemed helpless—along with a simulated child-like voice.

Boxie

Boxie was placed out on the MIT campus in the morning, and would ask for help from passers-by—for instance by flipping itself over (to appear helpless) or acting lost. People would stop and approach the small robot—at which point it started talking with them. Not only would people help the robot, they would also answer a set of questions it posed to them.

People began opening up to the robot about their day, telling it their life stories. It seemed as if they were treating it as a “person.”

Inspired by Boxie, the BlabDroids were built—specifically to connect with people and get their stories. The attributes that made Boxie successful were amplified: the robot was made cuter and smaller, so people could carry it in one hand and get closer to it. The voice was again made that of a child, and the scripting of its questions was done with writer and artist Brent Hoff. For the past three years, BlabDroids have been traveling the world, interviewing people with difficult questions and recording the responses with their embedded cameras. Here are some examples of how openly people answered the robots’ questions:

BlabDroid: “What is something you have never told a stranger before?”

Person: “When I was a kid I didn’t like to pee in public bathrooms. So I would hold my pee until I got home. This one time I was on my bike and I could not hold it, so I peed and left a trail behind me.”

BlabDroid: “If you could tell someone not to make the same mistake you did, what mistake would that be?”

Person: “Having kids.”

BlabDroid: “What is the worst thing you have ever done to someone?”

Person 1: “Not telling my dad I loved him before he died.”

Person 2: “The worst thing I ever did was, um, made it so that my mother had to drown some kittens one time and I didn’t realize until after that was over that it was a very difficult thing for her to do and I’ve never forgiven myself for making her drown some little kittens, but we couldn’t keep them and I should have come up with some other way.”

Both Boxie and the BlabDroids are simply rolling cardboard boxes with faces cut into them. Yet the sum of their simple human-like attributes compelled people to perceive them as living things with which they could have conversations.

Other robots use different modalities to make social connections. The Jibo robot, for example, has a head that moves on top of a stable base. In lieu of a face, it has an animated icon on its screen. This still conveys life. Others, such as Buddy, feature on-screen faces, along with wheels to animate the robot’s body. These animal- or human-like features make the machines social, and allow our brains to fill in the blanks of what makes something “alive.” Even robots which were not intended to be social have formed “bonds” with people. In the military, bomb-disposal robots are used to help defuse IEDs and other explosive ordnance. It was found that soldiers treated these robots as part of their team, and would become emotionally upset when a robot was broken or destroyed. These robots had no face, no voice and no intelligence, yet became partners—much as a K9 companion would.

Another poignant example is the Sony Aibo dog, which was developed to be a robotic pet. Sony eventually stopped supporting the product. After a while, the robots started to break down and became increasingly difficult to fix. People in Japan began holding funerals for these “pets,” mourning their loss when they ceased functioning. In these owners’ minds, there was little difference between their robot and a living thing.

Certainly, the emotions felt by these soldiers and “pet” owners were genuine, as one may feel for a lost dog or cat. To them, these robots were not only alive, but had formed a strong emotional bond with the humans who interacted with them.

As an artist, I find this an intriguing path to explore. Another of these works that looks at machines and emotion is Pulse Machine, made with Alicia Eggert. This electromechanical sculpture was “born” in Nashville, Tennessee on 2 June 2012, at 6:18 PM. It was programmed to last the average lifespan of human babies born in Tennessee on that same day: about 78 years. A kick drum provides its “heartbeat” (60 beats per minute), while a mechanical counter displays the number of heartbeats remaining in its lifetime. An internal clock keeps track of the passing time when the sculpture is unplugged, and the sculpture will “die” when the counter reaches zero.
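The counter’s scale is easy to work out from those two figures (a back-of-the-envelope sketch; the sculpture’s exact starting count isn’t given here):

```python
# Heartbeats in Pulse Machine's programmed lifespan:
# 60 beats per minute, sustained for roughly 78 years.
minutes_per_year = 365.25 * 24 * 60          # ~525,960 minutes
lifespan_beats = 60 * minutes_per_year * 78  # beats over 78 years
print(f"{lifespan_beats:,.0f}")              # roughly 2.5 billion heartbeats
```

So the mechanical counter begins in the billions and, at 60 beats per minute, ticks down roughly once per second for decades.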

Pulse machine

As gallery visitors observed the machine slowly “dying,” each had their own personal reaction. These tended to fall into two categories: a desire to go out and do something immediately, or a feeling of sadness. Pulse Machine had no face or limbs. But the concept that it was alive (because it was dying) was enough to create a strong emotional reaction—as we might have to a dying friend, or to our own mortality.

Heart of Pulse Machine

As humans, we are programmed to feel strong emotions for things which appear alive. As you have seen, it does not take much to achieve that illusion.

Two other sculptures that play with people’s perceptions of what is alive are “two Mylar balloons” and a work in progress, tentatively titled “knife wielding robot.” In the balloon piece, two silver foil balloons are continuously attracted and repelled, bumping into each other over and over—a seemingly basic interaction that was perceived as anything but. Simply put, their movement in that context made people think they were fighting, like two unruly animals having at it. Again, going back to our previous examples: something with no brain or any real programming seemed to be engaged in something as complex as battle.

Balloons

The “knife wielding robot,” similarly, holds a kitchen knife and uses it to gesture towards the viewer. This creates the perception of malicious intent, since we understand a knife used in such a way as a threat. If I explain to viewers, however, that the robot does not want to hurt them, only touch them—but its only way to do so is with a knife—their perception quickly changes. They imagine what life would be like if they, too, could only touch someone with a weapon.

Knife wielding robot

As social robots become ever more complex and human-like, we should take some time to think about the effects this may have on what we perceive as alive, and on what we believe makes us human. We need to be aware of how these machines interface with our emotions—just as a screen interfaces with our eyes, and speakers interface with our ears.

Will a robot ever be a “person”? The question is probably irrelevant. By the time robots reach that level of complexity, they will have evolved into something that seems at once alien, familiar and symbiotic—and how we perceive them will be only as good as our human understanding.

Alexander Reben
Alexander Reben is an MIT-trained artist and roboticist working on the border between people and technology.