Hidden Brain's Shankar Vedantam Asks, "Could You Kill A Robot?"

In her book Alone Together, Sherry Turkle explores the effects robots have on our humanity. When we accept robots into our lives, we begin to trust them to keep watch over our property and to satisfy our desires. They become something more than simple machines; they transcend gears and circuitry and become family members, confidants and even friends.

I feel I've been steeped in this robot topic for the last several weeks. I recently rewatched the movie Blade Runner. I let a coworker know I'd read Turkle's book, and he clipped an article about robot waiters. I might be experiencing the frequency illusion, but it seems like robots are popping up everywhere, most recently in an episode of one of my favorite podcasts, Hidden Brain.

Hidden Brain is a thirty-minute weekly podcast that features interviews with scientists and experts who seek to understand the world through science. This episode, called "Could You Kill a Robot?", is a thought-provoking dive into what makes us sympathetic toward robots. The host, Shankar Vedantam, speaks with MIT researcher Kate Darling, who studies the way humans interact with machines. Darling found that simply giving a machine a name drastically changes how people interact with it. It seems that humans are keen to anthropomorphize even the most rudimentary robots; Darling mentions the kinds of boxy delivery robots employed in factories and hospitals. The moment we name these things, we begin to treat them more like pets than tools. They become companions, and even though their human owners are fully aware that they are not alive, they have no problem treating them as if they were.

This comes with its own set of ethics. Darling observed study participants who were given time to name and connect with small robotic dinosaurs. They had about an hour to get to know them, design clothes for them, and treat them as they would a friend or pet. After this introduction period was over, the participants were given hammers, axes, and other implements and asked to torture and destroy the robots. The results of this experiment may or may not surprise you: most participants refused to destroy the machines.

This behavior has popped up in other settings. There is some evidence that soldiers become attached to their bomb-defusing robots and even give them names. Darling feels this attachment may be due to the soldiers' reliance on these machines to do work that would be life-threatening to humans.

Darling posits that simply programming a robot with lifelike tendencies or behaviors is enough to change the way people interact with it. Robots are becoming increasingly sophisticated. They walk, talk, and learn new behaviors. Knowing that a machine is not alive does not prevent people from treating it like a living entity, as long as it is what Turkle calls "alive enough." We interact with it the way small children named and nurtured countless Tamagotchi toys in the '90s.

There is a dark side to these behavior patterns. People with low empathy toward other people are much more willing to behave cruelly toward machines. They were the ones who did not hesitate to destroy the robots in Darling's study, taking less time to decide whether or not they could strike or torture the toy. This reveals something very interesting about human behavior: empathy is measurable and observable, and people have it in different amounts.

Listen to the podcast episode and decide for yourself: "Could I kill a robot?" As further study continues in the fields of artificial intelligence and robotics, the most fascinating result could be not the discovery of what robots can do, but of what we are capable of.


Comments

  1. First off, I had written a lovely post about this, and then my internet went out and I lost all of my wonderful writing, so yes, I could kill this computer (kind of a robot?) right now. If I can remember, there were 3 main things I had written.

    1) This situation reminds me of dogs. If a human dies in a movie, it's whatever. If a dog dies in a movie, it's an outrage! Just check out www.doesthedogdie.com. Maybe we feel like while humans have the capacity for good and evil, dogs and robots do not, so they deserve better?

    2) Robot talk made me think of Interstellar and my favorite characters, TARS and CASE, both robots. They were programmed with humor and honesty to get along with humans, but were never afraid to remind the characters, "we are robots so we have to do what you say." Still, I kept thinking, "don't let them die, they were so good to you!"

    3) Like with the bomb-defusing robots you mention, I think it's all about the emotional attachment. We feel like we owe these robots something for protecting us - since we can't give them anything they would appreciate, we feel like our gratitude is never really fulfilled and doesn't go away. They protect us, so we protect them because it's in our nature.

    I felt really bad when I sold my car and replaced it with a new one, because it had done its job and served me so well. Even though I logically know the car doesn't care, I feel guilty because I probably sent it to its death in a scrapyard somewhere! So other than this computer 7 minutes ago, no, I could probably not kill a robot haha.

    1. Stacey, thanks for the awesome comment! I'm sure what you wrote was way better than the "first draft" you lost when your computer crapped out on you. The points you brought up remind me of this IKEA commercial- have you seen it?

      https://youtu.be/jU-cori12KU

  2. As an avid Star Trek: The Next Generation fan, I couldn't imagine "killing" Data!! He's my favorite! But he's also an android, which I think is a bit different from a robot. For example, if you have a wireless robot that you let the battery run down on, and then never bother to recharge it, have you "killed" it? Also reminds me of reading Bicentennial Man - at which point does a robot move from the realm of "expendable" to "deserving of basic human rights?" An interesting philosophical question.

