Asimov’s Three Laws of Robotics

In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage.  The coverage has been very exciting, and what I have found especially interesting is the number of articles that refer to Isaac Asimov’s Three Laws of Robotics.

Now unless, like me, you grew up with a sci-fi geek for a father who introduced you at a young age to fictional worlds like those of Star Wars, Star Trek and 2001: A Space Odyssey, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.

Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories.  In his writing, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters.  In his stories, the Three Laws were programmed into robots as a safety feature.  The laws were first stated in the short story “Runaround,” but they appear in many of his other works, and they have since shown up in other authors’ work as well.

The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

After reading the Three Laws, you can probably see why Mr. Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons.  A fully autonomous weapon would most definitely violate the First and Second Laws.

To me, the Three Laws seem like pretty common-sense guides for the actions of autonomous robots.  It is probably a good idea to protect yourself from being killed by your own machine – ok, not probably – it is a good idea to make sure your machine does not kill you!  It is also important to remember that Asimov recognized that even regular robots with artificial intelligence (not fully autonomous weapons) could pose a threat to humanity at large, so he also added a fourth law – the Zeroth Law – that comes before the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
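
To make the idea of “programming in” the laws a little more concrete, here is a minimal, purely illustrative sketch in Python of what a priority-ordered rule check might look like.  None of this comes from Asimov or from any real robot; every name in it (Action, violates_first_law and so on) is hypothetical, and the interesting part is that each check hides the genuinely hard question of what counts as “harm”:

```python
# A purely illustrative sketch, not a real safety system.  Every name here
# is hypothetical.  Writing the priority-ordered check is trivial; defining
# the predicates it depends on is the unsolved problem.
from dataclasses import dataclass


@dataclass
class Action:
    """A candidate action the robot is considering."""
    description: str
    ordered_by_human: bool = False


def violates_zeroth_law(action: Action) -> bool:
    # Would this harm humanity, or allow humanity to come to harm?
    raise NotImplementedError("Nobody knows how to compute 'harm to humanity'.")


def violates_first_law(action: Action) -> bool:
    # Would this injure a human being, or allow one to come to harm?
    raise NotImplementedError("Nobody knows how to compute 'harm' here either.")


def violates_second_law(action: Action) -> bool:
    # Does this disobey a human order (without a higher-law excuse)?
    raise NotImplementedError("Interpreting orders is exactly the problem.")


def violates_third_law(action: Action) -> bool:
    # Does this needlessly endanger the robot itself?
    raise NotImplementedError("Even self-preservation needs a definition.")


def permitted(action: Action) -> bool:
    """Check the laws in priority order; the first violation forbids the action."""
    checks = [
        violates_zeroth_law,   # Law 0: humanity as a whole
        violates_first_law,    # Law 1: individual human beings
        violates_second_law,   # Law 2: obedience to human orders
        violates_third_law,    # Law 3: the robot's own existence
    ]
    return not any(check(action) for check in checks)
```

Notice that calling permitted() on anything immediately raises NotImplementedError, which is more or less the point: writing the rules down is the easy part; getting a machine to apply them the way we meant them is the part nobody knows how to do.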

“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be.  We need to focus on reality, not fiction!”  I hear you, but since fully autonomous weapons do not yet exist, we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems these weapons could create.  Who better to help us consider the possibilities than science fiction writers, who have been thinking about these types of issues for decades?

At the moment, Asimov’s Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons.  Asimov’s stories often tell of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict.  By articulating some pretty common-sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writing may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient.

Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons.  You don’t have to read Asimov, know who HAL is or dislike the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands, and anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the ways technology can go wrong.  We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.

- Erin Hunt, Program Officer
