Monthly Archives: May 2013

First ever UN debate on killer robots

This week, the United Nations Human Rights Council became the first UN body to discuss the issue of killer robots.  To mark the occasion, the Campaign to Stop Killer Robots headed to Geneva to introduce our campaign to diplomats, UN agencies and civil society.  Check out the full report from the international campaign.

Asimov’s Three Laws of Robotics

In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage.  The coverage is very exciting, and what I have found most interesting is the number of articles that refer to Isaac Asimov’s Three Laws of Robotics.

Now unless, like me, you grew up with a sci-fi geek for a father who introduced you to fictional worlds like those of Star Wars, Star Trek and 2001: A Space Odyssey at a young age, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.

Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories.  In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters.  In his stories, the Three Laws were programmed into robots as a safety function.  The laws were first stated in the short story “Runaround,” but they appear in many of his other writings and have since shown up in other authors’ work as well.

The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

After reading the Three Laws, you can probably see why Mr. Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons.  A fully autonomous weapon would most definitely violate the First and Second Laws of Robotics.

To me, the Three Laws seem to be pretty common-sense guides for the actions of autonomous robots.  It is probably a good idea to protect yourself from being killed by your own machine – ok, not probably – it is a good idea to make sure your machine does not kill you!  It is also important to remember that Asimov recognized that even ordinary robots with artificial intelligence (not fully autonomous weapons) could pose a threat to humanity at large, so he later added a fourth law, the Zeroth Law, to come before the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
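To make the hierarchy concrete, here is a toy sketch in Python of the laws as priority-ordered checks, where a lower-numbered law always overrides the ones below it.  To be clear, every name in it is invented for this post; nothing here comes from Asimov’s stories or any real robotics system.

```python
from dataclasses import dataclass

# Toy sketch only: the Action model and all its field names are
# invented for illustration, not taken from any real system.
@dataclass
class Action:
    injures_human: bool    # would this action injure a human being?
    obeys_order: bool      # does it follow the orders given by humans?
    preserves_self: bool   # does it protect the robot's own existence?
    harms_humanity: bool   # would it harm humanity at large?

def permitted(action):
    """Check a proposed action against the laws, highest priority first."""
    laws = [
        ("Zeroth", not action.harms_humanity),
        ("First",  not action.injures_human),
        ("Second", action.obeys_order),
        ("Third",  action.preserves_self),
    ]
    for name, ok in laws:
        if not ok:
            return False, f"forbidden by the {name} Law"
    return True, "permitted"

# A fully autonomous weapon's core function fails the First Law,
# no matter what its orders say:
fire = Action(injures_human=True, obeys_order=True,
              preserves_self=True, harms_humanity=False)
print(permitted(fire))  # (False, 'forbidden by the First Law')
```

Notice what even this trivial encoding leaves out: the “through inaction” clauses are obligations to act, not simple permission checks, and deciding what actually counts as “injure” or “harm” is exactly where Asimov’s plots go sideways.  That gap between tidy rules and messy reality is the point I will come back to below.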

“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be.  We need to focus on reality, not fiction!”  I hear you, but since fully autonomous weapons do not yet exist, we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons.  Who better to help us consider the possibilities than science fiction writers, who have been thinking about these types of issues for decades?

At the moment, Asimov’s Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons.  Asimov’s stories often tell of how the application of these laws results in robots acting in weird and dangerous ways their programmers did not predict.  By articulating some common-sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writings may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient.  Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons.  You don’t have to read Asimov, know who HAL is or dislike the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands, and anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the ways technology can go wrong.  We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.

– Erin Hunt, Program Officer

Avoiding Rabbit Holes Through Policy and Law

All the discussions we’ve been having since the launch of the Campaign to Stop Killer Robots have me thinking about Alice in Wonderland, and therefore about rabbit holes.  I feel like current technology has us poised at the edge of a rabbit hole, and if we take that extra step and create fully autonomous weapons, we are going to fall – down that rabbit hole into the unknown, down into a future where a machine could make the decision to kill you, down into a situation that science fiction books have been warning us about for decades.

The best way to prevent such a horrific fall is to create laws and policies that block off the entrance to the rabbit hole, so to speak.  At the moment, few countries have policies to block the entrance even temporarily, and no country has laws to ban killer robots and close off the rabbit hole permanently.  So far, only the US and the UK have even put up warning signs and a little chicken wire around the entrance, through recently released policies and statements.

Over the past few weeks our colleagues at Human Rights Watch (HRW) and Article 36 have released reports on the US and UK policies towards fully autonomous weapons (killer robots).  HRW analyzed the 2012 US policy on autonomous weapons found in Department of Defense Directive Number 3000.09.  You can find the full review online.  Article 36 has a lot to say about the UK policy in their paper available online as well.

So naturally, after reading these papers, I went in search of Canada’s policy.  That search left me feeling a little like Alice lost in Wonderland, just trying to keep my head, or at least my sanity, in the face of a policy that, like the Cheshire Cat, might not be all there.

After that futile search, it became even more important that we talk to the government to find out whether Canada has a policy on fully autonomous weapons.  Until those conversations happen, let’s see what we can learn from the US and UK policies and the analysis done by HRW and Article 36.

The US Policy

I like that the US Directive notes the risks to civilians, including “unintended engagements” and system failures.  One key point that Human Rights Watch’s analysis highlights is that, for up to 10 years, the Directive allows the US Department of Defense to develop and use only fully autonomous weapons that apply non-lethal force.  The moratorium on lethal fully autonomous weapons is a good start, but there are serious concerns about the inclusion of waivers that could override it.  HRW believes that “[t]hese loopholes open the door to the development and use of fully autonomous weapons that could apply lethal force and thus have the potential to endanger civilians in armed conflict.”[1]

In summary, Human Rights Watch believes that:

The Department of Defense Directive on autonomy in weapon systems has several positive elements that could have humanitarian benefits. It establishes that fully autonomous weapons are an important and pressing issue deserving of serious concern by the United States as well as other nations. It makes clear that fully autonomous weapons could pose grave dangers and are in need of restrictions or prohibitions. It is only valid for a limited time period of five to ten years, however, and contains a number of provisions that could weaken its intended effect considerably. The Directive’s restrictions regarding development and use can be waived under certain circumstances. In addition, the Directive highlights the challenges of designing adequate testing and technology, is subject to certain ambiguity, opens the door to proliferation, and applies only to the Department of Defense.[2]

As for what this all means for us in Canada, some aspects of the American policy may be worth adopting.  Canada should adopt the restrictions on the use of lethal force by fully autonomous weapons to protect civilians from harm, but without the limited time period and the waivers.  I believe that Canadians would want to ensure that humans always make the final decision about who lives and who dies in combat.

The UK Policy

Now, our friends at Article 36 have pointed out that the UK situation is a little more convoluted, and they are not quite ready to call it a comprehensive policy.  But since “the UK assortment of policy-type statements” sounds ridiculous, for the purposes of this post I’m shortening it to the UK almost-policy, with the hope that one day it will morph into a full policy.  Unlike the US policy, which is found in a neat little directive, the UK almost-policy is cobbled together from statements and a note from the Ministry of Defence.  You can have a closer look at the Article 36 analysis of the almost-policy.

To sum up, Article 36 outlines three main shortcomings of the UK almost-policy:

  • The policy does not set out what is meant by human control over weapon systems.
  • The policy does not prevent the future development of fully autonomous weapons.
  • The policy says that existing international law is sufficient to “regulate the use” of autonomous weapons.[3]

One of the most interesting points that Article 36 makes is the need for a definition of what human control over weapons systems means.  If you are like me, you probably think that means humans get to make the decision to fire on a target, making the final call about who lives and who dies, but we need to know exactly what governments mean when they say that humans will always be in control.  The Campaign to Stop Killer Robots wants to ensure that there is always meaningful human control over lethal weapons systems.

Defining what we mean by meaningful human control is going to be a very large discussion, one we want to have with governments, with civil society, with the military, with roboticists and with everyone else.  This discussion will raise some very interesting moral and ethical questions, especially since a two-star American general recently said that he thought it was “the ultimate human indignity to have a machine decide to kill you.”  The problem is that once the technology exists, it will be incredibly difficult to know where it will go and how on earth we would get back up that rabbit hole.  For us as Canadians, it is key to start having that conversation as soon as possible so we don’t end up stumbling down the rabbit hole of fully autonomous weapons by accident.

– Erin Hunt, Program Officer


[1] See http://pages.citebite.com/s1x4b0y9k8mii

[2] See http://pages.citebite.com/g1p4t0m9s9res

Meet the Human Campaigners!

Yesterday you met David Wreckham, the Campaign to Stop Killer Robots’ first robot campaigner.  David isn’t alone in the campaign, and most of his current colleagues are human.  Let’s meet some of them and learn why they are so excited to stop killer robots!

(c) Sharron Ward for the Campaign to Stop Killer Robots

Human or friendly robot?  The Campaign to Stop Killer Robots welcomes all campaigners who want to make history and stop killer robots!  Join us!
