Blog Archives

A pivotal year ahead

In a piece originally published on the Forum on the Arms Trade's Looking Ahead blog, Erin Hunt looks at opportunities and challenges ahead in 2017 for efforts to preemptively ban autonomous weapons systems.

2017 has the potential to be a pivotal year in efforts to ensure that all weapons have meaningful human control. For three years, the Convention on Conventional Weapons (CCW) has been discussing lethal autonomous weapons (future weapons that could select and fire upon a target without human control). In December 2016, the Review Conference of the CCW decided to establish a Group of Governmental Experts (GGE), chaired by Ambassador Amandeep Singh Gill of India, which will meet for 10 days in 2017 and then report back to the CCW's annual meeting on 22-24 November.

A GGE is a more formal level of meeting than the ones held in 2014, 2015 and 2016. States will be expected to bring their own experts and participate actively in discussions, instead of listening to presentations by outside experts and asking them questions. The first meeting of the GGE will be held at the UN in Geneva on either 24-28 April or 21-25 August 2017, depending on when funds become available. The second meeting will be held on 13-17 November, just before the annual CCW meeting.

In 2016, the number of states calling for a pre-emptive ban on fully autonomous weapons more than doubled. At the time of writing, Algeria, Argentina, Bolivia, Chile, Costa Rica, Cuba, Ecuador, Egypt, Ghana, Guatemala, Holy See, Mexico, Nicaragua, Pakistan, Panama, Peru, State of Palestine, Venezuela and Zimbabwe have called for a ban, while a number of other states appear to support some form of new international humanitarian law to address autonomous weapons systems.

This GGE is a major step towards a pre-emptive ban on autonomous weapons systems, but there are a number of challenges ahead in 2017. First, the Russian Federation continues to object to more formal talks on autonomous weapons systems on the grounds that it is premature to move forward without a clear understanding of the subject under discussion. That objection overlooks the fact that definitions are usually the last part of disarmament treaties to be negotiated. It was only at the very end of the 2016 CCW Review Conference that Russia agreed not to block the GGE.

Second, the majority of states, including my own, Canada, do not have national policies on autonomous weapons systems.  However, this challenge is also an opportunity. The Campaign to Stop Killer Robots will be working hard around the world in 2017 to support the development of national policies on autonomous weapons systems.  After three years of informal CCW experts meetings as well as discussions in the Human Rights Council, states have a large amount of information at their disposal to begin to craft national policies. States can also hold consultations on creating a national policy in advance of the GGE meetings.

Third, there is the possibility that the GGE may become distracted by the inclusion of a discussion item on best practices and greater transparency in Article 36 weapons reviews. These legal reviews are an obligation of states developing, purchasing or otherwise acquiring new weapons.

Although Article 36 weapons reviews should be discussed at the international level to strengthen both policy and practice around the world, better weapons reviews will not solve the problems associated with autonomous weapons systems and should not distract the GGE from the core of its work. Weapons reviews cannot answer moral, ethical, and political questions. An Article 36 review cannot tell us whether it is acceptable to the public conscience for a machine to kill without meaningful human control. Autonomous weapons systems are often described as a revolution in warfare, and as such, moral, ethical and political considerations must not be pushed aside. These questions need to remain on the international agenda in 2017.

This year, we will witness significant work at the national and international levels to increase understanding of the challenges posed by autonomous weapons and to grow the number of states calling for a pre-emptive ban. Stay tuned to see if the international community stands ready at year's end to ensure that all weapons have meaningful human control.

Gearing up for the Review Conference

At its Review Conference in December, the Convention on Conventional Weapons (CCW) will decide whether to hold a Group of Governmental Experts (GGE) meeting on autonomous weapons systems in 2017. A GGE is the logical next step in the work to address concerns about autonomous weapons systems (or killer robots).

The Campaign to Stop Killer Robots is getting ready for the Review Conference here in Canada and around the world. Check out our colleagues at Reaching Critical Will for an update on the Preparatory Meeting of the CCW to see how the international preparations are going.

On the Canadian side, our Program Coordinator, Erin Hunt, was pleased to deliver the Campaign’s statement to the United Nations General Assembly’s First Committee on October 12.

Over the next month and a bit, we will be talking with parliamentarians, civil society and academics to help ensure that Canada takes a strong position at the Review Conference and beyond. You can help by writing your MP to ask that Canada outline a national policy on autonomous weapons or by donating online to support our work.


Canadian experts join global call for a ban on killer robots

Canadians are among the 270 engineers, computing and artificial intelligence experts, roboticists, and professionals from related disciplines who have signed an experts' call to ban killer robots. The signatories say, “given the limitations and unknown future risks of autonomous robot weapons technology, we call for a prohibition on their development and deployment. Decisions about the application of violent force must not be delegated to machines.”

The International Committee for Robot Arms Control (ICRAC) has so far received 272 signatures from 37 countries on the statement, which continues to collect signatures. In an announcement released today, Professor Noel Sharkey, Chair of ICRAC, said: “Governments need to listen to the experts’ warnings and work with us to tackle this challenge together before it is too late. It is urgent that international talks get started now to prevent the further development of autonomous robot weapons before it is too late.”

Canada does not currently have a policy on fully autonomous weapons, and we hope that the government will engage these experts and others as it creates one. We expect to see additional signatures from Canadian experts as this issue gains momentum. At present, the University of Toronto has the largest number of signatories, but experts from other organizations and institutions still have time to sign the call. As the quote below from Geoffrey Hinton indicates, now is the time to ensure that artificial intelligence and robotic technologies are used for the betterment of humanity.

“Artificial Intelligence can improve people’s lives in so many ways, but researchers need to push for positive applications of technology by supporting a ban on autonomous weapons systems.”

Geoffrey Hinton FRS [a founding father of modern machine learning], Raymond Reiter Distinguished Professor of Artificial Intelligence at the University of Toronto

Killer Robots at TEDx

Prof. Noel Sharkey gave a talk at TEDx Sheffield about fully autonomous weapons and the Campaign to Stop Killer Robots. Prof. Sharkey, one of the founders of the International Committee for Robot Arms Control, delivers a passionate call to action to stop killer robots. Take a few minutes out of your day to watch him talk about his journey from a boy who loved toy soldiers, growing up in the shadow of World War II, to a leading campaigner in the effort to stop killer robots and protect civilians. Plus, he even shares a little song about the CIA!

Asimov’s Three Laws of Robotics

In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage. The coverage is very exciting, and what I have found most interesting is the number of articles that refer to Isaac Asimov's Three Laws of Robotics.

Now, unless, like me, you grew up with a sci-fi geek for a father who introduced you at a young age to fictional worlds like those of Star Wars, Star Trek and 2001: A Space Odyssey, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.

Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories. In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters. In his stories, the Three Laws were programmed into robots as a safety function. The laws were first stated in the short story “Runaround”, but they appear in many of his other writings and have since shown up in other authors' work as well.

The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

After reading the Three Laws, it is probably pretty clear why Mr. Asimov's ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons. A fully autonomous weapon would most definitely violate the First and Second Laws of Robotics.

To me, the Three Laws seem to be pretty common-sense guides for the actions of autonomous robots. It is probably a good idea to protect yourself from being killed by your own machine (ok, not probably: it is definitely a good idea to make sure your machine does not kill you!). It is also important to remember that Asimov recognized that even regular robots with artificial intelligence (not fully autonomous weapons) could pose a threat to humanity at large, so he added a fourth law, the Zeroth Law, to come before the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
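
For the programmers in the audience, here is a minimal sketch in Python of the four laws as a priority-ordered rule check. This is my own illustration, not anything specified by Asimov or the campaign, and the predicate functions are invented stand-ins using naive string matching. The point of the sketch is that all the difficulty hides inside the predicates: a machine that cannot reliably decide what counts as "harm" cannot reliably obey the laws.

```python
# A toy sketch of Asimov's laws as a priority-ordered rule check.
# The predicates below are deliberately naive stand-ins: in reality,
# no programmer can fully specify what "harm" means, and that gap is
# exactly where Asimov's plots go wrong.

def harms_humanity(action: str) -> bool:   # Zeroth Law test (stand-in)
    return "harm humanity" in action

def harms_human(action: str) -> bool:      # First Law test (stand-in)
    return "harm human" in action

def disobeys_order(action: str) -> bool:   # Second Law test (stand-in)
    return "disobey" in action

def endangers_self(action: str) -> bool:   # Third Law test (stand-in)
    return "self-destruct" in action

# Laws in priority order: a higher law overrides every law below it.
LAWS = [
    ("Zeroth Law", harms_humanity),
    ("First Law", harms_human),
    ("Second Law", disobeys_order),
    ("Third Law", endangers_self),
]

def check(action: str) -> str:
    """Refuse an action that violates any law, citing the highest-priority one."""
    for name, violated in LAWS:
        if violated(action):
            return f"refused: violates the {name}"
    return "permitted"

print(check("patrol and harm human"))  # refused: violates the First Law
print(check("patrol the corridor"))    # permitted
```

The structure itself is trivial; what the sketch cannot contain is a working definition of "harm", and that is where the trouble starts.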

“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be. We need to focus on reality, not fiction!” I hear you, but since fully autonomous weapons do not yet exist, we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons. Who better to help us consider the possibilities than science fiction writers, who have been thinking about these types of issues for decades?

At the moment, Asimov's Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons. Asimov's stories often tell of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict. By articulating some pretty common-sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient. Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons. You don't have to read Asimov, know who HAL is, or dislike the Borg to worry that we won't be able to control how artificial intelligence will interpret our commands. Anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the number of ways technology can go wrong. We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.

– Erin Hunt, Program Officer

Meet the Human Campaigners!

Yesterday you met David Wreckham, the Campaign to Stop Killer Robots' first robot campaigner. David isn't alone in the campaign, and most of his current colleagues are human. Let's meet some of them and learn why they are so excited to stop killer robots!

[Photo (c) Sharron Ward for the Campaign to Stop Killer Robots]

Human or friendly robot?  The Campaign to Stop Killer Robots welcomes all campaigners who want to make history and stop killer robots!  Join us!