There have been some exciting and important developments over the summer. The International Committee of the Red Cross (ICRC) launched the newest edition of the International Review of the Red Cross and the theme is New Technologies and Warfare. A number of campaigners contributed to the journal so it is definitely worth a read. The ICRC also published a Frequently Asked Questions document on autonomous weapons that helps explain the issue and the ICRC’s position on fully autonomous weapons.
France along with the United Nations Office for Disarmament Affairs in Geneva convened a seminar on fully autonomous weapons for governments and civil society in early September. The Campaign to Stop Killer Robots had campaigners taking part and you can read the full report on the global campaign’s website.
The campaigns in Germany and Norway are starting off strong as well. In the lead-up to the German election, all the major parties shared their policy positions on fully autonomous weapons with our colleagues at Facing Finance. Norwegian campaigners launched their campaign with a breakfast seminar, and they are now waiting to hear what the new Norwegian government’s policy on fully autonomous weapons will be.
Like our colleagues in Norway, we’re still waiting to hear what Canada’s policy on fully autonomous weapons will be. We have written to the Ministers of National Defense and of Foreign Affairs, but the campaign team has not yet heard back. In the meantime, Canadians can weigh in on the topic through our new online petition. Share and sign the petition today! This petition is the first part of a new initiative that will be coming your way in a few weeks. Keep an eye out for the news, and until then keep sharing the petition so that the government knows that Canadians have concerns about fully autonomous weapons and believe that Canada should have a strong policy against them.
EDIT: We had a very human moment here and forgot to include congratulations to James Foy of Vancouver for winning the 2013 Canadian Bar Association’s National Military Law Section Law School Sword and Scale Essay Prize for his essay called Autonomous Weapons Systems: Taking the Human out of International Humanitarian Law. It is great to see law students looking at this new topic and also wonderful that the Canadian Bar Association recognized the importance of this issue. Congratulations James!
Our colleagues at Article 36 have done a detailed analysis of the UK parliamentary debate. In light of the stronger language in this debate, there is some room to be optimistic:
It would seem straightforward to move from such a strong national position to a formalised national moratorium and a leading role within an international process to prohibit such weapons. The government did not provide any reason as to why a moratorium would be inappropriate, other than to speculate on the level of support amongst other countries for such a course of action.
Whilst significant issues still require more detailed elaboration, Article 36 believes this parliamentary debate has been very valuable in prompting reflection and Ministerial scrutiny of UK policy on fully autonomous weapons and narrowing down the areas on which further discussions should focus. It appears clear now that there will be scope for such discussions to take place with the UK and other states in the near future.
The UK parliamentary debate and Article 36’s analysis of it, coming so soon after the Human Rights Council debate and the widespread media coverage of the issue, make it quite clear that it is time for a similarly substantive and non-partisan debate in the Canadian House of Commons as the government works out its policy on this important issue.
Now, unless like me you grew up with a sci-fi geek for a father who introduced you to fictional worlds like those of Star Wars, Star Trek and 2001: A Space Odyssey at a young age, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why those laws are relevant to the Campaign to Stop Killer Robots.
Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories. In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters. In his stories, the Three Laws were programmed into robots as a safety feature. The laws were first stated in the short story “Runaround,” but they appear in many of his other writings and have since shown up in other authors’ work as well.
The Three Laws of Robotics are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
After reading the Three Laws, it is probably clear why Mr. Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons. A fully autonomous weapon would, by definition, violate the first and second laws of robotics.
To me, the Three Laws seem to be pretty common-sense guides for the actions of autonomous robots. It is probably a good idea to protect yourself from being killed by your own machine – ok, not probably – it is definitely a good idea to make sure your machine does not kill you! It is also important to remember that Asimov recognized that even ordinary robots with artificial intelligence (not just fully autonomous weapons) could pose a threat to humanity at large, so he added a fourth law, the Zeroth Law, to come before the others:
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be. We need to focus on reality, not fiction!” I hear you, but since fully autonomous weapons do not yet exist, we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons. Who better to help us consider the possibilities than science fiction writers, who have been thinking about these types of issues for decades?
At the moment, Asimov’s Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons. Asimov’s stories often tell tales of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict. By articulating some common-sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writings may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient. Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons. You don’t have to read Asimov, know who HAL is or dislike the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands, and anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the number of ways technology can go wrong. We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.
- Erin Hunt, Program Officer