Blog Archives

Avoiding Rabbit Holes Through Policy and Law

All the discussions we’ve been having since the launch of the Campaign to Stop Killer Robots make me think about Alice in Wonderland and therefore I’ve been thinking a lot about rabbit holes.  I feel like current technology has us poised at the edge of a rabbit hole and if we take that extra step and create fully autonomous weapons we are going to fall – down that rabbit hole into the unknown, down into a future where a machine could make the decision to kill you, down into a situation that science fiction books have been warning us about for decades.

The best way to prevent such a horrific fall is going to be to create laws and policies that will block off the entrance to the rabbit hole so to speak.  At the moment, not many countries have policies to temporarily block the entrance and no one has laws to ban killer robots and close off the rabbit hole permanently.  It is really only the US and the UK who have even put up warning signs and a little bit of chicken wire around the entrance to this rabbit hole of killer robots through recently released policies and statements.

Over the past few weeks our colleagues at Human Rights Watch (HRW) and Article 36 have released reports on the US and UK policies towards fully autonomous weapons (killer robots).  HRW analyzed the 2012 US policy on autonomous weapons found in Department of Defense Directive Number 3000.09.  You can find the full review online.  Article 36 has a lot to say about the UK policy in their paper available online as well.

So naturally after reading these papers, I went in search of Canada’s policy.  That search left me feeling a little like Alice lost in Wonderland just trying to keep my head or at least my sanity in the face of a policy that like the Cheshire Cat might not be all there.

After my futile search, it became even more important that we talk to the government to find out if Canada has a policy on fully autonomous weapons.  Until those conversations happen, let’s see what we can learn from the US and UK policies and the analysis done by HRW and Article 36.

The US Policy

I like that the US Directive notes the risks to civilians including “unintended engagements” and failure.  One key point that Human Rights Watch’s analysis highlights is that the Directive states that for up to 10 years the US Department of Defense can only develop and use fully autonomous weapons that have non-lethal force.  The moratorium on lethal fully autonomous weapons is a good start but there are also some serious concerns about the inclusion of waivers that could override the moratorium.  HRW believes that “[t]hese loopholes open the door to the development and use of fully autonomous weapons that could apply lethal force and thus have the potential to endanger civilians in armed conflict.”[1]

In summary, Human Rights Watch believes that:

The Department of Defense Directive on autonomy in weapon systems has several positive elements that could have humanitarian benefits. It establishes that fully autonomous weapons are an important and pressing issue deserving of serious concern by the United States as well as other nations. It makes clear that fully autonomous weapons could pose grave dangers and are in need of restrictions or prohibitions. It is only valid for a limited time period of five to ten years, however, and contains a number of provisions that could weaken its intended effect considerably. The Directive’s restrictions regarding development and use can be waived under certain circumstances. In addition, the Directive highlights the challenges of designing adequate testing and technology, is subject to certain ambiguity, opens the door to proliferation, and applies only to the Department of Defense.[2]

In terms of what this all means for us in Canada, we can see there may be some aspects of the American policy that are worth adopting.  The restrictions on the use of lethal force by fully autonomous weapons should be adopted by Canada to protect civilians from harm without the limited time period and waivers.  I believe that Canadians would want to ensure that humans always make the final decision about who lives and who dies in combat.

The UK Policy

Now our friends at Article 36 have pointed out that the UK situation is a little more convoluted, and they are not quite ready to call it a comprehensive policy.  Since “the UK assortment of policy-type statements” sounds ridiculous, for the purposes of this post I’m shortening it to the UK almost-policy, with the hope that one day it will morph into a full policy.  Unlike the US policy, which is found in a neat little directive, the UK almost-policy is cobbled together from some statements and a note from the Ministry of Defence.  You can have a closer look at the Article 36 analysis of the almost-policy online.

To sum up, Article 36 outlines three main shortcomings of the UK almost-policy:

  • The policy does not set out what is meant by human control over weapon systems.
  • The policy does not prevent the future development of fully autonomous weapons.
  • The policy says that existing international law is sufficient to “regulate the use” of autonomous weapons.[3]

One of the most interesting points that Article 36 makes is the need for a definition of what human control over weapons systems means.  If you are like me, you probably assume it means that humans get to make the decision to fire on a target, the final decision of who lives and who dies, but we need to know exactly what governments mean when they say that humans will always be in control.  The Campaign to Stop Killer Robots wants to ensure that there is always meaningful human control over lethal weapons systems.

Defining what we mean by meaningful human control is going to be a very large discussion that we want to have with governments, with civil society, with the military, with roboticists and with everyone else.  This discussion will raise some very interesting moral and ethical questions, especially since a two-star American general recently said that he thought it was “the ultimate human indignity to have a machine decide to kill you.”  The problem is that once this technology exists, it will be incredibly difficult to predict where it will go and how on earth we would get back up that rabbit hole.  For us as Canadians, it is key to start having that conversation as soon as possible so we don’t end up stumbling down the rabbit hole of fully autonomous weapons by accident.

Erin Hunt, Program Officer


[1] See http://pages.citebite.com/s1x4b0y9k8mii

[2] See http://pages.citebite.com/g1p4t0m9s9res

Meet the Human Campaigners!

Yesterday you met David Wreckham, the Campaign to Stop Killer Robots’ first robot campaigner.  David isn’t alone in the campaign and most of his current colleagues are human.  Let’s meet some of them and learn why they are so excited to stop killer robots!

(c) Sharron Ward for the Campaign to Stop Killer Robots

Human or friendly robot?  The Campaign to Stop Killer Robots welcomes all campaigners who want to make history and stop killer robots!  Join us!

Meet David Wreckham – Robot Campaigner

David Wreckham is a friendly robot campaigning for a ban on killer robots.  See him in action during the launch of the Campaign to Stop Killer Robots in London last week.  You can follow David Wreckham on Twitter.

(c) Sharron Ward for the campaign, 23 April 2013.

Learning from the past, protecting the future

A key lesson learned from the Canadian-led initiative to ban landmines is not to wait until there is a global crisis before taking action. Fifteen years after the Ottawa Treaty banning landmines was opened for signature, there has been remarkable success. However, because the weapon was used so widely before the ban treaty became international law, it has taken a considerable amount of effort and resources to reduce that international crisis to a national problem. Much work remains, but all the trend lines are positive. With continued political will combined with sustained funding, this is a crisis that can be solved.

That lesson of taking action before a global crisis exists was an important factor in the Norwegian-led initiative to ban cluster munitions. Although a much more high-tech weapon than landmines, cluster munitions have caused unacceptable humanitarian harm when they have been used. Their indiscriminate effects and their impact on innocent civilians resulted in cluster munitions being banned. Fortunately, cluster bombs have not been as widely used as landmines, so the 2008 Convention on Cluster Munitions (CCM) is very much a preventive treaty. With tens of millions of cluster submunitions, also known as bomblets, having been destroyed from the stockpiles of states parties to the treaty, the preventive nature of the CCM is already saving countless lives, limbs and livelihoods. However, as with landmines, the use of cluster munitions that took place before the treaty came into force means there is much work remaining to clear the existing contamination and to help victims rebuild their shattered lives.

Both landmines and cluster munitions were considered advanced weapons in their day. Landmines were sometimes referred to as the ‘perfect soldier’, but once planted they could not tell the difference between a child and a combatant.  Cluster munitions were a much more expensive and sophisticated weapon than landmines, yet once they were dropped or launched, the submunitions dispersed from the carrier munition could not distinguish between a soldier and a civilian. Cluster submunitions also had high failure rates, often failing to explode upon impact as designed and leaving behind de facto minefields.

Both landmines and cluster munitions shared the characteristic of not knowing when the conflict had ended, so they continued to kill and injure long after peace arrived. In many cases they continued their destructive tasks decades after hostilities had ceased.

Another characteristic they shared is that the problems began precisely once humans were no longer involved, i.e. after the weapons were planted or fired. With no human control over who the target was or when an explosion would occur, these weapons were indiscriminate by nature, which was a key factor in the movements to ban them.

Today in London, England a new campaign will be launched taking the concept of prevention to its full extent by banning a weapon that is not yet in use. Fully autonomous weapons are very much on the drawing boards and in the plans of technologically advanced militaries such as China, Russia, the UK and the US. These weapons pose a wide range of ethical, moral, and legal issues. The Campaign to Stop Killer Robots seeks to raise awareness of those issues and to encourage a pre-emptive ban on the weapons.

Over the past decade, the expanded use of unmanned armed vehicles or drones has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are permitting the United States and other nations with high-tech militaries, including China, Israel, Russia, and the United Kingdom, to move toward systems that would give full combat autonomy to machines.

Lethal robot weapons which would be able to select and attack targets without any human intervention take warfare to dangerous and unacceptable levels. The new campaign launched today is a coordinated international coalition of non-governmental organizations concerned with the implications of fully autonomous weapons, also called “killer robots.”

The Campaign to Stop Killer Robots calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons. The prohibition should be achieved through an international treaty, as well as through national laws and other measures.

The term fully autonomous weapons may sound like something from a video game, but they are not. They are lethal weapons, and once programmed they will not be controlled by anyone. While some may find appealing the idea of machines fighting machines, with humans spared the death and destruction of combat, the fact is that this will not be the case. We are not talking here about futuristic cyborgs battling each other to the death, but about robots designed to kill humans. Thus the name killer robots is simultaneously deadly accurate and highly disturbing.

We live in a world where technology is omnipresent, but we are also well aware of its limitations. While we enjoy the benefits of technology and appreciate those who create and operate it, we are also well aware that airplanes sometimes crash, trains derail, ships run aground, cars get recalled, the internet occasionally blacks out (as do power grids), computers freeze, viruses spread via email messages and websites, and people occasionally end up in the wrong place because of a malfunctioning or poorly programmed GPS device. To use the vernacular, “shit happens,” or in this case, hi-tech shit happens. What could possibly go wrong with arming robots without any meaningful human control?

It would also be comforting to think that since these are very advanced weapons, only the “good guys” would have them. However, events in the last two years in Libya, North Korea and Syria, to name a few, indicate that desperate dictators and rogue states have no problem acquiring the most sophisticated and hi-tech weaponry. If they can get them, so can terrorists and criminals.

Scientists and engineers have created some amazing robots which have the potential to greatly improve our lives, but no scientist or engineer should be involved in creating an armed robot that can operate without human control. Computer scientists and engineers have created fabulous devices which have increased our productivity and made life much more enjoyable for millions of people. Those computer experts should never create programs that would allow an armed machine to operate without any human in control.

The hundreds of thousands of landmine and cluster munition victims around the world are testament to the fact that what looks good on the drawing board or in the lab can have deadly consequences for innocent civilians, despite the best intentions or even the best technology that money can buy. We need to learn the key lesson of these two weapons: tragedies can and should be prevented.  The time to stop fully autonomous weapons does not begin next week, or next month, or during testing, or after their first use. The time to stop killer robots begins today, April 23, 2013, in London, England and wherever you are reading this.

– Paul Hannon
