by Brett MacFarlane
When I first applied for an internship position to work on the Campaign to Stop Killer Robots back in November, I knew virtually nothing about either the campaign or the killer robots issue. I chose the internship with Mines Action Canada as my top choice because it was the position most closely related to my field of study: Conflict Analysis and Conflict Resolution. When submitting my application, I had a conversation with my fellow students about what exactly killer robots were. The general consensus of the group was that killer robots had to be the drones being used militarily in countries such as Pakistan and Yemen.
Since joining the International Campaign to Stop Killer Robots in January, I have had the privilege of being exposed to an issue that has not been widely discussed by the general public or even by most international affairs students. I learned about current efforts by militaries to develop robotic weapons that would have complete autonomy to decide whether or not to fire on a specified target, without meaningful human control. Most disturbingly, I learned that some countries (e.g. the United States, Israel, and several others) have not only taken steps to develop “human-out-of-the-loop weapons”, but that some current technologies could easily be adapted to become autonomous weapons. As a student in an international affairs program, and as a concerned person, I believe this issue raises serious human rights and humanitarian concerns.
The use of autonomous weapons is a troubling issue for human rights advocates and humanitarian organizations because it would make humans increasingly vulnerable in warfare, where international law is not designed to accommodate autonomous weapons. First, how could the protection of civilians be guaranteed in times of combat? If human judgment is taken off the battlefield, robots would be tasked with distinguishing armed combatants from ordinary citizens. In this scenario, would a robot have the capability to differentiate between a soldier holding a weapon and a child holding a toy gun? Such mistakes become ever more likely as robots are given greater autonomy and decision-making capability on the battlefield. Further, the development and use of autonomous weapons could pose serious issues of accountability in war. For example, if a robotic system were to go awry and end up massacring a village of non-combatants, who would be held accountable? Would it be the system's operator, the military, the computer programmer, or the manufacturer of the machine? Without military troops in the air, on land, or at sea, who can be held liable for the actions of robots in combat? Implementing the use of autonomous robots in war would severely reduce the legal protections civilians are accorded during conflict.
I am very concerned that putting autonomous weapons on the battlefield would change how wars are fought and conducted. Wars would no longer be fought by the military personnel of two opposing sides, but by autonomous weapons, capable of making their own ‘kill decision’, against human forces. Countries with the financial means to develop autonomous weapons could threaten less developed countries, which would bear the costs of higher human casualties on the battlefield. More importantly, the potential for future conflict would grow, as the decision to enter into combat would be much easier for leaders to make if they did not have to bear the costs of human casualties. The concern here is that countries would be sending machines to fight against humans, instead of the traditional model of human versus human. As difficult as this may be to hear, it is only through the casualties of soldiers on the battlefield that we are able to see the true cost of warfare. Taking human sacrifice off the battlefield could potentially cause an increase in future warfare.
As interest in the topic of killer robots in the international community grows, it is imperative that students, and indeed all citizens, begin to discuss the development of autonomous robots for military use in their respective fields. Should silence continue not only in the academic community, but also in the Canadian Parliament and the public domain, the potential for autonomous robots to make life-and-death decisions on the battlefield without human control may be realized. As one concerned student and citizen who has signed the petition to Keep Killer Robots Fiction, I strongly encourage everyone to do the same, not only by gaining exposure to and increasing their knowledge of the subject, but by joining me in signing the petition at http://bit.ly/KRpetition. Only through increased discussion and knowledge of this topic in the general community can pressure be mounted on governments to create a pre-emptive ban on this emerging threat.
Brett MacFarlane interned at Mines Action Canada and is a Master of Arts candidate at the Norman Paterson School of International Affairs at Carleton University, specializing in Conflict Analysis and Conflict Resolution.
By Matthew Taylor
There is nothing Canadian about machines that kill people without human control. Machines that have no conscience. Machines that have no compassion. Machines without the ability to distinguish between someone who is a genuine threat and someone in the wrong place at the wrong time.
We, as a people, have for many years sought to build a safer and more peaceful world. Former Prime Minister Brian Mulroney made Nelson Mandela and the end of apartheid in South Africa “the highest priority of the government of Canada in our foreign affairs.” Former Prime Minister Lester Pearson brought about modern peacekeeping in 1956. Former Foreign Affairs Minister Lloyd Axworthy gathered states in our nation’s capital to end the use of anti-personnel landmines around the world. These men understood that a desire for peace and justice is a basic Canadian value. That is not something a machine can ever understand.
This issue presents us as Canadians with an opportunity to share our values, and our vision for a safer world. Killer Robots are perhaps the most important international arms control issue to emerge since nuclear weapons were dropped on Hiroshima and Nagasaki. Nuclear weapons redefined how we understood and approached warfare. That is why it is so absolutely necessary for the world to confront the problem of killer robots before and not after they see action on the battlefield.
The costs of playing catch-up are far too evident. Once a weapon is employed, most countries will scramble to adjust to the shift in the balance of power. During World War I, chemical weapons were used against Canadian soldiers, causing blindness, death and unspeakable suffering. Nearly one hundred years later, chemical weapons were used in Syria, causing death and significant harm to civilians. With thousands of casualties of chemical weapons in between, the difficulty of banning weapons once they have been put into use is all too evident.
History has shown that the support and leadership of our nation can bring about international change. We have a duty as moral entrepreneurs to prevent the horror of autonomous killing machines from ever becoming a reality.
In November 2013, states agreed to discuss the question of lethal autonomous robots at meetings of the Convention on Conventional Weapons in May 2014. This umbrella agreement allows its 117 member states to consider issues of arms control.
But at the moment, the official Canadian government position on killer robots is unclear. A government statement in the February 2014 edition of L’actualité offers little insight. In the article, a Canadian Foreign Affairs spokesman indicated that Canada does not ban weapons that do not yet exist. But in fact, Canada has participated in a pre-emptive ban of weapons before.
In 1995, Canada was one of the original parties to Protocol IV of the Convention on Conventional Weapons. This international agreement banning blinding lasers was made in the very same forum in which killer robots are set to be discussed in May. This not only represents a step in the right direction but a precedent upon which to build.
If a pre-emptive ban has been done before, it can be done again. Whether a weapon exists yet or not should have no bearing on whether the technology should be illegal under international humanitarian law. What should matter is whether we as a people believe that these weapons can ever be considered to be humane. To me, and to many others, the answer to that question is clearly no.
If you feel that as Canadians we must take a stand, please join me in signing our petition to Keep Killer Robots Fiction.
Matthew Taylor is an intern at Mines Action Canada and is a Master of Arts candidate at the Norman Paterson School of International Affairs at Carleton University, specializing in Intelligence and National Security.
Yesterday you met David Wreckham, the Campaign to Stop Killer Robots’ first robot campaigner. David isn’t alone in the campaign and most of his current colleagues are human. Let’s meet some of them and learn why they are so excited to stop killer robots!
(c) Sharron Ward for the Campaign to Stop Killer Robots
Human or friendly robot? The Campaign to Stop Killer Robots welcomes all campaigners who want to make history and stop killer robots! Join us!
A key lesson learned from the Canadian-led initiative to ban landmines is not to wait until there is a global crisis before taking action. Fifteen years after the Ottawa Treaty banning landmines was opened for signature, there has been remarkable success. However, because the weapon was widely used before the ban treaty became international law, it has taken considerable effort and resources to reduce that international crisis to a national problem. Much work remains, but all the trend lines are positive. With continued political will combined with sustained funding, this is a crisis that is solvable.
That lesson of taking action before a global crisis exists was an important factor in the Norwegian-led initiative to ban cluster munitions. Although a much more high-tech weapon than landmines, cluster munitions have caused unacceptable humanitarian harm when they have been used. Their indiscriminate effects and their impact on innocent civilians resulted in cluster munitions being banned. Fortunately, cluster bombs have not been as widely used as landmines, so the 2008 Convention on Cluster Munitions (CCM) is very much a preventive treaty. With tens of millions of cluster submunitions, also known as bomblets, having been destroyed from the stockpiles of states parties to the treaty, the preventive nature of the CCM is already saving countless lives, limbs and livelihoods. However, as with landmines, the use of cluster munitions before the treaty came into force means there is much work remaining to clear the existing contamination and to help victims rebuild their shattered lives.
Both landmines and cluster munitions were considered advanced weapons in their day. Landmines were sometimes referred to as the ‘perfect soldier’, but once planted they could not tell the difference between a child and a combatant. Cluster munitions were a much more expensive and sophisticated weapon than landmines, yet once dropped or launched, the submunitions dispersed from the carrier munition could not distinguish between a soldier and a civilian. Cluster submunitions also had high failure rates, often failing to explode upon impact as designed and leaving behind de facto minefields.
Both landmines and cluster munitions shared the characteristic of not knowing when the conflict had ended, so they continued to kill and injure long after peace had been established. In many cases they continued their destructive tasks decades after hostilities had ceased.
Another characteristic they shared is that once humans were no longer involved, i.e. after planting or firing, the impact of the weapons immediately became problematic. The lack of human control over who the target was or when an explosion would occur resulted in weapons that were indiscriminate by nature, which was a key factor in the movements to ban them.
Today in London, England, a new campaign will be launched, taking the concept of prevention to its full extent by seeking to ban a weapon that is not yet in use. Fully autonomous weapons are very much on the drawing boards and in the plans of technologically advanced militaries such as those of China, Russia, the UK and the US. These weapons pose a wide range of ethical, moral, and legal issues. The Campaign to Stop Killer Robots seeks to raise awareness of those issues and to encourage a pre-emptive ban on the weapons.
Over the past decade, the expanded use of unmanned armed vehicles or drones has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are permitting the United States and other nations with high-tech militaries, including China, Israel, Russia, and the United Kingdom, to move toward systems that would give full combat autonomy to machines.
Lethal robot weapons which would be able to select and attack targets without any human intervention take warfare to dangerous and unacceptable levels. The new campaign launched today is a coordinated international coalition of non-governmental organizations concerned with the implications of fully autonomous weapons, also called “killer robots.”
The Campaign to Stop Killer Robots calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons. The prohibition should be achieved through an international treaty, as well as through national laws and other measures.
The term fully autonomous weapons may sound like something from a video game, but these weapons are no fantasy. They are lethal weapons, and once programmed they will not be controlled by anyone. While some may find appealing the idea of machines fighting machines, with humans spared the death and destruction of combat, the fact is that will not be the case. We are not talking here about futuristic cyborgs battling each other to the death, but about robots designed to kill humans. Thus the name killer robots is simultaneously deadly accurate and highly disturbing.
We live in a world where technology is omnipresent, but we are also well aware of its limitations. While we enjoy the benefits of technology and appreciate those who create and operate it, we are also well aware that airplanes sometimes crash, trains derail, ships run aground, cars get recalled, the internet occasionally blacks out (as do power grids), computers freeze, viruses spread via email messages or websites, and people occasionally end up in the wrong place because of a malfunctioning or poorly programmed GPS device. To use the vernacular, “shit happens” — or in this case, hi-tech shit happens. What could possibly go wrong with arming robots without any meaningful human control?
It would also be comforting to think that since these are very advanced weapons, only the “good guys” would have them. However, events of the last two years in Libya, North Korea and Syria, to name a few, indicate that desperate dictators and rogue states have no problem acquiring the most sophisticated and hi-tech weaponry. If they can get them, so can terrorists and criminals.
Scientists and engineers have created some amazing robots which have the potential to greatly improve our lives, but no scientist or engineer should be involved in creating an armed robot that can operate without human control. Computer scientists and engineers have created fabulous devices which have increased our productivity and made life much more enjoyable for millions of people. Those computer experts should never create programs that would allow an armed machine to operate without any human in control.
The hundreds of thousands of landmine and cluster munition victims around the world are testament to the fact that what looks good on the drawing board or in the lab can have deadly consequences for innocent civilians, despite the best intentions or even the best technology that money can buy. We need to learn the key lesson of these two weapons: tragedies can and should be prevented. The time to stop fully autonomous weapons does not begin next week, or next month, or during testing, or after their first use. The time to stop killer robots begins today, April 23, 2013, in London, England, and wherever you are reading this.
– Paul Hannon