Blog Archives

Majority of Canadians Oppose Killer Robots

New poll indicates that 55% of Canadians oppose autonomous weapons systems

This week, Ipsos released results from the first global public opinion survey that included a question on autonomous weapons. Autonomous weapons, sometimes called killer robots, are future weapons that could select and fire upon a target without human control. Ipsos found that 55% of Canadians surveyed opposed autonomous weapons while another 25% were uncertain about the technology.

In the survey, 11,500 citizens across 25[1] countries were asked “The United Nations is reviewing the strategic, legal and moral implications of autonomous weapons systems. These systems are capable of independently selecting targets and attacking those targets without human intervention; they are thus different than current day ‘drones’ where humans select and attack targets. How do you feel about the use of autonomous weapons in war?” In all but five countries (France, India, the US, China and Poland), a clear majority were opposed to the use of autonomous weapons in war.

Among Canadians, 21% of respondents reported being somewhat opposed to autonomous weapons in war while 34% were strongly opposed to the technology being used in war. Only 5% of Canadians surveyed were strongly supportive of using autonomous weapons in war. This survey is the first to poll Canadians on autonomous weapons systems.

“As part of the Campaign to Stop Killer Robots, we have frequently heard from Canadians that they want to ensure that there is meaningful human control over weapons at all times. This survey confirms that those opinions represent the majority of Canadians,” said Paul Hannon, Executive Director of Mines Action Canada, a co-founder of the Campaign to Stop Killer Robots. “In addition to strong citizen opposition to the use of autonomous weapons in war, Canada also has the first robotics company in the world to vow never to build autonomous weapons, Clearpath Robotics. It is time for the Canadian government to catch up to the citizens and come up with a national policy on autonomous weapons.”

Mines Action Canada is calling on the Government of Canada to ensure meaningful public and Parliamentary involvement in drafting Canada’s national position on autonomous weapons systems prior to the United Nations talks on the subject later this year.

– 30 –

Media Contact: Erin Hunt, Program Coordinator, Mines Action Canada, +1 613 241-3777 (office), +1 613 302-3088 (mobile) or erin@minesactioncanada.org.

[1] Argentina, Belgium, Mexico, Poland, Russia, Saudi Arabia, South Africa, South Korea, Sweden, Turkey, Hungary, Australia, Brazil, Canada, China, France, Germany, Great Britain, India, Italy, Japan, Spain, Peru and the United States of America.


Video Contest – Runner Up Announced

In less than two weeks, states will decide if and how they will continue international talks on autonomous weapons systems at the UN’s Convention on Conventional Weapons in Geneva. We and the whole Campaign to Stop Killer Robots are calling on states to take the next step towards a ban by agreeing to a Group of Governmental Experts.

With such an important decision looming over states, we are announcing the winners of our youth video contest. This week, we are pleased to present the runner-up video (and top high school video) by Daryl, Henry, Joseph and Anders at Petersburg High School.

Please feel free to share widely!

We thank all those who submitted videos to the contest and congratulate Daryl, Henry, Joseph and Anders on their excellent video.  Come back next week to see the winning entry.


Mind Over Machine: Why Human Soldiers are (and Will Remain) Better than Killer Robots

Guest post by MAC Research Associate, Andrew Luth

This summer, movie-goers are flocking to theatres to see tales of superheroes, dinosaurs, and plucky college singing groups. Two of the season’s biggest movies, Avengers: Age of Ultron and Terminator Genisys, have more in common than an over-reliance on computer-generated visual effects. Both feature killer robots: advanced weapons systems capable of fighting and killing independent of human command. Killer robots have been a staple of popcorn flicks for decades, but these days movies aren’t the only place we can expect to see them turning up. Many of the world’s most advanced militaries are getting closer and closer to producing killer robots of their own.

Killer robots, or autonomous weapons systems (AWS), are machines capable of identifying and attacking targets without human intervention. Despite the moral and legal concerns about such weapons, leading scientists and engineers are warning that AWS may be only a few years away from reality. The few who support the development of AWS tend to view them as inherently superior to human soldiers. Robots, they argue, don’t get tired or emotional, and are more expendable than human soldiers. As University of Massachusetts-Amherst Professor Charli Carpenter explains, some supporters have even gone so far as to say that “robots won’t rape,” overlooking the reality that rape and other war crimes are often ordered military tactics. All such arguments assume AWS will make better soldiers than humans. However, they fail to fully consider how human soldiers are actually superior to AWS. Several attributes of human physiology and behaviour give human soldiers the edge over autonomous weapons systems, not just now but for the foreseeable future.

According to the international legal principle of distinction, belligerent parties must distinguish between civilians and combatants when using force in combat. Human soldiers have a significant advantage over artificial systems in meeting this requirement. The human brain and eye work in tandem to process complex visual information incredibly quickly and efficiently. This skill is invaluable on the battlefield, enabling soldiers to pick out subtle distinctions in shape, colour, texture, and movement from long distances and use that information to their advantage. Technology is developing quickly and it is conceivable that computers will someday rival our visual processing powers, but no computer program has yet come close to human abilities to pick out patterns and identify objects even in motionless, two-dimensional images. Even further out of the realm of possibility for robotics is the brain’s aptitude for reading human behaviour. The human mind is particularly attuned to reading tiny changes in expression and body language, even subconsciously. This is immensely important in combat scenarios, where soldiers need to determine an unknown party’s intent almost instantly, with fractions of a second making the difference between life and death. The science of computer vision is advancing rapidly, but it is likely to be decades before AWS can even approach the visual acuity of human soldiers, if ever.

Even if scientists eventually develop autonomous weapons systems with visual processing skills superior to our own, a human soldier would still have many advantages over killer robots. The highly flexible and adaptive nature of the human mind is perhaps the most distinct advantage. This flexibility allows us to receive and process information both from our natural senses and external sources. In addition to acquiring information by communicating with other soldiers, humans can quickly learn to integrate data from radar, night vision, infrared, and other technologies. Furthermore, to analyze this information human soldiers draw on a wealth of learning and experience from all areas of life. Robots, however, are generally designed to analyze specific information sources using pre-determined metrics, making it impossible for them to evaluate or even to detect unanticipated information. In many situations, the success of a mission could hinge on the ability to respond to such information.

The human mind’s flexibility also means soldiers can perform any number of activities a situation requires. This is invaluable during military conflict. In his famous work The Art of War, Chinese military strategist Sun Tzu explains “just as water retains no constant shape, so in warfare there are no constant conditions.” Truly successful military tactics, he writes, are “regulated by the infinite variety of circumstances.” Humans are well-equipped to respond to this infinite variety. A modern infantry soldier can fire a rifle accurately, provide emergency medical aid, accept a prisoner’s surrender, operate a vehicle, assess enemy tactics, and perform any number of other necessary tasks. Robots, however, are specialists, designed to respond to a specific scenario or perform a single task, often in controlled environments. In his recent piece on killer robots for Just Security, retired Canadian military officer John MacBride quotes famed German military theorist Helmuth von Moltke’s observation that “no operation extends with any certainty beyond the first encounter with the main body of the enemy.” When a mission’s parameters change quickly, human minds learn and adapt, developing creative solutions to novel problems. However, when robots meet unanticipated challenges, they often fail spectacularly, necessitating significant human intervention. As MacBride explains, this is distinct cause for concern. There are bound to be programming flaws and oversights when a machine developed years in advance under controlled conditions makes its debut on a battlefield. IBM’s famed computing system Watson illustrated this perfectly during its star turn on the television game show Jeopardy!. Despite its dominant win over two human champions, in response to a question in the Final Jeopardy category of US Cities, Watson answered ‘Toronto’. Such failure is humourous in a game show setting, but the consequences of a similar error on the battlefield could be deadly.

In spite of Watson’s amazing performance, its failures demonstrate that neither human beings nor technological systems can be perfect. Whether out of fatigue, emotion, prejudice, or simple lack of information, human soldiers can and do make poor decisions. When these mistakes result in the deaths of fellow soldiers or innocent civilians, judicial systems are in place to hold military personnel accountable for their unethical behaviour or poor judgement. If AWS are deployed it is inevitable they too will perpetrate atrocities, whether from programming error, technical failure, or unpredictable variables. However, our society has no recourse for crimes committed by robots. Our justice system rests upon punishing immoral acts, but an autonomous weapons system has about as much sense of right and wrong as a toaster. Robots lack the capacity to make ethical decisions, acting only as their programming dictates. Nonetheless, a crime perpetrated by a robot is still a crime. Should society therefore pursue justice with the programmer? The commander? Or would leaders deem certain levels of ‘collateral damage’ acceptable and overlook any atrocities perpetrated by an AWS?

Our respect for the capacity of others to make moral choices is one among many reasons we value human life so highly. Accordingly, supporters of autonomous weapons systems often claim the best argument for AWS adoption is their potential to reduce human casualties. This assertion is tenuous at best. Given that autonomous weapons systems would already require remote oversight and operation capabilities, it would be a simple matter of procedure to give human operators final approval over the use of lethal force on a given target. It is unlikely that fully ceding authority over weapons systems to computers would do anything to make military personnel safer. In fact, AWS might actually increase the likelihood of military engagement. Operating an AWS is far cheaper than training and deploying a human soldier, making them relatively expendable. Having access to relatively cheap and easily-replaced military assets significantly lowers the political and financial costs of military action, making states more likely to wage war in the first place. We have already witnessed the advent of this trend with the proliferation of unmanned military drones. Drone technology now allows leaders to conduct military campaigns abroad while their citizens pay little attention. Autonomous weapons systems could take this trend to its extreme, with robots conducting foreign bombing raids or assassinations with little human involvement. Protecting military personnel is a worthy goal, but our aversion to the human cost of war is the reason we place such high value on peace in the first place. Each tragic loss of life compels a society to consider the worthiness of its cause. Sending robots to do the killing externalizes the horrific consequences of war, making governments more willing to wage wars and less concerned with ending them.

We live in a world that sometimes forces us to take human lives. For thousands of years, some of humanity’s greatest minds have worked to develop philosophical and ethical frameworks to guide our decisions in war. Recently, however, it has been difficult for us to keep pace with technology’s rapid proliferation. As technology revolutionizes all aspects of society, we can scarcely consider the social and ethical consequences of each new development before it arrives. The advent of nuclear weapons, the internet and countless other scientific advances all bear witness to our ethical tardiness. Scientists are now making huge breakthroughs in robotics and artificial intelligence, but no matter how skilled robots become at distinguishing between targets, we owe it to ourselves and to all of humanity to fully consider each decision to use deadly force. Passing this choice off to an amoral machine would be unethical by definition. We currently live in a world where killer robots appear only in movies and other works of fiction, but it may not be long before they make the jump from movie screens to the real world. The international community must take action and ban these immoral weapons before they become a reality.

After graduating from Calvin College in Grand Rapids, Michigan, Andrew Luth spent two years living and working in China. He is currently pursuing his master’s degree at Carleton University’s Norman Paterson School of International Affairs in Ottawa, Canada. His academic interests include disarmament, conflict analysis and resolution, and the Asia-Pacific region.

Asimov’s Three Laws of Robotics

In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage. The coverage is very exciting, and what I have found particularly interesting is the number of articles that refer to Isaac Asimov’s Three Laws of Robotics.

Now, unless like me you grew up with a sci-fi geek for a father who introduced you to various fictional worlds like those in Star Wars, Star Trek and 2001: A Space Odyssey at a young age, you might not know who Isaac Asimov is, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.

Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction writing, especially his short stories. In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters. In his stories, the Three Laws were programmed into robots as a safety function. The laws were first stated in the short story “Runaround,” but you can see them in many of his other writings, and since then they have shown up in other authors’ work as well.

The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

After reading the Three Laws, it might be pretty clear why Mr. Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons.  A fully autonomous weapon will most definitely violate the first and second laws of robotics.

To me, the Three Laws seem to be pretty common sense guides for the actions of autonomous robots.  It is probably a good idea to protect yourself from being killed by your own machine – ok not probably – it is a good idea to make sure your machine does not kill you!  It is also important for us to remember that Asimov recognized that even regular robots with artificial intelligence (not just fully autonomous weapons) could pose a threat to humanity at large, so he later added a fourth law, the Zeroth Law, to come before the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
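To see why the laws are appealing on paper, it helps to notice that they form a priority-ordered rule system: each law only applies so long as no higher-ranked law is violated. Here is a toy sketch in Python of that precedence structure (entirely hypothetical, not from Asimov or any real robotics system; the flag names are invented for illustration). Notice that the hard part – deciding what actually counts as “harm” – is simply assumed away by the boolean flags, which is exactly the kind of gap that causes trouble in Asimov’s stories:

```python
# Toy illustration of Asimov's laws as a priority-ordered rule check.
# All names (Action, permitted, the boolean flags) are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    harms_humanity: bool = False    # Zeroth Law concern
    harms_human: bool = False       # First Law concern
    ordered_by_human: bool = False  # Second Law concern
    protects_self: bool = False     # Third Law concern

def permitted(action: Action) -> bool:
    """Check a proposed action against each law in priority order."""
    if action.harms_humanity:   # Zeroth Law outranks everything
        return False
    if action.harms_human:      # First Law outranks obedience and self-preservation
        return False
    # Second and Third Laws: obeying orders and self-protection are allowed
    # only once the higher-ranked laws are satisfied.
    return True

print(permitted(Action(harms_human=True)))       # a harmful order is refused
print(permitted(Action(ordered_by_human=True)))  # a harmless order is obeyed
```

The sketch makes the laws look trivially easy to implement – but only because the judgement calls are hidden inside pre-labelled flags. A real machine would have to compute those labels itself, and that is where Asimov’s robots go wrong.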

“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be.  We need to focus on reality not fiction!”  I hear you but since fully autonomous weapons do not yet exist we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons.  Who better to help us consider the possibilities than science fiction writers who have been thinking about these types of issues for decades?

At the moment, Asimov’s Three Laws are the closest thing we have to laws explicitly governing the use of fully autonomous weapons.  Asimov’s stories often tell tales of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict.  By articulating some pretty common sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writings may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient.  Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons.  You don’t have to read Asimov, know who HAL is or dislike the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands.  Anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the number of ways technology can go wrong.  We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.

– Erin Hunt, Program Officer