Blog Archives
Mind Over Machine: Why Human Soldiers are (and Will Remain) Better than Killer Robots
Guest post by MAC Research Associate, Andrew Luth
This summer, movie-goers are flocking to theatres to see tales of superheroes, dinosaurs, and plucky college singing groups. Two of the season’s biggest movies, Avengers: Age of Ultron and Terminator Genisys, have more in common than an over-reliance on computer-generated visual effects. Both feature killer robots: advanced weapons systems capable of fighting and killing independent of human command. Killer robots have been a staple of popcorn flicks for decades, but these days movies aren’t the only place we can expect to see them turning up. Many of the world’s most advanced militaries are getting closer and closer to producing killer robots of their own.
Killer robots, or autonomous weapons systems (AWS), are machines capable of identifying and attacking targets without human intervention. Despite the moral and legal concerns about such weapons, leading scientists and engineers are warning that AWS may be only a few years away from reality. The few who support the development of AWS tend to view them as inherently superior to human soldiers. Robots, they argue, don’t get tired or emotional, and are more expendable than human soldiers. As University of Massachusetts-Amherst Professor Charli Carpenter explains, some supporters have even gone so far as to say that “robots won’t rape,” overlooking the reality that rape and other war crimes are often ordered military tactics.
All such arguments assume AWS will make better soldiers than humans. However, they fail to fully consider how human soldiers are actually superior to AWS. Several attributes of human physiology and behaviour give human soldiers the edge over autonomous weapons systems not just now, but for the foreseeable future.
According to the international legal principle of distinction, belligerent parties must distinguish between civilians and combatants when using force in combat. Human soldiers have a significant advantage over artificial systems in meeting this requirement. The human brain and eye work in tandem to process complex visual information incredibly quickly and efficiently. This skill is invaluable on the battlefield, enabling soldiers to pick out subtle distinctions in shape, colour, texture, and movement from long distances and use that information to their advantage. Technology is developing quickly, and it is conceivable that computers will someday rival our visual processing powers, but no computer program has yet come close to human abilities to pick out patterns and identify objects even in motionless two-dimensional images. Even further out of reach for robotics is the brain’s aptitude for reading human behaviour. The human mind is particularly attuned to reading tiny changes in expression and body language, even subconsciously. This is immensely important in combat scenarios, where soldiers need to determine an unknown party’s intent almost instantly, with fractions of a second making the difference between life and death. The science of computer vision is advancing rapidly, but it is likely to be decades before AWS can even approach the visual acuity of human soldiers, if ever.
Even if scientists eventually develop autonomous weapons systems with visual processing skills superior to our own, a human soldier would still have many advantages over killer robots. The highly flexible and adaptive nature of the human mind is perhaps the most distinct advantage. This flexibility allows us to receive and process information both from our natural senses and from external sources. In addition to acquiring information by communicating with other soldiers, humans can quickly learn to integrate data from radar, night vision, infrared, and other technologies. Furthermore, to analyze this information human soldiers draw on a wealth of learning and experience from all areas of life. Robots, however, are generally designed to analyze specific information sources using pre-determined metrics, making it impossible for them to evaluate, or even detect, unanticipated information. In many situations, the success of a mission could hinge on the ability to respond to such information.
The human mind’s flexibility also means soldiers can perform any number of activities a situation requires. This is invaluable during military conflict. In his famous work The Art of War, Chinese military strategist Sun Tzu explains that “just as water retains no constant shape, so in warfare there are no constant conditions.” Truly successful military tactics, he writes, are “regulated by the infinite variety of circumstances.” Humans are well-equipped to respond to this infinite variety. A modern infantry soldier can fire a rifle accurately, provide emergency medical aid, accept a prisoner’s surrender, operate a vehicle, assess enemy tactics, and perform any number of other necessary tasks. Robots, however, are specialists, designed to respond to a specific scenario or perform a single task, often in controlled environments. In his recent piece on killer robots for Just Security, retired Canadian military officer John MacBride quotes famed German military theorist Helmuth von Moltke’s observation that “no operation extends with any certainty beyond the first encounter with the main body of the enemy.” When a mission’s parameters change quickly, human minds learn and adapt, developing creative solutions to novel problems. However, when robots meet unanticipated challenges, they often fail spectacularly, necessitating significant human intervention. As MacBride explains, this is a distinct cause for concern. There are bound to be programming flaws and oversights when a machine developed years in advance under controlled conditions makes its debut on a battlefield. IBM’s famed computing system Watson illustrated this perfectly during its star turn on the television game show Jeopardy!. Despite its dominant win over two human champions, Watson answered ‘Toronto’ to a Final Jeopardy clue in the category of US Cities. Such a failure is humorous in a game show setting, but the consequences of a similar error on the battlefield could be deadly.
In spite of Watson’s amazing performance, its failures demonstrate that neither human beings nor technological systems can be perfect. Whether out of fatigue, emotion, prejudice, or simple lack of information, human soldiers can and do make poor decisions. When these mistakes result in the deaths of fellow soldiers or innocent civilians, judicial systems are in place to hold military personnel accountable for their unethical behaviour or poor judgement. If AWS are deployed, it is inevitable that they too will perpetrate atrocities, whether from programming error, technical failure, or unpredictable variables. However, our society has no recourse for crimes committed by robots. Our justice system rests upon punishing immoral acts, but an autonomous weapons system has about as much sense of right and wrong as a toaster. Robots lack the capacity to make ethical decisions, acting only as their programming dictates. Nonetheless, a crime perpetrated by a robot is still a crime. Should society therefore seek justice from the programmer? The commander? Or would leaders deem certain levels of ‘collateral damage’ acceptable and overlook any atrocities perpetrated by an AWS?
Our respect for the capacity of others to make moral choices is one among many reasons we value human life so highly. As such, supporters of autonomous weapons systems often claim the best argument for AWS adoption is their potential to reduce human casualties. This assertion is tenuous at best. Given that autonomous weapons systems would already require remote oversight and operation capabilities, it would be a simple matter of procedure to give human operators final approval over the use of lethal force on a given target. It is unlikely that fully ceding authority over weapons systems to computers would do anything to make military personnel safer. In fact, AWS might actually increase the likelihood of military engagement. Operating an AWS is far cheaper than training and deploying a human soldier, making such systems relatively expendable. Having access to cheap and easily-replaced military assets significantly lowers the political and financial costs of military action, making states more likely to wage war in the first place. We have already witnessed the advent of this trend with the proliferation of unmanned military drones. Drone technology now allows leaders to conduct military campaigns abroad while their citizens pay little attention. Autonomous weapons systems could take this trend to its extreme, with robots conducting foreign bombing raids or assassinations with little human involvement. Protecting military personnel is a worthy goal, but our aversion to the human cost of war is the reason we place such high value on peace in the first place. Each tragic loss of life compels a society to consider the worthiness of its cause. Sending robots to do the killing externalizes the horrific consequences of war, making governments more willing to wage wars and less concerned with ending them.
We live in a world that sometimes forces us to take human lives. For thousands of years, some of humanity’s greatest minds have worked to develop philosophical and ethical frameworks to guide our decisions in war. Recently, however, it has been difficult for us to keep pace with technology’s rapid proliferation. As technology revolutionizes all aspects of society, we can scarcely consider the social and ethical consequences of each new development before it arrives. The advent of nuclear weapons, the internet, and countless other scientific advances all bear witness to our ethical tardiness. Scientists are now making huge breakthroughs in robotics and artificial intelligence, but no matter how skilled robots become at distinguishing between targets, we owe it to ourselves and to all of humanity to fully consider each decision to use deadly force. Passing this choice off to an amoral machine would be unethical by definition. We currently live in a world where killer robots appear only in movies and other works of fiction, but it may not be long before they make the jump from movie screens to the real world. The international community must take action and ban these immoral weapons before they become a reality.
After graduating from Calvin College in Grand Rapids, Michigan, Andrew Luth spent two years living and working in China. He is currently pursuing his master’s degree at Carleton University’s Norman Paterson School of International Affairs in Ottawa, Canada. His academic interests include disarmament, conflict analysis and resolution, and the Asia-Pacific region.
Faith Groups Take Action on Killer Robots
In the past we’ve posted about scientists, human rights advocates, disarmament organizations and politicians who have spoken out against killer robots, and support for a ban on autonomous weapons continues to grow. Now faith groups, religious leaders and faith-based organizations are beginning to call for a ban on killer robots.
In November 2013, the World Council of Churches issued a statement recommending that governments: “Declare their support for a pre-emptive ban on drones and other robotic weapons systems that will select and strike targets without human intervention when operating in fully autonomous mode.”
Building on that recommendation, our colleagues in the Netherlands have launched an Interfaith Declaration that says:
“we, as religious leaders, faith groups and faith-based organizations, raise our collective voice to call on all governments to participate in the international debate on the issue, and to work towards a ban on the development, production and use of fully autonomous weapons.”
The team at PAX put together a Factsheet on the Interfaith Declaration and you can find even more information on their website.
We’re calling on all Canadian religious leaders, faith-based organizations and faith groups to support a ban on autonomous weapons and to sign the Interfaith Declaration. Here is the full text of the Declaration: Interfaith Declaration.pdf (EN) and Interfaith Declaration FR.pdf (FR). To sign the declaration digitally visit /stay-informed/news/interfaith-declaration or you can contact PAX directly at [email protected]. In addition to the Interfaith Declaration for religious leaders and faith groups, individuals can sign Mines Action Canada’s Keep Killer Robots Fiction petition.
Avoiding Rabbit Holes Through Policy and Law
All the discussions we’ve been having since the launch of the Campaign to Stop Killer Robots make me think about Alice in Wonderland, and so I’ve been thinking a lot about rabbit holes. I feel like current technology has us poised at the edge of a rabbit hole, and if we take that extra step and create fully autonomous weapons, we are going to fall: down that rabbit hole into the unknown, down into a future where a machine could make the decision to kill you, down into a situation that science fiction books have been warning us about for decades.
The best way to prevent such a horrific fall is to create laws and policies that will, so to speak, block off the entrance to the rabbit hole. At the moment, not many countries have policies to temporarily block the entrance, and no one has laws to ban killer robots and close off the rabbit hole permanently. Through recently released policies and statements, it is really only the US and the UK who have even put up warning signs and a little bit of chicken wire around the entrance to this rabbit hole of killer robots.
Over the past few weeks our colleagues at Human Rights Watch (HRW) and Article 36 have released reports on the US and UK policies towards fully autonomous weapons (killer robots). HRW analyzed the 2012 US policy on autonomous weapons found in Department of Defense Directive Number 3000.09. You can find the full review online. Article 36 has a lot to say about the UK policy in their paper available online as well.
So naturally, after reading these papers, I went in search of Canada’s policy. That search left me feeling a little like Alice lost in Wonderland, just trying to keep my head, or at least my sanity, in the face of a policy that, like the Cheshire Cat, might not be all there.
After my futile search, it became even more important that we talk to the government to find out if Canada has a policy on fully autonomous weapons. Until those conversations happen, let’s see what we can learn from the US and UK policies and the analysis done by HRW and Article 36.
The US Policy
I like that the US Directive notes the risks to civilians, including “unintended engagements” and system failures. One key point that Human Rights Watch’s analysis highlights is that the Directive states that for up to 10 years the US Department of Defense can only develop and use fully autonomous weapons that apply non-lethal force. The moratorium on lethal fully autonomous weapons is a good start, but there are also some serious concerns about the inclusion of waivers that could override the moratorium. HRW believes that “[t]hese loopholes open the door to the development and use of fully autonomous weapons that could apply lethal force and thus have the potential to endanger civilians in armed conflict.”[1]
In summary, Human Rights Watch believes that:
The Department of Defense Directive on autonomy in weapon systems has several positive elements that could have humanitarian benefits. It establishes that fully autonomous weapons are an important and pressing issue deserving of serious concern by the United States as well as other nations. It makes clear that fully autonomous weapons could pose grave dangers and are in need of restrictions or prohibitions. It is only valid for a limited time period of five to ten years, however, and contains a number of provisions that could weaken its intended effect considerably. The Directive’s restrictions regarding development and use can be waived under certain circumstances. In addition, the Directive highlights the challenges of designing adequate testing and technology, is subject to certain ambiguity, opens the door to proliferation, and applies only to the Department of Defense.[2]
In terms of what this all means for us in Canada, we can see there may be some aspects of the American policy that are worth adopting. Canada should adopt the restrictions on the use of lethal force by fully autonomous weapons, but without the limited time period and the waivers, in order to protect civilians from harm. I believe that Canadians would want to ensure that humans always make the final decision about who lives and who dies in combat.
The UK Policy
Now, our friends at Article 36 have pointed out that the UK situation is a little more convoluted, and they are not quite ready to call it a comprehensive policy. Since “the UK assortment of policy-type statements” sounds ridiculous, for the purposes of this post I’m shortening it to the UK almost-policy, with the hope that one day it will morph into a full policy. Unlike the US policy, which is found in a neat little directive, the UK almost-policy is cobbled together from some statements and a note from the Ministry of Defence. You can have a closer look at the Article 36 analysis of the almost-policy.
To sum up, Article 36 outlines three main shortcomings of the UK almost-policy:
- The policy does not set out what is meant by human control over weapon systems.
- The policy does not prevent the future development of fully autonomous weapons.
- The policy says that existing international law is sufficient to “regulate the use” of autonomous weapons.[3]
One of the most interesting points that Article 36 makes is the need for a definition of what human control over weapons systems means. If you are like me, you probably think that means humans get to make the decision to fire on a target, making the final call on who lives and who dies, but we need to know exactly what governments mean when they say that humans will always be in control. The Campaign to Stop Killer Robots wants to ensure that there is always meaningful human control over lethal weapons systems.
Defining what we mean by meaningful human control is going to be a very large discussion that we want to have with governments, with civil society, with the military, with roboticists and with everyone else. This discussion will raise some very interesting moral and ethical questions, especially since a two-star American general recently said that he thought it was “the ultimate human indignity to have a machine decide to kill you.” The problem is that once this technology exists, it will be incredibly difficult to know where it will lead and how on earth we will get back up that rabbit hole. For us as Canadians, it is key to start having that conversation as soon as possible so we don’t end up stumbling down the rabbit hole of fully autonomous weapons by accident.
– Erin Hunt, Program Officer