Blog Archives

Killer Robots at TEDx

Prof. Noel Sharkey gave a talk at TEDx Sheffield about fully autonomous weapons and the Campaign to Stop Killer Robots.  Prof. Sharkey is one of the founders of the International Committee for Robot Arms Control, and he delivers a passionate call to action to stop killer robots.  Take a few minutes out of your day to watch him talk about his journey from a boy who loved toy soldiers growing up in the shadow of World War II to a leading campaigner in the effort to stop killer robots and protect civilians.  Plus he even shares a little song about the CIA!

UK Parliament Debates Fully Autonomous Weapons

Last month at the United Nations Human Rights Council, we were slightly concerned when the UK was the only state opposed to a moratorium or a ban on fully autonomous weapons.  After a parliamentary debate on June 17, 2013, we have a little more clarity.  In response to a speech by Nia Griffith, MP, the Minister for Counter Proliferation, Alistair Burt MP, agreed that fully autonomous weapons will not “be able to meet the requirements of international humanitarian law” and stressed that the UK does not have fully autonomous weapons and does not plan to acquire any.

Our colleagues at Article 36 have done a detailed analysis of the debate.  In light of the stronger language in this debate, there is some room to be optimistic:

It would seem straightforward to move from such a strong national position to a formalised national moratorium and a leading role within an international process to prohibit such weapons. The government did not provide any reason as to why a moratorium would be inappropriate, other than to speculate on the level of support amongst other countries for such a course of action.

Whilst significant issues still require more detailed elaboration, Article 36 believes this parliamentary debate has been very valuable in prompting reflection and Ministerial scrutiny of UK policy on fully autonomous weapons and narrowing down the areas on which further discussions should focus. It appears clear now that there will be scope for such discussions to take place with the UK and other states in the near future.

The UK parliamentary debate and Article 36’s analysis of it, coming so soon after the Human Rights Council debate and the widespread media coverage of the issue, make it quite clear that it is time to have a similarly substantive and non-partisan debate in the Canadian House of Commons as the government works out its policy on this important issue.

First ever UN debate on killer robots

This week, the United Nations Human Rights Council became the first UN body to discuss the issue of killer robots.  To mark the occasion, the Campaign to Stop Killer Robots headed to Geneva to introduce our campaign to diplomats, UN agencies and civil society.  Check out the full report from the international campaign.

Asimov’s Three Laws of Robotics

In the weeks since the Campaign to Stop Killer Robots launched, there has been a lot of media coverage.  The coverage is very exciting, and what I have found particularly interesting is the number of articles that refer to Isaac Asimov’s Three Laws of Robotics.

Now unless, like me, you grew up with a sci-fi geek for a father who introduced you at a young age to fictional worlds like those of Star Wars, Star Trek and 2001: A Space Odyssey, you might not know who Isaac Asimov was, what his Three Laws of Robotics are, or why these laws are relevant to the Campaign to Stop Killer Robots.

Isaac Asimov (1920-1992) was an American scientist and writer, best known for his science fiction, especially his short stories.  In his writings, Asimov created the Three Laws of Robotics, which govern the actions of his robot characters.  In his stories, the Three Laws were programmed into robots as a safety function.  The laws were first stated in the short story “Runaround,” but they appear in many of his other writings and have since shown up in other authors’ work as well.

The Three Laws of Robotics are:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

After reading the Three Laws, it is probably clear why Asimov’s ideas are frequently mentioned in media coverage of our campaign to stop fully autonomous weapons.  A fully autonomous weapon would, by definition, violate the First and Second Laws of Robotics.

To me, the Three Laws seem to be pretty common-sense guides for the actions of autonomous robots.  It is probably a good idea to protect yourself from being killed by your own machine – ok, not probably – it is definitely a good idea to make sure your machine does not kill you!  It is also important to remember that Asimov recognized that even regular robots with artificial intelligence (not just fully autonomous weapons) could pose a threat to humanity at large, so he later added a fourth law, the Zeroth Law, to come before the others:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.

“But Erin,” you say, “these are just fictional stories; the Campaign to Stop Killer Robots is dealing with how things really will be.  We need to focus on reality not fiction!”  I hear you but since fully autonomous weapons do not yet exist we need to take what we know about robotics, warfare and law and add a little imagination to foresee some of the possible problems with fully autonomous weapons.  Who better to help us consider the possibilities than science fiction writers who have been thinking about these types of issues for decades?

Asimov’s Three Laws are currently the closest thing we have to laws explicitly governing the use of fully autonomous weapons.  Asimov’s stories often tell of how the application of these laws results in robots acting in weird and dangerous ways the programmers did not predict.  By articulating some pretty common-sense laws for robots and then showing how those laws can have unintended negative consequences when implemented by artificial intelligence, Asimov’s writings may have made the first argument that a set of parameters to guide the actions of fully autonomous weapons will not be sufficient.  Even if you did not have a geeky childhood like I did, you can still see the problems with creating fully autonomous weapons.  You don’t have to read Asimov, know who HAL is or dislike the Borg to worry that we won’t be able to control how artificial intelligence will interpret our commands, and anyone who has tried to use a computer, a printer or a cell phone knows that there is no end to the number of ways technology can go wrong.  We need a pre-emptive ban on fully autonomous weapons before it is too late, and that is what the Campaign to Stop Killer Robots will be telling the diplomats at the UN in Geneva at the end of the month.
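For readers who like to see the problem concretely, the priority ordering of the laws can be sketched as a toy rule check.  Everything here (the Action fields, the permitted function) is invented purely for illustration; deciding whether an action actually “harms a human” is precisely the judgment machines lack.

```python
# Purely illustrative sketch of Asimov's laws as a priority-ordered rule check.
# All names here are invented for this example; no real weapon system can
# reliably evaluate booleans like "harms_human" -- which is the campaign's point.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would this action injure a human?
    inaction_harms: bool    # would NOT acting allow a human to come to harm?
    ordered_by_human: bool  # was this action ordered by a human?
    endangers_robot: bool   # does this action threaten the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, or allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms:
        return True  # must act to prevent harm
    # Second Law: obey human orders (the First Law was already checked).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot
```

The priority ordering itself is trivial to encode; the hard part is filling in those booleans on a real battlefield, which is exactly why a programmed set of parameters will not be sufficient.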

- Erin Hunt, Program Officer

Avoiding Rabbit Holes Through Policy and Law

All the discussions we’ve been having since the launch of the Campaign to Stop Killer Robots make me think about Alice in Wonderland, so I’ve been thinking a lot about rabbit holes.  I feel like current technology has us poised at the edge of a rabbit hole, and if we take that extra step and create fully autonomous weapons we are going to fall – down that rabbit hole into the unknown, down into a future where a machine could make the decision to kill you, down into a situation that science fiction books have been warning us about for decades.

The best way to prevent such a horrific fall is to create laws and policies that will block off the entrance to the rabbit hole, so to speak.  At the moment, not many countries have policies to temporarily block the entrance, and no one has laws to ban killer robots and close off the rabbit hole permanently.  It is really only the US and the UK who have even put up warning signs and a little bit of chicken wire around the entrance to this rabbit hole, through recently released policies and statements.

Over the past few weeks our colleagues at Human Rights Watch (HRW) and Article 36 have released reports on the US and UK policies towards fully autonomous weapons (killer robots).  HRW analyzed the 2012 US policy on autonomous weapons found in Department of Defense Directive Number 3000.09.  You can find the full review online.  Article 36 has a lot to say about the UK policy in their paper available online as well.

So naturally after reading these papers, I went in search of Canada’s policy.  That search left me feeling a little like Alice lost in Wonderland just trying to keep my head or at least my sanity in the face of a policy that like the Cheshire Cat might not be all there.

My futile search made it even more important that we talk to the government to find out if Canada has a policy on fully autonomous weapons.  Until those conversations happen, let’s see what we can learn from the US and UK policies and the analysis done by HRW and Article 36.

The US Policy

I like that the US Directive notes the risks to civilians including “unintended engagements” and failure.  One key point that Human Rights Watch’s analysis highlights is that the Directive states that for up to 10 years the US Department of Defense can only develop and use fully autonomous weapons that have non-lethal force.  The moratorium on lethal fully autonomous weapons is a good start but there are also some serious concerns about the inclusion of waivers that could override the moratorium.  HRW believes that “[t]hese loopholes open the door to the development and use of fully autonomous weapons that could apply lethal force and thus have the potential to endanger civilians in armed conflict.”[1]

In summary, Human Rights Watch believes that:

The Department of Defense Directive on autonomy in weapon systems has several positive elements that could have humanitarian benefits. It establishes that fully autonomous weapons are an important and pressing issue deserving of serious concern by the United States as well as other nations. It makes clear that fully autonomous weapons could pose grave dangers and are in need of restrictions or prohibitions. It is only valid for a limited time period of five to ten years, however, and contains a number of provisions that could weaken its intended effect considerably. The Directive’s restrictions regarding development and use can be waived under certain circumstances. In addition, the Directive highlights the challenges of designing adequate testing and technology, is subject to certain ambiguity, opens the door to proliferation, and applies only to the Department of Defense.[2]

In terms of what this all means for Canada, there may be some aspects of the American policy worth adopting.  Canada should adopt the restrictions on the use of lethal force by fully autonomous weapons – without the limited time period and waivers – to protect civilians from harm.  I believe that Canadians would want to ensure that humans always make the final decision about who lives and who dies in combat.

The UK Policy

Now our friends at Article 36 have pointed out that the UK situation is a little more convoluted – they are not quite ready to call it a comprehensive policy, but since “the UK assortment of policy-type statements” sounds ridiculous, for the purposes of this post I’m shortening it to the UK almost-policy, with the hope that one day it will morph into a full policy.  Unlike the US policy, which is found in a neat little directive, the UK almost-policy is cobbled together from some statements and a note from the Ministry of Defence.  You can have a closer look at the Article 36 analysis of the almost-policy.

To sum up, Article 36 outlines three main shortcomings of the UK almost-policy:

  • The policy does not set out what is meant by human control over weapon systems.
  • The policy does not prevent the future development of fully autonomous weapons.
  • The policy says that existing international law is sufficient to “regulate the use” of autonomous weapons.[3]

One of the most interesting points that Article 36 makes is the need for a definition of what human control over weapons systems means.  If you are like me, you probably assume it means that humans get to make the decision to fire on a target – the final decision of who lives or who dies – but we need to know exactly what governments mean when they say that humans will always be in control.  The Campaign to Stop Killer Robots wants to ensure that there is always meaningful human control over lethal weapons systems.

Defining what we mean by meaningful human control is going to be a very large discussion that we want to have with governments, with civil society, with the military, with roboticists and with everyone else.  This discussion will raise some very interesting moral and ethical questions, especially since a two-star American general recently said that he thought it was “the ultimate human indignity to have a machine decide to kill you.”  The problem is that once the technology exists, it is going to be incredibly difficult to know where it will go and how on earth we will get back up that rabbit hole.  For us as Canadians, it is key to start having that conversation as soon as possible so we don’t end up stumbling down the rabbit hole of fully autonomous weapons by accident.

- Erin Hunt, Program Officer


[1] See http://pages.citebite.com/s1x4b0y9k8mii

[2] See http://pages.citebite.com/g1p4t0m9s9res

Meet the Human Campaigners!

Yesterday you met David Wreckham, the Campaign to Stop Killer Robots’ first robot campaigner.  David isn’t alone in the campaign and most of his current colleagues are human.  Let’s meet some of them and learn why they are so excited to stop killer robots!

(c) Sharron Ward for the Campaign to Stop Killer Robots

Human or friendly robot?  The Campaign to Stop Killer Robots welcomes all campaigners who want to make history and stop killer robots!  Join us!

Meet David Wreckham – Robot Campaigner

David Wreckham is a friendly robot campaigning for a ban on killer robots.  See him in action during the launch of the Campaign to Stop Killer Robots in London last week.  You can follow David Wreckham on Twitter.

(c) Sharron Ward for the campaign, 23 April 2013.

Press Release – Urgent Action Needed to Ban Fully Autonomous Weapons


Non-governmental organizations convene to launch Campaign to Stop Killer Robots

(London, April 23, 2013) – Urgent action is needed to pre-emptively ban lethal robot weapons that would be able to select and attack targets without any human intervention, said a new campaign launched in London today. The Campaign to Stop Killer Robots is a coordinated international coalition of non-governmental organizations concerned with the implications of fully autonomous weapons, also called “killer robots.”

The Campaign to Stop Killer Robots calls for a pre-emptive and comprehensive ban on the development, production, and use of fully autonomous weapons. The prohibition should be achieved through an international treaty, as well as through national laws and other measures.

“Allowing life or death decisions on the battlefield to be made by machines crosses a fundamental moral line and represents an unacceptable application of technology,” said Nobel Peace Laureate Jody Williams of the Nobel Women’s Initiative. “Human control of autonomous weapons is essential to protect humanity from a new method of warfare that should never be allowed to come into existence.”

Over the past decade, the expanded use of unmanned armed vehicles or drones has dramatically changed warfare, bringing new humanitarian and legal challenges. Now rapid advances in technology are permitting the United States and other nations with high-tech militaries, including China, Israel, Russia, and the United Kingdom, to move toward systems that would give full combat autonomy to machines.

“Killer robots are not self-willed ‘Terminator’-style robots, but computer-directed weapons systems that once launched can identify targets and attack them without further human involvement,” said roboticist Noel Sharkey, chair of the International Committee for Robot Arms Control. “Using such weapons against an adaptive enemy in unanticipated circumstances and in an unstructured environment would be a grave military error. Computer controlled devices can be hacked, jammed, spoofed, or can be simply fooled and misdirected by humans.”

The Campaign to Stop Killer Robots seeks to provide a coordinated civil society response to the multiple challenges that fully autonomous weapons pose to humanity. It is concerned about weapons that operate on their own without human supervision. The campaign seeks to prohibit taking a human out-of-the-loop with respect to targeting and attack decisions on the battlefield.

“The capability of fully autonomous weapons to choose and fire on targets on their own poses a fundamental challenge to the protection of civilians and to compliance with international law,” said Steve Goose, Arms Division director at Human Rights Watch. “Nations concerned with keeping a human in the decision-making loop should acknowledge that international rules on fully autonomous weapons systems are urgently needed and work to achieve them.”

The UN Special Rapporteur on extrajudicial, summary or arbitrary executions for the Office of the High Commissioner for Human Rights, Professor Christof Heyns, is due to deliver his report on lethal autonomous robotics to the second session of the Human Rights Council in Geneva, starting May 27, 2013. The report is expected to contain recommendations for government action on fully autonomous weapons.

“One key lesson learned from the Canadian-led initiative to ban landmines was that we should not wait until there is a global crisis before taking action,” said Paul Hannon, Executive Director of Mines Action Canada. “The time to act on killer robots is now.”

“We cannot afford to sleepwalk into an acceptance of these weapons. New military technologies tend to be put in action before the wider society can assess the implications, but public debate on such a change to warfare is crucial,” said Thomas Nash, Director of Article 36.  “A pre-emptive ban on lethal autonomous robots is both necessary and achievable, but only if action is taken now.”

The Campaign to Stop Killer Robots believes that humans should not delegate the responsibility of making lethal decisions to machines. It has multiple moral, legal, technical, and policy concerns with the prospect of fully autonomous weapons, including:

  • Autonomous robots would lack human judgment and the ability to understand context. These human qualities are necessary to make complex legal choices on a dynamic battlefield, to distinguish adequately between soldiers and civilians, and to evaluate the proportionality of an attack.  As a result, fully autonomous weapons would not meet the requirements of the laws of war.
  • The use of fully autonomous weapons would create an accountability gap as there is no clarity on who would be legally responsible for a robot’s actions: the commander, programmer, or one of the manufacturers of the many sensing, computing, and mechanical components? Without accountability, these parties would have less incentive to ensure robots did not endanger civilians and victims would be left unsatisfied that someone was punished for wrongful harm they experienced.
  • If fully autonomous weapons are deployed, other nations may feel compelled to abandon policies of restraint, leading to a destabilizing robotic arms race. Agreement is needed now to establish controls on these weapons before investments, technological momentum, and new military doctrine make it difficult to change course.
  • The proliferation of fully autonomous weapons could make resort to war and armed attacks more likely by reducing the possibility of military casualties.

The Campaign to Stop Killer Robots includes several non-governmental organizations (NGOs) associated with the successful efforts to ban landmines, cluster munitions, and blinding lasers. Its members collectively have a wide range of expertise in robotics and science, aid and development, human rights, humanitarian disarmament, international law and diplomacy, and the empowerment of women, children, and persons with disabilities. The campaign is building a worldwide network of civil society contacts in countries including Canada, Egypt, Japan, The Netherlands, New Zealand, Pakistan, United Kingdom, and the United States.

The Steering Committee is the principal leadership and decision-making body of the Campaign to Stop Killer Robots and is comprised of nine NGOs: five international NGOs (Human Rights Watch, International Committee for Robot Arms Control, Nobel Women’s Initiative, Pugwash Conferences on Science & World Affairs, and Women’s International League for Peace and Freedom) and four national NGOs (Article 36 (UK), Association for Aid and Relief Japan, Mines Action Canada, and IKV Pax Christi (The Netherlands)).

The Campaign to Stop Killer Robots was established by representatives of seven of these NGOs at a meeting in New York on 19 October 2012. It is an inclusive and diverse coalition open to NGOs, community groups, and professional associations that support the campaign’s call for a ban and are willing to undertake actions and activities in support of the campaign’s objectives. The campaign’s initial coordinator is Mary Wareham of Human Rights Watch.

On Monday, April 22, the Steering Committee of the Campaign to Stop Killer Robots convened a day-long conference for 60 representatives from 33 NGOs from ten countries to discuss the potential harm that fully autonomous weapons could pose to civilians and to strategize on actions that could be taken at the national, regional, and international levels to ban the weapons.

Contact information for the Campaign to Stop Killer Robots:

To schedule a media interview (see list of spokespersons), please contact:

  • UK media – Laura Boillot at Article 36, +44(0)7515-575-175, [email protected]
  • International media - Kate Castenson at Human Rights Watch, +1 (646) 203-8292, [email protected]

Video Footage

  • Raw interview footage of Williams, Sharkey, Goose, and Docherty: http://multimedia.hrw.org/distribute/hpgicavqly
  • Playlist of precursors to fully autonomous weapons: /YQe4w8

For more information, see:

  • Human Rights Watch “Losing Humanity” report on fully autonomous weapons: /UQscFA
  • Human Rights Watch “Review of the New US Policy on Autonomy in Weapons Systems” briefing paper: /17FDTTj

List of Spokespersons

The following campaign spokespersons will be speaking at the launch events in London on 22-24 April and are available for interview on request. In addition, raw interview footage of Williams, Sharkey, Goose, and Docherty is available here: http://multimedia.hrw.org/distribute/hpgicavqly

Principal Spokespersons

Ms. Jody Williams – Nobel Women’s Initiative, @JodyWilliams97 @NobelWomen

Jody Williams received the Nobel Peace Prize in 1997 for her work to ban landmines through the International Campaign to Ban Landmines, which shared the Peace Prize. In January 2006, Jody established the Nobel Women’s Initiative together with five of her sister Nobel Peace laureates. In an April 2011 article for the International Journal of Intelligence Ethics, Nobel Peace Laureate Jody Williams calls for a ban on “fully autonomous attack and kill robotic weapons.” In March 2013, the University of California Press published a memoir on her work entitled My Name is Jody Williams: A Vermont Girl’s Winding Path to the Nobel Peace Prize. Williams can speak on why civil society is coming together and partnering with other actors to pursue a pre-emptive ban on fully autonomous weapons. Longer biography available here: /JKVvBd

Prof. Noel Sharkey – International Committee for Robot Arms Control, @StopTheRobotWar

Roboticist Noel Sharkey is Professor of Artificial Intelligence and Robotics and Professor of Public Engagement at the University of Sheffield. He is co-founder and chair of the International Committee for Robot Arms Control (ICRAC), a group of experts concerned with the pressing dangers that military robots pose to peace and international security. Sharkey can speak on the technology that the campaign is seeking to prohibit and its ethical implications. See also: /9fJQ7j

Mr. Steve Goose – Human Rights Watch, @hrw

Steve Goose is executive director of the Arms Division of Human Rights Watch and chair of the International Campaign to Ban Landmines and Cluster Munition Coalition (ICBL-CMC). Goose and Human Rights Watch were instrumental in bringing about the 2008 Convention on Cluster Munitions, the 1997 international treaty banning antipersonnel mines, the 1995 protocol banning blinding lasers, and the 2003 protocol on explosive remnants of war. Goose can speak on why a ban on fully autonomous weapons is necessary and achievable, and explain current US policy and practice. See also: /USEBZo

Mr. Thomas Nash – Article 36, @nashthomas @article36

Thomas Nash is director of Article 36 and joint coordinator of the International Network on Explosive Weapons. As Coordinator of the Cluster Munition Coalition from 2004 to 2011, Nash led the global civil society efforts to secure the Convention on Cluster Munitions. Nash can speak about civil society expectations of UK policy, practice, and diplomacy on fully autonomous weapons.

Ms. Mary Wareham – Human Rights Watch, @marywareham, @hrw

Mary Wareham is advocacy director of the Arms Division of Human Rights Watch and initial coordinator of the Campaign to Stop Killer Robots. She worked on the processes that created the Convention on Cluster Munitions and the Mine Ban Treaty, and has worked to ensure their universalization and implementation.  Wareham can speak about the new Campaign to Stop Killer Robots and its initial plans.

Technical Experts

Dr. Jürgen Altmann - International Committee for Robot Arms Control

Jürgen Altmann is co-founder and vice-chair of the International Committee for Robot Arms Control. He is a physicist and peace researcher at Dortmund Technical University in Germany. Altmann has studied preventive arms control of new military technologies and new methods for the verification of disarmament agreements. He can speak about Germany’s policy and practice on fully autonomous weapons.

Dr. Peter Asaro – International Committee for Robot Arms Control, @peterasaro

Peter Asaro is co-founder and vice-chair of the International Committee for Robot Arms Control. He is a philosopher of technology who has worked in Artificial Intelligence, neural networks, natural language processing and robot vision research. Asaro is director of Graduate Programs for the School of Media Studies at The New School for Public Engagement in New York City. See also:  /73JqBw

Ms. Bonnie Docherty – Human Rights Watch, @hrw

Bonnie Docherty is senior researcher in the Arms Division at Human Rights Watch and also a lecturer on law and senior clinical instructor at the International Human Rights Clinic at Harvard Law School. She has played an active role, as both lawyer and field researcher, in the campaign against cluster munitions. Docherty’s report Losing Humanity: The Case against Killer Robots outlines how fully autonomous weapons could violate the laws of war and undermine fundamental protections for civilians. See also: /103PV4t

Mr. Richard Moyes – Article 36, @rjmoyes @article36

Richard Moyes is a managing partner at Article 36 and an honorary fellow at the University of Exeter. He was previously director of policy at Action on Armed Violence (formerly Landmine Action) and served as co-chair of the Cluster Munition Coalition. Moyes can speak about civil society expectations of UK policy, practice, and diplomacy on fully autonomous weapons. See also: /103SAuS

Steering Committee Members

Human Rights Watch, www.hrw.org

Human Rights Watch is serving as initial coordinator of the Campaign to Stop Killer Robots. Over the past two decades, the Arms Division of Human Rights Watch has been instrumental in enhancing protections for civilians affected by conflict, leading the International Campaign to Ban Landmines that resulted in the 1997 Mine Ban Treaty and the Cluster Munition Coalition, which spurred the 2008 Convention on Cluster Munitions. It also led the effort that resulted in the pre-emptive prohibition on blinding laser weapons in 1995. In November 2012, Human Rights Watch and Harvard Law School’s International Human Rights Clinic launched the report Losing Humanity: The Case against Killer Robots, the first in-depth report by a non-governmental organization on the challenges posed by fully autonomous weapons.

Article 36 (UK), www.article36.org

Article 36 is a UK-based not-for-profit organization working to prevent the unintended, unnecessary or unacceptable harm caused by certain weapons. It undertakes research, policy and advocacy and promotes civil society partnerships to respond to harm caused by existing weapons and to build a stronger framework to prevent harm as weapons are used or developed in the future. In March 2012, Article 36 called for a ban on military systems that are able to select and attack targets autonomously.

Association for Aid and Relief Japan, www.aarjapan.gr.jp

Association for Aid and Relief, Japan is an international non-governmental organization founded in Japan in 1979. As a committed member of the International Campaign to Ban Landmines, Association for Aid and Relief, Japan played a central role in convincing Japan to ban antipersonnel landmines and join the 1997 Mine Ban Treaty.

IKV Pax Christi (The Netherlands), www.ikvpaxchristi.nl

IKV Pax Christi is a peace organization based in the Netherlands. It works with local partners in conflict areas and seeks political solutions to crises and armed conflicts. In May 2011, Dutch NGO IKV Pax Christi published a report entitled Does Unmanned Make Unacceptable? Exploring the Debate on using Drones and Robots in Warfare.

International Committee for Robot Arms Control, http://icrac.net

The International Committee for Robot Arms Control (ICRAC) is a not-for-profit organization comprised of scientists, ethicists, lawyers, roboticists, and other experts. It works to address the potential dangers involved with the development of armed military robots and autonomous weapons. Given the rapid pace of development of military robots and the pressing dangers their use poses to peace, international security, the rule of law, and to civilians, ICRAC supports a ban on armed robots with autonomous targeting capability.

Mines Action Canada, www.minesactioncanada.org

Mines Action Canada is a coalition of over 35 Canadian non-governmental organizations working in mine action, peace, development, labour, health and human rights that came together in 1994. It is the Canadian partner of the International Campaign to Ban Landmines and a founding member of the Cluster Munition Coalition.

Nobel Women’s Initiative, nobelwomensinitiative.org

The Nobel Women’s Initiative was established in January 2006 by 1997 Nobel Peace Laureate Jody Williams and five of her sister Nobel Peace laureates. The Nobel Women’s Initiative uses the prestige of the Nobel Peace Prize and of courageous women peace laureates to magnify the power and visibility of women working in countries around the world for peace, justice and equality. In an April 2011 article for the International Journal of Intelligence Ethics, Nobel Peace Laureate Jody Williams calls for a ban on “fully autonomous attack and kill robotic weapons.”

Pugwash Conferences on Science & World Affairs, www.pugwash.org

A central objective of Pugwash is the elimination of all weapons of mass destruction (nuclear, chemical and biological) and of war as a social institution for settling international disputes. To that end, the peaceful resolution of conflicts through dialogue and mutual understanding is an essential part of Pugwash activities, which is particularly relevant when and where nuclear weapons and other weapons of mass destruction are deployed or could be used.

Women’s International League for Peace and Freedom www.wilpf.org

The Women’s International League for Peace and Freedom (WILPF) is the oldest women’s peace organization in the world. Its aims and principles include working toward world peace; total and universal disarmament; the abolition of violence and coercion in the settlement of conflict and their substitution in every case by negotiation and conciliation; the strengthening of the United Nations system; the continuous development and implementation of international law; political and social equality and economic equity; co-operation among all people; and an environmentally sustainable development.

#          #          #
