When watching a film like The Terminator or I, Robot, or imagining a future resembling one, we seldom feel that facing killer robots is an actual threat in real life.

But what if killer robots are the feasible next step in the development of weapons and technology? Killer robots are no longer a concept confined to sci-fi movies; the technology that could make them possible is being developed today and could emerge in the near future. Can you imagine standing in front of an armed robot, one not controlled by any human near or far, while it decides whether or not to use force against you? The image alone should send a shiver down your spine, but what should really frighten you is that this idea is not far from becoming a reality.

From Sci-Fi to Reality

The advancement of weapons technology over the last century has been dramatic. We have developed weapons from cannons to guns, to atomic bombs, and now to armed drones. It has become part of human nature to move quickly to the next big thing and to use technology to invent something that simplifies our lives or jobs. Unfortunately, we hardly ever think ahead to what these new developments and inventions could mean in terms of legality and, most importantly, in terms of human cost and the preservation of human life and dignity.

Lethal autonomous weapons systems, also known as killer robots, are a new type of weapons technology that would have the ability to select targets and use force against them without any human intervention. This emerging weapons system has become hotly debated in the international community, and was most recently discussed at the Convention on Certain Conventional Weapons (CCW) at the United Nations in Geneva.1 These autonomous weapons pose fundamental challenges to compliance with human rights law (HRL) and international humanitarian law (IHL).

As human beings, how can we allow robots that lack the human judgment and empathy needed to understand different situations, circumstances, and environments to make life-or-death decisions for other human beings? Many believe that this shortcoming crosses a fundamental moral line. Autonomous weapons would certainly struggle on the battlefield to distinguish between a soldier and a civilian, between a combatant and a child. They would lack the capability to evaluate their surroundings as a human would, and could misjudge the proportionality of an attack. All of these capacities, which only humans possess, would be absent in an autonomous weapon, making its use a violation of both HRL and IHL. Even in terrible times of war, we have established an international framework, IHL, that binds us to certain ethics, such as not harming civilians and treating prisoners of war with dignity.

The decision to enter into war should not be taken lightly. Before anything else, there is the cost of human life to bear in mind. One must consider the soldiers who will be sacrificing their lives in the name of their country, their grieving families, and the impact of such loss on the state as a whole. Deploying machines instead of soldiers removes the human factor from the attacking state's perspective, likely making it easier to go to war when none of its own citizens' lives are at risk. This in turn increases the possibility of human casualties in the attacked state, which raises another issue these weapons create: accountability.

In the case of civilian casualties, who could be held accountable for the robot's actions? Would it be the commander, the programmer, or the manufacturer? Someone needs to be held accountable when human lives are lost, and it can't be the machine. You can't take a robot to court. With this accountability gap, there is little incentive to create a robot that will not endanger civilian life, since there is no clarity on who is responsible for the robot's harmful actions.

China, Israel, Russia, South Korea, the United Kingdom, and the United States are all countries with high-tech militaries that have autonomous weapons systems in development with various degrees of human control. If at least one of these nations makes the decision to deploy a fully autonomous weapon in a conflict, as has been done with the use of armed drones, who’s to say that the others won’t follow suit? It will create a situation of asymmetric warfare, where other countries, especially opponents, will want to get hold of similar weapons so as to not be left behind or threatened. This very possible scenario could quickly lead the world into a robotic arms race, even while the international community still struggles to limit the potential damage of the nuclear arms race.

Killing ‘Killer Robots’


Cluster Munition Coalition
The Convention on Certain Conventional Weapons fourth review conference in 2011.

It is essential, now more than ever, to address the threat of these weapons by preemptively banning their development. While some may argue that a treaty addressing a weapon that does not yet exist is too difficult to achieve, it has been done before: blinding lasers were banned by a UN protocol in 1995. Killer robots must be addressed now, before they are developed; otherwise, it will be far more difficult to establish control over their creation and deployment once they are in use.

In October 2012, nine international NGOs came together to start a campaign addressing the emerging issue of autonomous weapons systems. Human Rights Watch,2 Article 36,3 Association for Aid and Relief Japan,4 International Committee for Robot Arms Control (ICRAC),5 Mines Action Canada,6 Nobel Women’s Initiative,7 PAX (formerly known as IKV Pax Christi),8 Pugwash Conferences on Science and World Affairs,9 and the Women’s International League for Peace and Freedom worked collaboratively to launch the Campaign to Stop Killer Robots in April 2013.10,11 The Campaign is “an international coalition that is working to preemptively ban fully autonomous weapons.”

Professor Noel Sharkey, ICRAC chair, first brought world attention to this issue in an August 18, 2007 article in The Guardian titled, “Robot Wars are a Reality.”12 Different members of the campaign have published reports and hosted events aimed at addressing the legal, humanitarian, and technical perspectives involved.13

The Campaign to Stop Killer Robots works to bring attention to the issue of killer robots and to urge action to ban these weapons. It participated in the second informal meeting of experts on lethal autonomous weapons systems under the CCW, held April 13 to 17, 2015 at the UN offices in Geneva, where it joined representatives from 90 governments, UN agencies, and the International Committee of the Red Cross.14 At the CCW’s Meeting of the High Contracting Parties in November, nations will have to decide officially whether to continue the CCW deliberations in 2016 or to dedicate more time to a more substantive process with a tangible outcome.

The Solution: Stopping Killer Robots before they Start

Human control of any weapon is essential to guarantee compliance with international law and the preservation of human life and dignity. The Campaign to Stop Killer Robots seeks to ensure that the human factor is not taken out of targeting and attack decisions.


Campaign to Stop Killer Robots
A workshop session hosted by the Campaign to Stop Killer Robots shortly after the coalition’s creation in April 2013. The Campaign is comprised of nine international NGOs.

A full preemptive ban on the development, production, and use of these weapons systems through an international treaty is the only feasible solution to eradicate the threat they pose. The treaty should take into consideration the ethical, legal, technical, and humanitarian concerns raised not only by the Campaign and other organizations, but also by a number of countries and governments, as highlighted in several statements made during the CCW meetings on lethal autonomous weapons systems.15

Additionally, the Campaign summarizes the appropriate approach to banning these weapons by emphasizing the recommendations made in the 2013 report on these weapons by the UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Professor Christof Heyns.16 These recommendations are stated as follows:

  • Place a national moratorium on lethal autonomous robots (Paragraph 118).
  • Declare—unilaterally and through multilateral fora—a commitment to abide by International Humanitarian Law and international human rights law in all activities surrounding robot weapons and put in place and implement rigorous processes to ensure compliance at all stages of development (Paragraph 119).
  • Commit to being as transparent as possible about internal weapons review processes, including metrics used to test robot systems. States should at a minimum provide the international community with transparency regarding the processes they follow (if not the substantive outcomes) and commit to making the reviews as robust as possible (Paragraph 120).
  • Participate in international debate and trans-governmental dialogue on the issue of lethal autonomous robots, be prepared to exchange best practices with other states, and collaborate with the High Level Panel on lethal autonomous robotics (Paragraph 121).

The Way Forward

The issue of killer robots has been discussed informally within the international community for almost two years now. Following the deliberations in Geneva this past April, the time to progress to the next step is now. The Campaign to Stop Killer Robots is pushing states to begin formal negotiations to create a new CCW protocol on autonomous weapons.

The Campaign strongly emphasizes that further CCW deliberations should not be limited to transparency measures. The required first step, stressed by a number of states and by the Campaign in a letter to the chair of the Convention, is to create a Group of Governmental Experts (GGE) at the CCW’s annual meeting in November 2015. The GGE is an established mechanism that has been used within the CCW for the past two decades to address concerns about various types of weapons.

The GGE would be open to all states as well as accredited NGOs. The GGE could provide essential documents in the six official UN languages which would help encourage participation by as many countries and organizations as possible. Furthermore, a GGE could dedicate more time to hold meetings in 2016 to provide in-depth analysis of crucial issues such as meaningful human control. Finally, a GGE would facilitate more concrete outcomes and push the issue to be discussed in formal negotiations. This will help governments and NGOs to pave the way to establish a negotiating mandate in the Fifth CCW Review Conference in 2016.

To date, no nation has clearly stated that it is pursuing this new weapons technology. During the CCW’s informal meetings in April 2015, only two countries, Israel and the US, indicated that they are keeping all options open when it comes to acquiring this type of technology. On the other hand, Canada, France, Japan, and the UK have all clearly stated that they have no interest in pursuing the development of such weapons. However, none has expressed explicit support for a preemptive ban on killer robots.

At the meeting, campaigners took on the perception that these weapons are “inevitable.” “I must note that we are beaten over the head that killer robots are inevitable. ‘Inevitable’ is disempowering and deadening. If something is inevitable, there is no use in trying to change that inevitability,” stated Nobel Peace Laureate Jody Williams of the Nobel Women’s Initiative, a co-founder of the Campaign to Stop Killer Robots, at the CCW meeting in April.


Sharon Ward / Campaign to Stop Killer Robots
An NGO conference hosted by the Campaign to Stop Killer Robots in 2013.

The concept of human control was at the center of much of the debate at the CCW meetings on killer robots held in 2014 and 2015. A weapon that can function fully on its own, without the intervention of a human operator, should never come to exist. This technology would violate basic international law and cross a dangerous moral line, threatening the preservation of human life and dignity. Trying to improve these weapons to make them “better” or “safer” is not a viable solution.

“We absolutely reject the notion that we as human beings do not have control over deciding our own future. We reject the notion that killer robots are inevitable. They are only inevitable if those in this room and countless others around the world who oppose lethal weapons without meaningful human control are willing to roll over and allow the not necessarily inevitable to become a deadly and terrifying reality,” said Williams. A preemptive ban is the only viable solution to avoid the legal and humanitarian threats that these weapons could impose if ever created and deployed.


Maisam Alahmed

Maisam Alahmed is currently interning as a freelance journalist with the Fuller Project for International Reporting in Istanbul, Turkey. In the past, Maisam worked as a researcher for the Boston Consortium...
