
Killer robots: the race for Autonomous Weapons


Who would be stupid enough to give the power of life-and-death decisions to weapons – and then release them? Why, that would be the great nations of America, Russia, China and Israel. And they are just the front-runners for the new breed of Autonomous Weapons Systems (AWS), aka killer robots. That’s why a large coalition of international disarmament organizations began the Campaign to Stop Killer Robots, which aims to persuade the UN to adopt an international, legally binding treaty prohibiting their development and use.

No, you haven’t just woken up in 2050 with a bad hangover. It’s happening now, in 2017. This is not the rise of killer robots threatening our very existence – not yet. It is the development of new weapons designed and programmed to hunt for targets and kill them without human supervision. But let’s take a step back and I’ll explain.

At the start of the new millennium, the US eyed the rapidly evolving robotics technology and saw the military potential for autonomous killing machines. Think-tanks and roadmap writers for the US army, navy and air force were all over it. New tech would give America the destructive edge now that other nations had caught up with the production of missile technology.

It began with sabre rattling, but other nations, worried by what they heard, soon began thinking about making such weapons as well. By 2006, DARPA (the US Department of Defense agency responsible for developing emerging technologies for military use) had produced the Crusher, a 6,000-kg six-wheeled autonomous ground combat vehicle, as a proof-of-concept prototype. Then came a whole slew of developments across the globe – autonomous ships, submarines, ground combat vehicles and fighter jets. The race for fully autonomous weapons had begun. Now the stakes are high, and 19 nations have called at the UN for an immediate prohibition of AWS. But proponents of the weapons have tried to slow the momentum towards a treaty by pushing a number of myths, including the five listed below.

Five myths about AWS

Cartoon by Simon Kneebone

Myth #1
AWS are superior to human soldiers because they won’t get tired, they won’t get angry, they won’t seek revenge and they won’t rape.

But this is also true of my electric toothbrush or a Kalashnikov. It misses the point that, like the Kalashnikov, AWS are powerful new weapons that can be used by those who get angry, to seek revenge or to round up women to be raped. We could add some more important features to the list, such as: they cannot discriminate between soldiers, insurgents and civilians, and they have no way to calculate a proportionate response to the use of violent force.

Cartoon by Simon Kneebone

Myth #2 
AWS will never be used unless they can comply with the laws of war.

If it weren’t such a serious issue, I would laugh. Anyone who has even glanced at the history of weapons in war will see how naive this idea is. Think of aerial bombing – the most indiscriminate weapon of the Second World War. After failed attempts at treaties, President Franklin D Roosevelt wrote to European heads of state in 1939 requesting that they confine aerial bombardment to military targets. That worked out well, didn’t it? Similarly, once AWS are out there, proliferation will rapidly expand their uses in an out-of-control spiral.

Cartoon by Simon Kneebone

Myth #3
We have been using simple automatic weapons for years without problems, so what’s new?

A sticking point in national and UN negotiations on AWS is that many militaries have long had weapons that sense and react to military objects (‘SARMO’) such as incoming missiles or mortar shells. But these are naturally restricted by their placement in static positions and the proximity of their operators – vastly different from mobile machines able to hunt and kill targets with no-one to legally verify their actions.

Cartoon by Simon Kneebone

Myth #4
Banning AWS will stifle innovation.

The Campaign to Stop Killer Robots is emphatically not trying to ban autonomous robots, even in the military. The call for a ban only concerns the critical functions of target selection and the application of violent force without human supervision. It will not prevent innovation in autonomy from flourishing.

Cartoon by Simon Kneebone

Myth #5 
Using AWS will save soldiers’ lives and kill fewer civilians.

This is the dumbest myth of all, based on the notion that only ‘our’ country (whichever that is) will have AWS as a military advantage. Wrong! How are deadly, high-speed weapons that can penetrate defences without risk of human injury going to save the lives of soldiers or civilians? An arms race is already emerging that will spread AWS rapidly to many nations. Then what? We may end up with lowered thresholds for conflict, a continuous global battlefield, and accidental wars triggered automatically by unintended interactions between AWS.

I have not even mentioned the non-military uses of these weapons – for policing and for suppressing peaceful protests. And what about groups like ISIS, which have already used crude copies of our technologies to build drones loaded with explosives? Do we want them to have the technology to develop autonomous weapons that could sweep through a city, killing all in their wake? If we fully develop AWS, a black market of crude copies is inevitable.

What’s the solution?

State parties need to reject these myths, take off their blinkers and think beyond the narrow arguments of national security. It’s time to look at the bigger picture and see the truth – AWS mean broken and disrupted global security.

If we must have conflicts, let us at least have zero tolerance for civilian casualties. We need to ensure full human control of all weapons systems and ensure that humans are always responsible and accountable for injustices, mishaps and the legitimacy of targets.

To keep AWS in the box, ideally humans should:

  1. have full contextual and situational awareness of the target area at the time of initiating any specific attack;
  2. be able to perceive and react to any change or unanticipated situation that has arisen since the attack was planned, such as a change in the legitimacy of targets;
  3. have active cognitive participation in the attack;
  4. have sufficient time for deliberation on the nature of targets, their significance in terms of necessity and appropriateness, and likely incidental or possible accidental effects;
  5. have a means for the rapid suspension or abortion of all attacks.

These are ideals that we should strive towards if we want our children and our grandchildren to grow and flourish in a world where technology helps humanity to create justice and harmony between nations.

The Campaign to Stop Killer Robots has been keeping the subject on the table at the UN by successfully urging the adoption of a mandate for a week-long meeting of experts every year since 2014. Now a new mandate has moved the discussions forward to the next level with meetings of a Group of Government Experts in 2017 to discuss what to do about AWS. Let’s hope they will decide to rid us of the automation of violent force while we still have time.

We do not have long to act

‘Lethal autonomous weapons threaten to become the third revolution in warfare [after the invention of gunpowder and nuclear bombs]. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’

From an open letter sent in August 2017 to the United Nations Convention on Certain Conventional Weapons, signed by 116 founders of robotics and artificial intelligence companies. It is the first time that AI and robotics companies have taken a joint stance on this issue.

Noel Sharkey is professor of AI and robotics at the University of Sheffield, spokesperson of the Campaign to Stop Killer Robots, Chair of the International Committee for Robot Arms Control and co-director of the Foundation for Responsible Robotics.

This article is from the November 2017 issue of New Internationalist.