
Rise of the killer robots!

A Terminator in Whitehall, 2030? Campaigners call for a ban on autonomous weapons at a recent London protest.

Luke MacGregor/Reuters

Irrespective of intelligence, no machine is capable of morality. So, if you thought drones were bad, you are likely to take an even dimmer view of their successors: ‘lethal autonomous robotic weapons’. Unlike remote-controlled drones, ‘killer robots’ require no external ‘live’ human input at all, and can be pre-programmed to select and destroy specific targets.

The new weaponry poses a grave threat to human rights, according to the Campaign to Stop Killer Robots, which argues that the arms undermine international law by eliminating all human culpability.

In the tradition of militarized drones, developed by contractors behind closed doors and unleashed, almost without warning, on to the battlefield, lethal autonomous weapons could be put into use in combat without further public debate.

It’s a trend that concerns human rights groups and military organizations in equal measure. The latter consider it a slight on military practice to suggest that wars should not be fought by trained individuals, acting under certain codes.

The ‘killer robot’ technology is being developed in the US, Britain, Russia, China and Israel. Israel already has the ‘Harpy’ – a ‘fire and forget’ weapon capable of detecting and destroying radar emitters.

‘If this is coupled with greater autonomy of movement and operation,’ explains Laura Boillot of Article36, a not-for-profit working to prevent unacceptable harm caused by weapons, ‘we will start to see fully autonomous weapons in combat.’

So when will governments discuss putting controls on fully autonomous weapons?

The UN Human Rights Council hosted its first debate on the ethics of these weapons last May. Britain opposed a moratorium on development of the arms – the only state out of 24 in attendance to do so.

‘A couple of states have recommended that this issue be discussed at the next meeting of the Convention on Certain Conventional Weapons (CCW) in November,’ says Boillot. ‘Around 100 states are party to the treaty, which managed to ban blinding lasers, comparable to killer robots in that they were banned before coming into use. But on the whole, it is not famous for ambitious, standard-setting results.’

Even if fully autonomous weapons are blocked by the CCW, the technology now exists. In the long run, it will become increasingly difficult to govern.

 

Help us produce more like this

Patreon is a platform that enables us to offer more to our readership. With a new podcast, eBooks, tote bags and magazine subscriptions on offer, as well as early access to video and articles, we’re very excited about our Patreon! If you’re not on board yet then check it out here.

Support us »

Killer robots: the race for Autonomous Weapons

Who would be stupid enough to give the power of life-and-death decisions to weapons – and then release them? Why, that would be the great nations of America, Russia, China and Israel. And they are just the front-runners for the new breed of Autonomous Weapons Systems (AWS) aka Killer Robots. That’s why a large coalition of international disarmament organizations began the Campaign to Stop Killer Robots aimed at persuading the UN to write an international, legally binding treaty that will prohibit their use and development.

No, you haven’t just woken up in 2050 with a bad hangover. It’s happening now, in 2017. This is not the rise of killer robots threatening our very existence – not yet. It is the development of new weapons designed and programmed to hunt for targets and kill them without human supervision. But let’s take a step back and I’ll explain.

At the start of the new millennium, the US eyed the rapidly evolving robotics technology and saw the military potential for autonomous killing machines. Think-tanks and roadmap writers for the US army, navy and air force were all over it. New tech would give America the destructive edge now that other nations had caught up with the production of missile technology.

It began with sabre rattling but soon other nations were worried about what they heard and began thinking about making such weapons as well. By 2006, DARPA (the US Department of Defense agency responsible for the development of emerging technologies for use by the military) produced the CRUSHER, a 6,000 kg six-wheeled autonomous ground combat vehicle, as a proof-of-concept prototype. Then came a whole slew of developments across the globe – autonomous ships, submarines, ground combat vehicles and fighter jets. The race for fully Autonomous Weapons had begun. Now the stakes are high and 19 nations have called for an immediate prohibition of AWS at the UN. But proponents of the weapons have tried to slow the momentum towards a treaty by pushing a number of myths: including the five listed below.

Five myths about AWS

Cartoon by Simon Kneebone

Myth #1
AWS are superior to human soldiers because they won’t get tired, they won’t get angry, they won’t seek revenge and they won’t rape.

But this is also true of my electric toothbrush or a Kalashnikov. It misses the point that, like the Kalashnikov, AWS are powerful new weapons that can be used by those who get angry, to seek revenge or to round up women to be raped. We could add some more important features to the list, such as: they cannot discriminate between soldiers, insurgents and civilians, and they have no way to calculate a proportionate response to the use of violent force.

Cartoon by Simon Kneebone

Myth #2 
AWS will never be used unless they can comply with the laws of war.

If it weren’t such a serious issue, I would laugh. This is a naive idea to anyone who has even glanced at the history of weapons in war. Think aerial bombing – the most indiscriminate weapon of World War Two. After failed attempts at treaties, President Franklin D Roosevelt wrote to European heads of state in 1939 requesting them to confine aerial bombardment to military targets. Well, that worked out well, didn’t it? Similarly, once AWS are out there, proliferation will rapidly expand their uses in an out-of-control spiral.

Cartoon by Simon Kneebone

Myth #3
We have been using simple Automatic Weapons for years without problems so what’s new?

A sticking point in national and UN negotiations on AWS is that many militaries have long had weapons that sense and react to military objects (‘SARMO’) such as incoming missiles or mortar shells. But these are naturally restricted by their placement in static positions and the proximity of their operators – vastly different from mobile machines able to hunt and kill targets with no-one to legally verify their actions.

Cartoon by Simon Kneebone

Myth #4
Banning AWS will stifle innovation.

The Campaign to Stop Killer Robots is emphatically not trying to ban autonomous robots, even in the military. The call for a ban only concerns the critical functions of target selection and the application of violent force without human supervision. It will not prevent innovation in autonomy from flourishing.

Cartoon by Simon Kneebone

Myth #5 
Using AWS will save soldiers’ lives and kill fewer civilians.

This is the dumbest myth of all, based on the notion that only ‘our’ country (whichever that is) will have AWS as a military advantage. Wrong! How are deadly, high-speed weapons that can penetrate defences without risk of human injury going to help save the lives of soldiers or civilians? An arms race is already emerging that will spread AWS rapidly to many nations. Then what? We may end up with lowered thresholds for conflict, resulting in a continuous global battlefield, and accidental wars triggered automatically by unintended interactions of AWS.

I have not even mentioned the non-military use of these weapons – for policing and the suppression of populations making peaceful protests. And what about groups like ISIS who have already successfully used bad copies of our technologies to make crude drones loaded with explosives? Do we want them to have the technology to develop autonomous weapons that could sweep through a city, killing all in their wake? If we fully develop AWS, a black market of crude copies is inevitable.

What’s the solution?

State parties need to reject these myths, take off their blinkers and think beyond the narrow arguments of national security. It’s time to look at the bigger picture and see the truth – AWS mean broken and disrupted global security.

If we must have conflicts, let us at least have zero tolerance for civilian casualties. We need to ensure full human control of all weapons systems and ensure that humans are always responsible and accountable for injustices, mishaps and the legitimacy of targets.

To keep AWS in the box, ideally humans should:

  1. have full contextual and situational awareness of the target area at the time of initiating any specific attack;
  2. be able to perceive and react to any change or unanticipated situation that has arisen since the attack was planned, such as a change in the legitimacy of targets;
  3. have active cognitive participation in the attack;
  4. have sufficient time for deliberation on the nature of targets, their significance in terms of necessity and appropriateness, and likely incidental or possible accidental effects;
  5. have a means for the rapid suspension or abortion of all attacks.

These are ideals that we should strive towards if we want our children and our grandchildren to grow and flourish in a world where technology helps humanity to create justice and harmony between nations.

The Campaign to Stop Killer Robots has been keeping the subject on the table at the UN by successfully urging the adoption of a mandate for a week-long meeting of experts every year since 2014. Now a new mandate has moved the discussions forward to the next level with meetings of a Group of Government Experts in 2017 to discuss what to do about AWS. Let’s hope they will decide to rid us of the automation of violent force while we still have time.

We do not have long to act

‘Lethal autonomous weapons threaten to become the third revolution in warfare [after the invention of gunpowder and nuclear bombs]. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close.’

From an open letter sent in August 2017 to the United Nations Convention on Certain Conventional Weapons, signed by 116 founders of robotics and artificial intelligence companies. It is the first time that AI and robotics companies have taken a joint stance on this issue.

Noel Sharkey is professor of AI and robotics at the University of Sheffield, spokesperson of the Campaign to Stop Killer Robots, Chair of the International Committee for Robot Arms Control and co-director of the Foundation for Responsible Robotics.

 


The age of disruption

We are always at the threshold of the future. But whereas in the past, the path beyond seemed like a gradient, with a horizon that one might dimly view, today it seems to resemble a graph of seismic activity. Our threshold is a brink.

The main reason for this altered future landscape is often given as the breakneck acceleration of technology. While previous technological revolutions occurred over millennia (farming) or centuries (industrialization), comparable breakthroughs today happen in a matter of years, with little predictability. And with an engulfing wave of automation rearing up – think not just industrial robots and driverless cars, but also the myriad ways in which computerized and digital technology has colonized our work and personal lives – the stage is set for an age of disruption.

Intractable challenges suddenly yield. Researchers had spent years trying to get computer systems to identify objects, only to be overtaken by a machine-learning approach – computer systems using methodical problem-solving steps (called algorithms) to learn from examples, data and experience. Google’s image recognition technology now produces results that beat average human scores for the same task.
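The ‘learning from examples’ mentioned above can be illustrated with a toy sketch. The following is purely illustrative (the data and labels are hypothetical, and real image recognition uses deep neural networks trained on millions of examples): a nearest-centroid classifier that infers a rule from labelled data rather than being explicitly programmed with one.

```python
# Toy 'learning from examples': instead of hand-writing a rule, we
# compute the average (centroid) of each labelled group of points and
# classify new points by their nearest centroid. Hypothetical data.

def train(examples):
    """Learn one centroid per label from (features, label) pairs."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in vec]
            for label, vec in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(label):
        return sum((a - b) ** 2 for a, b in zip(features, centroids[label]))
    return min(centroids, key=dist)

# Two clusters of 2-D points standing in for 'images' of two classes.
data = [([1, 1], "cat"), ([1, 2], "cat"), ([2, 1], "cat"),
        ([8, 8], "dog"), ([9, 8], "dog"), ([8, 9], "dog")]
model = train(data)
print(predict(model, [2, 2]))   # near the first cluster -> cat
print(predict(model, [9, 9]))   # near the second cluster -> dog
```

The point is the shift in approach: nobody wrote a rule distinguishing the two classes; the rule fell out of the examples.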

Dentistry, as another example, is considered one of the jobs least at risk from automation. Yet this September in China, a robot dentist successfully implanted two teeth unaided by humans. The teeth themselves had been 3D printed.

Disruption is, of course, an article of faith for the Silicon Valley vanguard of new technology – preferably disruption of entire industries because that swiftly leads to a winner-takes-all monopoly with big money to be made. Mark Zuckerberg’s mantra ‘Move fast and break things’ might have been intended for Facebook’s developers, but it fits perfectly with the techno-capitalists who have ascended the ranks of the new global elite.

This has led to hand-wringing from some unlikely quarters. In an October 2016 speech, Klaus Schwab, the founder of the World Economic Forum, lamented this ‘new normal’ of disruption in the global economy, saying ‘society is facing the “new unknown”, adding to the general morosity.’ The World Economic Forum is, after all, most famous for its annual gathering of the super-wealthy at Davos, and not particularly regarded as a cradle of progressive concern. But Schwab had put his finger on the technological revolution afoot. He defined it in terms of unprecedented ‘velocity, scope and systems impact’, evolving at an ‘exponential rather than a linear pace’ and ‘disrupting almost every industry in every country’.1

Going, going, gone…?

Schwab’s dismally resonant phrase ‘general morosity’ is most applicable to the work sphere, where precarity is currently the order of the day and dire warnings about the effects of automation abound.

The latest report from Citi and Oxford Martin School announces that 80 per cent of retail jobs are at risk of automation – this is the sector next in line after the losses already sweeping through manufacturing, mining and agriculture. It’s not just the people at the tills being replaced by machines, but, with the rise of internet shopping, also warehouse, transport and logistics workers.


Another paper from the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) predicts that in Britain, finance and accounting, transport and distribution, and media, marketing and advertising could suffer heavy losses in the next decade. Yet another report claims one in three British jobs is on the chopping block.

As for the Majority World, there is nothing to whistle about there either. A study building on World Bank data predicts an even worse situation, with the most populous nations, China and India, having 77 and 69 per cent of jobs respectively deemed ‘high risk’. Uzbekistan leads this dubious ranking at 85 per cent. Such predictions rest on a combination of assumptions: one, that the low-wage labour advantage of Global South manufacturers disappears, as automation allows purchasing countries to bring production back to their own shores; two, that increasing automation in poorer countries would displace larger numbers of jobs than in the West.

There are of course opposing views that hold that, given time, new technology will create different kinds of jobs, or usher in an age of plenty where work will matter less. These are incredibly benevolent predictions, when one considers the job-creation record of new technology industries. In the US, just 0.5 per cent of the work force has been able to shift to them from other sectors.

There is also the argument that we are better off automating jobs that are boring, repetitive, dirty or dangerous. No-one would argue against the robots currently being developed to scope out nuclear radiation levels, for example. But what if those are the only jobs left…?

This is not to deny the incredible benefits of technological advances. Algorithm-based artificial intelligence can now identify cancers better than trained pathologists and outperforms doctors in the accurate diagnosis of symptoms. Robots can undertake delicate surgery with absolutely steady hands. Great news of course for patients who may benefit, but not so great if it means the de-skilling of healthcare professionals. Computer programs could likely scan case law much more thoroughly and swiftly in legal situations and propose lines of defence, but would one want them to be the sole recourse? Algorithms can detect fraudulent financial transactions in a millisecond. But they can also accomplish 100,000 high-frequency trades on the stock market, while simultaneously manoeuvring to mislead their electronic competitors in that same split second. It’s brilliant that drones can deliver essential medicines to remote rural areas in Malawi, but less so if that means the areas themselves are consigned to remoteness for the foreseeable future and that’s all the healthcare on offer. And as for China’s robot dentist – well, the country has a shortage of dentists. But would training more humans be a better option?

The bottom line should always be, what is the human impact? And that consideration lags far behind in the tech race.

Consider this: a recent report laid out evidence that US workers who had been exposed to automation in the workplace had a higher propensity to vote for Donald Trump (even when accounting for a variety of other explanations). Quite possibly they were responding to Trump’s boast of bringing manufacturing back home again and reviving the rust belt. But in 2016 the US hit a manufacturing record, producing more goods than ever (85 per cent more than in 1987) – with one crucial difference: it did so using 30 per cent fewer workers. Manufacturing was already home, but it was increasingly being done by the machines. Trump might have blamed globalization for job losses, but today many commentators insist automation poses far greater challenges.

Meanwhile, citrus growers in California, worried about no longer being able to rely on cheap migrant labour, are investing in the development of an orange-picking robot. And in Brexit-facing Britain, farmers are considering automated strawberry-pickers at $250,000 a pop. Strange days indeed.

Supremely exploitable

Many predictions about work converge on a few cheerless points: that jobs will get increasingly divided into low-paid/low-skilled and high-paid/high-skilled, with the latter reserved for a select few; that worker bargaining power and wages will take a dive; that algorithm-intense management with a high degree of surveillance will lead to increasingly robotic working conditions for humans; and that many could end up doing work that is just the interface between machines.

Such predictions are already conspiring to shape the present. ‘The threat that work could disappear is an excellent way to make us work more cheaply,’ says Dutch sociologist Willem Schinkel, and the fear of this threat makes us ‘supremely exploitable’. It’s a kind of capitalism that is ‘benefited by flexible people, by people who work but preferably don’t have jobs’.

This results in a 24/7 connectedness to work, which makes many long for the boring 9-to-5 of yore.

Robocop for real, a police robot makes its debut in Dubai, May 2017. It will help citizens report crimes and answer parking ticket queries, rather than make arrests. Some 25 per cent of the Dubai police force will be robotic by 2030. Giuseppe Cacace/AFP/Getty Images

Another oft-repeated prescription is that job obsolescence will be regular and recurring. So to be economically valid we must continually relearn skills, continually reinvent ourselves and adjust, continually study – regardless of whether the majority of us are capable or desirous of doing so. The narrative is always one of inevitability, the agency of ordinary humans has little place in it. The message is: fit your skills to the needs of the intelligent machines – and those who control them – or else.

The next prediction is also presented as inevitable, and even those on the Right who are usually squeamish about such things are making it: inequality will get techno-charged and widen ever further. The consequences would be particularly devastating in the Global South where social provision can be scant anyway. At the beginning of this year Oxfam warned that just eight rich men now control as much wealth as the world’s poorest 50 per cent; all 3.6 billion of them. How much further can things go?

As social critic Curtis White pithily put it: ‘Robots are brilliant at supply but they don’t create demand.’ So if the triumph of techno-capitalism would render most of us economically worthless in terms of our labour, the ultimate dystopic conclusion is that we could become disposable to a rich ruling elite.

Reinvented regulators

Currently our policymakers seem to be asleep at the wheel when it comes to the question of regulating new technology and its effects. The changes are coming so fast that regulators are finding themselves unable to cope. Schwab talks in terms of regulators having to ‘reinvent’ themselves and somewhat demurely of the ‘decentralization of power that new technologies make possible’. In effect, we are talking about a Wild West scenario where tech billionaires are gathering decision-making power by stealth because no-one will stop them. (Google’s Larry Page has gone one step further, expressing a desire to set aside a part of the planet for unlimited experimentation completely without regulation.)

This sneaky power grab is most evident in the world of Big Data. A common complaint is that all our digital interactions, the way we are watched over by the web-connected appliances we use (the so-called internet of things), are yielding a rich lode of data that is being mined almost solely by a handful of mainly US-based corporations. This data is being deployed with increasing opacity by systems that can generate individually tailored messages to influence our political behaviour, consumption of products and many other aspects of our lives.

Critics warn that governments are unthinkingly ceding public statistics to the data giants. And at digital trade talks, any moves towards regulation – now or in the future – by Majority World countries that would impede the completely free flow of their citizens’ data across borders are stamped upon by wealthy nations fronting for big business.

Big Data is also being deployed in the service of the dominant socio-political model of technocracy, which installs technical thinking as the supreme discourse and dictates that society must be made to fit assumed principles of ‘scientific’ organization. In her book Weapons of Math Destruction, author Cathy O’Neil outlines how algorithmic assessment is being applied to almost every sphere of human interaction, from employee effectiveness to credit-worthiness to policing. Though viewed as objective and neutral, algorithms, she demonstrates, are riddled with human bias, and all too often work against democracy and perpetuate inequality.
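How an apparently neutral algorithm perpetuates bias can be shown in a toy sketch (entirely hypothetical data, not any real scoring system): a scoring rule fitted to historically discriminatory decisions faithfully reproduces that discrimination, laundering it as arithmetic.

```python
# Toy sketch with hypothetical data: a 'neutral' scoring rule learned
# from biased past decisions. Postcode acts as the biased signal; the
# fitted scores simply replay the old pattern as if it were objective.
history = [
    # (postcode, approved) -- past decisions that favoured area A
    ("A", 1), ("A", 1), ("A", 1), ("A", 0),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

def fit_scores(records):
    """Learn an approval rate per postcode from past decisions."""
    totals, approved = {}, {}
    for postcode, ok in records:
        totals[postcode] = totals.get(postcode, 0) + 1
        approved[postcode] = approved.get(postcode, 0) + ok
    return {p: approved[p] / totals[p] for p in totals}

scores = fit_scores(history)
print(scores)  # {'A': 0.75, 'B': 0.25} -- the old bias, dressed up as maths
```

No malicious intent is needed anywhere in the code; the bias lives in the training data, which is exactly O’Neil’s point.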

Solve everything

If Big Data is all about the asymmetries of power, the zenith is reached in the quest to take AI further and further. This September, Vladimir Putin declared: ‘Artificial intelligence is the future, not only for Russia, but for humankind… Whoever becomes the leader in this sphere will become ruler of the world.’ Putting aside the desirability or otherwise of one such overlord for the moment, what Putin was talking about was not the AI that already exists – the kind of task-solving AI that has defeated the best human minds in strategic or linguistic games or which can turn out 5,000 soul-stirring chorales in the manner of Bach in a single day.2

He was instead referring to its dreamed-of successor: artificial general intelligence (AGI), a superior AI that could deploy its enhanced capacity across a wide range of activities, as humans do. Notionally, it could be capable, in the words of the Future of Life Institute, of ‘outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand’. The race to develop AGI is already on.

Reactions to the idea of AGI among the new tech community vary from ‘it is aeons away, if it will ever happen’, to outright dread, and blissful anticipation. The worriers are urging action now to address potential safety problems so complex that they may take decades to solve.

The enthusiasts have a rather simpler plan. GoodAI, a company that attempts to direct machine intelligence towards ethical decision-making, has the speedy development of AGI ‘to help humanity and understand the universe’ at the core of its mission. Demis Hassabis, the founder of Google-owned DeepMind, sums up his company’s mission as: ‘Step one, solve intelligence. Step two, use it to solve everything else.’

AI ethicist John C Havens points out the naked emperor: ‘It is tempting to wonder what would happen if we spent more time focusing on helping each other directly, versus relying on machines to essentially grow brains for us.’

Take back the future

The discussion about our tech future must revolve around rescuing our humanity and agency from the corporate narrative of inevitability. The governance questions are complex and unwieldy and we need our would-be regulators to wake up.

David King of the tech discussion group Breaking the Frame believes this requires wide engagement: ‘We need a movement that understands the importance of corporate, military and state control over technology and puts struggles over technology at the heart of radical politics.’

There are calls for ethical frameworks to be put in place to take control of the development of technology and to scrutinize the algorithms that affect so many of our lives, often without our knowing. These are political battles. We need oversight particularly of technology that is capable of acting autonomously. Especially compelling is the need to ban autonomous weapons, which many nations are already developing. The UN is only just establishing a Centre for AI and Robotics in The Hague; action on the burning issues seems remote.

Much needs to be done about the monopoly ownership of entire sections of the new tech economy. If data is publicly generated, it is time it was used for public ends, paid public dividends and was publicly owned. It may seem like a crazy idea now that corporations run the show, but imagine if Big Data had not already happened and we had the chance to consider how it might best suit us; would it be so crazy then?

On the job front, beyond the obvious ask for the strengthening of labour rights and trade unions to improve bargaining power, we need to look deeper. If automation can detach capital from labour, then proposals like basic income will go nowhere near far enough. As writer Ben Tarnoff puts it: ‘Better to own the robots collectively, and allocate the surplus democratically, than leave society’s wealth in the hands of its luckiest members.’

We could embrace and benefit from the disruptions of new tech much more easily if we could focus on the bottom line considerations of public good and the social creation of wealth. These are not things we can leave to the technological elite.

And sometimes we may just need to resort to plain old bias – a computer may generate a multitude of artworks of indisputable quality. Choosing one created by a human is expressing what we cherish about ourselves.

Before we part company, gentle reader, a reassurance that this piece was written by a person and not by a narrative language generation programme. (Or was it?)

Explore further

ETHICS
Foundation for Responsible Robotics: responsiblerobotics.org
‘Accountable innovation for the humans behind the robots’.
Open Roboethics initiative (ORI): openroboethics.org
Has published surveys on a range of ethical issues relating to robots.

RISKS OF AI
Future of Life Institute: futureoflife.org
Future of Humanity Institute: fhi.ox.ac.uk
Centre for the Study of Existential Risk: cser.org

AUTONOMOUS WEAPONS
Campaign to Stop Killer Robots: stopkillerrobots.org
International Committee for Robot Arms Control (ICRAC): icrac.net

PLUS…
Electronic Frontier Foundation: eff.org
Defends civil liberties in the digital world; has a project measuring the progress of AI.
Oxford Martin School: oxfordmartin.ox.ac.uk
Regular reports on automation and jobs.
Breaking the Frame: breakingtheframe.org.uk
Challenging the politics of technology.
OpenAI: openai.com
Non-profit research company with commercial sponsors working on safe general AI.

1 ‘The Fourth Industrial Revolution’, 14 January 2016, nin.tl/SchwabWEF
2 The reference is to the Experiments in Musical Intelligence programme developed by composer David Cope.

 
