The age of disruption
We are always at the threshold of the future. But whereas in the past, the path beyond seemed like a gradient, with a horizon that one might dimly view, today it seems to resemble a graph of seismic activity. Our threshold is a brink.
The main reason for this altered future landscape is often given as the breakneck acceleration of technology. While previous technological revolutions occurred over millennia (farming) or centuries (industrialization), comparable breakthroughs today happen in a matter of years, with little predictability. And with an engulfing wave of automation rearing up – think not just industrial robots and driverless cars, but also the myriad ways in which computerized and digital technology has colonized our work and personal lives – the stage is set for an age of disruption.
Intractable challenges suddenly yield. Researchers had spent years trying to get computer systems to identify objects, only to be overtaken by a machine-learning approach – computer systems using methodical problem-solving steps (called algorithms) to learn from examples, data and experience. Google’s image recognition technology now produces results that beat average human scores for the same task.
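To make 'learning from examples' concrete, here is a deliberately tiny sketch of the idea in Python – a one-nearest-neighbour classifier. It is not the technology Google uses (modern image recognition relies on deep neural networks trained on millions of images), and the data below is invented for illustration, but the principle is the same: the program's answers come from labelled examples rather than hand-written rules.

```python
# Toy illustration of "learning from examples": one-nearest-neighbour.
# The features and labels below are invented; real systems use millions
# of examples and far richer representations.

def nearest_neighbour(examples, query):
    """Return the label of the training example closest to the query."""
    def distance(a, b):
        # Squared Euclidean distance between two feature tuples.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(examples, key=lambda ex: distance(ex[0], query))
    return label

# Labelled examples: (features, label). Features might stand in for
# simple image statistics.
training = [
    ((0.9, 0.1), "cat"),
    ((0.8, 0.2), "cat"),
    ((0.1, 0.9), "dog"),
    ((0.2, 0.8), "dog"),
]

print(nearest_neighbour(training, (0.85, 0.15)))  # closest examples are cats
```

Nobody programmed a definition of 'cat' here; the answer falls out of the examples – which is why such systems improve as data accumulates, and why they overtook decades of hand-crafted rules.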
Dentistry, as another example, is considered one of the jobs least at risk from automation. Yet this September in China, a robot dentist successfully implanted two teeth unaided by humans. The teeth themselves had been 3D printed.
Disruption is, of course, an article of faith for the Silicon Valley vanguard of new technology – preferably disruption of entire industries because that swiftly leads to a winner-takes-all monopoly with big money to be made. Mark Zuckerberg’s mantra ‘Move fast and break things’ might have been intended for Facebook’s developers, but it fits perfectly with the techno-capitalists who have ascended the ranks of the new global elite.
This has led to hand-wringing from some unlikely quarters. In an October 2016 speech, Klaus Schwab, the founder of the World Economic Forum, lamented this ‘new normal’ of disruption in the global economy, saying ‘society is facing the “new unknown”, adding to the general morosity.’ The World Economic Forum is, after all, most famous for its annual gathering of the super-wealthy at Davos, and not particularly regarded as a cradle of progressive concern. But Schwab had put his finger on the technological revolution afoot. He defined it in terms of unprecedented ‘velocity, scope and systems impact’, evolving at an ‘exponential rather than a linear pace’ and ‘disrupting almost every industry in every country’.1
Going, going, gone…?
Schwab’s dismally resonant phrase ‘general morosity’ is most applicable to the work sphere, where precarity is currently the order of the day and dire warnings about the effects of automation abound.
The latest report from Citi and Oxford Martin School announces that 80 per cent of retail jobs are at risk of automation – this is the sector next in line after the losses already sweeping through manufacturing, mining and agriculture. It’s not just the people at the tills being replaced by machines, but, with the rise of internet shopping, also warehouse, transport and logistics workers.
Another paper from the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA) predicts that in Britain, finance and accounting, transport and distribution, and media marketing and advertising could suffer heavy losses in the next decade. Yet another report claims one in three British jobs are on the chopping block.
As for the Majority World, there is nothing to whistle about. A study building on World Bank data predicts an even worse situation, with the world’s two most populous nations, China and India, having 77 and 69 per cent respectively of jobs deemed ‘high risk’. Uzbekistan leads this dubious ranking at 85 per cent. Such predictions rest on a combination of assumptions. One, that the low-wage labour advantage of Global South manufacturers disappears, with automation allowing purchasing countries to bring production back to their own shores. Two, that increasing automation in poorer countries would displace larger numbers of jobs than in the West.
There are of course opposing views that hold that, given time, new technology will create different kinds of jobs, or usher in an age of plenty where work will matter less. These are remarkably optimistic predictions, when one considers the job-creation record of new technology industries. In the US, just 0.5 per cent of the workforce has been able to shift to them from other sectors.
There is also the argument that we are better off automating jobs that are boring, repetitive, dirty or dangerous. No-one would argue against the robots currently being developed to scope out nuclear radiation levels, for example. But what if those are the only jobs left…?
This is not to deny the incredible benefits of technological advances. Algorithm-based artificial intelligence can now identify cancers better than trained pathologists and outperforms doctors in the accurate diagnosis of symptoms. Robots can undertake delicate surgery with absolutely steady hands. Great news of course for patients who may benefit, but not so great if it means the de-skilling of healthcare professionals. Computer programs could likely scan case law far more thoroughly and swiftly than human lawyers and propose lines of defence, but would one want them to be the sole recourse? Algorithms can detect fraudulent financial transactions in a millisecond. But they can also accomplish 100,000 high-frequency trades on the stock market, while simultaneously manoeuvring to mislead their electronic competitors in that same split second. It’s brilliant that drones can deliver essential medicines to remote rural areas in Malawi, but less so if that means the areas themselves are consigned to remoteness for the foreseeable future and that’s all the healthcare on offer. And as for China’s robot dentist – well, the country has a shortage of dentists. But would training more humans be a better option?
The bottom line should always be, what is the human impact? And that consideration lags far behind in the tech race.
Consider this: a recent report laid out evidence that US workers who had been exposed to automation in the workplace were more likely to vote for Donald Trump (even when accounting for a variety of other explanations). Quite possibly they were responding to Trump’s boast of bringing manufacturing back home again and reviving the rust belt. But in 2016, the US had hit a manufacturing record, producing more goods than ever (85 per cent more than in 1987) – with one crucial difference: it did so using 30 per cent fewer workers. Manufacturing was already home, but it was increasingly being done by the machines. Trump might have blamed globalization for job losses but today many commentators insist automation poses far greater challenges.
Meanwhile, citrus growers in California, worried about no longer being able to rely on cheap migrant labour, are investing in the development of an orange-picking robot. And in Brexit-facing Britain, farmers are considering automated strawberry-pickers at $250,000 a pop. Strange days indeed.
Many predictions about work converge on a few cheerless points: that jobs will get increasingly divided into low-paid/low-skilled and high-paid/high-skilled, with the latter reserved for a select few; that worker bargaining power and wages will take a dive; that algorithm-intensive management with a high degree of surveillance will lead to increasingly robotic working conditions for humans; and that many could end up doing work that is merely the interface between machines.
Such predictions are already conspiring to shape the present. ‘The threat that work could disappear is an excellent way to make us work more cheaply,’ says Dutch sociologist Willem Schinkel, and the fear of this threat makes us ‘supremely exploitable’. It’s a kind of capitalism that is ‘benefited by flexible people, by people who work but preferably don’t have jobs’.
This results in a 24/7 connectedness to work, which makes many long for the boring 9-to-5 of yore.
Another oft-repeated prediction is that job obsolescence will be regular and recurring. So to remain economically viable we must continually relearn skills, continually reinvent ourselves and adjust, continually study – regardless of whether most of us are able or willing to do so. The narrative is always one of inevitability; the agency of ordinary humans has little place in it. The message is: fit your skills to the needs of the intelligent machines – and those who control them – or else.
The next prediction is also presented as inevitable, and even those on the Right who are usually squeamish about such things are making it: inequality will get techno-charged and widen ever further. The consequences would be particularly devastating in the Global South where social provision can be scant anyway. At the beginning of this year Oxfam warned that just eight rich men now control as much wealth as the world’s poorest 50 per cent; all 3.6 billion of them. How much further can things go?
As social critic Curtis White pithily put it: ‘Robots are brilliant at supply but they don’t create demand.’ So if the triumph of techno-capitalism would render most of us economically worthless in terms of our labour, the ultimate dystopic conclusion is that we could become disposable to a rich ruling elite.
Currently our policymakers seem to be asleep at the wheel when it comes to the question of regulating new technology and its effects. The changes are coming so fast that regulators are finding themselves unable to cope. Schwab talks in terms of regulators having to ‘reinvent’ themselves and somewhat demurely of the ‘decentralization of power that new technologies make possible’. In effect, we are talking about a Wild West scenario where tech billionaires are gathering decision-making power by stealth because no-one will stop them. (Google’s Larry Page has gone one step further, expressing a desire to set aside a part of the planet for unlimited experimentation completely without regulation.)
This sneaky power grab is most evident in the world of Big Data. A common complaint is that all our digital interactions, the way we are watched over by the web-connected appliances we use (the so-called internet of things), are yielding a rich lode of data that is being mined almost solely by a handful of mainly US-based corporations. This data is being deployed with increasing opacity by systems that can generate individually tailored messages to influence our political behaviour, consumption of products and many other aspects of our lives.
Critics warn that governments are unthinkingly ceding public statistics to the data giants. And at digital trade talks, any moves towards regulation – now or in the future – by Majority World countries that would impede the completely free flow of their citizens’ data across borders are stamped on by wealthy nations fronting for big business.
Big Data is also being deployed in the service of the dominant socio-political model of technocracy, which installs technical thinking as the supreme discourse and dictates that society must be made to fit assumed principles of ‘scientific’ organization. Author Cathy O’Neil has outlined in her book Weapons of Math Destruction how algorithmic assessment is being applied to almost every sphere of human interaction, from employee effectiveness to credit-worthiness to policing. Though widely viewed as objective and neutral, algorithms, she demonstrates, are riddled with human bias, and all too often work against democracy and perpetuate inequality.
If Big Data is all about the asymmetries of power, the zenith is reached in the quest to take AI further and further. This September, Vladimir Putin declared: ‘Artificial intelligence is the future, not only for Russia, but for humankind… Whoever becomes the leader in this sphere will become ruler of the world.’ Putting aside the desirability or otherwise of one such overlord for the moment, what Putin was talking about was not the AI that already exists – the kind of task-solving AI that has defeated the best human minds in strategic or linguistic games or which can turn out 5,000 soul-stirring chorales in the manner of Bach in a single day.2
He was instead referring to its dreamed-of successor: artificial general intelligence (AGI), a superior AI that could deploy its enhanced capacity across a wide range of activities, as humans do. Notionally, it could be capable, in the words of the Future of Life Institute, of ‘outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand’. The race to develop AGI is already on.
Reactions to the idea of AGI among the new tech community range from ‘it is aeons away, if it ever happens’ to outright dread to blissful anticipation. The worriers are urging action now to address potential safety problems so complex that they may take decades to solve.
The enthusiasts have a rather simpler plan. GoodAI, which attempts to direct machine intelligence to tackle ethical decision-making, has the speedy development of AGI ‘to help humanity and understand the universe’ at the core of its mission. Demis Hassabis, the co-founder of Google-owned DeepMind, outlines his company’s mission as: ‘Step one, solve intelligence. Step two, use it to solve everything else.’
AI ethicist John C Havens points out the naked emperor: ‘It is tempting to wonder what would happen if we spent more time focusing on helping each other directly, versus relying on machines to essentially grow brains for us.’
Take back the future
The discussion about our tech future must revolve around rescuing our humanity and agency from the corporate narrative of inevitability. The governance questions are complex and unwieldy and we need our would-be regulators to wake up.
David King of the tech discussion group Breaking the Frame believes this requires wide engagement: ‘We need a movement that understands the importance of corporate, military and state control over technology and puts struggles over technology at the heart of radical politics.’
There are calls for ethical frameworks to be put in place to take control of the development of technology and to scrutinize the algorithms that affect so many of our lives, often without us knowing. These are political battles. We need oversight particularly of technology that is capable of acting autonomously. Especially compelling is the need to ban autonomous weapons, which many nations are already developing. The UN is only just establishing a Centre for AI and Robotics in The Hague; action on the burning issues seems remote.
Much needs to be done about the monopoly ownership of entire sections of the new tech economy. If data is publicly generated, it is time it was used for public ends, paid public dividends and was publicly owned. It may seem like a crazy idea now that corporations run the show, but imagine if Big Data had not already happened and we had the chance to consider how it may best suit us; would it be so crazy then?
On the job front, beyond the obvious ask for the strengthening of labour rights and trade unions to improve bargaining power, we need to look deeper. If automation can detach capital from labour, then proposals like basic income will go nowhere near far enough. As writer Ben Tarnoff puts it: ‘Better to own the robots collectively, and allocate the surplus democratically, than leave society’s wealth in the hands of its luckiest members.’
We could embrace and benefit from the disruptions of new tech much more easily if we could focus on the bottom-line considerations of public good and the social creation of wealth. These are not things we can leave to the technological elite.
And sometimes we may just need to resort to plain old bias – a computer may generate a multitude of artworks of indisputable quality, but choosing one created by a human is a way of expressing what we cherish about ourselves.
Before we part company, gentle reader, a reassurance that this piece was written by a person and not by a narrative language generation programme. (Or was it?)
Foundation for Responsible Robotics: responsiblerobotics.org
‘Accountable innovation for the humans behind the robots’.
Open Roboethics initiative (ORI): openroboethics.org
Has published surveys on a range of ethical issues relating to robots.
RISKS OF AI
Future of Life Institute: futureoflife.org
Future of Humanity Institute: fhi.ox.ac.uk
Centre for the Study of Existential Risk: cser.org
Campaign to Stop Killer Robots: stopkillerrobots.org
International Committee for Robot Arms Control (ICRAC): icrac.net
Electronic Frontier Foundation: eff.org
Defends civil liberties in the digital world; has a project measuring the progress of AI.
Oxford Martin School: oxfordmartin.ox.ac.uk
Regular reports on automation and jobs.
Breaking the Frame: breakingtheframe.org.uk
Challenging the politics of technology.
Non-profit research company with commercial sponsors working on safe general AI.