
Mechanized Mayhem


new internationalist
issue 162 - August 1986

Illustration: Hans-Georg Rauch
No animal species can now think or act fast enough to deal
with the technology the arms manufacturers are bent on creating.
Tim Williams explains why control has to be passed over to
computers - and why this is almost certain to end in disaster.

'One of the abort simulations the crew chose to test on the Shuttle Mission Simulator is called a 'Transatlantic abort', which supposes that the crew can neither return to the launch site nor go into orbit. The objective is to land in Spain after dumping some fuel. The crew was about to go into this dump sequence when all four of our flight computer machines locked up and went 'catatonic', with all displays showing a big 'X'. Had this been the real thing, the shuttle would probably have had difficulty landing. This kind of scenario could only occur under a very specific and unlikely combination of physical and aerodynamic conditions; but there it was; our machines all stopped. Our greatest fear had materialised - a generic software problem.

'He went off to look at the problem. The crew was rather upset, and they went off to lunch.'

THUS one of IBM's software managers described an accident that didn't happen to the Space Shuttle. The Shuttle is an extreme example of a life-and-death undertaking controlled almost entirely by high technology. But the same kind of people are designing systems on which all of our lives depend - F-111s, Cruise missiles, Trident submarines and SS-20s. And, if no-one stops them, they will go on to put lasers and nuclear weapons in space.

The military has fostered or taken over every advance in technology since the bow and arrow. But the longbow, the catapult and the rifle all had one thing in common: they were controlled by humans. The current generation of high-tech weapons is controlled by computers.

Technology is seen by the military as a 'force multiplier': a means of enhancing capability. The variety of weapons which have been 'enhanced' by technology is enormous, but some examples may bring out the practical drawbacks.

In August 1985 the Pentagon cancelled its purchase of the DIVAD anti-aircraft gun system. The official reason was its inadequate performance against countermeasures. Unofficially it was cancelled after a demonstration attended by top Pentagon officials, in which the DIVAD's radar completely ignored its target and instead locked on to a bathroom fan in a nearby toilet block, successfully blowing the building to pieces.

This kind of automatic control is needed because modern attack helicopters fly fast and low, allowing a few seconds at most for anti-aircraft defences to shoot them down. Only computers have fast enough response times; humans are too slow. Unfortunately, the engineers who programmed the computer to recognise the radar signature of whirling helicopter blades forgot to warn it that bathroom fans look pretty similar.

With the increasing power of conventional weapons (and the latest explosives can rival the destructive blast of a small nuclear bomb) the prospect of this kind of computer-controlled battlefield is bad enough. But it is the encroachment of information technology into the command and control of nuclear forces which worries observers most. This could be too big a job, even for computers.

The American nuclear command, control and communications (C3) system spreads its web of sensors, data links, computers and command channels worldwide. The Soviet version, though more centralised, is basically similar. In each case the overall system is immensely complex. In the words of one observer, 'the interactions among its constituent parts - technologies, procedures and people - under conditions of damage defy full comprehension. Since it has never been put to a real test, understanding becomes based largely on analytic representations such as computer models.' Even the most sophisticated of these can handle only a small fraction of the possible interactions. So it is impossible to fully test the system and be confident of its performance in a future crisis.

Threats to the North American subcontinent are at present evaluated by human beings - officers at the North American Aerospace Defense Command (NORAD) headquarters. If the threat appears to be serious, other military command centres are brought together in a Threat Evaluation Conference. In 1983 there were 255 of these, and 3,294 'routine missile displays'. In other words, the system showed, on average nine times a day, that the Soviet Union had launched a missile.
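The arithmetic behind that average is easy to check. The 1983 figure comes from the article itself; the per-day rate is a straight division:

```python
# Figures for 1983, as reported above.
routine_missile_displays = 3294
days_in_year = 365

false_alarms_per_day = routine_missile_displays / days_in_year
print(round(false_alarms_per_day))  # roughly nine apparent launches per day
```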

If, because of shorter reaction times, it becomes necessary to fully automate the C3 network, the computer system must be capable of rejecting every false alarm, every time - a requirement that not even the most optimistic programmer would stake the fate of the world on.
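A rough back-of-envelope calculation shows why 'every time' is such a demanding requirement. Suppose, purely hypothetically, that an automated filter misjudges any single alert with some small probability; over thousands of alerts a year the chance of at least one misjudgement accumulates quickly. (The alert count below is the article's 1983 figure; the per-alert error probability is an assumption for illustration.)

```python
# Hypothetical illustration: risk accumulation over many alerts.
alerts_per_year = 3294       # 'routine missile displays' in 1983 (from the article)
p_error_per_alert = 1e-4     # assumed: one misjudgement per 10,000 alerts

# Probability of at least one misjudgement in a year of alerts.
p_at_least_one = 1 - (1 - p_error_per_alert) ** alerts_per_year
print(f"{p_at_least_one:.1%}")  # about 28% under these assumptions
```

Even a system wrong only once in ten thousand alerts would, on these numbers, have better than a one-in-four chance of a catastrophic misjudgement each year.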

These developments in military technology are already under way. Others, in particular the Strategic Defense Initiative (Star Wars), are potentially even more destabilising. How has this state of affairs come about? How is it that we have reached the position where a misplaced comma in a computer program - a 'bug' - or a single faulty microchip could bring down the final curtain?

The simple answer is that the paranoia of the arms race calls forth continual advances in technology which eventually leave no room for human intervention. In fact the picture is more complex.

Weapon design and development in each country is, you might think, determined by the goals set by the military, who in turn are guided by long-term political perspectives. But this 'pull' from the military is also reinforced by technology 'push'. In each of the advanced industrialized countries there is a lobby which is at least as powerful as the military - and that is the arms manufacturers themselves.

Their direct political weapon is the threat of unemployment: they can generally defend themselves against defence spending cuts by raising the spectre of job losses. But these companies also subtly dictate the direction of military procurement.

The history of the Cruise missile is a classic example. In the early seventies, three different technologies - miniature jet engines, digital computers allied to high-resolution radar and small nuclear warheads - were developing at the same time. All it needed was a designer in the right place to put these together and come up with a highly accurate nuclear missile that could follow its own pre-programmed 'route map'.

The military, however, could not see the use for such a device and it took several years of intensive lobbying by Boeing and General Dynamics, and inter-service politicking - including the Carter Administration's cancellation of the B1 bomber - before the Cruise programme was finally sanctioned. The confusion over its military role persists: is it first strike? Is it deterrent? Is it counterforce? Nobody can tell, but it sure as hell is cute.

Two conclusions can be drawn. The first is that the effective use of military force depends on the reliability and dependability of the high-tech systems which deliver it. Of the 14 American F-111s which took part in the attack on Tripoli in mid-April, five returned without dropping their bombs because of system failures. Of those that were dropped, many missed their intended targets, showing up deficiencies in the laser guidance system which is supposed to increase their accuracy. Technology may be a 'force multiplier' when it works, but when it doesn't - which is increasingly often - it also multiplies the consequences of error. The results in a conventional war are clumsy and chaotic, but immensely destructive. In a nuclear confrontation they would be catastrophic.

Second, technology has shifted the centres of power away from politicians and the military and, in peacetime, towards the arms manufacturers and their technologists. It is simply impossible to exert direct political control over an automatic system that can initiate the destruction of half of the earth within a matter of minutes.

In the countries which produce these sophisticated weapons the technology creates its own momentum. Politicians and the military, unable to grasp the complexities of the systems they are asked to sanction, are reduced to impotent acquiescence in the procurement process. Manufacturers are able to deploy the threat of unemployment on one hand and the sacred cow of national security on the other to ensure that their programmes escape unscathed from public spending cuts. Their technologists, happy with the prospect of interesting and challenging technical problems, take no thought for the application of their work.

The President of Sandia National Laboratories, one of the foremost US weapons development establishments, was asked his views on the nuclear freeze. His reply encapsulates the weapons community's concept of progress: 'If you mean by "freeze" that you intend to stop thinking, to stop considering what the weapon possibilities are, what modern warfare can in fact become, then I think you are taking a dangerous risk with this country's security ... From a technical standpoint I think there are enormous possibilities for improvement ahead. We do need to make it clear that such progress is in the public interest and that we should charge on at full speed'. Sentiments like these show the weapons technologists to be the lemmings of the human race.

Tim Williams is an electronics design engineer and a co-founder of Electronics for Peace.

Not all computer professionals ignore the social implications
of their work. Organizations like the Canadian group INPUT are
springing up all over the world. Richard Swift reports.

ALAYNE McGregor is a computer programmer with Gandalf Data, based in Canada's own 'Silicon Valley' belt that has grown up around the national capital, Ottawa. In 1983 Alayne attended a peace conference, as a result of which she and several other socially conscious programmers got together to form INPUT - the Initiative for the Peaceful Use of Technology.

Today INPUT is a small but very active group of programmers and other hi-tech professionals who publish a regular newsletter INPUT/OUTPUT and who are actively challenging the misuse of micro-technologies as part of the US war machine.

There is a core group of about 15 or 20 people, with another 100 supporters who turn up to monthly meetings. Members are mostly computer programmers with a sprinkling of other scientific workers. In addition to publishing the newsletter, INPUT is currently working to establish a dial-up 'bulletin board' which allows people with personal computers to read the latest information on peace and disarmament. Two similar groups are currently being formed in Montreal and Toronto.

INPUT sees itself as part of an informal network of computer workers adding their voices to the peace movement. Other groups include Computer Professionals for Social Responsibility in California, High Technology Professionals for Peace from the Boston area and Electronics for Peace in the UK.

INPUT members are frustrated that micro-technology is being used for military ends rather than potentially creative peaceful ones. According to Alayne McGregor, 'the technology will allow for things that management just won't go for'. INPUT tries to advocate some peaceful alternatives - high-tech applications for health systems, Third World development and the satellite monitoring of disarmament agreements. But the computer activists at INPUT are confronted with the same obstacles as the rest of the peace movement. Many hi-tech workers, according to Alayne McGregor, 'feel that they can't make a difference so there's no point even talking about things like peace and disarmament'. INPUT has found its most effective arguments to be those that point out the technical flaws in sophisticated weapons schemes. INPUT activist Marcus Leech points out that 'if you show the technical bugs in an overly-complex system like Star Wars it is something programmers immediately recognize because of the problems they have had in their own work'.

Alayne McGregor believes: 'You can't separate out what you do for forty hours a week from the values you have in the rest of your life. We must think about what the consequences are of the work we do'.

For the address of INPUT and other similar groups, see action here

