Born To Be Good?

What motivates us to be good, bad or indifferent towards others?
Celia Kitzinger examines the psychology of morality.

What looks like callous indifference is often fear of acting inappropriately.

Many of us, much of the time, act to benefit others. There are small kindnesses of everyday life – like holding open a door, sharing food or expressing compassion for someone in distress. Things so ordinary that we simply take them for granted.

We are pleased, but not particularly surprised, that people commonly care for sick relatives, give money to help famine victims, donate blood to hospitals, or volunteer to assist at hospices. At times what people do for others is truly spectacular. In the US, Lenny Skutnik risked his life diving into the icy waters of the Potomac River to save an airline crash victim; in Nazi Europe many people risked their lives by offering protection to Jews. In both mundane and exceptional ways people often act to help others – which is why psychologists describe human beings not just as ‘social’ but also as ‘pro-social’ animals.

But why do people spend so much time and money and effort on others, when we could keep it all for ourselves? One argument is that self-interest lies at the root of all superficially ‘moral’ behaviour. According to sociobiologists, we are biologically driven towards those forms of altruism – caring for our families, for example – which improve the survival of our genes.1 Moral actions are simply automatic and instinctive, of no greater or lesser significance than the behaviour of a mother bird putting her own life at risk leading a predator away from her chicks. Helping people who are not genetically related to us can also be in the best interest of our genes if it sets up the expectation that we – or those who share our genes – will be helped in turn.

There are many subtle ways in which helping others can offer rewards which serve our self-interest. These include the praise of onlookers; gratitude from the person being helped; the warm glow of knowing we have done a good deed; and the benefit of avoiding guilt, shame or punishment. Most people agree that some good behaviour can be attributed to self-interest. But is that all there is?

In an ingenious set of experiments, a group of psychologists set out to test the idea that empathy – the ability to imagine ourselves in the place of another and to feel their emotions – can result in genuine altruism.2 Subjects were encouraged to be empathetic while watching a ‘worker’ who they believed was reacting badly to a series of uncomfortable electric shocks. They were then given a chance to help the worker by receiving the shocks themselves. If helping were only self-serving egoism, then people who felt empathy for the victim would simply want to escape from the upsetting experience. But the researchers found that those with strong empathetic feelings volunteered to take the worker’s place, even when told that they could leave immediately if they refused. The researchers also found that high-empathy people who were deprived of the opportunity to help felt just as good when someone else helped instead. This suggests that the offer to help reflected a genuine wish to relieve the victim’s suffering, rather than a desire for praise from other people. So it looks as if the cynical view that even good actions have selfish motives may well be wrong. Empathy is common in very small children, who often respond to another’s distress with crying and sadness, and may attempt to comfort them with a hug or a cuddly toy. Some psychologists believe that behaviour like this signals the start of moral development.3

Although empathy may be an important component of moral behaviour, morality cannot rely on empathy alone because this emotion is too circumscribed and partial. It can also lead us to make unfair decisions – taking sides in a dispute, for example. Another explanation for why people behave well is that they are motivated not by emotions but by reasoned moral principles. This is what Lawrence Kohlberg proposes in his ‘cognitive-developmental’ model of morality.4 Children, he says, begin at a ‘preconventional’ level in which they see morality in relation to obedience and punishments from adults. At the second, ‘conventional’ level, reached in late childhood or early adolescence, they are oriented first to pleasing and helping others and later to maintaining the existing social order. At the third and highest stage of moral development – reached by only a small proportion of adults – people begin to define moral values and principles such as human dignity, justice and universal human rights. According to this theory, morality is a matter of cognitive (not emotional) development: it matters not one whit whether we care about or empathize with other people so long as we respect their rights as human beings.

Some critics, notably feminist psychologist Carol Gilligan, have challenged the theory as sexist: men may favour abstract theoretical notions of rights and justice, but women, she says, are more likely to construct morality rooted in their sense of connection with other people, a morality of care and empathy.5 Others criticize the ethnocentrism of the model, pointing out that Kohlberg has elevated to the highest stage of moral development precisely those views most likely to be held by white, middle-class, educated North Americans.6

It’s more likely that moral behaviour comes about in a variety of ways: sometimes we may act well in the hope of rewards; other times good behaviour may be motivated by empathy; sometimes it is the outcome of reasoned moral arguments. Crucially, though, neither strong feelings of empathy nor high moral principles guarantee that people will behave well. There is often a gap between moral beliefs and moral action – between how people think and hope they would behave in a situation and how they actually do behave. Some of the classic studies of psychology were prompted by situations in which people failed to act in accordance with their moral values.

In the 1960s a young woman named Kitty Genovese was murdered by a man who raped and stabbed her repeatedly for half an hour in front of 38 residents of a respectable New York City neighbourhood. Nobody went to help her. Only one person finally called the police, after she was dead. This incident prompted a flood of research into what became known as the ‘bystander effect’ which examined why people don’t intervene when others are in pain or in danger.7 Sometimes people fail to intervene out of callousness or indifference. But more often they fail to act in spite of what they feel they should do, and then feel ashamed afterwards. Why is this?

A common finding is that people are uncertain how to behave because, unsure about what they are seeing, they conform with the behaviour of others, who are equally unsure. Emergencies are rare events which happen suddenly and unexpectedly. How can we know that an emergency is real and is not a prank, a game, or a film being produced? The safest thing is to sit tight and wait to see how others react. If nobody else does anything, then people worry about making fools of themselves. A large group can stand by and do nothing, each lulled into thinking that there is no need to act, each waiting for someone else to make the first move. What looks like callous indifference is actually fear of what other people will think if they make an inappropriate response in an ambiguous situation.

Someone in Kitty Genovese’s situation is less likely to be helped if many people are watching than if only one person witnesses the attack. For example, subjects asked to wait in a room before being interviewed heard a woman in the next room apparently fall, hurt herself, and cry out in distress. Of those waiting alone, 70 per cent went to help her, compared with only 7 per cent of those waiting with a stranger who did nothing. Today’s altruist may be tomorrow’s passive bystander; it all depends on the social situation because people tend to behave in accordance with socially prescribed roles rather than as individuals.

In a well-known study by Stanley Milgram, subjects were recruited through newspaper advertisements for what was described as ‘an experiment in learning’. They were seated in front of a shock machine that could administer up to 450 volts to the ‘learner’, a man strapped into a chair.8 Each time the ‘learner’ made a mistake, the subject had to pull a lever to give him an electric shock, increasing the voltage each time. (In fact, the lever was a dummy, and the ‘learner’ was acting out his responses.) At 150 volts the learner started shouting. At 180 volts he cried out in pain and pleaded to be released. At 300 volts he screamed with pain and yelled about his heart condition. Later still there was only deathly silence. If subjects wanted to stop giving shocks, the experimenter said only ‘the experiment requires that you continue’. No threats, no incentives to go on, just the order. Under these conditions – and contrary to the predictions of psychiatrists, who had guessed that virtually no-one would obey to the end – nearly two-thirds of subjects delivered the full range of shocks, proceeding beyond the levers marked ‘Danger: Severe Shock’ to the ones marked ‘XXX’.

These people were not sadists or psychopaths. They were ordinary people who believed that you shouldn’t hurt others, who often showed empathy for the learner, and who disliked what they were ordered to do. Virtually all of them complained to the experimenter and asked for permission to stop giving shocks. But when ordered to continue, the majority did as they were told. As Milgram says: ‘With numbing regularity, good people were seen to knuckle under the demands of authority and perform actions that were callous and severe.’ Women were as likely as men to deliver shocks up to the maximum intensity.

What all these studies illustrate is the extent to which moral behaviour is a social, not an individual issue. In thinking about why people fail to offer help, why they behave punitively, or why they inflict pain on others, we often resort to explanations which depend on individual characteristics – their personal religious beliefs, their capacity for empathy, their understanding of moral principles, or the kind of upbringing they had. But these explanations overlook the key role of social context. The frightening truth uncovered by these classic psychological studies is that it is not too difficult to set up situations in which most of us behave worse than we could have thought possible, out of conformity, fear of what others might think, loss of individual identity or obedience to authority.

The traditional view of moral behaviour is that people are intrinsically selfish beings whose natural anti-social impulses have been curbed by social structures designed to promote obedience to authority, law and order. An alternative possibility is that people are fundamentally pro-social beings, whose ability to act on altruistic impulses and moral principles is sometimes inhibited by precisely these social pressures. At the very least it is obvious that this is sometimes true, and that we need to develop ways of recognizing and challenging those social pressures which result in apathetic or cruel behaviour in our everyday lives.

Celia Kitzinger teaches psychology at the University of Loughborough, England.

1 Richard Dawkins, The Selfish Gene, OUP 1976.
2 CD Batson, The Altruism Question, Erlbaum Associates 1991.
3 C Zahn-Waxler & M Radke-Yarrow, ‘The Development of Altruism’ in N Eisenberg-Berg (ed.) The Development of Prosocial Behaviour, Academic Press 1986.
4 L Kohlberg, The Philosophy of Moral Development, Harper and Row 1981.
5 C Gilligan, In a Different Voice, Harvard University Press 1982.
6 EL Simpson, ‘Moral Development Research: A Case Study of Scientific Cultural Bias’, Human Development 17, 1974.
7 B Latané & JM Darley, The Unresponsive Bystander: Why Doesn’t He Help? Appleton-Century-Crofts 1970.
8 S Milgram, ‘Some Conditions of Obedience and Disobedience to Authority’, Human Relations 18, 1965.
