In his New York Times article, The Moral Instinct, Steven Pinker proposes a new science of the moral sense which will allow us “to see through the illusions that evolution and culture have saddled us with….” However, most of what is of value in his proposal is not new: it was known to philosophers from antiquity and, in some cases, even to the man on the street.

The Harvard psychologist opens by suggesting that people are misled by Mother Theresa’s garb and ascetic appearance into ranking her as morally superior to Bill Gates and Norman Borlaug. His evidence that they are mistaken about her moral superiority lies in the fact that Gates has helped more people by contributing money to fight parasitic diseases, and Borlaug has saved more lives by his contribution to agricultural science reducing hunger, while Mother Theresa really did not help people all that much because her clinics offered primitive care despite her missions being well-funded.
Pinker seems to have fallen victim here to a moral illusion of his own. The absolute number of people helped is not a measure of moral superiority. It is the intention that matters. This, of course, is something very difficult to judge, and indeed most of the time it is not our business to judge it. While I do not doubt that Mother Theresa’s “image” does have weight in some people’s minds when it comes to evaluating her moral goodness, certainly many who regard her as holy have looked beyond her “aura of sanctity”. Pinker’s general point here is nothing new: custom and emotion can bias moral judgments. He goes on to proclaim that a new science of the moral sense can help uncover such biases. But does it?

Reasoning, or rationalizing?

Pinker begins with a couple of non-scientific remarks, namely, that moral prohibitions are thought not to be a matter of mere custom, but to “be universally and objectively warranted”; and that people think immoral acts should be punished. Then in the section entitled “Reasoning and Rationalizing” he claims that it is “not just the content of our moral judgments that is often questionable, but the way we arrive at them. We like to think that when we have a conviction, there are good reasons that drove us to adopt them.” He goes on to give scenarios which most people would identify as involving immoral behaviour, without necessarily being able to justify their view. For example, people generally say that a single, never-to-be repeated act of incest between brother and sister (both using contraception) is wrong, but are unable to say why. This leads Pinker to endorse the view that people do not engage in moral reasoning, but “they begin with the conclusion, coughed up by an unconscious emotion, and then work backward to a plausible justification”.

Certainly people do on occasion engage in after-the-fact rationalizations. Pinker, however, fails to consider a view Aristotle proposed long ago: in moral reasoning we have to start from what is best known, and what is best known is the fact that certain things are right or wrong (see Nicomachean Ethics, Bk 1, Ch 4). Why is it wrong to kill innocent human beings, but not to kill at least certain other life forms? This is not easy to explain, but it does not prevent people from ordinarily recognizing murder as morally wrong. It would be a mistake to hold in doubt that murder is wrong until one could come up with a rationale for why it is so. This is not to say that all knowledge of right and wrong can be arrived at without a rationale. However, only fairly infrequently are people faced with a moral dilemma that requires extended reasoning to resolve.

Pinker talks about a fictional scenario that some contemporary moralists make much ado about. Here you are to imagine a runaway trolley, whose driver has passed out, about to hit five people. In version one you have the opportunity to flip a switch that would divert the trolley onto another track, where it would kill only one person. In version two you have the opportunity to push a person into the trolley’s path, stopping it and thereby saving the five people.

Now, without doing a survey or brain scans, common sense indicates that people would almost universally reject the second because this would involve directly doing evil (using a bad means to achieve a good end), whereas people might be more conflicted in the first case because it is not so obvious that one is using a bad means to achieve a good end. Also, we would expect people to feel an emotional revulsion at the thought of directly harming an innocent person that they would not feel in the more distant first situation. When Pinker tells us that the emotional part of the brain lights up in the latter case, what’s new? He notes approvingly the view of Joshua Greene, a philosopher and neuroscientist, who “suggests that evolution equipped people with a revulsion to manhandling an innocent person” and it is this impulse that overwhelms any utilitarian calculus as to the number of lives saved. However, both Pinker and Greene fail to note that humans are also guided by rational principles, such as “do good and avoid evil”, a consequence of which is that one may not directly do evil — despite any good which might result. Making a non-utilitarian decision is not just a matter of “the victory of emotional impulse over a cost-benefit analysis” — which is not to deny that some people blindly follow emotion.

Moral sense is natural, but not just ‘in the brain’

Pinker asserts that the “idea that the moral sense is an innate part of nature is not far-fetched” and follows this claim with a list of behaviours censured or praised by humans pretty much across the board. Rape and murder, for example, are considered bad and generosity good. The idea of natural law is hardly a discovery of modern science. Pinker makes the claim that the “moral sense, then, may be rooted in the design of the normal human brain”. The statement is ambiguous. Aristotle would root our ability to make moral judgments primarily in reason, while recognizing that reason depends on experiences that are stored in the brain. Thus, damage to certain parts of the brain would impede or incapacitate moral reasoning. Pinker only mentions reason at the very end of the article in an attempt to explain how we distinguish what is truly moral from what is not. Nothing he says about reason and about what is in effect the “Golden Rule” is the fruit of scientific research. Indeed, he even acknowledges that “the core of this idea — the interchangeability of perspectives — keeps reappearing in history’s best-thought-out moral philosophies”.

We might note here that it is not uncommon for neuroscientists to give the impression that morality is simply a function of the brain, and that if one rewired the brain, morality would change. So, for example, people might be rewired to crave incest and to have a taste for eating dirt. Even then, however, the natural law would not change. It still would not be rational to engage in incestuous intercourse given the harmful effects of interbreeding on human communities. Nor would it become rational for people to eat dirt because they felt like it. It would still be bad for their health. Irrational emotions would not become rational and moral simply because the majority felt them.

Now one might press the case and ask, “Well, what if human nature was changed via genetic engineering so that people would thrive on dirt?” Even then, the natural law would not change. The natural law does not command specific acts, but only general things, such as “eat foods conducive to health”. It is already the case that different foods are conducive to the health of different individuals, and this in no way affects the universal content of natural law. The same holds true for the moral imperative “do not harm the innocent”. What is harmful to a given individual depends in part on their physical make-up. Plainly, this must be taken into account when we are dealing with them. We may learn, as we have in the case of second-hand cigarette smoke, that things we previously thought harmless are harmful. This allows us to extend the natural law concerning harm, but it doesn’t change the core idea that innocent people are not to be harmed.

According to Pinker, “most of our moral illusions come from the unwarranted intrusion of one of the moral spheres into our judgments”. Where do these “moral spheres” or “primary colours of our moral sense” come from? They come from the interpretation of a worldwide survey of responses to specific scenarios conducted by Richard Shweder and Alan Fiske. These primary colours are: “harm, fairness, community (or group loyalty), authority and purity”. Now, it is certainly worthwhile to see if humans in general share the same moral norms; this supports the idea of a natural law. However, when it comes to designating certain elements of the moral life as primary, more systematic analysis is needed than a mere poll — and a poll, at that, whose principles of interpretation are not made explicit. To say that “purity” is a primary moral concern is ambiguous at best. The person caring for a bed-ridden individual by emptying that individual’s bedpan obviously does not put purity before care of others. Those who helped free slaves through the Underground Railroad did not regard deference to legitimate authority as their moral duty. Cases such as these plainly show that this study and its interpretation are inadequate to give us a genuine understanding of human moral nature. Is Pinker unaware that many philosophers have addressed this subject? Or is he too enamored of science to even look in that direction?

Evolutionary tendencies, or virtues?

Pinker tries to justify the five spheres of morality by showing that they have deep evolutionary roots. In regard to three of them, he points out that rhesus monkeys will avoid pulling a chain that delivers food to them when pulling that chain results in a shock to other monkeys, that many animals have dominance hierarchies, and that purity concerns are rooted in the natural reactions of disgust triggered by potential disease vectors. Again, the issue is not whether such things are found in other animals, but what form they take in human morality. As noted above, humans are capable of making rational judgments vis-à-vis authorities and purity concerns. As for dispositions toward helpfulness, there is nothing particularly surprising in the fact that we, like other social primates, are naturally disposed to be somewhat cooperative rather than aggressive. This doesn’t mean, though, that following one’s feelings of sympathy is always morally right. People who enable others merely out of feelings of sympathy are morally blameworthy for any resulting harm.

In the 1960s and 1970s, sociobiology — made famous by Richard Dawkins’ book The Selfish Gene — gave an account of altruism which purports to show that altruism leads to the maximal spread of one’s genes. If we concede for the sake of argument that people’s altruistic tendencies originated in this fashion, as Pinker is inclined to do, the evolutionary account still fails to distinguish clearly between a tendency and a deliberated decision to act on that tendency. Aristotle noted long ago that certain people were born disposed to being generous. Such a disposition, however, is not the same thing as having the virtue of generosity. A person who acted upon this feeling without reflection would not be morally virtuous, and would be liable on occasion to perform blameworthy acts, such as helping a person who ought rather to be told to help himself or herself. Knowing how dispositions of this sort arose in evolutionary terms does not shed any light on their relationship to morality. A philosophical discussion of the relationship of “natural virtue” of this sort to moral virtue is required.

Towards the end of the paper Pinker asks whether the new science of the moral sense is morally corrosive. Does it reduce human behavior to genes? Does it promote relativism? Does it reduce morality to a figment of our neural circuitry? Maybe, maybe not. The main question, though, is: does it teach us anything that we didn’t already know on the basis of common sense? According to Pinker, science teaches us that those with whom we have a moral disagreement may be mistaken and not base, and that the moral sense is vulnerable to illusions. But people figured out long before the rise of science that mistaken moral judgments do not always proceed from bad will, and that people’s moral judgments are often biased by bad customs and by emotion.

Pinker’s new science of morality might sound impressive to someone who is unfamiliar with moral philosophy. However, as we have seen, it fails to deliver anything new. And this is not surprising. Science’s task is to give detailed knowledge about the natural world; thus it can provide us with detailed knowledge about human nature. Such knowledge can be very useful in making specific moral decisions — for example, alerting us that it would be wrong to give peanuts to a child one is babysitting before asking about nut allergies. Human morality, in the sense of a system of general moral guidelines, is derived from general knowledge about human nature: from reflections on the nature of conscience, choice, basic human goods, the general role the emotions play in human happiness, and so on. When it comes to these points, Pinker is clearly out of his depth, as can be seen in his ready acceptance of the five primary moral colours theory.

Marie I. George is Professor of Philosophy at St. John’s University, New York. An Aristotelian-Thomist, she holds a PhD from Laval University and an MA in biology from Queens College, NY. She has received a number of awards from the Templeton Foundation for her work in science and religion.