not reasoning their way toward them.
The observation prompted him to develop a new perspective on how people make moral decisions. Drawing on his own research and that of others, he argued that people make two kinds of moral decision. One, which he called moral intuition, comes from the unconscious mind and is made instantly. The other, moral reasoning, is a slower, after-the-fact process made by the conscious mind. “Moral judgments appear in consciousness automatically and effortlessly as the result of moral intuitions.... Moral reasoning is an effortful process, engaged in after a moral judgment is made, in which a person searches for arguments that will support an already made judgment,” he wrote. 15
Moral reasoning, which had received almost exclusive attention from philosophers and psychologists for centuries, is in Haidt’s view just a façade, serving mostly to persuade others that the person has reached the right decision. People do not in fact know how they arrive at their morally intuitive decisions, because these are formed in the unconscious mind and are inaccessible to introspection. So when asked why they made a certain decision, they review a menu of logically possible explanations, choose the one that seems closest to the facts, and argue like a lawyer that this was their reason. That, he points out, is why moral arguments are so often bitter and inconclusive. Each party makes lawyerlike rebuttals of the opponent’s arguments in the hope of changing his mind. But since the opponent arrived at his position intuitively, not for his stated reasons, he is of course not persuaded. The hope of changing his mind by reasoning is as futile as trying to make a dog happy by wagging its tail for it.
Haidt then turned to exploring how the moral intuition process works. He argued, based on a range of psychological experiments, that the intuitive process is partly genetic, built in by evolution, and partly shaped by culture.
The genetic component of the process probably shapes specialized neural circuits or modules in the brain. Some of these may prompt universal moral behaviors such as empathy and reciprocity. Others probably predispose people to learn the particular moral values of their society at an appropriate age.
This learning process begins early in life. By the age of two, writes the psychologist Jerome Kagan, children have developed a mental list of prohibited actions. By three, they apply the concepts of good and bad to things and actions, including their own behavior. Between the ages of three and six, they show feelings of guilt at having violated a standard. They also learn to distinguish between absolute standards and mere conventions. “As children grow, they follow a universal sequence of stages in the development of morality,” Kagan writes. 16
That children everywhere follow the same sequence of stages suggests that a genetic program is unfolding to guide the learning of morality, including the development of what Haidt calls moral intuition.
Such a program would resemble those known to shape other important brain functions. The brain does much of its maturing after birth, forming connections and refining its neural circuitry as the infant encounters relevant experience in the outside world. Vision is one faculty that matures during a critical period; language is another; moral intuition is a third.
Damage to a specific region of the prefrontal cortex, the ventromedial area located just behind the bridge of the nose, is associated with poor judgment and antisocial behavior. Neural circuitry in this part of the brain is evidently involved in the cultural shaping of moral intuitions.
The existence of special neural circuitry in the brain dedicated to moral decisions is further evidence that morality is an evolved faculty with a genetic basis. In the well-known case of Phineas Gage, a thin iron rod was shot through his frontal lobe in a railroad construction accident.