There is a major psychological flaw in how society punishes people


People transgress. They are punished. They start to cooperate. This fundamental intuition – that people are rational and therefore respond to punishment by changing their behavior – is at the heart of Western legal systems, economic theories of crime, and evolutionary theories of cooperation. The only problem is that decades of research suggest punishment doesn’t actually work.
Analyses of past studies consistently find that harsher penalties, such as “three strikes and you’re out” laws, do not reliably reduce crime. In its report on the death penalty, the US National Research Council was unable to draw any conclusions about the punishment’s effectiveness. Meanwhile, the United States, home to one of the most punitive criminal justice systems in the world, continues to experience high rates of incarceration and recidivism.
These real-world findings contradict much of the experimental literature. In a famous study, the economists Ernst Fehr and Simon Gächter created a game in which players were given money and the opportunity to contribute to a shared pool. The pool was multiplied and redistributed among the players, so everyone benefited more when everyone contributed – but each individual did best by holding back while others contributed. When participants could not punish free riders, cooperation decayed; once punishment was introduced, contributions to the pool rose significantly.
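The tension the game creates can be seen in a few lines of arithmetic. Below is a minimal sketch of one public goods round, assuming an illustrative endowment of 20 units, four players, and a pool multiplier of 1.6; the exact parameters varied across Fehr and Gächter’s sessions, and these numbers are only for illustration:

```python
def payoffs(contributions, endowment=20, multiplier=1.6):
    """Public goods round: contributions are pooled, multiplied,
    and the pool is split evenly among all players."""
    pool = sum(contributions) * multiplier
    share = pool / len(contributions)
    # Each payoff = what the player kept + their equal share of the pool.
    return [endowment - c + share for c in contributions]

# Everyone contributes fully: each player ends with 32.0, beating the endowment.
print(payoffs([20, 20, 20, 20]))

# One free rider: the free rider ends with 44.0, the contributors with 24.0.
print(payoffs([0, 20, 20, 20]))
```

Because the free rider walks away with more than any full contributor, rational players drift toward contributing nothing, which is why cooperation collapses without some way to sanction them.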
So what is happening in the real world that experiments fail to capture? We explored this conundrum in a recent article in PNAS. We began by observing that, in society, the people who punish often face incentives that undermine their legitimacy and erode our trust in them. In Ferguson, Missouri, officials used fines to fund city services, disproportionately targeting Black residents. Across the United States, billions of dollars have been seized through civil asset forfeiture, which allows police to confiscate property from people merely suspected of involvement in a crime.
We hypothesized that these kinds of self-serving punishment motives may undermine cooperation because they muddy punishment’s moral signal. Unlike other animals, humans have a “theory of mind”: we are hyper-attuned to the intentions and motivations of others. Punishment sends a message of disapproval that demands a change in behavior, but that signal only works if we believe the punisher’s motives are just. Humans are social beings who ask, “Why are you doing this to us?” If the answer seems selfish, punishment loses its power to promote cooperation.
To test this idea, we ran a series of experiments using the same games that had shown punishment stimulating cooperation. In these games, one player (the dictator) decides whether or not to share money with another (the receiver), while a third (the punisher) can choose to take money from the dictator. But we added a twist: we paid the punishers. Much as a police department might rely on ticket quotas to raise revenue, our punishers received a financial bonus each time they punished the dictator. When we did that, the classic effect reversed: instead of strengthening cooperation, punishment weakened it. People became less willing to cooperate because their trust in the punishers diminished.
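The paid-punisher manipulation changes the punisher’s incentives in a simple way. In the sketch below (the bonus amount is an illustrative assumption, not a figure from the study), punishing becomes the punisher’s best-paying move regardless of what the dictator did, so the act no longer carries information about the dictator’s behavior:

```python
def punisher_earnings(punish: bool, bonus: float = 2.0) -> float:
    """Earnings for a punisher who receives a flat bonus per punishment.

    The bonus of 2.0 is an illustrative assumption; in the unpaid
    baseline the bonus would be zero or punishing would be costly.
    """
    return bonus if punish else 0.0

# The punisher's best move is identical whether or not the dictator shared,
# so punishment stops signaling genuine disapproval of free riding.
for dictator_shared in (True, False):
    best_move = max((False, True), key=punisher_earnings)
    print(f"dictator shared: {dictator_shared} -> punish anyway: {best_move}")
```

Once observers realize punishment pays the same no matter what, they can no longer read it as a moral judgment, which is one way to formalize the loss of trust we measured.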
Our results suggest that we need to rethink how we control crime. When punishers are seen as self-interested, punishment breeds distrust and undermines the very cooperation it is meant to sustain. If we want to build safer, more collaborative communities, we must dismantle practices that corrupt the moral message of punishment. That means ending policies like ticket quotas and for-profit incarceration – practices that signal punishment is driven by profit, not justice.
Raihan Alam and Tage Rai are at the Rady School of Management at the University of California, San Diego.