The original “Hamilton” score includes a number of quotations from American hip-hop songs. Most of them were cut from the German version because the translations made them unrecognizable… The original language is packed with American metaphors and idioms that just don’t translate. So the translators were given license to come up with their own turns of phrase.
“Morality is analogous to language. Both are human universals, but the specifics of each vary by culture and change over time.
Morality and language are both governed by certain rules. Though languages differ, they all have some underlying similarities. This is also the case with moral edicts. The specifics may differ by time and place, but all languages have rules about nouns. Though the specifics differ, all moralities have rules about harm.
We also absorb both language and moral beliefs through osmosis. Who you grew up around is the best predictor of what language you speak and what your moral values are.
Some people are moral nihilists who say nothing is really right or wrong because morality differs so much by time and place and culture. Others are moral realists who think there is one true morality, and that moral values are indistinguishable from objective facts.
My view is that morality is ‘real’ in the same way that language is real. Both can change, but still operate within certain constraints. There are rules to every language, and rules to every morality.
Saying morality isn’t real is like saying language isn’t real. Saying there is one true morality is like saying there is only one true language.”
The great thinkers of the Enlightenment, and many others since then, have believed that science can deliver moral truths. It can’t.
But the methods of psychological research can tell us what people think and feel about deception, betrayal, subversion, violence, and so on.
Psychology can shed light on our intuitions and beliefs about this peculiar thing we call morality.
I submitted my PhD thesis a few weeks ago. The title:
“Physical and Social Threats Fortify Moral Judgements.”
I figured I’d summarize what I did in my thesis here. This exercise will serve as a refresher for me before my upcoming defense.
In addition to presenting something new (either empirical or theoretical) that no one has said before, PhD candidates are typically expected to demonstrate mastery, or at least in-depth knowledge, of their area of inquiry. This isn’t the case for math, where there are no word count requirements, because math is real and everything else is fake. (Joking, sort of).
I’ll try to compress the ~60K words I wrote into something that isn’t too unwieldy or uninteresting.
I include links to studies only when I directly refer to a specific finding in previous research rather than for broad or general claims.
With any luck, you’ll enjoy reading this summary more than a typical PhD student enjoys writing their dissertation.
I begin with an overview of the state of moral psychology research. Then I outline three separate but related lines of research I conducted. I conclude by discussing the implications of my findings, how they relate to previous research, and why it matters.
The Roots of Morality
Scientists, philosophers, and theologians have long struggled to understand the origins of morality.
For many people, the nature of morality is so bewildering that they believe it must have a supernatural origin. However, research across different disciplines has helped to resolve this enduring mystery.
Morality arose to address evolutionary challenges.
Evolutionary biologists, animal researchers, and comparative psychologists have documented instances of selfless and empathic acts in nonhuman animal species. These findings are key to understanding the origins of human morality.
Moreover, developmental psychologists have found that human infants show preferences that indicate a rudimentary sense of morality that is later refined and shaped by culture.
The discovery of such behaviors and preferences in animals and human infants suggests that morality serves an adaptive purpose. Moral edicts suppress selfishness, prevent cheating, and promote cooperation and trust. The importance of morality in promoting social cohesion has been explored by thinkers and scholars from a range of disciplines.
In 1739, the Enlightenment philosopher David Hume wrote that the moral passions promote “the general interests of society.”
In 1971, the political philosopher John Rawls suggested that “The circumstances of justice may be described as the normal conditions under which human cooperation is both possible and necessary.”
In 2013, the comparative psychologists Michael Tomasello and Amrisha Vaish wrote that “Human morality arose evolutionarily as a set of skills and motives for cooperating with others.”
In 2015, the neuroscientist Joshua Greene wrote “The core function of morality is to promote and sustain cooperation.”
Modern accounts informed by evolutionary theory suggest that morality is not solely due to socialization. In the human ancestral environment, beliefs about right and wrong were crucial for survival.
Throughout the course of evolution, human ancestors developed mental adaptations shaped by selection pressures to address recurring threats. The moral sense is one such mechanism: a solution to frequently encountered threats.
Morality confers benefits.
Across societies and throughout history, morality aids in solidifying social bonds, enhancing trust, minimizing resource depletion, and reducing the odds of infection, illness, and death, among other benefits.
Moreover, people who report greater commitment to moral principles exhibit a greater sense of purpose in life. They are less likely to report feelings of alienation, which in turn is associated with greater life satisfaction.
A 2014 field study found that people who committed moral deeds in everyday life experienced a greater sense of purpose, increased feelings of meaning in life, and improved happiness. In contrast, people who performed immoral deeds reported reduced feelings of happiness. Intriguingly, committing an immoral act was associated with a happiness penalty that was greater than the happiness gain from committing a moral act.
Thus, moral behavior is intertwined with emotions. This suggests an evolutionary benefit to adhering to moral proscriptions.
This is because positive emotions arose to direct organisms toward goals that, on average, increase the likelihood of survival and reproduction. Positive emotions did not evolve just because they happen to feel good. Good feelings evolved to motivate certain behaviors that have some adaptive benefit.
A modification is needed for Homo sapiens in the modern world: Positive emotions direct humans to do things that, on average, in the ancestral environment, would have increased the likelihood of survival and reproduction. This is because human culture has evolved far faster than human bodies.
A common example is the consumption of calories. Twenty-thousand years ago, gorging on energy-rich foods was adaptive. But today, when you can order a 3,000 calorie pizza on your iPhone, it is maladaptive. Within developed countries, abundance has become at least as much of a challenge to health as scarcity. We are adapted not for modern environments, but for ancestral ones. So when people say such and such behavior is evolutionarily adaptive, they mean in the ancestral environment—before Uber Eats, digital porn, free heroin needles, and so on.
The psychologists Daniel Nettle and Melissa Bateson have defined emotions as “suites of cognitive, motivational, and physiological changes that are triggered by appraisal of specific classes of environmental situations.”
This is consistent with the “affect-as-information” hypothesis, which suggests that the function of emotions is to alert individuals to changes in their environment. And to respond appropriately.
In addition to good feelings, people have bad feelings too.
Emotions signal the presence of events that may have important survival implications. Different emotions are activated depending on whether an event is a threat or an opportunity. Emotions motivate action.
To influence outcomes and shape behavior, knowledge—cognitive awareness—is not enough.
Knowledge has little motivational power. I can know that drugs are bad for me yet still do them. I can know that being cruel to marginalized individuals is bad but still do it. But if my community condemns me for doing it, then certain emotions will activate (e.g., shame, humiliation, embarrassment) that motivate me to change my behavior.
With regard to morality, when people do things that are in accordance with their local moral norms, they typically experience good feelings. When they do things that violate the community’s moral norms, they tend to experience negative feelings (e.g., guilt, shame).
Such feelings about oneself arose in part because of concerns about reputation. Upholding moral norms boosts social standing. Failing to do so does the opposite—signaling that one is untrustworthy, dangerous, or does not value the individual or group. In other words, behaving unethically can result in severe social consequences with long-term costs. This is why people adjust their behaviors according to reputational concerns. When people fail to live up to their local moral norms, they often experience intense guilt and shame, which then motivates them to repair the harm done to their reputations and resolve impairments to their relationships.
In fact, a key function of morality is to motivate individuals to forgo immediate personal gain for long-term benefits. Stealing may pay off in the short term. But developing a reputation as a thief can elicit condemnation, reduce social standing, diminish the odds of social and romantic relationships, decrease access to important resources, and, ultimately, thwart fitness.
Thus, morality is a compelling motivational force that holds a unique place in human social interactions.
Some researchers have considered both proximate and ultimate levels of analysis for understanding human nature.
-Proximate explanations focus on the direct reason why an individual behaves in a certain way
-Ultimate explanations focus on why evolution favors such behavior in the first place
People like to eat pastries. The proximate reason why we like them is that they taste good. The ultimate reason is that calorie-rich foods gave our ancestors energy. Nature selected those who enjoyed sugar. And passed this affinity for sugary treats to us.
If you ask people why they do something, they will typically give you the proximate explanation.
No one says, “I’m eating this cronut because it is full of energy which will aid my evolutionary goals.” They just like the taste.
With respect to morality, the proximate reason people favor adherence to moral norms is because it feels good. It improves personal well-being and maintains and enhances social standing.
The ultimate reason morality arose is because in the ancestral environment, moral norms tended to help overcome obstacles to the ability to survive and reproduce.
The purpose of this section is to convey that morality is evolutionarily adaptive and arose to resolve challenges to fitness.
The Function of Moral Condemnation
When people assess whether moral wrongdoing has occurred, they consider intentions, beliefs, and desires, as well as outcomes. They determine how much the behavior departs from the “right,” or “just,” or “ethical” way to act.
Moral condemnation plays a distinct role in deterring unethical behavior. This is because people care deeply about maintaining good reputations.
Humans punish and ostracize those who violate their community’s taboos or engage in acts that inflict undesirable costs on themselves and other group members. Once reputation became a foremost concern among early humans, undermining reputations or threatening to do so became an effective strategy for shaping behavior and promoting cooperation.
In the ancestral environment, a good reputation would have led to a host of benefits (e.g., social allies, romantic partners, greater access to resources). A bad reputation would have led to penalties (e.g., social exclusion, ostracism, fewer allies and romantic prospects, reduced access to resources).
Humans experience outrage when they observe others breach moralized customs and are typically motivated to inflict costs on the perpetrators. Intriguingly, moral violations fuel third-party punishment, such that even observers who are not personally victimized but see or learn of a violation subsequently support punishment for the wrongdoer. Third-party punishment in itself bolsters reputation. Many people who observe an individual express outrage on behalf of a victim subsequently view that individual as more trustworthy.
Moral condemnation can dissuade would-be exploiters by altering their cost-benefit calculus, leading them to consider whether the benefit of the transgression is worth the damage to their self-image and their social reputation. It is an important device for deterring morally objectionable behavior. Indeed, being the target of opprobrium is itself a form of punishment-by-stigmatization. Public ridicule leads observers to subsequently lower how much they value the transgressor.
Thus, condemnation is a powerful tool to deter unethical actions, because people typically do not want to incur damage to their reputations.
Other research indicates that although people are quick to assign blame for the negative outcomes of an individual’s actions, they are less likely to dole out praise if an individual is responsible for positive outcomes.
One possible reason for this is that people infer ambiguity in the motives of those who do good things. Because moral acts enhance social standing, observers might judge such acts as stemming from a calculated desire to appear moral rather than from genuinely kind motives.
In fact, studies have found that people tend to overestimate how self-interested people are when they engage in selfless actions. In contrast, people are accurate in judging how self-interested people are when they engage in selfish actions. The researchers termed this phenomenon “attributional cynicism” because people often view moral acts as at least partially a reflection of self-interest, despite the fact that kind gestures tend to be sincere.
Moral judgments appear not to be confined to acts alone, but extend to considerations of the risk an individual poses more generally. That is, morality entails judgments not just about isolated behavior, but about character as well.
A study from 1992 asked people to read stories about a driver involved in a traffic accident. Despite several plausible alternative causes (e.g., an oil slick, a blind intersection, another driver), people were more likely to blame the driver when he or she was driving home to hide cocaine compared with driving home to hide their parents’ anniversary gift (as I write this summary, I wonder how people’s judgments would have differed if the driver’s anniversary gift for his parents was a brick of cocaine).
The strength of people’s willingness to assign blame shifted depending on the morality of the errand and, presumably, of the driver.
More recently, a 2011 study found that people are more willing to assign blame and punishment to individuals described as having negative versus positive character traits, even if they committed the same moral transgressions. For example, people assigned less blame to a person described as a loving aunt who spoils her nieces and spends her free time volunteering at local charities compared with a person described as rarely visiting her nieces and spending her free time watching “trash-TV talk shows,” even when both characters committed identical infractions.
Interestingly, though, for individual acts, people appear to err on the side of condemnation rather than exoneration.
A 2020 study found evidence for what the researchers termed “promiscuous condemnation.” Across nine studies, people tended to assume that ambiguous actions (e.g., “John pelled Mary”) are immoral, unless additional contextual cues clearly indicated the absence of wrongdoing.
In other words, when confronted with moral ambiguity, people respond with suspicion that something bad has occurred, unless given a plausible reason to believe otherwise. The authors of the study suggest that this variation of the negativity bias might lead to Type I errors, or false accusations of wrongdoing. However, it might also foster prosociality, because people may work extra hard to adhere to moral norms to avert any potential suspicion about their moral character.
Concern about reputation is why humans adhere to local moral norms, cooperate more frequently, and behave more generously than is predicted by standard economic theories. A simple example comes from studies on the Dictator Game. In this game, there are two players. One player gets some amount of money and is told they can give some of it to the other player (who is usually a stranger in these kinds of lab studies). So you get, say, ten bucks, and I get zero. You can give me some if you want. That’s the whole game.
Studies indicate that, on average, people in the dictator game give about 30% of their endowment to the other player. This finding holds in both WEIRD (Western Educated Industrialized Rich Democratic) and non-WEIRD countries. Why are people relatively generous to strangers in one-shot interactions?
Because early humans never encountered these contrived scenarios. All of their interactions involved dealing with people they would almost certainly meet again. And thus, it was in their interest to cultivate a reputation as a decent and generous person. We carry those inclinations in our minds today, and typically have to override this natural propensity for generosity to behave in an economically “rational” manner and give the other player nothing. Which is actually the most profitable thing to do in the context of the Dictator Game.
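As a purely illustrative aside (not something from the thesis), the structure of the game can be sketched in a few lines of code. The endowment size, the noise level, and the function name below are my own assumptions; the only empirical anchor is the roughly 30% average transfer described above.

```python
# Illustrative sketch of a one-shot Dictator Game (hypothetical values).
# The ~30% average transfer is the only figure carried over from the studies
# summarized above; the endowment and noise are assumed for illustration.
import random

def play_dictator_game(endowment: float = 10.0, mean_share: float = 0.30) -> dict:
    """Simulate one dictator's decision as a noisy draw around the average observed share."""
    share = min(max(random.gauss(mean_share, 0.15), 0.0), 1.0)  # clamp the share to [0, 1]
    transfer = round(endowment * share, 2)
    return {"dictator_keeps": round(endowment - transfer, 2), "recipient_gets": transfer}

# The payoff-maximizing ("economically rational") choice is to transfer nothing;
# observed behavior instead averages a transfer of roughly 30% of the endowment.
print(play_dictator_game())
```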
Returning to reputation, a 2018 study analyzed data from 100 countries and found that people around the world rated their moral reputation as more important than everything except physical safety. In a follow-up study, more than half of participants responded that, given the choice, they would rather die than live the rest of their lives being falsely accused of being a pedophile. Seventy percent of participants stated they would rather have one of their limbs amputated than for others to believe they are a Nazi.
That so many individuals report that they would sacrifice a body part in order to preserve their moral reputation suggests that Thomas Jefferson was on the right track when he wrote in 1787 that, “The moral sense, or conscience, is as much a part of man as his leg or arm.” Interestingly, this metaphor drawing a comparison between human limbs and morality extends at least as far back as the fourth century BCE, when the Chinese philosopher Mencius enumerated what he considered to be the bedrock features of a virtuous disposition—benevolence, righteousness, ritual propriety, and wisdom—and stated that “Humans have these Four Beginnings as we have our four limbs.”
When people form impressions of others, they assign the highest importance to moral character. Morality eclipses warmth (e.g., sociability, enthusiasm, agreeableness) and ability (e.g., intelligence, athleticism, creativeness) in impression formation.
A 1998 study found that morality and competence account for 82% of the variance in impressions when evaluating others, with morality accounting for more of the variance than competence. In other words, morality and competence are largely responsible for our impressions of other people. But morality plays a larger role.
Moreover, when people evaluate others, they judge them mostly by the goodness or badness of their acts, rather than by how proficiently they achieve their objectives.
In other words, people ask “Do I approve of what this person is doing?” rather than “Am I impressed with how well they are doing it?” Moral information dominates how people form judgments about others, and competence is only a mild attenuator.
Interestingly, other studies have investigated whether warmth (also called “communion”)—which encompasses sociability, friendliness, enthusiasm, and likability—might play a stronger role than morality in impression formation.
Still, though, morality was more important than warmth. This is likely because although people tend to like individuals who are sociable, such individuals may pose risks if they are insincere, dishonest, and duplicitous.
People also tend to favor friends and romantic partners who are moral, regardless of their other qualities. However, people tend to favor sociability or competence in social partners only to the extent that they are moral.
Even in organizational settings, people favor moral character above other qualities for job candidates.
A 2021 study asked participants to list the most important qualities when seeking a new employee. They selected competence first, morality second, and sociability last. However, when participants were actually asked to assess job candidates, they assigned greater weight to moral information than other factors.
Collectively, these findings on the importance of moral character form what researchers have termed the “Moral Primacy Model.”
People believe moral character is particularly diagnostic of who others really are, which is consistent with studies indicating that people believe moral traits to be more central to identity than any other mental faculty (e.g., autobiographical memories, desires, and preferences).
Gossip appears to be more effective than punishment in promoting and sustaining cooperative behavior. In a public goods game, people were more cooperative in response to potential gossip (sharing notes about other players) than they were in response to the threat of punishment (deducting earnings). In conditions where participants could compare notes about other players with one another, they reported greater trust in one another and behaved in a more cooperative manner than in conditions without the possibility of gossip. In contrast, the ability to punish other players did not have a significant effect on trust. Gossip was how moral norms were enforced in human ancestral societies. In hunter-gatherer communities, gossip and subsequent reputation devaluation was a strong punishment in itself.
In contrast, private penalties no one knew about were nonexistent in the ancestral environment. For this reason, public knowledge of the infliction of penalties may be more important than the punishment itself.
Gossip wouldn’t work to change people’s behavior unless people cared deeply about their moral reputations. Thus, moral condemnation—which damages reputation—is an effective tool to shape behavior. Most people are extremely sensitive to criticism and ridicule for their behavior, because among early humans, negative comments could potentially have led to ostracism and even death.
The purpose of this section is to convey that moral condemnation serves a purpose. It is a useful tool to modify behavior and defend against threats wrongdoers pose.
Moral Foundations Theory
One of the best known frameworks in moral psychology is Moral Foundations Theory.
According to Moral Foundations Theory (MFT), our intuitions about what is right or wrong rest on a set of universal domains. It is a diverse account of morality, defined in this framework as “interlocking sets of values, practices, institutions, and evolved psychological mechanisms that work together to suppress or regulate selfishness and make social life possible.”
Selection pressures favored the emergence of quick intuitions and emotions sensitive to a variety of “moral foundations” that arose to overcome adaptive threats in the environment.
Here are the original five moral foundations that encompass the behaviors likely to elicit condemnation:
1. Care/harm: Motivated by feelings of compassion, care, and empathy. This domain leads people to feel that actions displaying cruelty or disregard for others are wrong.
2. Fairness/cheating: Guided by the feeling of resentment people experience when someone benefits from a greater share of resources than other members of the group. This domain leads people to condemn those who take more than what the observer believes they deserve.
3. Loyalty/betrayal: Driven by feelings of community and belonging. This domain produces a sense of shared responsibility to the community, and condemnation of those who betray the core values of the group.
4. Authority/subversion: Expressed by feelings of respect for order and stability. This foundation instills a negative evaluation of behaviors aimed at denying the hierarchical structure that is considered legitimate by the group.
5. Purity/degradation: Based on feelings that one’s body has immaterial and sacred value. The activation of this domain leads people to react with disgust to things they consider contaminated, unnatural, or degrading.
According to Moral Foundations Theory, people have a propensity to react intuitively and emotionally in response to morally relevant situations. But how each moral domain is expressed is shaped by cultural factors, life events, and relational contexts. Put differently, while each moral foundation has likely existed throughout human history and is present across cultures, the expression of each domain will vary depending on the local context. MFT is an evolutionarily-informed cultural theory of morality.
My studies in this PhD thesis draw on the Moral Foundations Theory framework to explore the factors that underlie moral judgments.
Intuition versus Deliberation
Despite widespread variation across cultures and throughout history, people often believe morality is firmly grounded in rational thought. Or “common sense.” For instance, legal systems depend on judges and jurors taking the facts at hand and making an objective, detached evaluation of the available evidence.
Research indicates that people consider their moral judgments to be as objective as scientific statements. People in one study rated the objectivity of moral judgments (e.g., “Robbing a bank in order to pay for an expensive holiday is a morally bad action”) as equally high as that of factual statements (e.g., “Boston, Massachusetts, is further north than Los Angeles, California”).
Compared to non-moral, conventional preferences (e.g., vanilla versus chocolate), moralized convictions are held as principles that are both universally and objectively true, regardless of whether they are mandated or not.
This suggests that the psychology of moral judgment is more consistent with deontological (action-based) as opposed to utilitarian (outcome-based) principles. Recent studies have found that people deem certain actions to be moral in and of themselves, regardless of the outcomes. In certain instances, even if what you are doing does not have the intended positive outcome, and even if it backfires, people will still confer moral praise on you if your actions appear to be virtuous. “Your heart was in the right place.”
Nevertheless, despite the widespread belief that moral views are equivalent to factual statements, several lines of evidence suggest that external factors can intensify condemnation of immoral acts.
For the past two decades, psychologists, experimental philosophers, and neuroscientists have measured responses people give to morally dubious acts, and how situational factors and individual differences formerly thought to be irrelevant to moral considerations can affect their evaluations.
This emerging literature indicates that people’s judgments of right and wrong are often colored by factors that are unrelated to the actions being evaluated, such as emotions and intuitions.
This view has its roots in moral philosophy. It is exemplified by the Enlightenment philosopher David Hume (1711-1776), who famously proposed that reason plays a subordinate role to “the passions.” That is, emotions, intuitions, and gut feelings are what chiefly propel human thought and action, with reason playing only a supporting role.
To some extent, theoretical and empirical research has supported this account of human moral psychology. In 2001, Jonathan Haidt proposed his Social Intuitionist Model, which emphasizes the importance of automatic intuitions in shaping moral considerations.
The idea is that moral judgments arise from quick emotional reactions, followed by rationalizations. Intuition and emotions generate the judgments, and reason offers post-hoc justifications for moral evaluations after they have been made. Following their “gut feeling” about an action, people then try to come up with good reasons for their decision, even when such reasons may not be sound.
Empirical research suggests that moral evaluations are often influenced by negative feelings, such as disgust. The effect of such negative feelings on subsequent judgments typically takes place outside of conscious awareness.
In fact, the goals of a motivational system do not need to be consciously understood by an individual to be effective. In some cases, the function of such systems does not need to be internally represented at all. For instance, single-celled organisms swim toward nutrients without any awareness or understanding of the reason for their behavior. Understanding the reasons why we do things is not necessary for survival and reproduction.
At first glance, morality and disgust appear to be unrelated. However, disgust does in fact appear to play a role in moral considerations due to its evolutionary function of disease avoidance.
Along with other basic emotions such as anger, anxiety, and sadness, disgust exists across cultures. When people are disgusted, they exhibit the universally recognizable facial expression of tongue protrusion, retraction of the upper lip, and the nose wrinkle to defend against entry into the body. These actions reduce the possibility of infection and illness. Which is the primary function of disgust.
Humans tend to moralize behaviors that play an important role in procreation and disease transmission (which are often the same behaviors). People can come to view certain acts as immoral if they have the potential to serve as vectors for illness and infection. Thus, concrete pathogen threats can influence abstract moral judgments about right and wrong.
This section conveys that although people hold their moral beliefs to be objective and universally valid, moral judgments are not always arrived at by rational and deliberate means. Moral judgments are often driven by gut feelings, and can be influenced by individual and situational factors that are unrelated to the specific infraction being evaluated.
Disgust Induction Fortifies Moral Judgments
Disgust originally evolved to defend against contaminants. Illnesses and infection.
However, research indicates that it has been co-opted for use in evaluating moral transgressions. People are often repulsed by moral transgressions.
That is, disgust is a feeling that signals that something, or someone, could pose a threat to your health or survival.
Many studies make people feel grossed out, and then ask them to rate the severity of different moral transgressions. Relative to a control condition, disgusted participants rate unethical acts more harshly.
A 2019 study found that making people think about catching a disease makes them evaluate moral transgressions more harshly compared with a neutral control condition. They also found that people who are generally inclined to be afraid of getting sick are more likely to rate moral violations as particularly objectionable.
Another 2019 study looked at how decreasing disgust might influence moral judgments. They did this by giving participants ginger pills, because ginger is a root often used to reduce feelings of nausea.
In a double-blind study, some participants ingested placebo pills. Others received ginger pills. Then they all rated a series of disgusting images. People who took ginger pills rated the images as less repulsive than those who took a placebo.
In another study, people evaluated various moderately disgusting acts such as a man eating sterilized feces (sanitized by purifier chemicals). People who took ginger pills reported less disgust and less condemnation of such acts compared with those who took placebo pills.
The ginger pills didn’t work for extremely disgusting acts, though. Likely because if an act is truly gross, gut feelings override the effects of the pill. Similar to how ibuprofen can help with mild pain, but doesn’t do much of anything for extreme pain.
These ginger pill studies support the general notion that physical disgust is indeed related to judgments of moral transgressions, especially those involving purity violations.
This section conveys that gut feelings and automatic responses such as disgust can amplify moral judgments, and that inhibiting such feelings can weaken them.
People Who are Sensitive to Disgust Tend to Deliver Harsher Moral Judgments
Some people are more easily repulsed than others. The phenomenological experience of disgust is a state, or a feeling you have in the moment (e.g., a stranger right next to you sneezes into the air without covering their face—practically everyone would find this unpleasant).
But disgust sensitivity is a trait, an individual difference measure that captures how grossed out you are in different situations (e.g., you pick up a random pencil and notice it has a stranger’s bite marks in it—some people would find this stomach-turning, others wouldn’t care).
People who score highly on measures of disgust sensitivity also tend to rate moral violations to be more objectionable. They are especially likely to rate purity transgressions as “very wrong,” but disgust sensitivity also predicts stronger negative responses to harm, fairness, loyalty, and authority violations.
There’s also a measure called Perceived Vulnerability to Disease. It assesses individual differences in concerns about contracting an infectious illness.
People who score highly on this scale—indicating they are especially fearful of getting sick—are also generally more cautious and deliver stronger moral verdicts.
What could explain this link between disgust, fear of sickness, and moral judgments?
For most of human history, people didn’t know what parasites or pathogens were. Only recently have people learned the true cause of illness and infection and developed medical methods to combat them.
Before these advancements, moral and behavioral norms arose to attempt to quell the spread of contamination. Individuals who deviated from these normative rules were considered to pose a threat to the health of the community.
It is true that some of these rules arose as superstitions or misconceptions that had little to no effect on illness transmission. Others, though, reliably defended against such threats (e.g., adherence to food preparation rituals, personal hygiene, sexual proscriptions).
Conformity to norms and vigilance against violators served to inhibit the possibility of infection.
Both unfailing conformity and norm violations entail costs. But in conditions where the risk of dying of illness is especially high, people generally deem conformity to be less costly.
People who are particularly fearful of getting sick tend to agree with statements such as “Breaking social norms can have harmful, unintended consequences.” They are also likely to express greater liking for people described as “conventional.” When threats are made particularly salient, people are more likely to conform to majority opinion. This is likely because they believe following majority opinion is safe. And also because when one is already in a state of danger, going against group opinion may pose additional risks.
Feelings of threat may help to explain both individual and cultural variations. Some researchers have proposed that “many worldwide cultural differences—in personality, values, and behaviour—may be partially the product of psychological responses to the threat of infection.”
Traditional societies have faced greater risk of illness than advanced modern ones. This is one reason why they are more likely to adhere to conventional moral customs. In contrast, modern societies have overcome much of the burden of illness and infection, and thus express weaker support for conventional morality.
A large body of research has suggested that political conservatives are more sensitive to disgust than political liberals. However, a 2020 study administered a straightforward, elicitor-unspecific scale where participants responded to how much they agreed with statements such as “I am easily disgusted.” No differences emerged between liberals and conservatives.
The purpose of this section is to indicate that sensitivity to disgust and fear of getting sick are individual factors that predict stronger moral judgments. These feelings are associated with moral judgment and are less tied to political orientation than formerly thought.
Disease Avoidance and the Behavioral Immune System
The challenges faced by human ancestors gave rise to unique survival strategies. Among both hunter-foraging human communities and our nearest evolutionary relatives, chimpanzees, infections are responsible for about 70% of all deaths.
Even in the context of armed conflict, illness is responsible for far more deaths than combat. In the American Civil War, the vast majority of the estimated 660,000 deaths were caused by pneumonia, typhoid, dysentery, and malaria. During famines, illness and infections kill far more people than starvation, largely because people are less likely to adhere to hygiene practices under conditions of extreme hunger. Even today, nearly a quarter of all worldwide deaths are due to disease, more than double that of violence and injury.
Because of the relentless threat of illness, humans have evolved to effectively respond to such deadly challenges.
Researchers have termed the specific system that deals with contamination threats the behavioral immune system (BIS). Your immune system fights infections you already have. Your behavioral immune system prevents you from getting sick in the first place. The BIS is characterized as a constellation of human psychological adaptations, cognitions, emotions, and behaviors that help you avoid contact with pathogens, parasites, and other forms of contamination that could compromise your health and survival.
Specific manifestations of the behavioral immune system include:
-Preference for in-group members to mitigate the chances of getting sick and bringing infectious illnesses to your community
-Careful selection of romantic partners (especially among women, who, as individuals more likely to be the receptive partner, are more likely to contract STIs from sex than men) to reduce contagion and increase offspring defense against parasites and illnesses
-Avoidance of unfamiliar or atypical people, places, and foods where previously unknown contaminants may be present and could pose threats to health for immune systems that have not yet developed defenses
Disgust is an expression of the behavioral immune system, because it is an emotion that is aroused by the immediate risk of infection. Disgust elicits a rejection response that prevents touching or ingesting potentially contaminated materials. Humans evolved to detect potential contaminants, feel repulsed by them, and thus avoid them.
Intriguingly, the behavioral immune system operates largely outside of conscious awareness. Because pathogens pose a challenge to survival but are difficult to detect, the BIS evolved to be especially vigilant toward unfamiliar stimuli that could carry the risk of infection. But people don’t necessarily grasp this at a conscious level. You can say something is “gross,” and that’s the end of it. You don’t necessarily know why you find it gross, or the specific purpose of that feeling. The feeling is all your ancestors needed to stay alive. Furthermore, one reason threats of contamination strengthen moral judgment is that people tend to distance themselves from individuals who commit immoral acts.
However, the behavioral immune system is not completely rigid. It follows a “functional flexibility principle.” When situational cues suggest we are relatively safe from infection, our BIS produces relatively muted responses. But when it detects potential threats, it elicits emotions and behaviors designed to keep us safe. Like other adaptations, it follows a cost-benefit approach.
A 2020 study found that people are more willing to do activities that involve the risk of infection with friends relative to strangers. Even within relationship categories (e.g., friend, romantic partner), people’s willingness to engage in potentially risky behaviors depended on how much they valued a given individual. People are willing to risk infection to maintain social ties with those they value and trust. Because investing in such ties offsets the potential costs posed by contracting an illness. Moreover, risking illness is itself a costly signal of how much you value a given person. We put ourselves in harm’s way to show how much we care about a person. And if we care about a person, we are more willing to risk the chance of harm.
This section describes the behavioral immune system, its role in keeping people safe from contaminants, its functional flexibility, and how it overlaps with moral judgments.
Morality and Precaution
An active and/or highly sensitive behavioral immune system amplifies moral judgments.
This is because the two possible errors carry different costs. A false positive error would lead you to incorrectly avoid or condemn someone for violating a purity norm. A false negative would lead you to overlook a potentially threatening transgression.
When your behavioral immune system is on high alert, false negatives are more dangerous. This is especially true in the ancestral environment in which our ancestors evolved, before the advent of modern medicine, advanced scientific knowledge, and contemporary hygiene products.
Thus, in addition to the behavioral immune system, researchers have proposed other processes that motivate people to adhere to a precautionary position on matters concerning health and safety. Put otherwise, we generally favor Type I (false positive) errors when it comes to matters of survival. Here I’ll discuss a few of these specialized processes proposed by various scholars.
First is the “cheater detection module” developed by evolutionary psychology pioneers Leda Cosmides and John Tooby. Humans are equipped to identify, remember, and predict noncooperative people. We are alert to the possibility of defection and exploitation by others. This is because such behavior could have posed fatal consequences for early humans. In the ancestral environment, reciprocity was crucial. So keen awareness of cheaters was adaptive. The cheater detection module impels us to be on guard for defectors. It operates without conscious effort, and identifies potentially immoral behavior.
The second specialized adaptation is the “hazard-precaution system” developed by cognitive anthropologists Pascal Boyer and Pierre Liénard. This system defends against low-probability/high-consequence threats such as predators, contaminants, and social exclusion. It is “devoted to detecting subtle signs of potential danger and eliciting precautionary responses to ameliorate the eventuality of such threats.” It scans the environment for cues indicating danger, and then prepares the body for defensive responses. Boyer and Liénard also suggest that certain rituals are extensions of adaptive behaviors such as cleansing and conformity. People respond to pathogen threats by cleaning themselves, and respond to social threats by imitating others and conforming. Similarly, rituals involve cleansing, imitation, and conformity, suggesting some overlap between ceremonial rites and safety-enhancing behaviors. I co-authored a commentary on a related idea, proposing that certain psychiatric conditions that involve cleansing rituals, compulsive counting, and a fixation with the placement of possessions may reflect an extreme variant of a typically adaptive behavior, namely, concern about health and relevant resources.
The third process is the “smoke detector principle” proposed by the evolutionary psychiatrist Randolph Nesse. A smoke detector provides a piercing, unmistakable alarm in the event of a fire. But it doesn’t actually detect fire—it detects smoke particles and activates upon the merest hint of potential danger. A false positive (e.g., alarm in response to burnt toast) is far more favorable than a false negative (failing to activate in response to flames). Thus these devices are calibrated to be annoyingly overresponsive. Similar to error management theory, the smoke detector principle proposes that evolved systems that govern defensive responses tend to generate false alarms and apparently excessive responses to trivial risks. The body’s responses to the possibility of contamination—disgust, anxiety, fear, pain, fever, coughing, vomiting, diarrhea, and so on—are often more intense than is needed. But such reactions tend to be relatively small costs compared with the possibility of severe illness or death if no response were expressed. In other words, evolution shaped defensive systems on the principles of error management. When met with ambiguous risks, the body is calibrated to respond as if the danger is bigger than it really is, rather than under-respond. As Nesse and his co-author have put it, “the cost of getting killed even once is enormously higher than the cost of responding to a hundred false alarms.”
For staying alive, false positives are more adaptive than false negatives. Thus, it may be adaptive to strengthen one’s moral judgments when one’s ability to deal with threats is already compromised. That is, when the magnitude of a threat is uncertain, the cost of a pronounced defensive response in the form of sharp moral disapproval is lower than the cost of overlooking the threat of a potentially dangerous moral wrongdoer. In general, people are over-sensitive to cues of danger because this was more adaptive to our ancestors than being under-sensitive. This also explains the negativity bias.
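To make this cost asymmetry concrete, here is a minimal worked sketch of the error-management logic. The symbols and the 100-to-1 cost ratio are my own illustrative assumptions, echoing Nesse’s quote above rather than reproducing a formula from the cited work.

```latex
% Error-management sketch: respond to an ambiguous cue with threat probability p
% whenever the expected cost of a miss exceeds the cost of a false alarm.
\[
  p \, C_{\text{miss}} > C_{\text{false alarm}}
  \quad\Longleftrightarrow\quad
  p > \frac{C_{\text{false alarm}}}{C_{\text{miss}}}
\]
% If a miss (e.g., death) is roughly 100 times costlier than a false alarm,
% responding is worthwhile whenever the probability of a real threat exceeds ~1%.
```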
Relatedly, our tendency to overreact to the possibility of danger is why we can safely take medication to reduce pain and nausea. Pain and nausea are adaptive responses. They are not problems—they are actually evolutionary solutions telling you how your body is doing and how much exertion it can take. But because humans have evolved to respond to unpleasant stimuli with outsized pain or nausea, dialing down these adaptive functions does not place people at a higher risk. If you are pushing your body to what seems to be its limit (e.g., an ultramarathon), you can take some meds to push yourself even further and often you will not get hurt. This is because your body is designed to ring the alarm bells telling you that your body is in danger well before it really is. So you can ingest drugs to override your body’s natural defenses and keep going. This isn’t wise or recommended, but it is simply meant to illustrate the smoke detector principle.
In my thesis I propose an analogy to morality. Because we lean toward false positives, cultures can mitigate moral outrage even against behaviors that pose potential threats to the community. That is, because we tend to be overly responsive to immoral and potentially threatening acts, reducing moral judgments does not necessarily lead to an increase in actual danger. In the same way that inhibiting nausea does not lead to a greater likelihood of infection, inhibiting moral outrage does not necessarily give rise to further threatening behaviors. Still, if such harmful behaviors become widespread, moral condemnation can serve an important role in containing them.
Cory Clark recently suggested that moral condemnation seems to be consistent with error management theory. When an individual’s involvement in an immoral outcome is unclear, people generally favor holding the person responsible rather than letting them off the hook. This is because overlooking potential wrongdoing could signal to witnesses that moral norms are unserious, which could lead to social disharmony. In contrast, being overly strict signals the importance of the community’s edicts, which increases social cohesion and reduces future threatening behaviors.
In addition to the cheater detection module, the smoke detector principle, error management, the hazard-precaution system, and the behavioral immune system, the fourth process is the “surveillance system,” developed by a team of political scientists. This system is proposed to “scan the environment for novelty and sudden intrusion of threat” and motivate one to take precautions. It monitors the environment for dangerous events and directs attention and behavior to deal with them. When we feel safe, the surveillance system is silent. But when it is activated, the system summons anxiety, stress, and fear. People respond by avoiding danger, reducing the threat, and attempting to return to a stable and predictable state of affairs.
This section outlines six functional processes proposed to help defend against threats:
Behavioral immune system
Cheater detection module
Smoke detector principle
Error management theory
Hazard-precaution system
Surveillance system
This section conveys that these six adaptive processes are interwoven with human moral concerns.
The Link Between Threats and Morality
Disgust is linked with morality. But it remains unclear whether other forms of threat beyond contamination might also be associated with judgments of right and wrong.
I propose that other challenges that jeopardize survival intensify sensitivity to wrongdoing. This is primarily driven by concerns about threats.
A 2012 study found that participants held negative impressions of fictional groups described as having low sociability or low competence. But they had the worst impressions of groups described as having low morality.
The main reason for this is that participants viewed the low morality groups as being threatening. They did not view the low competence or low sociability groups as threatening. This suggests moral judgments and threat evaluations are intertwined, and that moral judgments are magnified when people feel especially vulnerable.
I hypothesize that when people experience threats—defined as “an organism, a thing, or a situation that is likely to inflict damage on an organism’s physical or mental wellbeing”—they will subsequently rate moral violations to be especially objectionable. When people experience something that could compromise survival, they then fortify their moral judgments.
Here I describe studies looking at two body/health-related threat domains (illness and age) as well as a more remote threat domain (thwarted social status).
Eleven studies across three separate but related lines of research indicate that concern about a real-world communicable illness (COVID-19), senescence (aging), and social threats all fortify moral judgments. Results imply that physical and social threats amplify moral condemnation.
COVID-19 Concern is Associated with Amplified Moral Condemnation
Lots of research suggests a link between scores on disgust sensitivity scales and abstract considerations of right and wrong.
I wanted to test this link. But rather than just asking people to complete disgust scales, I asked them how worried they were about contracting an infectious illness in the midst of a global spread of a new type of coronavirus. Still, I administered a disgust scale too, because I wanted to know if disgust was the factor responsible for people’s worry about COVID.
I hypothesized that relative to less worried individuals, people who reported greater subjective worry about contracting COVID would express more disapproval for various types of moral wrongdoing.
Study 1
I ran this study on March 17, 2020, four days after the U.S. government declared the COVID-19 outbreak a national emergency and six days after the World Health Organization declared COVID-19 a pandemic.
I initially ran this online study with 206 participants in Washington and Maine. Washington had a lot of official cases and deaths compared to the rest of the country at this time. And Maine had only 17 cases and zero deaths. So I figured that fear of the illness would be higher in Washington than Maine and wanted to compare differences between the states.
First, the participants read an article either about national parks or the dangers of COVID-19.
Then they responded to 60 moral scenarios, 12 violations for each foundation, each rated on a scale from 1 (not at all wrong) to 5 (extremely wrong).
Scenarios included:
-“You see a girl laughing when she realizes her friend’s dad is the janitor” (Harm)
-“You see a tenant bribing a landlord to be the first to get their apartment repainted” (Fairness)
-“You see a man leaving his family business to go work for their main competitor” (Loyalty)
-“You see a star player ignoring her coach’s order to come to the bench during a game” (Authority)
-“You see two first cousins getting married to each other in an elaborate wedding” (Purity)
Vignettes were administered in a randomized order.
Then they indicated how worried they were about COVID on a scale from 1 (not at all worried) to 4 (very worried). They also completed a disgust sensitivity scale. Lastly, participants provided demographic info and their political orientation rated from 1 (very liberal) to 7 (very conservative). Then they were paid for their participation.
The reason I had participants read about the dangers of COVID (as the media was reporting the unfolding pandemic in March of 2020) was to see if it would subsequently lead them to make harsher moral judgments.
But reading about COVID didn’t do anything. Participants who read about national parks (neutral condition) and those who read about COVID reported the same level of subsequent worry about COVID. Probably because by this point in 2020, everyone knew about what was going on and reading another article about it didn’t do much to change their views.
I also found that level of worry was the same for participants in both Washington and Maine. This is likely because people probably knew about COVID regardless of where they lived. And even in Washington at this time, cases and deaths were relatively low compared with, say, what was happening in China and Italy.
But I did uncover an intriguing finding, consistent with the research on disgust sensitivity and moral judgment: People who reported greater worry about COVID were also more likely to deliver harsher moral judgments.
This effect held across all five moral domains. People worried about COVID were more likely to say it was extremely wrong for someone to use a stranger’s toothbrush. That makes sense. This is a behavior that could spread diseases.
But people worried about getting COVID were also more likely to say it was wrong for someone to leave their family’s business to work for a competitor, or backtalk their boss, or betray their friends, or commit bribery.
People worried about COVID more strongly condemned moral wrongdoers.
COVID was already politicized by this point. So I controlled for political orientation. The results remained about the same.
Regardless of a person’s political views, if they were worried about COVID, they tended to strongly object to violations of morality.
I also ran another analysis controlling for disgust sensitivity. This variable turned out to be responsible for much, but not all, of the variance. That is, the relationship between worry about COVID and moral condemnation shrank but remained significant even after controlling for how disgust sensitive participants were.
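For readers unfamiliar with what “controlling for” a variable means in practice, here is a hedged sketch of how such an analysis is commonly run. The data below are simulated, and the column names, effect sizes, and use of statsmodels are my own assumptions for illustration; this is not the actual analysis code from the thesis.

```python
# Illustrative only: testing whether COVID worry predicts moral condemnation
# while holding disgust sensitivity and political orientation constant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "covid_worry": rng.integers(1, 5, n),            # 1 (not at all) to 4 (very worried)
    "disgust_sensitivity": rng.normal(3, 1, n),      # arbitrary trait score
    "political_orientation": rng.integers(1, 8, n),  # 1 (very liberal) to 7 (very conservative)
})
# Simulated outcome: condemnation rises with worry and disgust sensitivity, plus noise.
df["moral_condemnation"] = (
    2 + 0.3 * df["covid_worry"] + 0.2 * df["disgust_sensitivity"] + rng.normal(0, 0.5, n)
)

model = smf.ols(
    "moral_condemnation ~ covid_worry + disgust_sensitivity + political_orientation",
    data=df,
).fit()
# A significant covid_worry coefficient means the association holds above and
# beyond disgust sensitivity and political orientation.
print(model.summary())
```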
Study 2
I carried out Study 2 on March 27, 2020. Ten days after Study 1.
In Study 2, I ran the same study minus the article.
I got the same results. Replication confirmed.
Study 3
I did this study on May 6, 2020. This was another replication, this time with 487 participants from across the U.S.
I got the same results. Interestingly, this time after controlling for disgust sensitivity the relationship between COVID-19 worry and moral condemnation was no longer significant.
Again, results held even after controlling for political orientation.
The link between COVID-19 worry (and disgust sensitivity) and moral judgment held above and beyond the effects of political orientation.
I also compared the data from the studies I ran in March with this most recent study. Disgust sensitivity appeared to increase over time: participants in May were more disgust sensitive than participants in March, perhaps as a result of prolonged awareness of the pandemic.
People who were particularly worried about a concrete threat to their health also evaluated abstract moral violations more harshly.
These findings are broadly consistent with the notion that disgust and morality share a common mechanism. And that when individuals feel particularly vulnerable, they condemn wrongdoers more harshly to avert additional potential danger.
There are some limitations. I didn’t test personality. It’s possible that in addition to disgust sensitivity, other individual differences measures such as neuroticism, agreeableness, or conscientiousness also played a role in the link between COVID worry and moral judgment.
For the next set of studies, I looked at a different form of physical threat and its link with morality: age.
Older Age is Associated with Fortified Moral Judgments
Many people believe that with age comes wisdom.
This is reflected in official policies such as age restrictions for driving, military service, voting, drinking, and holding elected office (e.g., the minimum age requirement for the head of state is 30 in Israel, 35 in the U.S., and 40 in South Korea).
Moreover, people who currently hold positions of power tend to be older:
-The average Fortune 100 CEO is 57 years old
-The average age of G20 world leaders is 62.1 years
-The average age at the time of hire among S&P 500 company CEOs is 58.3 years old
-The average age of leaders of the United Nations Security Council Permanent Five is 62.4 years
-The average U.S. senator is 64.3 years old
Age appears to be relevant for moral psychology.
A 2001 survey found that compared with older employees, younger business professionals were more likely to agree with statements such as “It is ethical to use company office supplies for personal use” and “It is ethical to incorporate a mini-vacation with a company paid trip at company expense.”
Other research has found that older adults are less utilitarian than younger adults. That is, older adults are more likely to say that certain actions are wrong no matter what. And younger adults are more likely to say that considerations of right and wrong depend on the specific situation at hand.
Younger adults are more likely to say that it’s okay to pull the trolley switch to kill one instead of five. They are more likely to say it’s okay to push the fat man off the bridge to stop the train from killing five people.
Beyond abstract moral considerations, young people are more likely to take risks that many would consider to be morally dubious. Young people are more likely to engage in unsafe sexual behavior, criminal behavior, and dangerous driving, and to experiment with illicit drugs.
Moreover, younger adults tend to score more highly on the Dark Triad personality traits than other age groups.
The Dark Triad is a constellation of three traits:
-Psychopathy (callousness and disregard for others)
-Narcissism (entitled self-importance)
-Machiavellianism (strategic exploitation and duplicity)
These peak in late adolescence and the early twenties, and gradually taper off.
The link between young age and the Dark Triad is most pronounced for primary psychopathy, which has been characterized as the “darkest” of the three traits.
This is relevant because prior research has indicated that people who score highly on Dark Triad personality traits, especially those with pronounced psychopathy scores, tend to condemn moral wrongdoers less harshly.
This suggests that as individuals age and their personalities become less “dark,” they in turn may become more sensitive to moral transgressions.
In fact, clinical researchers have observed that many individuals with high scores on psychopathy and narcissism “burn out” in middle age. That is, scores on such traits decline over time, and that this “may reflect the loss of physical power that occurs at midlife.”
Physical robustness, associated with youth, may enable individuals to hold Dark Triad personality traits and their accompanying morally permissive views. But as they age and become more vulnerable, their moral compass might update to better defend themselves against potential wrongdoers.
Interestingly, older age is positively correlated with Light Triad personality traits. This is a constellation of three traits:
-Faith in Humanity (belief that others are generally good and worthy of trust)
-Kantianism (inclination to behave with integrity and honesty)
-Humanism (appreciation for the successes of others)
Older people tend to score highly on these Light Triad traits. Younger people tend to have lower scores.
Relatedly, a 2021 study found that older adults were more likely to share positive gossip about other people, while younger adults were more likely to share negative, reputation-damaging gossip.
A 2017 study found that two of the most socio-politically permissive (or socially liberal) groups in the U.S. are those who are high (vs. low) in social class, and young adults.
In contrast, those low in social class and older in age tend to be more restrictive in their sociopolitical views (or socially conservative). The researchers suggest that both material wealth and physical robustness coincide with moral lenience. In fact, physicality can be thought of as a form of “embodied capital,” which encompasses several features:
-Muscular strength
-Functional digestive organs
-Physical size and speed
-Efficient immune function
Analogous to economic wealth, an individual can be “rich” in embodied capital: healthy, strong, and physically fit. Young people tend to hold higher levels of embodied capital. Thus they may be less sensitive to moral violations. In contrast, those who are less endowed with embodied capital—older adults—may condemn wrongdoers more harshly, because moral violations might be deemed exceptionally unsafe when one is in a relatively compromised position.
Moreover, there are age-related differences in moral behavior, which likely extend to moral judgments. Developmental criminologists have documented the “age-crime curve.” They find that the tendency to commit crimes peaks in adolescence and early adulthood, and gradually declines with age. Younger age remains a significant predictor of crime even when controlling for socioeconomic status.
Beyond legal transgressions, younger adults are more deceptive in their everyday lives than older adults. A 2015 study of people aged 6 to 77 found that young children and elderly adults were the least likely to tell lies, while adolescents and young adults were the most likely to engage in deception.
Lastly, research in behavioral economics finds age-related differences in cooperation. Older adults behave in a less self-interested and more altruistic manner.
A 2012 study analyzed 287 interactions on the popular British game show “Golden Balls,” in which two strangers play a large-stakes, one-shot prisoner’s dilemma-style game to win money. They found that young adults were far more likely to defect than older adults: only 42 percent of contestants younger than 30 cooperated, compared with 65 percent of contestants older than 50.
To the extent that it may be wise to avoid hostile or aggressive social situations for people who are relatively vulnerable (i.e., older adults), it may also be adaptive to behave in a generally cooperative manner.
Consistent with this, older adults appear to be generally more trusting than younger adults.
For example, in the U.S., 71 percent of adults below 30 agree that “Most people would try to take advantage of you if they got a chance,” compared with only 39 percent of adults over 65. Moreover, adults younger than 30 are about twice as likely (60%) to agree that “Most people can’t be trusted” as adults older than 65 (29%). Such findings are echoed in the U.K., where adults aged 18 to 24 are about half as trusting of people they meet for the first time (22%) as individuals over 65 (45%).
The studies reviewed thus far suggest that older adults are more empathic, more responsive to distress, more cooperative, less willing to endorse utilitarian moral reasoning, and more trusting than younger adults.
I hypothesize that older age should predict stronger moral verdicts. This relationship should hold above and beyond—that is, controlling for—political orientation and income.
To test this hypothesis, I analyzed multiple large existing datasets. I also conducted an online study to explore a potential mechanism—risk perception—that could explain the age-morality link.
Study 4: European Social Survey
The purpose of this study was to see whether older age is associated with stricter moral attitudes.
Analyzing nine rounds of the European Social Survey, I examined the link between age and responses to moral items indicating support for authority (e.g., “to do what one is told and follow the rules”) rated from 1 (very much like me) to 6 (not like me at all).
The European Social Survey is a cross-national, representative population survey of European countries. The first round of data collection took place in 2002, and data have been collected every other year since then. The latest round I analyzed was collected in 2018. Having cross-sectional data from multiple time points makes it possible to check whether the age-morality link is consistent over time, and helps to rule out a cohort effect.
For example, if in 2002 a 70-year-old held stricter moral views than a 30-year-old, and this was also true in 2018, this suggests that age, rather than something unique to a specific generation, is what’s driving the effect.
The sample consisted of 274,885 participants from 9 rounds of the European Social Survey (147,888 women; age: M = 48.29 years, SD = 18.63). The rounds encompassed the years 2002 (n = 23,815), 2004 (n = 28,017), 2006 (n = 25,380), 2008 (n = 33,466), 2010 (n = 33,764), 2012 (n = 36,338), 2014 (n = 28,653), 2016 (n = 32,266), and 2018 (n = 33,186).
This dataset was not actually designed to test moral judgments. So I had to find a few moral items that roughly approximated the moral foundation of respect for authority. I operationalized this as the average of how much participants agreed that it was important to: “do what one is told and follow the rules,” “behave properly,” and “follow traditions and customs.”
It’s imperfect, but with the benefit of a very large dataset, it works as a starting point.
I tested the link between age and endorsement of authority. I controlled for income, education, and political orientation.
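Concretely, the analysis amounts to something like the sketch below: build the three-item authority composite, then fit the same regression separately within each survey round. The column names and simulated data are placeholders I invented, not the actual ESS variable codes (with random data the age coefficient will hover around zero; in the real data it was positive in every round).

```python
# Sketch of the composite-and-per-round analysis (placeholder column names
# and simulated data, not the actual ESS variable codes).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rounds = [2002, 2004, 2006, 2008, 2010, 2012, 2014, 2016, 2018]
n_per_round = 1000

frames = []
for year in rounds:
    frames.append(pd.DataFrame({
        "round_year": year,
        "age": rng.integers(18, 90, n_per_round),
        "income": rng.integers(1, 11, n_per_round),
        "education": rng.integers(1, 8, n_per_round),
        "political_orientation": rng.integers(0, 11, n_per_round),
        # Three authority items, 1 = "very much like me" ... 6 = "not like me at all"
        "follow_rules": rng.integers(1, 7, n_per_round),
        "behave_properly": rng.integers(1, 7, n_per_round),
        "follow_traditions": rng.integers(1, 7, n_per_round),
    }))
ess = pd.concat(frames, ignore_index=True)

# Reverse-score so higher values = stronger endorsement of authority,
# then average the three items into a single composite.
for item in ["follow_rules", "behave_properly", "follow_traditions"]:
    ess[item + "_r"] = 7 - ess[item]
ess["authority"] = ess[["follow_rules_r", "behave_properly_r", "follow_traditions_r"]].mean(axis=1)

# Fit the same model within each round: is the age coefficient positive
# (and significant) in every round from 2002 to 2018?
for year, sub in ess.groupby("round_year"):
    fit = smf.ols("authority ~ age + income + education + political_orientation", data=sub).fit()
    print(year, round(fit.params["age"], 4), round(fit.pvalues["age"], 4))
```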
Across all nine rounds of the European Social Survey dating back to 2002, older adults held stronger moral views about authority.
Effects were consistent across all years, suggesting that the age-morality link is not merely a cohort or generational effect.
Nevertheless, it’s possible this effect is isolated only to European, or WEIRD (Western, Educated, Industrialized, Rich, Democratic) countries.
Study 5: World Values Survey
In Study 4 I found that older age is associated with stricter moral views about authority. But the dataset only contained European countries.
Furthermore, it only contained moral items regarding authority, rather than other moral foundations. It’s possible that because older adults are more likely to hold leadership positions, they are thus more likely to endorse the value of authority.
My hypothesis, though, is that this relationship holds for all moral foundations, not just authority.
In Study 5, I analyzed 7 waves of the World Values Survey dating from 1981 to 2019. Data are collected from participants in more than 100 countries around the world, including non-WEIRD and non-Western countries.
The independent variable was age. The outcome measure was participant evaluations of the permissibility of behaviors approximating the moral domain of fairness. Three behaviors were identified for the outcome measure (“avoiding a fare on public transport,” “cheating on taxes when you have a chance,” “someone accepting a bribe in the course of their duties”), rated from 1 (never justifiable) to 10 (always justifiable).
Across all 7 waves, older adults rated fairness violations as less permissible than younger adults. That is, older adults held stricter moral views about fairness. This held even when controlling for political orientation and income (I didn’t include education because unlike the European Social Survey, this question wasn’t asked in each wave of the World Values Survey).
Just like in Study 4, effects held for each wave between 1981 and 2019. This helps rule out a cohort effect and suggests that age is the explanatory variable.
The findings here in Study 5 suggest that the age-morality link extends beyond WEIRD countries. And includes not only authority, but also fairness.
Still, I wanted to see if the effect extends beyond just authority and fairness. For the next study, I looked only at the latest wave of the World Values Survey, which contained additional moral items.
Study 6: World Values Survey Wave 7
Although all waves of the World Values Survey contain responses from participants asking about fairness, only the latest wave asked participants about both harm and purity violations.
So I looked only at this latest wave to test whether older adults held stricter views about harm and purity.
Moreover, as an exploratory analysis (that is, I made no hypothesis in advance of this analysis), I tested whether the age-morality link held across different political regime types.
Prior research has found that political regime type influences traits associated with moral judgment such as the Big Five personality traits.
This appears to be true even within relatively similar cultures.
A 2015 study found marked differences in personality between residents of the former Federal Republic of Germany (West Germany) and residents of the German Democratic Republic (East Germany) 25 years after the country’s reunification. Relative to West Germans, former residents of the German Democratic Republic were found to have higher neuroticism scores, higher conscientiousness scores, lower openness scores, and a more external locus of control. The authors attributed these enduring differences, even decades after reunification, to long-term exposure to different political regimes and, specifically, to patterns in the GDR including surveillance (increasing trait neuroticism), unfailing adherence to rules and norms (associated with a rise in trait conscientiousness), and restrictions on creativity and open-mindedness (inhibiting trait openness).
The outcome measure was participant evaluations of the permissibility of behaviors involving harm and purity, rated from 1 (never justifiable) to 10 (always justifiable).
For harm, this encompassed three behaviors (“violence against other people,” “parents beating children,” “a man to beat his wife”).
For purity, I selected three behaviors that were also not included in earlier waves (“Having casual sex,” “sex before marriage,” and “prostitution”).
Because earlier waves of the World Values Survey did not include these items involving harm and purity, I analyzed only the 2017-2019 wave.
For analyses of political regime type on age and moral judgments, I employed a measure of regime type from the World Values Survey, which is drawn from the widely used Freedom in the World index (Freedom House, 2017). The index assigns countries to the categories of “free” (e.g., Chile, Australia), “partly free” (e.g., Malaysia, Colombia), and “not free” (e.g., Turkey, Tajikistan) based on two dimensions: a country’s political rights (e.g., contestation and participation) and civil liberties (e.g., freedom of expression and belief, associational and organizational rights, rule of law, personal autonomy and individual rights).
My main prediction was supported: Older adults held stricter moral views about harm and purity. Younger adults rated violence, prostitution, and sexual promiscuity as more permissible than older adults.
I also found that this age-morality link was strongest in “free” regimes. It was slightly weaker, but still significant, in “partly free” regimes. And it was nonexistent in “not free” political regimes.
Restrictive political regimes might compress age-related differences in moral judgments. It’s possible that they inhibit people’s ability to express their underlying ethical preferences. In other words, stifling governments might subdue individual differences in moral judgments, including among adults of different ages.
This may be similar to the Nordic Paradox. Sex differences are largest in affluent and egalitarian countries. This is likely because in free societies, people can fully express their underlying traits and preferences. But in less affluent and less egalitarian societies, people are expected to behave in a more rigid manner.
Lots of research indicates that women are more sensitive to moral transgressions than men. My guess is that this sex difference in morality chiefly holds in free societies. And that the effects would be smaller or nonexistent in other contexts. But I didn’t test that here, as it wasn’t relevant to my general thesis.
Earlier I suggested that the reason age and morality might be linked is that older people are more vulnerable, and may thus be more risk averse. So they condemn moral violations more harshly, because wrongdoers pose a further threat when one’s ability to cope is already compromised.
So in the next study, I test whether the age-morality link is driven by perceptions of risk.
Study 7: Does Risk-Perception Explain Why Older Adults Hold Stricter Moral Views Than Younger Adults?
Because I had to rely on items administered in large panel surveys, I didn’t actually get to administer moral items that were specifically designed to test moral judgments. The items in those studies were part of other questionnaires. And they were useful. But testing judgments about the moral foundations was not their intended purpose.
So for Study 7, I recruited 310 participants from the U.S.: 155 “emerging adults” aged 18 to 25, and 155 “older adults” aged 55 and above.
Recruiting two different age groups helps to increase statistical power and get an accurate estimate of the effect of age on morality.
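As a rough illustration of how one sizes two groups like this, here is the standard two-sample power calculation. The effect size is a placeholder of mine, and this is not the thesis’s actual power analysis.

```python
# Rough illustration of a two-group power calculation (placeholder effect
# size; not the thesis's actual power analysis).
from statsmodels.stats.power import TTestIndPower

# How many participants per group are needed to detect a medium-sized
# difference (Cohen's d = 0.5) between younger and older adults with
# 80% power at alpha = .05?
n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided"
)
print(round(n_per_group))  # roughly 64 per group
```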
Participants responded to the same 60 moral foundations scenarios I used in Studies 1-3 encompassing harm, fairness, authority, loyalty, and purity.
I also administered the Domain-Specific Risk-Taking Scale to participants to measure risk perception. It asked participants how risky various acts were (e.g., driving in a car without a seatbelt, mountain climbing) on a scale from 1 (not at all risky) to 7 (extremely risky).
The key result: Older adults delivered harsher moral judgments than younger adults. This remained when controlling for political orientation, education, and income.
This appeared to be driven by risk perception. Older adults scored higher on risk perception than younger adults, and this in turn was associated with stricter moral judgments.
This is consistent with the idea that when individuals believe their ability to cope with challenges is low, in this case as a consequence of aging, they condemn transgressions more harshly, because moral violations pose yet an additional threat.
These findings are novel in that they are the first to reveal a link between age and judgments about the moral foundations.
As adults grow older they appear to become more, not less, disapproving of moral violations.
Considering that younger adults are relatively more tolerant of wrongdoers, these findings imply that it would indeed be a different world if younger adults held more power. More generally, these results suggest that as individuals grow older and move into positions of authority, their ethical standards become more stringent.
Still, the link between threats and moral judgments might extend beyond physical challenges (as with worry about COVID and aging). I wondered whether social threats might also affect moral judgments. So I carried out four studies, which I review next.
Social Threats Predict Stronger Moral Condemnation
Early attachment theorists claimed that social affiliation is a necessity on par with the need for food and warmth. Neuroscientist Jaak Panksepp posited that social exclusion gives rise to negative emotions because in the ancestral human environment, social inclusion was crucial to avoiding injury and death. He further suggested that the social regions of the brain co-opted the regions associated with physical pain, helping individuals avoid being excluded or ostracized.
Modern neuroscience supports this account. Physical pain and social pain activate the same regions of the brain. The neuroscientist Matthew Lieberman has written that “the mammalian need to recognize social threats appears to have hijacked the physical pain system.”
Intriguingly, a 2015 study found that people experience more pain from reliving their socially painful memories compared with physically painful ones. That is, retrieving memories of social pain (e.g., exclusion by a friend or romantic partner) led to greater self-reported re-experienced pain and greater activity in brain regions associated with pain (dorsal anterior cingulate cortex and anterior insula) than memories of physical pain (e.g., broken bone). Events that undermine social connections are highly distressing, even years later.
Acceptance and belonging are inherently pleasurable in themselves and motivate behavior even in the absence of any other reward.
However, loss of acceptance is a perpetual risk when interacting with others. Researchers suggest that people have a “sociometer” which tracks self-esteem in relation to one’s social position amongst peers.
Roughly, the sociometer is a psychological mechanism that keeps track of your social status in the eyes of others.
Because people do not have the cognitive capacity to constantly monitor others’ judgements of them on a conscious level, the sociometer is thought to operate in the background, with little conscious effort. Although people sometimes consciously deliberate about how they are being perceived and assessed by others, the sociometer typically scans the social environment, including one’s own behaviour, at a pre-attentive level for indications of immediate or potential social threats.
When you feel accepted, your sociometer is relatively silent. However, if your basic social needs are unmet, your sociometer kicks into gear. It monitors the environment for information that could help restore your social needs.
Challenges to self-esteem are associated with feeling foolish, awkward, inadequate, and socially anxious. This is the sociometer at work.
People feel socially anxious even when they think they might make a bad impression that lowers their perceived value. In addition to activating during interactions, the sociometer activates in anticipation of social interactions too.
People feel anxious even in one-shot, zero-stakes encounters with people they will never meet again. This is because in the human ancestral environment in which the sociometer evolved, nearly all social interactions took place between people who knew one another and whose acceptance and support were critical.
For early humans, it was essential that all indications of approval or disapproval be taken seriously. Today, although brief interactions with strangers have become widespread, the sociometer is still attuned to respond to these one-shot interactions as though the stakes for social status and self-esteem were high.
Stress and anxiety tend to be at their highest in social situations. A meta-analysis found that the potential for negative social evaluation (e.g., delivering a speech) resulted in cortisol levels more than three times higher than non-social stressful tasks (e.g., mental arithmetic).
The researchers concluded that individuals may be as concerned with preserving their social needs as with their physical safety, because cortisol rises to recruit physiological energy for a fight-or-flight response in the face of either unmet social or physical needs. A basic need—whether physiological, psychological, or social—activates a motivational state that, if satisfied, improves the possibility of survival. In other words, a need indicates the risk of death and thus propels behavior to foil this possibility.
In line with the proposed sociometer, research indicates that social exclusion (vs. inclusion) is associated with bolstered memory for social information, greater attention to others’ vocal tones, improved ability to distinguish between authentic and inauthentic emotions, and an increased aptitude for detecting deception. Moreover, social anxiety is associated with the ability to more quickly identify the emotions of anger and disgust. This suggests that the presence or prospect of social threats heighten sensitivity to other potential threats—such as moral wrongdoers.
I hypothesize that thwarted social needs—that is, heightened social threat—fortify moral judgments. Individuals who experience social threat will condemn moral wrongdoers more harshly. When social resources are low, it is adaptive to express greater disapproval of violations to avert further harm posed by transgressors.
Study 8: Social Threat and Moral Judgment
In this study, I recruited participants to play an online game called “Cyberball.” People play a virtual ball-tossing game with two other participants on a computer. In the inclusion condition, you are included throughout the entire game.
In the exclusion condition, after the first few rounds in which a virtual ball is thrown to your digital avatar, you no longer receive the ball and observe as the other “players” interact with only one another.
This sounds simple. And kind of silly. But it works. Lots of studies use this method and it induces surprisingly high levels of distress in people. It provokes strong emotional and behavioral changes. Neuroimaging studies indicate high levels of brain activity in neural pain centers when people are excluded during this game.
Participants have been known to tap their computer screens and shout at the other players for excluding them, despite knowing the other players can’t hear them.
Other participants sit there and brood, or zone out, or look away. The game is only about three minutes long. And it’s not even very fun when you are included. But it is upsetting when you are not.
For this first study in this line of work, I administered moral scenarios only from the harm and fairness foundations. This was a pilot study, just to see if any effect between social threat and moral verdicts might exist.
I recruited 218 participants. Half were included in the cyberball game. Half were excluded.
They then responded to 12 items involving harm, and 12 involving fairness violations.
Lastly, they completed a fundamental social needs scale, measuring feelings of belonging, self-esteem, meaningful existence, and sense of control experienced during the cyberball game.
Participants who were excluded reported lower fundamental social needs—that is, greater social threat—than included participants.
There was a trending but nonsignificant effect of social exclusion on moral judgments involving harm. This study used a relatively small sample size, and only tested two of the five moral foundations.
Pilot studies like this are often tricky. If you get zero effect, it’s probably time to move on. If you get a significant effect, then you should chase it up with a replication. If you get a trending but nonsignificant effect, like I did here, then what you should do next is unclear. I decided to chase it up.
For the next study, I recruited a larger sample, and asked participants to respond to moral items involving all 5 moral foundations.
Study 9: Indirect Effect of Social Threat on Moral Judgment
I recruited 381 participants. They went through the same procedure as the last study. They were either included or excluded in the cyberball game, then they evaluated moral transgressions, then they reported the level of social threat they experienced while playing the cyberball game.
I also administered a brief scale to measure participants’ Big Five personality traits. I wanted to see if people with different personality traits responded differently to social exclusion, which in turn might have affected their moral judgments. But this wasn’t the case. Regardless of personality, people really disliked being excluded. Social threat produces a similar response in people of all personality types.
I found an indirect effect: Social exclusion increased social needs threat, which in turn was associated with stricter moral judgments.
In other words, when I just compared the excluded versus the included groups, no differences in moral judgments emerged. But when I looked at how exclusion influenced social threat, then this turned out to be a predictor of stronger moral judgments.
Does social exclusion influence moral judgments? Not directly.
But for people who are socially excluded and then report higher levels of social threat, they subsequently report stricter moral judgments.
It sounds a little complicated. So here’s an analogy.
Does stress influence weight? Imagine you gather data on people who either feel high stress or low stress. You also measure their weight. And you also measure how much they exercise.
You find that stress has no main effect on weight—both high and low stress people weigh about the same.
But when you insert their level of exercise as a predictor variable, you find an indirect effect: high stress is associated with more exercise, which in turn is associated with lower weight. Thus, although there is no main effect of stress on weight, there is an indirect effect of stress on weight through exercise.
So you can’t say stress makes people lose weight. But you can say that stress tends to increase exercise activity, which in turn leads to reduced weight.
Why would there be an indirect effect but no main effect? One possibility is that there is a “suppressor” variable.
Some people feel stress and exercise more. This makes them lose weight.
Some people feel stress and overeat. This makes them gain weight.
So when you measure the main effect of stress on weight, you find nothing. People who respond to stress by exercising and people who respond to stress by overeating cancel each other out.
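To make the idea of an indirect effect without a main effect concrete, here is a toy simulation of the stress example (all numbers invented by me, nothing from the thesis). Stress pushes exercise up and eating up, and the two pathways push weight in opposite directions, so the total effect washes out while the indirect effect through exercise survives. The estimate uses the standard product-of-coefficients approach.

```python
# Toy simulation of the stress -> exercise -> weight analogy (made-up numbers
# for illustration only; this is not data from the thesis).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
stress = rng.normal(0, 1, n)

# Suppressor structure: stress raises exercise for some of the variance and
# eating for some of it; the two pathways affect weight in opposite directions.
exercise = 0.5 * stress + rng.normal(0, 1, n)
eating = 0.5 * stress + rng.normal(0, 1, n)   # the unmeasured suppressor pathway
weight = -0.6 * exercise + 0.6 * eating + rng.normal(0, 1, n)

df = pd.DataFrame({"stress": stress, "exercise": exercise, "weight": weight})

# Total effect of stress on weight (ignoring exercise): roughly zero.
total = smf.ols("weight ~ stress", data=df).fit()

# a-path (stress -> exercise) and b-path (exercise -> weight, controlling for stress).
a_path = smf.ols("exercise ~ stress", data=df).fit()
b_path = smf.ols("weight ~ exercise + stress", data=df).fit()

indirect = a_path.params["stress"] * b_path.params["exercise"]
print("total effect:", round(total.params["stress"], 3))      # near 0
print("indirect effect via exercise:", round(indirect, 3))    # around -0.3
```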
For my study, it’s possible there was a suppressor variable. Maybe being excluded made some number of participants feel less social threat, for whatever reason.
It’s possible that some people simply weren’t paying attention. They saw they weren’t being involved in the cyberball game and spent that time looking at TikTok videos, and this made them feel better. So when they responded to the questions about how they felt during the game they were like “Yeah, I felt pretty good. That TikTok video was hilarious.” These people might have cancelled out the people who actually paid attention to the game and felt bad throughout.
Or something else might be involved. Who knows.
All I can say is that there is indeed an indirect effect, and that this is consistent with my main hypothesis: Social threat predicts harsher moral judgments.
Study 10: Social Exclusion Replication
In this study, I replicated Study 9’s indirect effects of social threat on moral judgment with a different set of participants.
Findings were consistent: Social threat increased condemnation for moral violations.
I did do one thing differently: Participants completed the MacArthur Ladder of Subjective Social Status.
My findings from the previous study indicated that personality had no effect on how people responded to social exclusion. So I wanted to test if social status had any effect on response to social exclusion.
Result: No effect. Regardless of social status, people found social exclusion to be upsetting and responded in a similar manner to the social needs scale indicating they felt a high level of social threat.
In the next study, rather than have participants experience threats to their social status, I tested whether people who were generally prone to concern about their status were also more likely to deliver harsher verdicts for wrongdoing.
So I tested the link between social anxiety and moral judgment.
Study 11: Social Anxiety and Moral Judgment
In Studies 9 and 10, social exclusion in the cyberball game instilled a particular emotional state, defined as “acute, intense, and typically brief psychophysiological changes that result from a response to a meaningful situation in one’s environment.” Affective traits are “stable predispositions toward certain types of emotional responding” that “set the threshold for the occurrence of particular emotional states.” Such affective traits shape how individuals routinely feel, construe life events, and interact with their physical and social environments.
In those studies, I tested how the state of experiencing social threat affected moral judgments.
This time I wanted to test how the trait of being particularly alert to social threats affected moral judgments.
In other words, Studies 9 and 10 activated people’s sociometer. For this study, though, I wanted to measure how people with particularly active sociometers in everyday situations would respond to moral scenarios. These are people high in social anxiety.
Social anxiety is defined as “anxiety resulting from the prospect or presence of personal evaluation in real or imagined social situations.” It is a warning signal from the sociometer that helps people sustain their interpersonal relationships and maintain relevance in the eyes of others.
Other researchers have proposed that socially anxious individuals tend to be “locked into”, or assign distinct importance to, social status, and view themselves as relatively low in social rank, regardless of how they are perceived by others.
People feel socially anxious when they believe that the impressions they make will not lead others to value their relationships with them as much as they desire or, more unfavorably, may cause others to devalue, avoid, or reject them. Social threat in the form of social anxiety is thought to motivate the preservation or enhancement of social standing among one’s peers and the avoidance of behavior that may diminish it.
People who believe their social standing is at stake may condemn wrongdoers more harshly, because they are less able to cope when they believe their social resources are compromised.
To test this, participants completed the Social Anxiety Questionnaire which assesses five dimensions:
1) Speaking in public/Talking with people in authority
2) Interactions with attractive individuals or individuals of the opposite sex
3) Assertive expressions of displeasure
4) Criticism and embarrassment
5) Interactions with strangers
Participants also completed the UCLA Loneliness scale, widely used in prior research to measure the frequency of social interactions and emotions linked to loneliness.
I administered these two scales because I wanted to explore whether moral judgments are associated more with social threat (i.e., social anxiety) or social lack (i.e., loneliness).
Loneliness is not necessarily unpleasant. But social anxiety is a straightforward and unmistakable threat.
To avoid potential priming effects, participants evaluated the 60 moral foundations scenarios first.
Then they completed the social anxiety and loneliness scales (in counterbalanced order, meaning it was random which one a particular participant filled out first).
I didn’t want their responses to the moral items to be affected by thoughts of loneliness or social anxiety. That’s why I had them complete the moral items first and the other two scales second.
Results: The correlation between social anxiety and moral condemnation was large by social science standards (r = .43). For what it’s worth, this is quite a bit larger than the relationship between IQ and income (which is around r = .3).
The relationship between social anxiety and moral condemnation held across all five moral foundations of harm, fairness, authority, loyalty, and purity.
In contrast, there was no significant correlation between loneliness and moral judgment.
The correlation between social anxiety and loneliness was significant but relatively low, indicating they capture different underlying constructs.
In sum, the findings from this line of work indicate that challenges to social status are associated with increased sensitivity to moral violations.
Making people worried about their social status makes them more moralizing. And people who are generally worried about their social status are more moralizing.
The body’s rapid response to social threat prepares it for the possibility of physical danger. That is, social rejection raises cortisol levels and triggers basic sympathetic nervous system and hypothalamic-pituitary-adrenal stress responses, because being excluded from the social group makes an individual more vulnerable to attack (by predators or hostile group members). Social threats appear to prepare the body to deal with physiological challenges, and this may extend to greater condemnation of moral wrongdoers, whose detestable actions may embody such challenges.
What relevance does this line of work have in the real world?
It may help explain why online moral outrage has intensified in recent years.
Internet users often experience a relentless sense of social threat because social media content encourages upward social comparison, which can make them feel inferior to their peers.
At the same time, people often witness egregious behavior because social media algorithms are optimized to elicit engagement through moral outrage.
So here’s a scenario. You feel unpopular when scrolling through your social media feed. You see all these pretty people. You see all these people humblebragging about their latest achievement.
You start feeling a little bit anxious. You start feeling socially threatened. Your status relative to others feels like it is diminishing.
Then you see someone post something that is borderline offensive. Or at least could be construed as offensive.
A quick way to boost your status and get some likes and shares and retweets: Condemn the person. Burn the witch! Then bask in the dopamine glory of receiving validation online. There. Now your status has bounced back.
The idea is that the combination of online activities of comparing ourselves to others, along with reading news stories and social media posts designed to prompt moral outrage, may contribute to the current level of online vitriol.
Is social threat a true driver of outrage on social media?
As they say: these findings are suggestive. More research is needed.
General Discussion
Across eleven studies, I looked at how physical and social threats amplify moral condemnation across the five moral foundations of harm, fairness, authority, loyalty, and purity.
The results supported my hypothesis: When experiencing threats to survival, individuals will condemn moral violations more harshly, because wrongdoers pose yet another threat when one’s ability to deal with challenges is already compromised.
In short, when people believe their survival is at stake, they fortify their moral shields to guard against further risk.
First, I found that people who were particularly worried about contracting COVID-19 in early 2020 also delivered harsher moral judgments for unrelated transgressions, relative to those who were less worried about contracting COVID-19. Results held after controlling for political orientation.
Second, I found that older adults delivered harsher moral judgments than younger adults, and that this was in part driven by risk perception. Older people perceive more risk in their environment, and this appears to extend to moral wrongdoers. Results held after controlling for political orientation, education, and income.
Third, I found that social threats indirectly amplify moral condemnation. People who felt that their social needs were thwarted after being excluded in a game subsequently delivered harsher moral judgments. Moreover, I found a strong correlation between social anxiety (which captures how worried people feel about how they come across to others) and moral condemnation. People who are more socially anxious—more concerned with their social standing—also rate moral violations as particularly objectionable. People who experience, or are highly sensitive to, having their social status undermined are more moralizing, perhaps to ward off additional threats, or perhaps to restore their status by being seen as morally righteous.
These findings provide support for the notion that morality evolved as an adaptive response to environmental and social challenges. And that condemnation of wrongdoers is, in part, aimed at deterring potentially dangerous behaviors that could undermine one’s survival as well as the survival of one’s community.
Morality is adaptive and flexible. This is because organisms, including humans, are highly sensitive to context. Those who behave in the same manner regardless of the environment, available resources, or their own bodily state would have comparatively poor fitness prospects compared with those who update their behavior based on context.
The findings have been informed by six different proposed psychological and behavioral processes from prior empirical and theoretical research:
The cheater detection module: Monitors information that could reveal whether an individual is exploitative/immoral
Error management theory (and the smoke detector principle): Individuals favor false positives for potential harm, rather than false negatives. Better to be over-vigilant for danger than under-vigilant
The hazard-precaution system: Monitors and avoids dangers that early humans faced, such as predation, strangers, contamination, contagion, social offense, and harm to offspring
The behavioral immune system: Psychological and behavioral mechanisms (e.g., disgust) that reduce the risk of infection
The surveillance system: Negative emotions such as anxiety aim to scan the environment for novelty and intrusions of threat and motivate defensive responses
The sociometer: Monitors the environment for social approval or disapproval and activates responses to maintain and enhance social status
My findings are consistent with all of these systems. All of these systems fall under the umbrella of error management theory – it is more costly to overlook threats than to over-respond to them. In other words, we have evolved to be keenly aware of potential threats and adjust our behavior, including the strength of our moral judgments, accordingly.
For many wild animals, if you so much as touch them, they will freak out and either run or try to kill you.
You might not mean any harm. But they don’t know that.
Humans are more docile. We are self-domesticated. But we are still beings that arose through the same evolutionary process as other animals. And we are more likely to err on the side of caution rather than neglect when we register the possibility of danger.
These results imply that when individuals feel particularly safe and secure, their moral judgments may be relatively muted.
My findings indicate that threats amplify moral condemnation. Does this mean that safety reduces moral condemnation?
The answer seems to be yes. A 2006 study and a 2011 study both found that inducing feelings of mirth—that is, getting people to laugh by, e.g., having them watch a standup comedy routine—made people subsequently rate morally dubious acts as less reprehensible.
Smiling and laughter signal playfulness, safety, and feelings of belonging. They are associated with positive emotions. So getting people to feel safe by inducing mirth seems to make them less moralizing.
This might be why people who are prone to moralizing have such a negative attitude about humor. Moralizers are worried that if you laugh, you aren’t taking seriously the threats they believe are so dire. So they prevent laughter in order to make you feel unsafe.
A 2019 study found that upper body strength among males is associated with more permissive views about inequality. Stronger men are more likely to agree that, e.g., “Economic inequality in a society is natural.” Weaker men are more likely to disagree.
A 2020 study found that people who feel relatively powerful (e.g., agreement with statements such as “If I want to, I get to make the decisions”) deliver more muted moral judgments. People who believe themselves to be powerful, and, presumably, in a position where they are better able to meet the challenges that confront them, are less harsh in their judgments of wrongdoing.
A 2022 study found that among U.S. political parties, both Democrats and Republicans used more moral language when they held fewer seats relative to the opposing party. That is, holding political power is associated with less moralizing language.
Advantages in size, strength, and power are associated with relatively subdued moral judgments.
This might explain sex differences in moral judgment.
Women are more moralizing than men. A 2020 study found that around the world, in 67 countries, compared with men, women delivered stricter judgments for moral violations. The researchers suggest this is related to their parental care systems that evolved in the context of motherhood and infant care in the ancestral environment.
Research indicates that relative to men, women are more sensitive to moral transgressions. Women are more likely to condemn corporations in cases of product harm. They are more likely to boycott. They are more honest than men, at least in the context of economic games, where they are less likely to defect. The strongest predictors of criminal offending are being male and being young.
Both men and young people are more willing to break the law, perhaps out of an implicit belief that they are better equipped to afford the potential costs.
Alongside existing explanations, whether due to evolution or socialization, there may be another possibility (which is also, ultimately, an evolutionary explanation): Men are simply larger and stronger. Thus they may be less concerned with potentially dangerous transgressions.
If one were to control for size, strength, body mass index, and power, I predict that sex differences in morality would shrink by a large amount.
A 2021 study suggested that the reason women score much higher on anxiety than men is because of physical differences. The researchers found that men and women with strong handgrip strength tended to score low on anxiety. And that differences in strength accounted, in part, for sex differences in anxiety.
These findings also hold for depression. Depression is a response to adversity. Its purpose is to conserve energy, avoid risks, and disengage from potentially threatening situations. It’s a debilitating condition, but it does appear to, in some cases, be adaptive. Handgrip strength is negatively correlated with depression. Stronger people are less likely to experience depression.
Formidable people can take more risks, including taking a lenient position on moral transgressions. Physical strength accounts, at least in part, for sensitivity to danger.
Low physical formidability is associated with traits that alert individuals to danger (e.g., risk aversion, anxiety, fearfulness). This suggests that people who are relatively less formidable might also produce harsher moral judgments as a form of self-protection.
Morality is often thought to be firmly grounded in rational and coherent principles. But the findings I have reviewed, as well as the results of my own work, suggest that the perceived ability to withstand threats influences unrelated moral considerations.
Considerations of right and wrong are informed not only by careful and deliberate thought, but also by psychological, physical, and social variables that are not stable across individuals or social contexts.
Perceived threats that potentially undermine survival may contribute to heightened sensitivity to any additional perceived danger. And this, consequently, might increase the likelihood of conflict between individuals and between groups.
Is experiencing white supremacy all she is? And if not, why do her translators have to be people just like her?
Our racial reckoning has put many new ideas afloat. One of them is that a black female poet’s work should only be translated by other black female people. Or at least black people…
The logic is supposed to be that only someone of Gorman’s race, and optimally gender, can effectively translate her expression into another language. But is that true? And are we not denying Gorman and black people basic humanity in – if I may jump the gun – pretending that it is?
THE MEDIA FLUBBED THE LANGUAGE OF THE FORMER NBA PLAYER’S DEVASTATING INJURY AFTER HE WAS HIT BY A DRIVER WHILE ON HIS BIKE.
While we cyclists clearly love our bicycles, it’s more likely that Bradley would just be very, very sad that a car hit his bike—not grievously injured.
Simply put, the media missed an opportunity for a slam dunk with its headlines and stories on the news, and similarly missed a chance to begin to right a long-running wrong against the cycling community. As Henry Grabar wrote for Slate, “A child falling off his bike in the park is a bicycle accident … Getting rammed from behind by a car is not a bicycle accident.” And yet, for decades, media reports have used this framing when reporting on cyclists who are hit by drivers and injured or killed.
Interesting commentary on language. To be fair to the headline writers in the publications below, the FTC complaint itself uses the word “withholding.” To say “stealing” would be editorializing. Good discussion nonetheless.
Below a headline that states: “Amazon to Pay Contract Drivers $61.7 Million After FTC Probe Finds It Stole Tips to Pay Wages”
There are over 7000 living languages on earth today. These mutually unintelligible means of communication are closely associated with different groups’ identities. But how does a new language start out? That’s what listener BK wants to know. BK lives on one of the islands of the Philippines, where he speaks three languages fluently and has noticed there is a different language on almost every island.
Presenter Anand Jagatia finds language experts from around the world who tell him about the many different ways that languages can form.
Interesting post about the nature of language and how it can misconstrue a complex reality and ultimately lead to misunderstandings and poor policy actions by governments.
Muddled thinkers confuse the world of our senses with the way in which it is depicted in language.
Yet as is true of all beneficial institutions, language is imperfect – it has, some might say, its ‘costs.’ Among the ‘costs’ of language is its tendency to cause us to suppose that the abstractions that we describe with words possess a concrete reality that these abstractions don’t possess. 🔥🔥🔥 [flames are my addition]
Interesting story that raises many interesting questions about ethics and responsibility. What is the responsibility of the institution vs. individual? How do we decide what is ethical? The article describes a level of abstraction and jargon that happens in the company that belies the very human cost of its actions. Further, how do our actions change when we don’t directly deal with the human face/cost of our actions?
People at Capital One are extremely friendly. But one striking fact of life there was how rarely anyone acknowledged the suffering of its customers. It’s no rhetorical exaggeration to say that the 3,000 white-collar workers at its headquarters are making good money off the backs of the poor. The conspiracy of silence that engulfed this bottom-line truth spoke volumes about how all of us at Capital One viewed our place in the world, and what we saw when we looked down from our glass tower.
…
Amid the daily office banter at Capital One, we hardly ever broached the essence of what we were doing. Instead, we discussed the “physics” of our work. Analysts would commonly say that “whiteboarding”—a gratifying exercise in gaming out equations on the whiteboard to figure out a better way to build a risk model or design an experiment—was the favorite part of their job. Hour-long conversations would oscillate between abstruse metaphors representing indebtedness and poverty, and an equally opaque jargon composed of math and finance-speak.
On the surface this ruling may seem silly. The smiley pictogram is internationally popular precisely because it’s simple for most people to understand, or so it seems superficially. One of the messages in question was supported with victory signs, champagne, a quilt of symbols that’s barely translatable but clearly positive. In fact, however, emoji in a legal context is very serious business.
Language interpretation is rarely simple upon close examination and lawyers can argue anything. Pictures give them a lot of room to do so.
Santa Clara University law professor Eric Goldman searched for 2016 cases in the US that dealt with emojis and emoticons and found about 80 judicial opinions that mentioned these.
He told The Recorder in May that he imagines that emoji interpretation issues will only get more common and could get very difficult. The images look different to each of us, and parties can have legitimately different understandings of an image used in an exchange.
Once Hatebase has the data, it is automatically sorted and annotated. These annotations can explain the multiple meanings of the terms used, for example, or their level of offensiveness. The resulting data can also be displayed in a dashboard to make it easier for city officials to visualize the problem.
Once enough data has been gathered (most likely in a few months’ time), the city will use Hatebase’s system to monitor trends in hate-speech usage across Chattanooga, and see if there are any patterns between the words used against particular groups and subsequent hate crimes. Often, violence against a particular group is preceded by an increase in dehumanizing, abusive language used against that group. The Sentinel Project has already used this sort of language monitoring successfully as an early warning system for armed ethnic conflict in Kenya, Uganda, Burma, and Iraq.