
The Difference Between Morals and Ethics

Brittany is a health and lifestyle writer and former staffer at TODAY on NBC and CBS News. She's also contributed to dozens of magazines.

Steven Gans, MD is board-certified in psychiatry and is an active supervisor, teacher, and mentor at Massachusetts General Hospital.

What Is Morality?

In this article:

  • What Are Ethics?
  • Ethics, Morals, and Mental Health
  • Are Ethics and Morals Relative?
  • Discovering Your Own Ethics and Morals
  • Frequently Asked Questions

Are ethics and morals really just the same thing? It's not uncommon to hear morality and ethics referenced in the same sentence. That said, they are two different things. While they definitely have a lot of commonalities (not to mention very similar definitions!), there are some distinct differences.

Below, we'll outline the difference between morals and ethics, why it matters, and how these two words play into daily life.

Morality is a person or society's idea of what is right or wrong, especially in regard to a person's behavior.

Maintaining this type of behavior allows people to live successfully in groups and society, but it requires a personal commitment to the greater good.

Morals change over time and vary by location; different countries can have different standards of morality. Even so, researchers have identified seven morals that seem to transcend cultures and time:

  • Bravery: Bravery has historically helped people determine hierarchies. People who demonstrate the ability to be brave in tough situations have historically been seen as leaders.
  • Fairness: Think of terms like "meet in the middle" and the concept of taking turns.
  • Deferring to authority: Deferring to authority signifies that people will adhere to rules that serve the greater good, which is necessary for a functioning society.
  • Helping the group: Traditions exist to help us feel closer to our group. This way, you feel more supported, and a general sense of altruism is promoted.
  • Loving your family: This is a more focused version of helping your group. It's the idea that loving and supporting your family allows you to raise people who will continue to uphold moral norms.
  • Returning favors: This applies to society as a whole and suggests that people may avoid behaviors that aren't generally altruistic.
  • Respecting others’ property: This goes back to settling disputes based on prior possession, which also ties in the idea of fairness.

Many of these seven morals require deferring short-term interests for the sake of the larger group. People who act purely out of self-interest can often be regarded as immoral or selfish.

Many scholars and researchers don't differentiate between morals and ethics, and that's because they're very similar. Many definitions even explain ethics as a set of moral principles.

The big difference when it comes to ethics is that it refers to community values more than personal values. Dictionary.com defines the term as a system of values that are "moral" as determined by a community.

In general, morals are considered guidelines that affect individuals, and ethics are considered guideposts for larger groups or communities. Ethics are also more culturally based than morals.

For example, the seven morals listed earlier transcend cultures, but there are certain rules, especially those in predominantly religious nations, that are determined by cultures that are not recognized around the world.

It's also common to hear the word ethics in medical communities or as the guidepost for other professions that impact larger groups.

For example, the Hippocratic Oath in medicine is an example of a largely accepted ethical practice. The American Medical Association even outlines nine distinct principles that are specified in medical settings. These include putting the patient's care above all else and promoting good health within communities.

Since morality and ethics can impact individuals and differ from community to community, research has aimed to integrate ethical principles into the practice of psychiatry.

Many people grow up adhering to a certain moral or ethical code within their families or communities. When your morals change over time, you might feel a sense of guilt or shame.

For example, many older people still believe that living with a significant other before marriage is immoral. This belief is dated and mostly unrecognized by younger generations, who often see living together as an important and even necessary step in a relationship that helps them make decisions about the future. Additionally, in many cities, living costs are too high for some people to live alone.

However, even if a younger person understands that it's not wrong to live with their partner before marriage, they might still feel guilty for doing so, especially if they were taught that it was immoral.

When dealing with guilt or shame, it's important to assess these feelings with a therapist or someone else that you trust.

Morality is certainly relative since it is determined individually from person to person. In addition, morals can be heavily influenced by families and even religious beliefs, as well as past experiences.

Ethics are relative to different communities and cultures. For example, the ethical guidelines for the medical community don't really have an impact on the people outside of that community. That said, these ethics are still important as they promote caring for the community as a whole.

This is important for young adults trying to figure out what values they want to carry into their own lives and future families. This can also determine how well young people create and stick to boundaries in their personal relationships.

Part of determining your individual moral code will involve overcoming feelings of guilt because it may differ from your upbringing. This doesn't mean that you're disrespecting your family, but rather that you're evolving.

Working with a therapist can help you better understand the moral code you want to adhere to and how it ties in aspects of your past and present understanding of the world.

A Word From Verywell

Understanding the difference between ethics and morals isn't always cut and dried. And it's OK if your moral and ethical codes don't directly align with the things you learned as a child. Part of growing up and finding autonomy in life involves learning to think for yourself. You determine what you will and will not allow in your life, and what boundaries are acceptable for you in your relationships.

That said, don't feel bad if your ideas of right and wrong change over time. This is a good thing that shows that you are willing to learn and understand those with differing ideas and opinions.

Working with a therapist could prove beneficial as you sort out what you do and do not find to be acceptable parts of your own personal moral code.

Morals refer to a sense of right or wrong. Ethics, on the other hand, refer more to principles of "good" versus "evil" that are generally agreed upon by a community. 

Examples of morals can include things such as not lying, being generous, being patient, and being loyal. Examples of ethics can include the ideals of honesty, integrity, respect, and loyalty.

Because morals involve a personal code of conduct, it is possible for people to be moral but not ethical. A person can follow their personal moral code without adhering to a more community-based sense of ethical standards. In some cases, a person's individual morals may be at odds with society's ethics.

Dictionary.com. Morality.

Curry OS, Mullins DA, Whitehouse H. Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology. 2019;60(1):47-69. doi:10.1086/701478

Dictionary.com. Ethics.

Crowden A. Ethically sensitive mental health care: Is there a need for a unique ethics for psychiatry? Australian & New Zealand Journal of Psychiatry. 2003;37(2):143-149.

By Brittany Loggins


Humanities LibreTexts

5.1: Moral Philosophy – Concepts and Distinctions

Before examining some standard theories of morality, it is important to understand basic terms and concepts that belong to the specialized language of ethical studies. The concepts and distinctions presented in this section will be useful for characterizing the major theories of right and wrong we will study in subsequent sections of this unit. The general area of concepts and foundations of ethics explained here is referred to as  meta-ethics .

5.1.1 The Language of Ethics

Ethics is about values, what is right and wrong, or better or worse. Ethics makes claims, or judgments, that establish values. Evaluative claims are referred to as  normative, or prescriptive, claims . Normative claims tell us, or affirm, what  ought  to be the case. Prescriptive claims need to be seen in contrast with  descriptive claims , which simply tell us, or affirm, what  is  the case, or at least what is believed to be the case.

For example, this claim is descriptive; it describes what is the case:

“Low sugar consumption reduces risk of diabetes and heart failure.”

On the other hand, this claim is normative:

“Everyone ought to reduce consumption of sugar.”

This distinction between descriptive and normative (prescriptive) claims applies in everyday discourse in which we all engage. In ethics, however, normative claims have essential significance. A normative claim may, depending upon other considerations, be taken to be a “moral fact.”

Note: Many philosophers agree that the truth of an "is" statement does not by itself entail an "ought" claim. The fact that low sugar consumption leads to better health does not imply, on its own, that everyone should reduce their sugar intake. A good logical argument would require further reasons (premises) to reach the "ought" conclusion. Inferring an "ought" claim directly from an "is" statement is referred to as the naturalistic fallacy.

A supplemental resource is available (bottom of page) on the distinction between descriptive and normative claims.

5.1.2 How Are Moral Facts Real?

When we talk about “moral facts” typically we are referring to claims about values, duties, standards for behavior, and other evaluative prescriptions. The following concepts describe the sense in which moral facts are real in terms of:

  • the degree of universality, or lack thereof, with which the moral claims are held, and
  • the extent to which moral facts stand independently of other considerations.

Moral Objectivism

The view that moral facts exist, in the sense that they hold for everyone, is called moral (or ethical) objectivism. From the viewpoint of objectivism, moral facts do not merely represent the beliefs of the person making the claim; they are facts of the world. Furthermore, such moral facts/claims do not depend on other claims, nor do they have any other contingencies.

Moral Subjectivism

Moral (or ethical) subjectivism holds that moral facts are not universal; they exist only in the sense that those who hold them believe them to exist. Such moral facts sometimes serve as useful devices to support practical purposes. According to the viewpoint of subjectivism, moral facts (values, duties, and so forth) are entirely dependent on the beliefs of those who hold them.

Moral Absolutism

Moral absolutism is an objectivist view that there is only one true moral system with specific moral rules (or facts) that always apply and can never be disregarded. At least some rules apply universally, transcending time, culture, and personal belief. Actions of a specific sort are always right (or wrong) independently of any further considerations, including their consequences.

Moral Relativism

Moral relativism is the view that there are no universal standards of moral value, that moral facts, values, and beliefs are relative to individuals or societies that hold them. The rightness of an action depends on the attitude taken toward it by the society or culture of the person doing the action.

  • Moral relativism as it relates to an individual is a form of ethical subjectivism.
  • As it relates to a society or culture, moral relativism is referred to as "cultural relativism" and is also subjectivist: moral facts depend entirely on the beliefs of those who hold them; they are not universal.

Note  that some accounts of meta-ethical concepts do not use both “objectivism” and “absolutism” or use them interchangeably. The important relationship to keep in mind is that both objectivism and absolutism stand in contrast to relativism and subjectivism.

Here are several arguments in support of moral relativism. The "objection" following each one is an argument against moral relativism and in favor of moral objectivism.

  • Different cultures hold different, often incompatible, moral values, which suggests that there are no universal values. Objection: "Is" does not imply "ought." Further, the fact that there are diverse cultural values does not necessarily imply that there are no objective values.
  • No one has yet provided a convincing justification for objective moral values. Objection: That we cannot yet justify objective values does not mean that such a foundation could not be developed.
  • Relativism promotes tolerance and respect for the values of other cultures. Objection: This entails that we tolerate oppressive systems that are intolerant themselves. Further, this argument seems to confer objective value on "tolerance," and further still, "tolerance" is not the same as "respect."

Here are some additional arguments against moral relativism:

  • If values for right and wrong are relative to a specific moral standpoint or culture, anything can be justified, even practices that seem objectively unconscionable.
  • Ethical relativism would diminish our possibility for making moral judgments of others and other societies. However, we do make moral judgments of others and believe we are justified in making these moral judgments.
  • Ethical relativism says that moral values are determined by "the group," but it is difficult to determine who "the group" is, and anyone in the group who disagrees is by definition immoral.
  • If people were ethical relativists in practice (that is, if everyone were an ethical subjectivist), there would be moral chaos.

A supplemental resource is available (bottom of page) on moral relativism.

Do you think that there are objective moral values? Or do you believe that all moral values are relative to either cultures or individuals? Include your reasons.

Note:  Submit your response to the appropriate Assignments folder.

5.1.3 How Do We Know What is Right?

The question at hand is about moral epistemology. How do we know what is right or wrong? What prompts our moral sentiments, our values, our actions? Are our moral assessments made on a purely rational basis, or do they stem from our emotional nature? There are contemporary philosophers who support each position, but we will return to some “old” friends we met in our unit on epistemology, Immanuel Kant and David Hume. They were hardly on the “same page” when it came to how and if we can know anything at all, and it’s hardly surprising that we find them at odds on what motivates moral choices, how we know what is right.

When we met Immanuel Kant (1724-1804) in our study of epistemology, we read passages from his Prolegomena to Any Future Metaphysics (1783). In that work, he offered a slightly less intricate and perplexing presentation of topics from his masterwork on metaphysics and epistemology, the Critique of Pure Reason (1781). His next project applied the same rigorous reasoning method to moral philosophy. In 1785, Kant published Fundamental Principles of the Metaphysic of Morals; it introduced concepts that he subsequently expanded in the Critique of Practical Reason (1788). The short excerpts that follow are from Fundamental Principles of the Metaphysic of Morals.

Recall that Kant’s epistemology required both reason and empirical experience, each in its proper role. Kant believed that human action could be evaluated only by the logical distinctions based in synthetic  a priori  judgments.

In the following excerpt, Kant explains that a clear understanding of the moral law is not to be found in the empirical world but is a matter of pure reason.

Everyone must admit that if a law is to have moral force, i.e., to be the basis of an obligation, it must carry with it absolute necessity; that, for example, the precept, “Thou shalt not lie,” is not valid for men alone, as if other rational beings had no need to observe it; and so with all the other moral laws properly so called; that, therefore, the basis of obligation must not be sought in the nature of man, or in the circumstances in the world in which he is placed, but a priori simply in the conception of pure reason; and although any other precept which is founded on principles of mere experience may be in certain respects universal, yet in as far as it rests even in the least degree on an empirical basis, perhaps only as to a motive, such a precept, while it may be a practical rule, can never be called a moral law. Thus not only are moral laws with their principles essentially distinguished from every other kind of practical knowledge in which there is anything empirical, but all moral philosophy rests wholly on its pure part.

However, there is some correspondence between the study of natural world and of ethics. Both have an empirical dimension as well as a rational one. When Kant speaks of “anthropology” he refers to the empirical study of human nature.

…there arises the idea of a twofold metaphysic- a metaphysic of nature and a metaphysic of morals. Physics will thus have an empirical and also a rational part. It is the same with Ethics; but here the empirical part might have the special name of practical anthropology, the name morality being appropriated to the rational part.

So, while the nature of moral duty must be sought  a priori  “in the conception of pure reason,” empirical knowledge of human nature has a supporting role in distinguishing how to apply moral laws and in dealing with “so many inclinations” – the confusing array of emotions, impulses, desires that bombard us and contradict the command of reason. Our emotions (inclinations) are hardly the source of moral knowledge; they interfere with the human capability for practical pure reason.

When applied to man, it does not borrow the least thing from the knowledge of man himself (anthropology), but gives laws a priori to him as a rational being. No doubt these laws require a judgment sharpened by experience, in order on the one hand to distinguish in what cases they are applicable, and on the other to procure for them access to the will of the man and effectual influence on conduct; since man is acted on by so many inclinations that, though capable of the idea of a practical pure reason, he is not so easily able to make it effective in concreto in his life.

Kant sees his project on moral law, or “practical reason,” to be a less complicated project than  Critique of Pure Reason,  his “critical examination of the pure speculative reason, already published.” According to Kant, “moral reasoning can easily be brought to a high degree of correctness and completeness”, whereas speculative reason is “dialectical” – laden with opposing forces. Furthermore, a complete “critique” of practical reason entails “a common principle” that can cover any situation – “for it can ultimately be only one and the same reason which has to be distinguished merely in its application.”

Intending to publish hereafter a metaphysic of morals, I issue in the first instance these fundamental principles. Indeed there is properly no other foundation for it than the critical examination of a pure practical reason; just as that of metaphysics is the critical examination of the pure speculative reason, already published. But in the first place the former is not so absolutely necessary as the latter, because in moral concerns human reason can easily be brought to a high degree of correctness and completeness, even in the commonest understanding, while on the contrary in its theoretic but pure use it is wholly dialectical; and in the second place if the critique of a pure practical Reason is to be complete, it must be possible at the same time to show its identity with the speculative reason in a common principle, for it can ultimately be only one and the same reason which has to be distinguished merely in its application.

In the next section of this unit, we will see where Kant goes with this project and its "common principle" that applies universally. For now, keep in mind that Kant sees moral judgment as a reason-based activity, and that emotions/inclinations diminish our moral judgments. Many philosophers agree that making moral judgments and taking moral actions are rationally contemplated undertakings.

David Hume (1711-1776), as we learned in our epistemology unit, doubted that the principles of cause and effect and induction could lead to truth about the natural world. Recall his picture of reason, his version of the distinction between a priori and a posteriori knowledge:

  • Relations of ideas are beliefs grounded wholly on associations formed within the mind; they are capable of demonstration because they have no external referent.
  • Matters of fact are beliefs that claim to report the nature of existing things; they are always contingent.

In both his Treatise of Human Nature (1739) and An Enquiry Concerning the Principles of Morals (1751), relations of ideas and matters of fact figure in his position that human agency and moral obligation are best considered as functions of human passions rather than as the dictates of reason. The excerpts that follow are from the Treatise (Book III, Part I, Sections I and II).

If reason were the source of moral sensibility, then either relations of ideas or matters-of-fact would need to be involved:

As the operations of human understanding divide themselves into two kinds, the comparing of ideas, and the inferring of matter of fact; were virtue discovered by the understanding; it must be an object of one of these operations, nor is there any third operation of the understanding, which can discover it.

Relations of ideas involve precision and certainty (as with geometry or algebra) that arise out of pure conceptual thought and logical operations. A relationship between “vice and virtue” cannot be demonstrated in this way.

There has been an opinion very industriously propagated by certain philosophers, that morality is susceptible of demonstration; and though no one has ever been able to advance a single step in those demonstrations; yet it is taken for granted, that this science may be brought to an equal certainty with geometry or algebra. Upon this supposition vice and virtue must consist in some relations; since it is allowed on all hands, that no matter of fact is capable of being demonstrated….. For as you make the very essence of morality to lie in the relations, and as there is no one of these relations but what is applicable… RESEMBLANCE, CONTRARIETY, DEGREES IN QUALITY, and PROPORTIONS IN QUANTITY AND NUMBER; all these relations belong as properly to matter, as to our actions, passions, and volitions. It is unquestionable, therefore, that morality lies not in any of these relations, nor the sense of it in their discovery.

Hume goes on to explain how moral distinctions do not arise from matters of fact:

Take any action allowed to be vicious: Willful murder, for instance. Examine it in all lights, and see if you can find that matter of fact, or real existence, which you call vice. In which-ever way you take it, you find only certain passions, motives, volitions and thoughts. There is no other matter of fact in the case. The vice entirely escapes you, as long as you consider the object. You never can find it, till you turn your reflection into your own breast, and find a sentiment of disapprobation, which arises in you, towards this action. Here is a matter of fact; but it is the object of feeling, not of reason. It lies in yourself, not in the object.

And so, Hume concludes that moral distinctions are not derived from reason, rather they come from our feelings, or sentiments.

Thus the course of the argument leads us to conclude, that since vice and virtue are not discoverable merely by reason, or the comparison of ideas, it must be by means of some impression or sentiment they occasion, that we are able to mark the difference betwixt them…. Morality, therefore, is more properly felt than judged of.

Hume’s view that our moral judgments and actions arise not from our rational capacities but from our emotional nature and sentiments is contrary to several of the major normative theories we will explore. However, it is interesting to note that some present-day philosophers regard the domain of emotion as a primary source of moral action, and also that work in neuroscience suggests that Hume may have been on the right track.

Economist Jeremy Rifkin provides an absorbing and fast-moving chalk-talk on human empathy, as demonstrated by neuroscience (10+ minutes). Note: cartoon depictions of humans are unclothed. RSA Animate. [CC-BY-NC-ND]

Optional Video

Trust, morality – and oxytocin? [CC-BY-NC-ND] Neuro-economist Paul Zak believes he has identified the "moral molecule" in the brain. (16+ minutes)

An additional supplemental video (bottom of page) explores moral judgments and neuroscience even further.

What do you think about the connection between morality and the neurobiology of our brains? Do you think these findings affect arguments for or against ethical relativism?

Note:  Post your response in the appropriate Discussion topic.

5.1.4 Psychological Influences

Various psychological characterizations of human nature have had significant influence on views about morality. We will see in this Ethics unit and the next on Social and Political Philosophy that particular conceptions of human nature may be at the center of theories about moral actions of individuals and about ethical interaction among individuals in social communities.

Egoism is the view that by nature we are selfish, that our actions, even our ostensibly generous ones, are motivated by selfish desire. Ethical egoism is the belief that pursuing one's own happiness is the highest moral value, and that moral decisions should be guided by self-interest.

Another view of human nature holds that the primary motivation for all of our actions is pleasure.  Hedonism  is the view that pleasure is the highest or only good worth seeking, that we should, in fact, seek pleasure.

A different take on human nature is that we have an innate capacity for benevolence (empathy) toward other people. (Recall the mirror neurons in the Jeremy Rifkin video.) Altruism is the view that moral decisions should be guided by consideration for the interests and well-being of other people rather than by self-interest.

5.1.5 The Meaning of “Good”

In ethics, "good" is a general term of approval for what is of value, for example, a particular action, a quality, a practice, or a way of life. Among the aspects of "good" that philosophers discuss is whether a particular thing is valued because it is good in and of itself, or because it leads to some other good.

  • An  intrinsic good  is something that is good in and of itself, not because of something else that may result from it. In ethics, a “value” possesses intrinsic worth. For example, with hedonism, pleasure is the only intrinsic good, or value. In some normative theories, a particular type of action may possess intrinsic worth, or good.
  • An instrumental good, on the other hand, is useful for attaining something else that is good. It is instrumental in that it leads to another good, but it is not good in and of itself. For example, for an egoist, an action such as generosity to others can be seen as an instrumental good if it leads to self-fulfillment, which the egoist values in and of itself as an intrinsic good.

As we look more closely at some major normative theories, the distinction between intrinsic and instrumental good will be among the considerations of interest. Understanding normative theories also involves these questions:

  • How do we determine what the right action is?
  • What are the standards that we use to judge if a particular action is good or bad?

The following normative theories will be addressed:

  • Deontology (from the Greek for "obligation" or "duty") is concerned with rules and motives for actions.
  • Utilitarianism, a consequentialist theory, is interested in the good outcomes of actions.
  • Virtue Ethics values actions in terms of what a person of good character would do.

Supplemental Resources

Descriptive and Normative Claims

Fundamentals: Normative and Descriptive Claims . This 4-minute video is a quick review with examples, on the differences between descriptive and normative claims.

Internet Encyclopedia of Philosophy (IEP).  Moral Relativism . Read section “3. Arguments for Moral Relativism” and section “4. Objections to Moral Relativism.”

Moral Judgment and Neuroscience

The Neuroscience behind Moral Judgments . Alan Alda talks with an MIT neuroscientist about neurological connections with moral judgments. (5+ minutes)

  • 5.1 Moral Philosophy - Concepts and Distinctions. Authored by : Kathy Eldred. Provided by : Pima Community College. License : CC BY: Attribution


Ethics and Morality


Reviewed by Psychology Today Staff

To put it simply, ethics represents the moral code that guides a person’s choices and behaviors throughout their life. The idea of a moral code extends beyond the individual to include what is determined to be right and wrong for a community or society at large.

Ethics is concerned with rights, responsibilities, use of language, what it means to live an ethical life, and how people make moral decisions. We may think of moralizing as an intellectual exercise, but more frequently it's an attempt to make sense of our gut instincts and reactions. It's a subjective concept, and many people have strong and stubborn beliefs about what's right and wrong that can place them in direct contrast to the moral beliefs of others. Yet even though morals may vary from person to person, religion to religion, and culture to culture, many have been found to be universal, stemming from basic human emotions.

  • The Science of Being Virtuous
  • Understanding Amorality
  • The Stages of Moral Development


Those who are considered morally good are said to be virtuous, holding themselves to high ethical standards, while those viewed as morally bad are thought of as wicked, sinful, or even criminal. Morality was a key concern of Aristotle, who first studied questions such as “What is moral responsibility?” and “What does it take for a human being to be virtuous?”

We used to think that people were born as blank slates, but research has shown that people have an innate sense of morality . Of course, parents and the greater society can certainly nurture and develop morality and ethics in children.

Humans can be ethical and moral regardless of religion or belief in God. People are neither fundamentally good nor fundamentally evil. However, a Pew study found that atheists are much less likely than theists to believe that there are "absolute standards of right and wrong." In effect, atheism does not undermine morality, but the atheist’s conception of morality may depart from that of the traditional theist.

Humans are animals, after all, and many studies conducted across animal species have found that more than 90 percent of animal behavior can be identified as “prosocial,” or positive. Nor do animals wage mass warfare the way humans do. In that sense, one could argue that animals are more moral than humans.

The examination of moral psychology involves the study of moral philosophy but the field is more concerned with how a person comes to make a right or wrong decision, rather than what sort of decisions he or she should have made. Character, reasoning, responsibility, and altruism , among other areas, also come into play, as does the development of morality.


The seven deadly sins were first enumerated in the sixth century by Pope Gregory I, and represent the sweep of immoral behavior. Also known as the cardinal sins or seven deadly vices, they are vanity, jealousy , anger , laziness, greed, gluttony, and lust. People who demonstrate these immoral behaviors are often said to be flawed in character. Some modern thinkers suggest that virtue often disguises a hidden vice; it just depends on where we tip the scale .

An amoral person has no sense of, or care for, what is right or wrong; there is no regard for either morality or immorality. Conversely, an immoral person knows the difference yet does the wrong thing regardless. The amoral politician, for example, has no conscience and makes choices based on his own personal needs; he is oblivious to whether his actions are right or wrong.

One could argue that the actions of Wells Fargo, for example, were amoral if the bank had no sense of right or wrong. In the 2016 fraud scandal, the bank created fraudulent savings and checking accounts for millions of clients, unbeknownst to them. Of course, if the bank knew what it was doing all along, then the scandal would be labeled immoral.

Everyone tells white lies to a degree, and often a lie is told for the greater good. But the idea that a small percentage of people tell the lion’s share of lies reflects the Pareto principle, the law of the vital few: roughly 20 percent of the population accounts for 80 percent of a given behavior.

We do know right from wrong : if you harm and injure another person, that is wrong. However, what is right for one person may well be wrong for another. A good example of this dichotomy is the religious conservative who thinks that a woman’s right to her body is morally wrong. In this case, one’s ethics are based on one’s values, and the moral divide between values can be vast.


Psychologist Lawrence Kohlberg established his stages of moral development in 1958. This framework, which builds on Jean Piaget's theory of moral judgment in children, has shaped current research in moral psychology and addresses how we come to think about right and wrong. His stages are pre-conventional, conventional, and post-conventional, and what we learn in one stage is integrated into the subsequent stages.

The pre-conventional stage is driven by obedience and punishment . This is a child's view of what is right or wrong. Examples of this thinking: “I hit my brother and I received a time-out.” “How can I avoid punishment?” “What's in it for me?” 

The conventional stage is when we accept societal views on rights and wrongs. In this stage people follow rules with a “good boy” and “nice girl” orientation. An example of this thinking: “Do it for me.” This stage also includes law-and-order morality: “Do your duty.”

The post-conventional stage is more abstract: “Your right and wrong is not my right and wrong.” This stage goes beyond social norms and an individual develops his own moral compass, sticking to personal principles of what is ethical or not.



McCombs School of Business



Ethics Defined: Morals

Morals are society’s accepted principles of right conduct that enable people to live cooperatively.

Morals are the prevailing standards of behavior that enable people to live cooperatively in groups. Moral refers to what societies sanction as right and acceptable.

Most people tend to act morally and follow societal guidelines. Morality often requires that people sacrifice their own short-term interests for the benefit of society. People or entities that are indifferent to right and wrong are considered amoral, while those who do evil acts are considered immoral.

While some moral principles seem to transcend time and culture, such as fairness, generally speaking, morality is not fixed. Morality describes the particular values of a specific group at a specific point in time. Historically, morality has been closely connected to religious traditions, but today its significance is equally important to the secular world. For example, businesses and government agencies have codes of ethics that employees are expected to follow.

Some philosophers make a distinction between morals and ethics. But many people use the terms morals and ethics interchangeably when talking about personal beliefs, actions, or principles. For example, it’s common to say, “My morals prevent me from cheating.” It’s also common to use ethics in this sentence instead.

So, morals are the principles that guide individual conduct within society. And, while morals may change over time, they remain the standards of behavior that we use to judge right and wrong.

Related Terms

Ethics

Ethics refers to both moral principles and to the study of people’s moral obligations in society.

Prosocial Behavior

Prosocial Behavior occurs when people voluntarily help others.

Values

Values are society’s shared beliefs about what is good or bad and how people should act.




The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017

Naomi Ellemers

1 Utrecht University, The Netherlands

Jojanneke van der Toorn

2 Leiden University, The Netherlands

Yavor Paunov

3 Mannheim University, Germany

Thed van Leeuwen

Associated Data

Supplemental material for The Psychology of Morality: A Review and Analysis of Empirical Studies Published From 1940 Through 2017 by Naomi Ellemers, Jojanneke van der Toorn, Yavor Paunov, and Thed van Leeuwen in Personality and Social Psychology Review:

  • Appendix 1: Exclusion of papers in review
  • Appendix 2: Reference list of all studies included in review (1940–2017)
  • Appendix 3: Bibliometric indicators used
  • Supplementary Figures
  • Supplementary Tables

We review empirical research on (social) psychology of morality to identify which issues and relations are well documented by existing data and which areas of inquiry are in need of further empirical evidence. An electronic literature search yielded a total of 1,278 relevant research articles published from 1940 through 2017. These were subjected to expert content analysis and standardized bibliometric analysis to classify research questions and relate these to (trends in) empirical approaches that characterize research on morality. We categorize the research questions addressed in this literature into five different themes and consider how empirical approaches within each of these themes have addressed psychological antecedents and implications of moral behavior. We conclude that some key features of theoretical questions relating to human morality are not systematically captured in empirical research and are in need of further investigation.

This review aims to examine the “psychology of morality” by considering the research questions and empirical approaches of 1,278 empirical studies published from 1940 through 2017. We subjected these studies to expert content analysis and standardized bibliometric analysis to characterize relevant trends in this body of research. We first identify key features that characterize theoretical approaches to human morality, extract five distinct classes of research questions from the studies conducted, and visualize how these aim to address the psychological antecedents and implications of moral behavior. We then compare this theoretical analysis with the empirical approaches and research paradigms that are typically used to address questions within each of these themes. We identify emerging trends and seminal publications, specify conclusions that can be drawn from studies conducted within each research theme, and outline areas in need of further investigation.

Morality indicates what is the “right” and “wrong” way to behave, for instance, that one should be fair and not unfair to others ( Haidt & Kesebir, 2010 ). This is considered of interest to explain the social behavior of individuals living together in groups ( Gert, 1988 ). Results from animal studies (e.g., de Waal, 1996 ) or insights into universal justice principles (e.g., Greenberg & Cropanzano, 2001 ) do not necessarily help us to address moral behavior in modern societies. This also requires the reconciliation of people who endorse different political orientations ( Haidt & Graham, 2007 ) or adhere to different religions ( Harvey & Callan, 2014 ). The observation that “good people can do bad things” further suggests that we should look beyond the causes of individual deviance or delinquency to understand moral behavior. In our analysis, we consider key explanatory principles emerging from prominent theoretical approaches to capture important features characterizing human morality ( Tomasello & Vaish, 2013 ). These relate to (a) the social anchoring of right and wrong, (b) conceptions of the moral self, and (c) the interplay between thoughts and experiences. We argue that these three key principles explain the interest of so many researchers in the topic of morality and examine whether and how these are addressed in empirical research available to date.

Through an electronic literature search (using Web of Science [WoS]) and manual selection of relevant entries, we collected empirical publications that contained an empirical measure and/or manipulation that was characterized by the authors as relevant to “morality.” With this procedure, we found 1,278 papers published from 1940 through 2017 that report research addressing morality. Notwithstanding the enormous research interest visible in empirical publications on morality, a comprehensive overview of this literature is lacking. In fact, the review paper on morality that was most frequently cited in our set was published more than 35 years ago ( Blasi, 1980 ). As it stands, separate strands of research seem to be driven by different questions and empirical approaches that do not connect to a common approach or research agenda. This makes it difficult to draw summary conclusions, to integrate different sets of findings, or to chart important avenues for future research.

To organize and understand how results from empirical studies relate to each other, we identify the relations that are implicitly seen to connect different research questions. The rationales provided to study specific issues commonly refer to the psychological antecedents and implications of moral behavior and thus are seen to capture “the psychology of morality.” By content-analyzing the study reports provided, we classify the studies included in this review into five groups of thematic research questions and characterize the empirical approaches typically used in studies addressing each of these themes. With the help of bibliometric techniques, we then quantify emerging trends and consider how different clusters of study approaches relate to questions in each of the research themes examined. This allows us to clarify the theoretical conclusions that can be drawn from empirical work so far and to identify less examined issues in need of further study.

Morality and Social Order

Moral principles indicate what is a “good,” “virtuous,” “just,” “right,” or “ethical” way for humans to behave ( Haidt, 2012 ; Haidt & Kesebir, 2010 ; Turiel, 2006 ). Moral guidelines (“do no harm”) can induce individuals to display behavior that has no obvious instrumental use or no direct value for them, for instance, when they show empathy, fairness, or altruism toward others. Moral rules—and sanctions for those who transgress them—are used by individuals living together in social communities, for instance, to make them refrain from selfish behavior and to prevent them from lying, cheating, or stealing from others ( Ellemers, 2017 ; Ellemers & Van den Bos, 2012 ; Ellemers & Van der Toorn, 2015 ).

The role of morality in the maintenance of social order is recognized by scholars from different disciplines. Biologists and evolutionary scientists have documented examples of selfless and empathic behaviors observed in communities of animals living together, considering these as relevant origins of human morality (e.g., de Waal, 1996 ). The main focus of this work is on displays of fairness, empathy, or altruism in face-to-face groups, where individuals all know and depend on each other. In the analysis provided by Tomasello and Vaish (2013) , this would be considered the “first tier” of morality, where individuals can observe and reciprocate the treatment they receive from others to elicit and reward cooperative and empathic behaviors that help to protect individual and group survival.

Philosophers, legal scholars, and political scientists have addressed more abstract moral principles that can be used to regulate and govern the interactions of individuals in larger and more complex societies (e.g., Haidt, 2012 ; Mill 1861/1962 ). Here, the nature of cooperative or empathic behavior is much more symbolic as it depends less on direct exchanges between specific individuals, but taps into more abstract and ambiguous concepts such as “the greater good.” Scholarly efforts in this area have considered how specific behaviors might (not) be in line with different moral principles and which guidelines and procedures might institutionalize social order according to such principles (e.g., Churchland, 2011 ; Morris, 1997 ). These approaches tap into what Tomasello and Vaish (2013) consider the “second tier” of morality, which emphasizes the social signaling functions of moral behavior and distinguishes human from animal morality (see also Ellemers, 2018 ). At this level, behavioral guidelines that have lost their immediate survival value in modern societies (such as specific dress codes or dietary restrictions) may nevertheless come to be seen as prescribing essential behavior that is morally “right.” Specific behaviors can acquire this symbolic moral value to the extent that they define how individuals typically mark their religious identity, communicate respect for authority, or secure group belonging for those adhering to them ( Tomasello & Vaish, 2013 ). Moral judgments that function to maintain social order in this way rely on complex explanations and require verbal exchanges to communicate the moral overtones of behavioral guidelines. Language-driven interpretations and attributions are needed to capture symbolic meanings and inferred intentions that are not self-evident in behavioral displays or outwardly visible indicators of emotions ( Ellemers, 2018 ; Kagan, 2018 ).

The interest of psychologists in moral behavior as a factor in maintaining social order has long been driven by developmental questions (how do children acquire the ability to do this, for example, Kohlberg, 1969 ) and clinical implications (what are origins of social deviance and delinquency, for example, Rest, 1986 ). Jonathan Haidt’s (2001) publication, on the role of quick intuition versus deliberate reflection in distinguishing between right and wrong, marked a turning point in the interest of psychologists in these issues. The consideration of specific psychological mechanisms involved in moral reasoning prompted many psychological researchers to engage with this area of inquiry. This development also facilitated the connection of psychological theory to neurobiological mechanisms and inspired attempts to empirically examine underlying processes at this level—for instance, by using functional magnetic resonance imaging (fMRI) measures to monitor the brain activity of individuals confronted with moral dilemmas ( Greene, 2013 ; Greene, Sommerville, Nystrom, Darley, & Cohen, 2001 ).

Below, we will consider influential approaches that have advanced the understanding of human morality in social psychology, organizing them according to their main explanatory focus. These characterize the “second tier” ( Tomasello & Vaish, 2013 ) implications of morality that go beyond more basic displays of empathy and altruism observed in animal studies that form the root of biological and evolutionary explanations. From the theoretical perspectives currently available, we extract three key principles that capture the essence of human morality.

Social Anchoring of Right and Wrong

The first principle refers to the social implications of judgments about right and wrong. This has been emphasized as a defining characteristic of morality in different theoretical perspectives. For instance, Skitka (2010) and colleagues have convincingly argued that beliefs about what is morally right or wrong are unlike other attitudes or convictions ( Mullen & Skitka, 2006 ; Skitka, Bauman, & Sargis, 2005 ; Skitka & Mullen, 2002 ). Instead, moral convictions are seen as compelling mandates, indicating what everyone “ought” to or “should” do. This has important social implications, as people also expect others to follow these behavioral guidelines. They are emotionally affected and distressed when this turns out not to be the case, find it difficult to tolerate or resolve such differences, and may even resort to violence against those who challenge their views ( Skitka & Mullen, 2002 ).

This socially defined nature of moral guidelines is explicitly acknowledged in several theoretical perspectives on moral behavior. The Theory of Planned Behavior (e.g., Ajzen, 1991 ) offers a framework that clearly specifies how behavioral intentions are determined in an interplay of individual dispositions and social norms held by self-relevant others ( Ajzen & Fishbein, 1974 ; Fishbein & Ajzen, 1974 ). For instance, research based on this perspective has been used to demonstrate that the adoption of moral behaviors, such as expressing care for the environment, can be enhanced when relevant others think this is important ( Kaiser & Scheuthle, 2003 ).

In a similar vein, Haidt (2001) argued that judgments of what are morally good versus bad behaviors or character traits are specified in relation to culturally defined virtues. This allows shared ideas about right and wrong to vary, depending on the cultural, religious, or political context in which this is defined ( Giner-Sorolla, 2012 ; Haidt & Graham, 2007 ; Haidt & Kesebir, 2010 ; Rai & Fiske, 2011 ). Haidt (2001) accordingly specifies that moral intuitions are developed through implicit learning of peer group norms and cultural socialization. This position is supported by empirical evidence showing how moral behavior plays out in groups ( Graham, 2013 ; Graham & Haidt, 2010 ; Janoff-Bulman & Carnes, 2013 ). This work documents the different principles that (groups of) people use in their moral reasoning ( Haidt, 2012 ). By connecting judgments about right and wrong to people’s group affiliations and social identities, this perspective clarifies why different religious, political, or social groups sometimes disagree on what is moral and find it difficult to understand the other position ( Greene, 2013 ; Haidt & Graham, 2007 ).

We argue that all these notions point to the socially defined and identity-affirming properties of moral guidelines and moral behaviors. Conceptions of right and wrong reflect the values that people share with important others and are anchored in the social groups to which they (hope to) belong ( Ellemers, 2017 ; Ellemers & Van den Bos, 2012 ; Ellemers & Van der Toorn, 2015 ; Leach, Bilali, & Pagliaro, 2015 ). This also implies that there is no inherent moral value in specific actions or overt displays, for instance, of empathy or helping. Instead, the same behaviors can acquire different moral meanings, depending on the social context in which they are displayed and the relations between actors and targets involved in this context ( Blasi, 1980 ; Gray, Young, & Waytz, 2012 ; Kagan, 2018 ; Reeder & Spores, 1983 ).

A first question to be answered when reviewing the empirical literature, therefore, is whether and how the socially shared and identity-relevant nature of moral guidelines, central to key theoretical approaches, is addressed in the studies conducted to examine human morality.

Conceptions of the Moral Self

A second principle that is needed to understand human morality—and expands evolutionary and biological approaches—is rooted in the explicit self-awareness and autobiographical narratives that characterize human self-consciousness, and moral self-views in particular ( Hofmann, Wisneski, Brandt, & Skitka, 2014 ).

Because of the far-reaching implications of moral failures, people are highly motivated to protect their self-views of being a moral person ( Pagliaro, Ellemers, Barreto, & Di Cesare, 2016 ; Van Nunspeet, Derks, Ellemers, & Nieuwenhuis, 2015 ). They try to escape self-condemnation, even when they fail to live up to their own moral standards. Different strategies have been identified that allow individuals to disengage their self-views from morally questionable actions ( Bandura, 1999 ; Bandura, Barbaranelli, Caprara, & Pastorelli, 1996 ; Mazar, Amir, & Ariely, 2008 ). The impact of moral lapses or moral transgressions on one’s self-image can be averted by redefining one’s behavior, averting responsibility for what happened, disregarding the impact on others, or excluding others from the right to moral treatment, to name just a few possibilities.

A key point to note here is that such attempts to protect moral self-views are not only driven by the external image people wish to portray toward others. Importantly, the conviction that one qualifies as a moral person also matters for internalized conceptions of the moral self ( Aquino & Reed, 2002 ; Reed & Aquino, 2003 ). This can prompt people, for instance, to forget moral rules they did not adhere to ( Shu & Gino, 2012 ), to fail to recall their moral transgressions ( Mulder & Aquino, 2013 ; Tenbrunsel, Diekmann, Wade-Benzoni, & Bazerman, 2010 ), or to disregard others whose behavior seems morally superior ( Jordan & Monin, 2008 ).

As a result, the strong desire to think of oneself as a moral person not only enhances people’s efforts to display moral behavior ( Ellemers, 2018 ; Van Nunspeet, Ellemers, & Derks, 2015 ). Instead, sadly, it can also prompt individuals to engage in symbolic acts to distance themselves from moral transgressions ( Zhong & Liljenquist, 2006 ) or even makes them relax their behavioral standards once they have demonstrated their moral intentions ( Monin & Miller, 2001 ). Thus, tendencies for self-reflection, self-consistency, and self-justification are both affected by and guide moral behavior, prompting people to adjust their moral reasoning as well as their judgments of others and to endorse moral arguments and explanations that help justify their own past behavior and affirm their worldviews ( Haidt, 2001 ).

A second important question to consider when reviewing the empirical literature on morality, thus, is whether and how studies take into account these self-reflective mechanisms in the development of people’s moral self-views. From a theoretical perspective, it is therefore relevant to examine antecedents and correlates of tendencies to engage in self-defensive and self-justifying responses. From an empirical perspective, it also implies that it is important to consider the possibility that people’s self-reported dispositions and stated intentions may not accurately indicate or predict the moral behavior they display.

The Interplay Between Thoughts and Experiences

A third principle that connects different theoretical perspectives on human morality is the realization that this involves deliberate thoughts and ideals about right and wrong, as well as behavioral realities and emotional experiences people have, for instance, when they consider that important moral guidelines are transgressed by themselves or by others. Traditionally, theoretical approaches in moral psychology were based on the philosophical reasoning that is also reflected in legal and political scholarship on morality. Here, the focus is on general moral principles, abstract ideals, and deliberate decisions that are derived from the consideration of formal rules and their implications ( Kohlberg, 1971 ; Turiel, 2006 ). Over the years, this perspective has begun to shift, starting with the observation made by Blasi (1980 , p. 1) that

Few would disagree that morality ultimately lies in action and that the study of moral development should use action as the final criterion. But also few would limit the moral phenomenon to objectively observable behavior. Moral action is seen, implicitly or explicitly, as complex, imbedded in a variety of feelings, questions, doubts, judgments, and decisions . . . . From this perspective, the study of the relations between moral cognition and moral action is of primary importance.

This perspective became more influential as a result of Haidt’s (2001) introduction of “moral intuition” as a relevant construct. Questions about what comes first, reasoning or intuition, have yielded evidence showing that both are possible (e.g., Feinberg, Willer, Antonenko, & John, 2012 ; Pizarro, Uhlmann, & Bloom, 2003 ; Saltzstein & Kasachkoff, 2004 ). That is, reasoning can inform and shape moral intuition (the classic philosophical notion), but intuitive behaviors can also be justified with post hoc reasoning (Haidt’s position). The important conclusion from this debate thus seems to be that it is the interplay between deliberate thinking and intuitive knowing that shapes moral guidelines ( Haidt, 2001 , 2003 , 2004 ). This points to the importance of behavioral realities and emotional experiences to understand how people reflect on general principles and moral ideals.

A first way in which this has been addressed resonates with the evolutionary survival value of moral guidelines in helping to avoid illness and contamination as sources of physical harm. In this context, it has been argued and shown that nonverbal displays of disgust and physical distancing can emerge as unthinking, embodied responses to morally aversive situations, which may subsequently invite individuals to reason about why similar situations should be avoided in the future (Schnall, Haidt, Clore, & Jordan, 2008; Tapp & Occhipinti, 2016). The social origins of moral guidelines are acknowledged in approaches explaining the role of distress and empathy as implicit cues that can prompt individuals to decide which others are worthy of prosocial behavior (Eisenberg, 2000). In a similar vein, the experience of moral anger and outrage at others who violate important guidelines is seen as indicating which guidelines are morally “sacred” (Tetlock, 2003). Experiences of disgust, empathy, and outrage all indicate relatively basic affective states that are marked by nonverbal displays and have direct implications for subsequent actions (Ekman, 1989; Ekman, 1992).

In addition, theoretical developments in moral psychology have identified the experience of guilt and shame as characteristic “moral” emotions. Compared with “primary” affective responses, these “secondary” emotions indicate more complex, self-conscious states that are not immediately visible in nonverbal displays (Tangney & Dearing, 2002; Tangney, Stuewig, & Mashek, 2007). These moral emotions are seen to distinguish humans from most animals. Indeed, affording to others the perceived ability to experience such emotions communicates the degree to which we consider them to be human and worthy of moral treatment (Haslam & Loughnan, 2014). The nature of guilt and shame as “self-condemning” moral emotions indicates that their function is to inform self-views and guide behavioral adaptations rather than to communicate one’s state to others.

At the same time, it has been noted that feelings of guilt and shame can be so overwhelming that they raise self-defensive responses that stand in the way of behavioral improvement ( Giner-Sorolla, 2012 ). This can occur at the individual level as well as the group level, where the experience of “collective guilt” has been found to prevent intergroup reconciliation attempts ( Branscombe & Doosje, 2004 ). Accordingly, it has been noted that the relations between the experience of guilt and shame as moral emotions and their behavioral implications depend very much on further appraisals relating to the likelihood of social rejection and self-improvement that guide self-forgiveness ( Leach, 2017 ).

Regardless of which emotions they focus on, these theoretical perspectives all emphasize that moral concerns and moral decisions arise from situational realities, characterized by people’s experiences and the (moral) emotions these evoke. A third question emerging from theoretical accounts aiming to understand human morality, therefore, is whether and how the interplay between the thoughts people have about moral ideals (captured in principles, judgments, reasoning), on one hand, and the realities they experience (embodied behaviors, emotions), on the other, is explicitly addressed in empirical studies.

Empirical Approaches

Now that we have identified that socially shared, self-reflective, and experiential mechanisms represent three key principles that are seen as essential for the understanding of human morality in theory, it is possible to explore how these are reflected in the empirical work available. An initial answer to this question can be found by considering which types of research paradigms and classes of measures are frequently used in studies on morality. Do study designs typically take into account the way different social norms can shape individual moral behavior? Do instruments that are developed to assess people’s morality incorporate the notion that explicit self-reports do not necessarily capture their actual moral responses? And do the responses that are assessed allow researchers to connect the moral thoughts people have with their actual experiences?

We examined this by reviewing the empirical literature. Through an electronic literature search, we collected empirical studies reporting on manipulations and/or empirical measures that the authors of these studies identified as being relevant to “morality.” In a first wave of data collection (see the “Method” section for further details), we extracted 419 empirical studies on morality that were published from 2000 through 2013. These were manually processed and content-coded to determine, for each publication, the research question that was asked, the research design that was employed to examine it, and the measures that were used (for details of how this was done, see Ellemers, Van der Toorn, & Paunov, 2017). We distinguished between correlational and experimental designs and assessed which manipulations were used to compare different responses (see Supplementary Table A). We also listed and classified “named” scales and measures that were employed in these studies (see Table 1) and additionally indicated which types of responses were captured: moral judgments, emotional and behavioral indicators, or standardized scales (see Supplementary Table B).

Table 1. Four Types of Scales Used to Examine Morality in 91 Publications, With N Indicating the Number of Publications Using a Scale Type. Within Each Category, Scales Are Listed From Most Frequently Used (First) to Least Frequently Used (Last).

Hypothetical moral dilemmas (N = 27)
  • Defining Issues Test (DIT)
  • Prosocial Moral Reasoning Measure (PROM)
  • Accounting Specific Defining Issues Test (ADIT)
  • Revised Moral Authority Scale (MAS-R)
  • Moral/Conventional Distinction Task
  • Moral Emotions Task
  • Moral Judgment Test (MJT)

Self-reported traits/behaviors of self/other (N = 32)
  • Moral Identity Scale
  • HEXACO-PI
  • Implicit Association Task (IAT)
  • Cognitive Reflection Test (CRT)
  • Index of Moral Behaviors
  • Josephson Institute Report Card on the Ethics of American Youth
  • Moral Entrepreneurial Personality (MEP)
  • Moral Functioning Model
  • Tennessee Self-Concept Scale
  • Washington Sentence Completion Test of Ego Development
  • Moral Exemplarity

Endorsement of abstract moral rules (N = 31)
  • Moral Foundations Questionnaire (MFQ)
  • Schwartz Value Survey (SVS)
  • Ethics Position Questionnaire (EPQ)
  • Integrity Scale
  • Moral Motives Scale (MMS)
  • Identification With All Humanity Scale
  • Moral Character
  • Value Survey Module
  • Community Autonomy Divinity Scale (CADS)
  • Moral Foundations Dictionary
  • Moral Disengagement Scale

Position on specific moral issues (N = 20)
  • Sensitivity to Injustice
  • Sociomoral Reflection Measure–Short Form (SRM-SF)
  • Beliefs About Morality (BAM)
  • Dubious Behaviors
  • Morally Debatable Behaviors Scale (MDBS)
  • Moral Disengagement Tool
  • Self-reported Inappropriate Negotiation Strategies Scale (SINS scale)
  • TRIM-18R

Note. A single publication can contain multiple scales.

Are Social Influences Taken Into Account?

An overview of the research designs that were coded in this way (see Supplementary Table A , final column) first reveals that a substantial proportion of these studies (185 of 419 studies examined; 44%) used correlational designs to examine, for instance, which traits people associate with particular targets or how self-reported beliefs, convictions, principles, or norms relate to self-stated intentions. Of the studies using an experimental design, a substantial number (91 studies; about 22%) examined the impact of some situational prime intended to activate specific goals, rules, or experiences. Furthermore, a substantial number of studies examined the impact of manipulating specific target characteristics (51 studies; 12%) or moral concerns (51 studies; 12%). However, experimental studies examining the impact of specific social norms (31 studies; 7%) or a group-based participant identity were relatively rare (four studies; less than 1%). This suggests that the socially shared nature of moral guidelines is not systematically addressed in this body of research.
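To make the breakdown above easier to check, the reported percentages can be recomputed from the raw counts. The snippet below is only an illustrative sketch: the counts are taken from the text, and the dictionary labels are our own shorthand rather than the official coding categories.

```python
# Recompute the design-type percentages reported for the 419 studies
# coded in the first wave (raw counts as stated in the text).
design_counts = {
    "correlational": 185,
    "situational prime": 91,
    "target characteristics": 51,
    "moral concerns": 51,
    "social norms": 31,
    "group-based identity": 4,
}
total = 419  # studies published 2000-2013 that met the inclusion criteria

shares = {label: round(100 * n / total) for label, n in design_counts.items()}
# shares["correlational"] -> 44, matching the 44% reported above
print(shares)
```

Note that the four studies manipulating a group-based participant identity amount to just under 1% of the total before rounding, consistent with the "less than 1%" reported.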

Do Standard Instruments Rely on Self-Reports?

The types of responses typically examined in these studies can be captured by looking in more detail at the nature of the scales, tests, tasks, and questionnaires that were used. Our manual content analysis yielded 38 different scales, tests, tasks, and questionnaires that were used in 91 of the 419 studies examined (see Table 1). We clustered these according to their nature and intent, which yielded four distinct categories. We found seven different measures (used in 27 studies; 30%) that rely on hypothetical moral dilemmas, where people have to weigh different moral principles against each other (e.g., stealing from one person to help another person) and indicate what should be done in these situations. We found 11 additional measures (used in 12 studies; 13%) consisting of lists of traits or behaviors (e.g., honesty, helpfulness) that can be used to indicate the general character/personality type of the self or a known other (friend, family member). Here, we included measures such as the HEXACO Personality Inventory (HEXACO-PI; Lee & Ashton, 2004) and the Moral Identity Scale (Aquino & Reed, 2002). Third, we found 11 different measures (used in 31 studies; 34%) that assess the endorsement of abstract moral rules (e.g., “do no harm”). A representative example is the Moral Foundations Questionnaire (Graham et al., 2011), which distinguishes between statements indicating concern for “individualizing” principles (harm/care, fairness) and “binding” principles (loyalty, authority, purity). Fourth, we found nine different measures (used in 20 studies; 22%) aiming to capture people’s position on specific moral issues (e.g., “it is important to tell the truth”; “it is ok for employees to take home a few office supplies”). We also included in this category different lists of behaviors (for instance, the Morally Debatable Behaviors Scale [MDBS]; Katz, Santman, & Lonero, 1994) that focus on the endorsement of behaviors considered relevant to morality (e.g., corruption, violence, discrimination, or misrepresentation).

Importantly, all four clusters of measures rely on self-reported preferences and stated character traits or intentions, describing overall tendencies and general behavioral guidelines. However, it is less evident that such measures can be used to understand how people will actually behave in real-life situations, where they may have to choose which of several competing guidelines to apply, or where it is unclear how the general principles they endorse translate into a specific act or decision in that context.

Are “Thoughts” Connected to “Experiences”?

Our manual coding of the different dependent measures that were used (see Supplementary Table B, final column) reveals that the majority of measures aimed to capture either general moral principles that people endorse (72 of 445 measures coded; 16%) or their moral evaluations of specific individuals, groups, or companies (72 measures; 16%). In addition, a substantial proportion of studies examined people’s positions on specific issues, such as abortion, gossiping, or specific political convictions (61 measures; 14%). Substantial numbers of measures assessed the perceived implications of one’s moral principles (48 measures; 11%) or the willingness to be cooperative or truthful in hypothetical situations (44 measures; 10%). Notably, only a relatively small proportion of measures actually tried to capture cooperative or cheating behavior in experimental or real-life situations (51 measures; 12%). Similarly, empathy with others and moral emotions such as guilt, shame, and disgust were assessed in 15% (67) of the measures that were coded. Thus, the majority of the measures used focus on “thoughts” relating to morality, in that they capture abstract principles, overall judgments, or hypothetical intentions, while much less attention has been devoted to examining the behavioral displays or emotions characterizing the actual “experiences” people have in relation to these “thoughts.”

Thus, this initial examination of the empirical evidence available in studies on morality published from 2000 through 2013 suggests that the three key principles we have extracted from relevant theoretical perspectives on morality are not systematically reflected in the research that has been carried out. Instead, it seems that “moral tendencies” are typically defined independently of the social context, specific norms, or the identity of others who may be affected by the (im)moral behavior. Furthermore, general and self-reported tendencies or preferences are often taken at face value, without testing them against actual behavioral displays or emotional experiences. Finally, empirical studies have prioritized the examination of all kinds of “thoughts” relating to morality over attempts to connect these to actual moral “experiences.” In short, this initial examination of the literature reveals a mismatch between the empirical approach that is typically taken and leading theoretical perspectives, which emphasize the socially shared nature of moral guidelines, the self-justifying nature of moral reasoning, and the importance of emotional experiences.

As others have noted before us (e.g., Abend, 2013 ), this initial assessment of studies carried out suggests that the empirical breadth of past morality research is constrained in that some approaches appear to be favored at the expense of others. Studies often rely on highly artificial paradigms or scenarios ( Chadwick, Bromgard, Bromgard, & Trafimow, 2006 ; Eriksson, Strimling, Andersson, & Lindholm, 2017 ). They examine hypothetical reasoning or focus on a few specific decisions or actions that may rarely present themselves in everyday life, such as deciding about the course of a runaway train ( Bauman, McGraw, Bartels, & Warren, 2014 ; Graham, 2014 ) or eating one’s dog ( Haidt, Koller, & Dias, 1993 ; Mooijman & Van Dijk, 2015 ). This does not capture the wide variety of contexts in which moral choices have to be made (for instance, whether or not to sell a subprime mortgage to achieve individual performance targets), and it is not evident whether and how this limits the conclusions that can be drawn from such work (for similar critiques, see Crone & Laham, 2017 ; Graham, 2014 ; Hofmann et al., 2014 ; Lovett, Jordan, & Wiltermuth, 2015 ).

Understanding Moral Behavior

Our conclusion so far is that researchers in social psychology have displayed a considerable interest in examining topics relating to morality. However, it is not self-evident how the multitude of research topics and issues that are addressed in this literature can be organized. This is why we set out to organize the available research in this area into a limited set of meaningful categories by content-analyzing the publications we found to identify studies examining similar research questions. In the “Method” section, we provide a detailed explanation of the procedure and criteria we used to develop our coding scheme and to classify studies as relating to one of five research themes we extracted in this way. We now consider the nature of the research questions addressed within each of these themes and the rationales typically provided to study them, to specify how different research questions that are examined are seen to relate to each other. We visualize these hypothesized relations in Figure 1 .


Figure 1. The psychology of morality: connections between five research themes.

Researchers in this literature commonly cite the ambition to predict, explain, and influence Moral Behavior as their focal reason for examining some aspect of morality (see also Ellemers, 2017). We therefore place research questions relating to this theme at the center of Figure 1. Questions about behavioral displays that convey the moral tendencies of individuals or groups fall under this research theme. These include research questions that address implicit indicators of moral preferences or cooperative choices, as well as more deliberate displays of helping, cheating, or standing up for one’s principles.

Many researchers aim to address the likely antecedents of such moral behaviors, which may be located in the individual as well as in the (social) environment. Here, we include research questions relating to Moral Reasoning, which can reflect the application of abstract moral principles as well as specific life experiences or religious and political identities that people use to locate themselves in the world (e.g., Cushman, 2013). This work addresses the moral standards people can adhere to, for instance, in the decision guidelines they adopt or in the way they respond to moral dilemmas or evaluate specific scenarios.

We classify research questions as referring to Moral Judgments when these address the dispositions and behaviors of other individuals, groups, or companies in terms of their morality. These are considered as relevant indicators of the reasons why and conditions under which people are likely to display moral behavior. Research questions addressed under this theme consider the characteristics and actions of other individuals and groups as examples of behavior to be followed or avoided or as a source of information to extract social norms and guidelines for one’s own behavior (e.g., Weiner, Osborne, & Rudolph, 2011 ).

We distinguish between these two clusters to be able to separate questions addressing the process of moral reasoning (to infer relevant decision rules) from questions relating to the outcome in the form of moral judgments (of the actions and character of others). However, the connecting arrow in Figure 1 indicates that these two types of research questions are often discussed in relation to each other, in line with Haidt’s (2001) reasoning that these are interrelated mechanisms and that moral decision rules can prescribe how certain individuals should be judged, just as person judgments can determine which decision rules are relevant in interacting with them.

We proceed by considering research questions that relate to the psychological implications of moral behavior. The immediate affective implications of one’s behavior, and how these reveal one’s moral reasoning as well as one’s judgments of others, are addressed in questions relating to Moral Emotions (Sheikh, 2014). These are the emotional responses that are seen to characterize moral situations and are commonly used to diagnose the moral implications of different events. Questions we classified under this research theme typically address feelings of guilt and shame that people experience with regard to their own behavior, or outrage and disgust in response to the moral transgressions of others.

Finally, we consider research questions addressing self-reflective and self-justifying tendencies associated with moral behavior. Studies aiming to investigate the moral virtue people afford to themselves and the groups they belong to, and the mechanisms they use for moral self-protection, are relevant for Moral Self-Views . Under this research theme, we subsume research questions that address the mechanisms people use to maintain self-consistency and think of themselves as moral persons, even when they realize that their behavior is not in line with their moral principles (see also Bandura, 1999 ).

Even though research questions often consider moral emotions and moral self-views as outcomes of moral behaviors and theorize about the factors preceding these behaviors, this does not imply that emotions and self-views are seen as the final end-states in this process. Instead, many publications refer to these mechanisms of interest as being iterative and assume that prior behaviors, emotions, and self-views also define the feedback cycles that help shape and develop subsequent reasoning and judgments of (self-relevant) others, which are important for future behavior. The feedback arrows in Figure 1 indicate this.

Our main goal in specifying how different types of research questions can be organized according to their thematic focus in this way is to offer a structure that can help monitor and compare the empirical approaches that are typically used to advance existing insights into different areas of interest. The relations depicted in Figure 1 represent the reasoning commonly provided to motivate the interest in different types of research questions. The location of the different themes in this figure clarifies how these are commonly seen to connect to each other and visualizes the (sometimes implicit) assumptions made about the way findings from different studies might be combined and should lead to cumulative insights. In the sections that follow, we will examine the empirical approaches used to address each of these clusters of research questions to specify the ways in which results from different types of studies actually complement each other and to identify remaining gaps in the empirical literature.

A Functionalist Perspective

An important feature of our approach is that we do not delineate research questions in terms of the specific moral concerns, guidelines, principles, or behaviors they address. Instead, we take a functionalist perspective in considering which mechanisms relevant to people’s thoughts and experiences relating to morality are examined to draw together the empirical evidence that is available. For each of the research themes described above, we therefore consider the empirical approaches that have been taken by identifying the nature of relevant functions or mechanisms that have been examined. This will help document the evidence that is available to support the notion that morality matters for the way people think about themselves, interact with others, live and work together in groups, and relate to other groups in society. In considering the different functions morality may have, we distinguish between four levels at which mechanisms in social psychology are generally studied (see also Ellemers, 2017 ; Ellemers & Van den Bos, 2012 ).

Intrapersonal Mechanisms

All the ways in which people consider, think, and reason by themselves to determine what is morally right refer to intrapersonal mechanisms. Even if these considerations are elicited by social norms or reflect the behavior observed in others, it is important to assess the extent to which they emerge as guiding principles for individuals to be used in their further reasoning, for their judgments of the self and others, for their behavioral displays, or for the emotions they experience. Thus, such intrapersonal mechanisms are relevant for questions relating to each of the five research themes we examine.

Interpersonal Mechanisms

The ways in which people relate to others, respond to their moral behaviors, and connect with them tap into interpersonal mechanisms. Again, we note that such mechanisms are relevant for research questions in all five research themes, as relations with others can inform the way people reason about morality, the way they judge other individuals or groups, the way they behave, as well as the emotions they experience and the self-views they hold.

Intragroup Mechanisms

The role of moral concerns in defining group norms, the tendency of individuals to conform to such norms, and their resulting inclusion versus exclusion from the group all indicate intragroup mechanisms relevant to morality. Considering how groups influence individuals is relevant for our understanding of the way people reason about morality and the way they judge others. It also helps us understand the moral behavior individuals are likely to display (for instance, in public vs. private situations), the emotions they experience in response to the transgression of specific moral rules by themselves or different others, and the self-views they develop about their morality.

Intergroup Mechanisms

The tendency for social groups to endorse specific moral guidelines as a way to define their distinct identity, disagreements between groups about the nature or implications of important values, and moral concerns that stem from conflicts between groups in society all refer to intergroup mechanisms relevant to morality. Here too, the examination of such mechanisms is relevant to research questions in each of the five research themes we distinguish. Such mechanisms may inform the tendency to interpret the prescription to be “fair” differently depending on the identity of the recipients of that fairness, which helps explain people’s moral reasoning and the way they judge the morality of others. Intergroup relations may also help explain the tendency to behave differently toward members of different groups, as well as the emotions and self-views relating to such behaviors.

In sum, we argue that each of these four levels of analysis offers potentially relevant approaches to understand the mechanisms that can shape people’s moral concerns and their judgments of others. Mechanisms at all four levels can also affect moral behavior and have important implications for the emotions people experience and the self-views they hold. Reviewing whether and how empirical research has addressed relevant mechanisms at these four levels thus offers a better understanding of how morality operates in the social regulation of individual behavior (see also Carnes, Lickel, & Janoff-Bulman, 2015 ; Ellemers, 2017 ; Janoff-Bulman & Carnes, 2013 ).

Questions Examined

The functionalist perspective we have outlined above is central to how we conceptualize morality in this review. We built a database containing research that is relevant for this review by including all studies in which the authors indicated their research design or measures to speak to issues relating to morality. Thus, we do not limit ourselves to the examination of specific guidelines or behaviors as representing key features of morality, but consider the broad range of situations that can be interpreted in terms of their moral implications (see also Blasi, 1980 ). We argue that many different principles or behaviors can acquire moral overtones, and our main interest is to examine what happens when these are considered as indicating the morally “right” versus “wrong” way to behave in a particular situation. We think this latter aspect reflects the essence of theoretical accounts that have emphasized the ways in which morality and moral judgments regulate the behavior of individuals living in groups ( Rai & Fiske, 2011 ; Tooby & Cosmides, 2010 ). As indicated above, this implies that—given the abstract nature of universal moral values—the specific behavior that is seen as moral can shift, depending on the social context ( Haidt & Graham, 2007 ; Haidt & Kesebir, 2010 ; Rai & Fiske, 2011 ), as well as the relevant norms or features that characterize distinct social groups ( Giner-Sorolla, 2012 ; Greene, 2013 ). Shared moral standards go beyond other behavioral norms in that they are used to define whether an individual can be considered a virtuous and “proper” group member, with social exclusion as the ultimate sanction ( Tooby & Cosmides, 2010 ; see also Ellemers & Van den Bos, 2012 ). In the remainder of this review, we will examine the empirical approaches to examining morality in social psychology from this functionalist perspective:

  • Emerging trends: We built a database containing bibliometric characteristics of all studies relevant to our review. This allows us to consider relevant trends in the emergence of published studies, comparing these with general developments in the field of social psychology. We will consider differences in the development of interest in the five types of research questions we distinguish and detail the different mechanisms that are studied to examine questions falling within each of these themes. In this way, we aim to examine the effort researchers have made over the years to understand what they see as the psychological antecedents and implications of moral behavior. We also assess whether and how these emerging efforts have addressed the intrapersonal, interpersonal, intragroup, and intergroup mechanisms relating to morality.
  • Influential views: We will identify which (theoretical) publications external to our database are most frequently cited in the empirical publications included in our database. We see these as seminal approaches that have influenced researchers with an interest in morality. We also assess which empirical publications in our database receive the most cross-citations from other researchers on morality and are frequently cited in the broader literature. This will help understand which theoretical perspectives and empirical approaches have been most influential in further developing this area of research.
  • Types of studies: We will use standardized bibliometric techniques to identify interrelated clusters of research and characterize the way these clusters differ from each other. We consider the different types of research questions asked in each of the themes we distinguish and relate them to clusters of studies carried out to specify the empirical approaches that have typically been adopted to address questions within each research theme. This elucidates which conclusions can be drawn from the studies that are available to date and how these contribute to broader insights on the psychology of morality.

By considering the empirical literature in this way, we seek to determine whether and how relevant theoretical perspectives on human morality and the types of research questions they raise are reflected in empirical studies carried out. In doing this, we will assess to what extent this work addresses the role of shared identities in the development of moral guidelines, takes into account the limits of self-reported individual dispositions as proxies for moral behaviors, and considers the interplay between moral principles, guidelines, and convictions as “thoughts,” on one hand, and actual behaviors and emotions as “experiences,” on the other.

Data Collection Procedure

The data collection was carried out entirely online using the Web of Science (WoS) engine. Information was derived from three databases: the Science Citation Index Expanded (SCI-EXPANDED, 1945–present), the Social Sciences Citation Index (SSCI, 1956–present), and the Arts & Humanities Citation Index (A&HCI, 1975–present). These database choices were determined by user account access. The category criterion was set to “Psychology Social.” The search query was “moral*,” so the results listed all empirical and review articles featuring the word stem “moral” within the source’s title, keywords, or abstract.

The publications initially found in this way were manually screened to determine whether they should be included in our review of empirical studies on morality. Criteria to include a publication in the set accordingly were (a) that it was an English-language publication, (b) that it had been published in a peer-reviewed journal, (c) that it contained an original report of qualitative or quantitative empirical data (either in a correlational or an experimental design), and (d) that it contained a manipulation or a measure that the authors indicated as relevant to morality.
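Applied to each candidate record, the four criteria above amount to a simple conjunctive filter. The sketch below illustrates this logic; the record fields and their names are hypothetical (the actual screening described here was carried out manually), but the conditions mirror criteria (a) through (d).

```python
def meets_inclusion_criteria(record):
    # Criteria (a)-(d) from the text; field names are hypothetical.
    return (
        record["language"] == "English"                  # (a) English-language publication
        and record["peer_reviewed_journal"]              # (b) published in a peer-reviewed journal
        and record["reports_original_data"]              # (c) original qualitative/quantitative data
        and record["morality_measure_or_manipulation"]   # (d) manipulation/measure tied to morality
    )

candidates = [
    {"language": "English", "peer_reviewed_journal": True,
     "reports_original_data": True, "morality_measure_or_manipulation": True},
    {"language": "English", "peer_reviewed_journal": True,
     "reports_original_data": False,  # e.g., a review or theory paper: excluded
     "morality_measure_or_manipulation": True},
]

included = [r for r in candidates if meets_inclusion_criteria(r)]
```

Records failing any single criterion drop out, which is why review and theory papers without original data were removed first in each wave.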

The complete set of studies examined here was collected in three waves (see Appendix 1, in Supplementary materials ). Each wave consisted of an electronic search using the procedure and inclusion criteria detailed above. The publications that came up in the electronic search were first screened to remove any review or theory papers that did not report original data. The empirical publications that were retained were assessed for relevance to our research question by checking whether the study or studies reported actually included a manipulation or measure that was identified by the authors as relating to morality.

The initial search was done in 2014 and included all publications that had appeared in 2000 through 2013, of which 419 met our inclusion criteria. A second wave of data collection was carried out in 2016 and 2017 to add two more years of empirical publications that had appeared in 2014 and 2015. This yielded 221 additional publications that were included in the set. The data collection was completed with a third wave of data collection conducted in 2018. Here, the same procedure was used to add 275 empirical studies that had been published in 2016 and 2017. In this third wave of data collection, we also searched for publications that had appeared before 2000 and were listed in WoS. This yielded 372 additional studies published from 1940 through 1999. Together, these three waves of data collection yielded a total number of 1,278 studies on morality published from 1940 through 2017 that we collected for this review (see Appendix 2, in Supplementary materials ).

We note that complete records of main publication details are only available from 1981 onward, and complete full-text records of publications in WoS are only available from 1996 onward. This is why statistical trends analyses will only be conducted for studies published from 1981 onward, and full bibliometric analyses can only be carried out for the main body of 989 studies on morality published from 1996 through 2017 for which complete publication details are digitally available.

Data Coding

Coding procedure and interrater reliability.

During the first wave of data collection, a coding scheme was jointly developed by the first two authors. Different coders used this scheme to code groups of publications in different waves of data collection. Classifications were decided by determining the main prediction examined and inspecting the study design and measures that were used. In each phase of data coding, ambiguous cases were flagged, and publication details were further examined and discussed with other coders to reach a joint decision on the most appropriate classification. Each time this occurred, the coding scheme was further specified.

After completion of the third wave of data collection, interrater reliability was determined for the full database included in this review. The codes assigned by five different coders in the first and second waves of data collection, and by six additional coders in the third wave of data collection, were checked by the latter group of six coders. An online random number generator was used to randomly select 20 entries from each of six subsets of the years examined (1940 through 2017), each containing about 200 publications. This resulted in 120 entries (roughly 10% of all publications included) sampled to assess interrater reliability. Each group of 20 entries was then assigned to a second coder and coded in an empty file. Only after completing all 20 entries did the second coder compare their codings with the original codings. The overall interrater agreement was good. For the levels of analysis at which morality was examined, coders were in agreement for 84% of the entries coded. When determining how to classify the main research question under one of the research themes, coders agreed on 84.3% of the entries.
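The sampling and agreement check can be summarized in a short sketch. The subset sizes and seed below are illustrative, and Python’s `random.Random` stands in for the online random number generator that was actually used; percent agreement is simply the share of matching codes.

```python
import random

def sample_reliability_entries(subsets, per_subset=20, seed=42):
    # Draw a fixed-size sample without replacement from each year-based subset.
    rng = random.Random(seed)  # stand-in for the online random number generator
    sample = []
    for subset in subsets:
        sample.extend(rng.sample(subset, per_subset))
    return sample

def percent_agreement(codes_a, codes_b):
    # Simple percent agreement between two coders over the same entries.
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

# Six subsets of ~200 publications each -> 120 sampled entries (~10% of the set).
subsets = [[f"pub_{i}_{j}" for j in range(200)] for i in range(6)]
sample = sample_reliability_entries(subsets)
```

Note that percent agreement is the statistic reported in the text; chance-corrected measures such as Cohen’s kappa would be a stricter alternative.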

Levels of Analysis

For each entry, we inspected the study design and measures that were used to assess the level at which the mechanism under investigation was located. We distinguish four levels which mirror the categories that are commonly used to characterize different types of mechanisms addressed in social psychological theory (e.g., in textbooks): (a) research on intrapersonal mechanisms, which studies how a single individual considers, evaluates, or makes decisions about rules, objects, situations, and courses of action; (b) research on interpersonal mechanisms, which examines how individuals perceive, evaluate, and interact with other individuals; (c) research on intragroup mechanisms, investigating how people perceive, evaluate, and respond to norms or behaviors displayed by other members of the same group, work or sports team, religious community, or organization; and (d) research on intergroup mechanisms, focusing on how people perceive, evaluate, and interact with members of different cultural, ethnic, or national groups. We also include here research that explicitly aims to examine how members of distinct groups differ from each other in how they consider morality.

Interrater agreement was 74% for intrapersonal mechanisms, 83% for interpersonal mechanisms, 92% for intragroup mechanisms, and 88% for intergroup mechanisms.

Research Themes

For each entry, we determined the main goal of the research question that was addressed. In the first wave of data collection, the first two authors listed all the keywords provided by the authors of the studies included and decided how these could be classified into the five research themes we distinguish in our model. We used this as a starting point to develop our coding scheme, in which ambiguities were resolved through deliberation, as specified above. Coders were instructed to choose a single theme that represented the main focus of the research question in each of the entries included (which could contain multiple studies). Cases where coders thought multiple research themes might be relevant were flagged and further studied and discussed with other coders to determine the primary focus of the research question. Interrater agreement was 68% for moral reasoning, 89% for moral behavior, 84% for moral judgment, 87% for moral self-views, and 95% for moral emotions.

Moral reasoning

Here, we included all research questions that try to capture the moral guidelines people endorse. These include questions about what people consider to be morally right, by considering their ideas of what “good” people are generally like, or questions about what guidelines people endorse to indicate what a moral person should do. Some researchers aim to examine which choices people think should be made in hypothetical dilemmas and vignettes, asking about people’s positions on specific issues (e.g., gay adoption, killing bugs for science), or wish to assess which values are guiding principles in their life (e.g., fairness, purity). Under this theme, we also classified research questions aiming to examine how moral choices and decisions may differ depending on specific concerns or situational goals that are activated implicitly (e.g., clean vs. dirty environment) or explicitly (e.g., long-term vs. short-term implications). We note that some of the research questions we included under this theme are labeled by their authors as being about “moral judgment,” as they use this term more broadly than we do. However, in our delineation of the different types of research questions, and in our coding scheme for the five thematic clusters we distinguish, we reserve the term moral judgments for a specific set of research questions, which address the way in which people judge the morality of another individual or group. Research questions investigating people’s judgments about the general morality of a particular decision or course of action, which capture one’s own moral guidelines, fall under the theme of “moral reasoning” in our coding scheme.

Moral judgments

Under this research theme, we classify all research questions addressing ways in which we evaluate the morality of other individuals or groups. We include research questions examining how the general character of specific individuals is evaluated, in terms of perceived closeness of the target to the self or overall positivity/negativity of the target (e.g., in terms of likeability, familiarity, or attractiveness). We also consider under this theme research questions aiming to uncover how people assign moral traits (honesty, etc.) or moral responsibility to the individual for the behavior described (guilty, intentionally inflicting harm, deserving of punishment). Similarly, we include research questions addressing judgments of group targets (existing social groups, companies, communities) in terms of overall positivity/negativity, specific moral traits (e.g., trustworthiness), negative emotions raised, or implicit moral judgments implied in lexical decisions. In this cluster, we also consider research questions addressing the perceived severity of the behaviors described, whether people think these merit punishment, and how they affect the level of empathy versus dehumanization experienced toward the victims of moral transgressions.

Moral behavior

Here, we include research questions addressing self-reported past behavior or behavioral intentions, as well as reports of (un)cooperative behavior in real life (e.g., volunteering, donating money, helping, forgiving, citizenship) or deceitful behavior in experimental contexts (e.g., cheating, lying, stealing, gossiping). We also include questions addressing implicit indicators of moral behavior (e.g., word completion tendencies, speech pattern analysis, handwipe choices). Research questions under this theme consider these behavioral reports as expressing internalized personal norms, convictions, or beliefs, in relation to indicators of “moral atmosphere,” descriptive or injunctive team or group norms, family rules, or moral role models. We also include under this theme research questions that address moral behavior in relation to situational concerns (e.g., moral rule reminders, cognitive depletion) or specific virtues (e.g., care vs. courage).

Moral emotions

This theme includes research questions in which emotions are considered in response to recollections of real-life events, behaviors, and dilemmas, including significant historical or political events. We also include research questions examining whether such emotions (after being evoked with experimental procedures) can induce participants to display morally questionable behavior (e.g., in a computer game, in response to a provocation by a confederate) or when prompted with situational primes (e.g., pleasant or abhorrent pictures, odors, faces, or transgressive scenarios). Research questions addressing emotional responses people experience in relation to morally relevant issues or situations (guilt, shame, outrage, disgust) are also included under this theme.

Moral self-views

We classified under this research theme all research questions that address the way different aspects of people’s self-views relate to each other (e.g., personality characteristics with self-stated inclinations to display moral behavior), as well as research questions addressing the way experimentally induced behavioral primes, reminders of past (individual- or group-level) moral transgressions, or the moral superiority of others relate to people’s self-views. This research theme includes research questions addressing personality inventories or trait lists of moral characteristics (e.g., honesty, fairness), as well as self-stated moral motivations or moral ideals (e.g., do not harm) that participants can either explicitly claim as self-defining or implicitly endorse (as assessed through implicit associations with the self or response times). In addition, we include questions addressing the stated willingness to display moral or immoral behavior (e.g., lie, cheat, help others, donate money or blood), which is also used to indicate the occurrence of moral justifications or moral disengagement to maintain a moral self-view.

Bibliometric Procedures

Temporal trends and impact development.

The data on relevant publications included in this review were linked to the bibliometric WoS database present at the Centre for Science and Technology Studies (CWTS) at Leiden University ( Moed, De Bruin, & Van Leeuwen, 1995 ; Van Leeuwen, 2013 ; Waltman, Van Eck, Van Leeuwen, Visser, & Van Raan, 2011a , 2011b ). At the time these analyses were prepared, the CWTS in-house database contained relevant indicators for records covering the period 1981 through 2017 (see Appendix 3, in Supplementary materials ).

Seminal Publications

We identified two types of seminal publications. First, we assessed which (theoretical or empirical) publications outside our set (excluding methodological publications) are most frequently cited in the publications we examined. Second, we determined which of the empirical publications within our set have received an outstanding number of citations, within the field of morality research, as well as in the wider environment (the general WoS database).

In both cases, the analysis of seminal papers was conducted in three steps. First, we detected publications that were highly cited within this set of studies on morality and recorded in which research theme they were located. Second, within each research theme, we focused on the top 25 most highly cited publications from outside the set and—reflecting the smaller number of publications to choose from—the top 10 most highly cited publications within the set of studies on morality. We then identified how many citations these had received in the publications included in this review to determine a top three of seminal papers outside this set and a top three of seminal papers within this set, for each of the five research themes represented. We also examined how frequently these seminal papers were cited in the wider context of the whole WoS database.
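At its core, the first step of this analysis tallies how often each cited reference occurs across the reference lists of the reviewed publications and keeps the most frequent ones. A minimal sketch, with hypothetical reference lists:

```python
from collections import Counter

def top_cited(reference_lists, n):
    # Tally every cited reference across all reference lists,
    # then return the n most frequently cited ones.
    counts = Counter(ref for refs in reference_lists for ref in refs)
    return counts.most_common(n)

# Hypothetical reference lists of three reviewed publications.
reference_lists = [
    ["Haidt (2001)", "Greene et al. (2001)"],
    ["Haidt (2001)", "Jost et al. (2003)"],
    ["Haidt (2001)"],
]
ranking = top_cited(reference_lists, 3)
```

The same tally, restricted per research theme, yields the top 25 (outside the set) and top 10 (within the set) from which the top three per theme were selected.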

Clusters of Approaches

We used VOSviewer as a tool ( Van Eck & Waltman, 2010 , 2014 , 2018 ) for mapping and clustering ( Waltman, Van Eck, & Noyons, 2010 ) to visualize the content structure in the descriptions of empirical research on morality that we selected for this review. The analysis determines co-occurrences of so-called noun phrase groups in the titles and abstracts of the publications included in the analysis. Because full records of titles and abstracts are only available for studies published from 1996 onward, this analysis could only be conducted for the set of studies published from 1996 through 2017. Co-occurrences of noun phrase groups are indicated as clusters in a two-dimensional space where (a) closeness (vs. distance) between words indicates their relatedness, (b) larger font size of terms generally indicates a higher frequency of occurrence, and (c) shared color codes indicate stronger interrelations. We use these clusters to indicate the empirical approaches described in the titles and abstracts of studies included in this review and relate these to the different types of research questions we classified into five themes.
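The co-occurrence counts that VOSviewer maps can be sketched as follows. Real noun-phrase extraction is considerably more sophisticated; here it is simplified, purely for illustration, to matching a small fixed term list against each title or abstract.

```python
from collections import Counter
from itertools import combinations

# Illustrative stand-in for extracted noun phrases.
TERMS = ["moral judgment", "disgust", "dilemma", "moral identity"]

def cooccurrences(documents):
    # Count, over all documents, how often each pair of terms occurs together.
    pair_counts = Counter()
    for doc in documents:
        present = sorted(t for t in TERMS if t in doc.lower())
        pair_counts.update(combinations(present, 2))
    return pair_counts

docs = [
    "Disgust as embodied moral judgment",
    "Moral judgment in a trolley dilemma",
    "Disgust, purity, and moral judgment",
]
pairs = cooccurrences(docs)
```

Pairs with high counts end up close together on the map; the clustering then groups strongly interconnected terms, which we interpret as empirical approaches.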

Trends in Presence and Impact

When we compare trends in publication rates over time, we see that publications in social psychology have increased from about 1,500 per year in 1981 to 4,000 per year since 2014. The absolute numbers of publications on morality included in our review are much lower: Here, we found 10 publications per year in 1981, increasing to over 100 per year since 2014. Thus, the absolute number of publications on morality remains relatively small compared with the whole field of social psychology. Yet the increase is much steeper for publications on morality when both trends are indexed relative to the number observed in 1981 (see Figure 2). The regression coefficient is considerably larger for publications on morality (0.27) than for publications on social psychology (0.04). The R² further indicates that a linear trend explains 85% of the overall increase observed in publications on social psychology, while the trend in studies on morality is less well captured by a linear equation (R² = .54). Indeed, the increase in the number of publications on morality from 2005 onward is much steeper than before, with a regression coefficient of 1.22 and an R² for this linear trend of .90.

Figure 2. Indexed trends and regression coefficients for social psychology as a field and morality as a specialism, WoS, 1981-2017.

Note. WoS = Web of Science.
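The indexed-trend comparison works by dividing each yearly count by the 1981 value and then fitting a linear trend by ordinary least squares. A sketch with illustrative (not actual) publication counts:

```python
def index_to_base(counts):
    # Index a series to its first value, so the base year equals 1.
    base = counts[0]
    return [c / base for c in counts]

def linear_fit(xs, ys):
    # Ordinary least squares fit; returns the slope and R² of the linear trend.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot
    return slope, r2

# Illustrative yearly publication counts, indexed to the first year.
years = list(range(1981, 1991))
morality = index_to_base([10, 11, 13, 14, 17, 19, 22, 25, 27, 30])
slope, r2 = linear_fit(years, morality)
```

Indexing both series to the same base year is what makes the regression coefficients of social psychology as a whole and morality research directly comparable despite their very different absolute sizes.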

When we assess the impact of the studies on morality included in our review, we see the average impact of these publications, the journals in which they are published, and the percentage of top-cited publications going up consistently (see Figure 3). These field-normalized scores show that since 2005, the impact of studies on morality has been clearly above the field average. At the same time, there is a steady decrease in the percentage of uncited papers as well as in the proportion of self-citations, and increasing collaboration between authors from different countries (see Supplementary materials).

Figure 3. Trends in impact scores in morality, WoS, 1981-2017, indicating the average normalized number of citations (excluding self-citations; mncs), the average normalized citation score of the journals in which these papers are published (mnjs), and the proportion of papers belonging to the top 10% in the field where they were published (pp_top_perc).

Emerging Themes

When we distinguish between the types of research questions addressed, this reveals a disproportionate interest, across the board, in research questions relating to moral reasoning (χ² = 502.19, df = 4, p < .001). In fact, this is the most frequently examined research theme throughout the period examined, yielding between 35 and 60 publications per year during the past few years. Research questions relating to moral judgments were initially examined less frequently, but from 2013 onward, with 30 to 40 publications per year, this research theme approaches the level of research activity seen for moral reasoning. The steady stream of publications examining questions relating to moral behavior peaked around 2014, when more than 30 publications were devoted to this research theme, but has subsequently dropped to roughly 20 publications per year. Publications on research questions relating to moral emotions and moral self-views have increased during the past few years; however, these remain relatively less examined overall, with around 10 publications per year addressing each of these themes. When we compare how these themes have developed since the interest in examining morality increased so rapidly after 2005, these differential trends are clearly visible. During this period, the number of studies addressing moral reasoning increases more quickly than studies on moral judgments, followed, in decreasing order, by moral behavior, moral self-views, and moral emotions (see Figure 4).

Figure 4. Comparative trends in the development of research themes in morality research, 2005-2017.
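The reported chi-square statistic is a goodness-of-fit test of observed publication counts per theme against a uniform expectation. A minimal sketch with illustrative counts (not the actual data):

```python
def chi_square_uniform(observed):
    # Goodness-of-fit test against the uniform expectation that each
    # category attracts an equal share of the publications.
    expected = sum(observed) / len(observed)
    statistic = sum((o - expected) ** 2 / expected for o in observed)
    df = len(observed) - 1
    return statistic, df

# Illustrative counts per theme: reasoning, judgments, behavior, emotions, self-views.
statistic, df = chi_square_uniform([500, 320, 230, 120, 110])
```

With five themes there are four degrees of freedom, matching the df = 4 reported above; a large statistic indicates that interest is not distributed evenly over the themes.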

Mechanisms Examined

In a similar vein, we assessed trends in the intrapersonal, interpersonal, intragroup, and intergroup levels of mechanisms examined in the studies included in our review. Overall, the interest in these different types of mechanisms is not distributed evenly (χ² = 688.43, df = 3, p < .001). Most of the studies included in this review have addressed intrapersonal mechanisms relating to morality, and the relative preference for examining mechanisms relevant to morality at the intrapersonal level has only increased during the past years. The number of studies examining intrapersonal mechanisms since 2005 shows a steep linear trend that accounts for the majority of the variance observed (regression coefficient: 6.35, R² = .78). Although interpersonal mechanisms were initially examined less often, the increased research interest in morality since 2005 is also visible in the number of studies that have addressed such mechanisms (regression coefficient: 3.09, R² = .85). However, across the board, the examination of intragroup mechanisms remains relatively rare in this literature, with fewer than 10 studies per year addressing such issues. Here, the regression coefficient is much lower (0.59) and matches the observed variance less well (R² = .64). The examination of intergroup mechanisms is only slightly more popular; however, a linear trend (with a regression coefficient of 0.76) does not explain this development very well (R² = .25).

When we assess this per research theme (see Figure 5), we see that the strong emphasis on intrapersonal mechanisms visible across all research themes is less pronounced in research questions addressing moral judgments (χ² = 249.48, df = 12, p < .001). In research on moral judgments, the interest in interpersonal mechanisms is much larger; in fact, this research theme accounts for the majority of the studies in our review that examine interpersonal mechanisms. Interest in intragroup mechanisms is rare across the board; it is perhaps most clearly visible in research questions relating to moral behavior. The interest in intergroup mechanisms is relatively small but more or less the same across the five research themes we examined.

Figure 5. Number of studies addressing mechanisms at different levels of analysis, specified per research theme, 1940-2017.

In the seminal publications outside the set (see Table 2 ), one publication comes up as a top three seminal paper in more than one research theme. This is the publication by Haidt (2001) in which he develops his theory on moral intuition. Clearly, this publication has been highly influential in developing this area of research. It has also been extremely well cited in the WoS database more generally and can be seen as an important development that prompted the increased interest in research on morality during the past 10 to 15 years. However, besides this one paper, there is no overlap between the five research themes in the top three seminal publications that characterize them. This substantiates our reasoning that different clusters of research questions can be distinguished and underlines the validity of the criteria we used to classify the studies reviewed into these five themes.

Top-three Seminal Papers for Each Research Theme, Published Outside the Set.

Rank in research theme | Number of citations in data set | Authors | Title | Publication year | Number of citations in WoS | mncs | mnjs

Moral reasoning
1 | 59 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1,994 | 52.59 | 10.37
2 | 36 | Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. | Political conservatism as motivated social cognition | 2003 | 1,238 | 34.77 | 9.55
3 | 35 | Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. | An fMRI investigation of emotional engagement in moral judgment | 2001 | 1,360 | 32.18 | 13.44

Moral judgments
1 | 41 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1,994 | 52.59 | 10.37
2 | 29 | Fiske, S. T., Cuddy, A. J. C., & Glick, P. | Universal dimensions of social cognition: Warmth and competence | 2007 | 790 | 22.25 | 5.44
3 | 20 | Gray, K., Young, L., & Waytz, A. | Mind perception is the essence of morality | 2012 | 154 | 10.97 | 4.95

Moral behavior
1 | 29 | Mazar, N., Amir, O., & Ariely, D. | The dishonesty of honest people: A theory of self-concept maintenance | 2008 | 543 | 23.69 | 2.16
2 | 24 | Blasi, A. | Bridging moral cognition and moral action: A critical review of the literature | 1980 | 594 | 24.15 | 7.41
3 | 17 | Ajzen, I. | The theory of planned behavior | 1991 | 14,495 | 327.35 | 8.49

Moral emotions
1 | 17 | Tangney, J. P., Stuewig, J., & Mashek, D. J. | Moral emotions and moral behavior | 2007 | 605 | 21.51 | 13.71
2 | 16 | Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. | Guilt: An interpersonal approach | 1994 | 631 | 20.55 | 10.69
3 | 14 | Tangney, J. P., Miller, R. S., Flicker, L., & Barlow, D. H. | Are shame, guilt and embarrassment distinct emotions? | 1996 | 460 | 7.02 | 3.42

Moral self-views
1 | 11 | Zhong, C. B., & Liljenquist, K. | Washing away your sins: Threatened morality and physical cleansing | 2006 | 323 | 9.69 | 10.26
2 | 10 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1,994 | 52.59 | 10.37

Note. The rank order within each theme is specified according to the number of citations within the data set examined, which does not always correspond to the total number of citations in WoS. We consider publications as seminal to research on morality when they attract at least 10 citations within the data set examined. As a result of this criterion, we identified only two external papers that were seminal to research on moral self-views. WoS = Web of Science.

Going through the five themes and their top three seminal papers additionally revealed that there are two empirical studies that have been highly influential in this literature. These are not included in our set because they were not published in a psychology journal and hence did not meet our inclusion criteria. In fact, part of the appeal in citing the fMRI study by Greene et al. (2001) in research on moral reasoning or the physical cleansing study by Zhong and Liljenquist (2006) in research on moral self-views may be that these were published in the extremely coveted journal Science —which is not a regular outlet for researchers in social psychology. Indeed, there has been some concern that these high visibility publications—and the media attention they attracted—have led multiple researchers to adopt this same methodology for further studies, perhaps hoping to achieve similar success ( Bauman et al., 2014 ; Graham, 2014 ; Mooijman & Van Dijk, 2015 ). The drawback of this publication strategy is that this may have led many researchers to continue examining different conditions affecting trolley dilemma and handwipe choices, instead of broadening their investigations to other issues relating to morality ( Hofmann et al., 2014 ; Lovett et al., 2015 ).

In the research on moral reasoning, besides Haidt’s (2001) theory on moral intuition and the fMRI study by Greene et al. (2001) discussed above, the third highly cited review paper addresses political ideologies. This publication by Jost, Glaser, Kruglanski, and Sulloway (2003) reports a meta-analysis examining how individual differences (e.g., authoritarianism, need for closure) correlate with conservative ideologies across 88 research samples in 12 countries. The relationship between moral reasoning and political ideologies is also an important topic in empirical work in this research theme. Indeed, the empirical publication that is most often cited in the WoS database (see Table 3) reports a series of studies that connects the primacy of different moral foundations (e.g., fairness, harm, authority) to liberal versus conservative political views of specific individuals (Graham, Haidt, & Nosek, 2009). The high visibility and impact of the work of Jonathan Haidt and his collaborators in research on moral reasoning are further evidenced by the other two empirical publications that come up as most highly cited in our review of this research theme. These report data used for the development and validation of the Moral Foundations Questionnaire (Graham et al., 2011) and research revealing cultural differences in the issues people consider moral and the way they respond to them (Haidt et al., 1993).

Top-three Seminal Papers, Published Within the Set, for Each Research Theme.

Rank in research theme | Number of citations in data set | Authors | Title | Publication year | Number of citations in WoS | mncs | mnjs

Moral reasoning
1 | 129 | Graham, J., Haidt, J., & Nosek, B. A. | Liberals and conservatives rely on different sets of moral foundations | 2009 | 671 | 32.03 | 3.11
2 | 51 | Haidt, J., Koller, S. H., & Dias, M. G. | Affect, culture and morality, or is it wrong to eat your dog? | 1993 | 447 | 11.54 | 3.76
3 | 95 | Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. | Mapping the moral domain | 2011 | 354 | 23.22 | 3.14

Moral judgments
1 | 45 | Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. | Disgust as embodied moral judgment | 2008 | 384 | 16.13 | 1.77
2 | 24 | Reeder, G. D., & Spores, J. M. | The attribution of morality | 1983 | 109 | 2.51 | 2.52
3 | 31 | Goodwin, G. P., Piazza, J., & Rozin, P. | Moral character predominates in person perception and evaluation | 2014 | 80 | 13.36 | 2.43

Moral behavior
1 | 54 | Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. | Mechanisms of moral disengagement in the exercise of moral agency | 1996 | 527 | 11.62 | 3.63
2 | 30 | Monin, B., & Miller, D. T. | Moral credentials and the expression of prejudice | 2001 | 294 | 5.98 | 3.24
3 | 20 | Gino, F., Schweitzer, M. E., Mead, N. L., & Ariely, D. | Unable to resist temptation: How self-control depletion promotes unethical behavior | 2011 | 161 | 12.24 | 1.98

Moral emotions
1 | 52 | Rozin, P., Lowery, L., Imada, S., & Haidt, J. | The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity) | 1999 | 484 | 11.19 | 3.52
2 | 17 | Tybur, J. M., Lieberman, D., & Griskevicius, V. | Microbes, mating, and morality: Individual differences in three functional domains of disgust | 2009 | 225 | 10.29 | 3.11
3 | 26 | Horberg, E. J., Oveis, C., Keltner, D., & Cohen, A. B. | Disgust and the moralization of purity | 2009 | 144 | 6.88 | 3.11

Moral self-views
1 | 96 | Aquino, K., & Reed, A. | The self-importance of moral identity | 2002 | 561 | 12.00 | 3.01
2 | 63 | Leach, C. W., Ellemers, N., & Barreto, M. | Group virtue: The importance of morality (vs. competence and sociability) in the positive evaluation of in-groups | 2007 | 233 | 7.26 | 3.09
3 | 15 | Ford, M. R., & Lowery, C. R. | Gender differences in moral reasoning: A comparison of the use of justice and care orientations | 1986 | 87 | 1.40 | 3.49

Note. The rank order within each theme is specified according to the total number of citations in WoS, which does not always correspond to the number of citations within the data set examined. WoS = Web of Science.

Research on moral judgments essentially examines the assignment of good versus bad intentions to others, for instance, based on their observed behaviors. An influential theoretical model guiding work in this area argues that people’s perceived intentions and abilities form two key dimensions in social impression formation (Fiske, Cuddy, & Glick, 2007). In addition, many researchers in this area have referred to the work of Gray et al. (2012; see Table 2), who consider the intentional perpetration of interpersonal harm—which requires the assignment of mental capacities to others—as a hallmark of human morality. Among the empirical studies examining these issues, the classic research by Reeder and Spores (1983), which examines how situational information affects the perceived morality of individual actors, has become a seminal publication. A more recent study highly cited within this research theme was conducted by G. P. Goodwin, Piazza, and Rozin (2014) on the primacy of morality in person perception (see Table 3). The influence of Haidt’s (2001) seminal publication on moral intuition in this research theme is visible in a frequently cited study, coauthored by Haidt, on the role of disgust as a form of embodied moral judgment (Schnall et al., 2008; see Table 3).

In moral behavior, the most highly cited theory papers emphasize the connection between conceptualizations of the moral self and displays of moral behavior. In addition to the classic review paper arguing for this connection (Blasi, 1980), many studies in this research theme refer to the different strategies people can use to maintain their self-concept of being a moral person, even if they are not immune to moral lapses (Mazar et al., 2008). Seminal studies within this research theme reveal the implications of the connection between moral self-views and moral behaviors, which is in line with relations between research themes visualized in Figure 1. Accordingly, the most frequently cited publications reveal that even well-meaning individuals can display unethical behavior as their self-control becomes depleted (Gino, Schweitzer, Mead, & Ariely, 2011). In addition, research elucidates the different strategies people can use to disengage from their moral lapses (Bandura et al., 1996). The possible implications are demonstrated empirically, for instance, in work showing that people freely express prejudice once they have established their moral credentials (Monin & Miller, 2001).

In the research theme on moral emotions, the most highly cited theory papers focus on the experience of guilt and shame as relevant self-condemning emotions, indicating how people reflect upon and experience moral transgressions associated with the self. These exemplify the social implications of moral behavior and are generally considered uniquely diagnostic for human morality (Baumeister, Stillwell, & Heatherton, 1994; Tangney, Miller, Flicker, & Barlow, 1996; Tangney et al., 2007). However, the most highly cited empirical publications drawing from these theoretical perspectives all address disgust as a response, indicating that other individuals or situational contexts are considered impure and should be avoided (Horberg, Oveis, Keltner, & Cohen, 2009; Rozin, Lowery, Imada, & Haidt, 1999; Tybur, Lieberman, & Griskevicius, 2009).

Finally, the studies on moral self-views comprise a relatively small and dispersed research theme, which is not characterized by a specific theoretical perspective. This is also exemplified by the fact that we only found two papers external to the set that met our criteria for being considered seminal. Researchers working on this theme most often cite the study of Zhong and Liljenquist (2006), suggesting that people engage in symbolically cleansing acts to alleviate threats to their moral self-image. In addition, the seminal paper by Haidt (2001) is frequently cited by publications in this research theme. Empirical publications on moral self-views that have attracted many citations also from outside the morality literature include a validation study of the moral identity scale (Aquino & Reed, 2002), a series of studies documenting the importance of morality for people’s group-based identities (Leach, Ellemers, & Barreto, 2007), and a classic study on gender differences in moral self-views (Ford & Lowery, 1986).

We examined the interrelations and clusters of research approaches in the studies reviewed, on the basis of titles and abstracts for 989 studies in our set, published in 1996 through 2017 (see Figure 6). The first cluster, containing 107 interrelated terms (indicated in red—Experiments and actions), contains studies examining a variety of actions and their consequences in experimental research. The second cluster contains 70 terms (indicated in orange—Individual and group differences) capturing studies on personality and individual differences as well as differences between social groups in correlational research. The third cluster connects 48 terms (indicated in pink—Rule endorsement) referring to studies on justice and fairness, authority, and moral foundations. The fourth cluster contains 26 terms (indicated in turquoise—Harm perpetrated) indicating responses to violation and harm. The fifth cluster contains seven terms (indicated in purple—Norms and intentions) referring to norms and deliberate intentions in planned behavior.

Figure 6. Publications on morality, 1996-2017.

Note. Clustering and interrelations based on content analysis of publication titles and abstracts.
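The term clustering behind Figure 6 can be illustrated with a toy sketch: extract terms from titles and abstracts, build a term co-occurrence graph, and group strongly co-occurring terms. The sketch below is a simplification under stated assumptions—it uses made-up, pre-extracted term sets and connected components over a thresholded graph, not the modularity-based mapping algorithm that bibliometric tools such as VOSviewer actually use.

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence(docs):
    # Count how often each pair of terms appears in the same document.
    counts = defaultdict(int)
    for terms in docs:
        for a, b in combinations(sorted(set(terms)), 2):
            counts[(a, b)] += 1
    return counts

def cluster_terms(docs, min_cooc=2):
    # Link terms that co-occur at least `min_cooc` times, then take
    # connected components as clusters (a crude stand-in for the
    # clustering used by bibliometric mapping software).
    adj = defaultdict(set)
    for (a, b), n in cooccurrence(docs).items():
        if n >= min_cooc:
            adj[a].add(b)
            adj[b].add(a)
    seen, clusters = set(), []
    for term in sorted(adj):
        if term in seen:
            continue
        stack, comp = [term], set()
        while stack:
            t = stack.pop()
            if t in comp:
                continue
            comp.add(t)
            stack.extend(adj[t] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters

# Hypothetical term sets from four abstracts.
docs = [
    {"dilemma", "judgment", "harm"},
    {"dilemma", "judgment", "intention"},
    {"fairness", "justice"},
    {"fairness", "justice", "authority"},
]
print(cluster_terms(docs))  # two clusters: dilemma/judgment and fairness/justice
```

On this toy input, terms that co-occur only once (e.g., "harm", "authority") fall below the threshold and are left unclustered, mirroring how mapping tools drop infrequent terms.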

These clusters help us characterize the studies conducted within each of the research themes we distinguish in this review. We assess this by examining overlay “heat maps” indicating the density of studies within each research theme (ranging from low, shown in blue, to high, shown in yellow) by projecting them on the clusters of research approaches outlined above (see supplementary materials).
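In essence, such an overlay map measures, for the studies within one theme, what share of their term "hits" falls into each cluster of research approaches. A minimal sketch of that density computation, using hypothetical theme and cluster codings rather than the actual data set:

```python
from collections import Counter

def theme_density(studies, theme):
    # studies: list of (theme, set_of_cluster_labels) pairs.
    # Returns each cluster's share of hits for one theme -- a crude
    # analogue of an overlay heat map (blue = low, yellow = high).
    hits = Counter()
    for t, clusters in studies:
        if t == theme:
            hits.update(clusters)
    total = sum(hits.values())
    return {c: n / total for c, n in hits.items()} if total else {}

# Hypothetical coding of four studies.
studies = [
    ("moral reasoning", {"individual differences", "rule endorsement"}),
    ("moral reasoning", {"rule endorsement"}),
    ("moral judgments", {"experiments and actions", "harm perpetrated"}),
    ("moral judgments", {"experiments and actions"}),
]
print(theme_density(studies, "moral reasoning"))
```

Here "moral reasoning" loads twice as heavily on rule endorsement as on individual differences, which is the kind of pattern the overlay maps make visible at a glance.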

The overlay map for research on moral reasoning connects clusters of research relating to individual and group differences (orange) and rule endorsement (pink). However, studies on moral reasoning have largely neglected to examine how such reasoning relates to actions in experimental contexts (red), harm perpetrated (turquoise), or norms and intentions (purple). Studies on moral judgments by contrast mainly involve experiments and examine actions (red) as well as harm perpetrated (turquoise). However, research addressing questions on moral judgments has been less concerned about examining individual and group differences (orange), rule endorsement (pink), or norms and intentions (purple). Research on moral behavior has most frequently addressed norms and intentions (purple), and to a lesser extent experiments and actions (red) and individual and group differences (orange). Researchers in this area have not systematically examined rule endorsement (pink) or harm perpetrated (turquoise). The research on moral emotions is mostly carried out in relation to harm perpetrated (turquoise), which is examined in terms of experiments and actions (red), rather than individual and group differences (orange). Rule endorsement (pink) and norms and intentions (purple) are rarely taken into account. The research on moral self-views connects approaches addressing individual and group differences (orange), experiments and actions (red), and harm perpetrated (turquoise), but is less concerned with rule endorsement (pink) or norms and intentions (purple).

Conclusions Emerging From Five Research Themes

The quantitative analyses reported above have allowed us to specify the overall characteristics of the studies included in our review, in terms of their most influential publications as well as most frequently used research approaches. We will now consider how the nature of the research questions addressed in the studies reviewed and the empirical approaches that were used affect current insights on the psychology of morality.

Moral reasoning is by far the most popular research theme in the empirical literature on morality, and this preference has only intensified over the years. Research based on Haidt and Graham’s (2007) moral foundations theory has established that conservatives in the United States are more likely than liberals to show support for civil rights restrictions (Crowson & DeBacker, 2008), to have a prevention focus (Cornwell & Higgins, 2013), and to perceive moral clarity (Schlenker, Chambers, & Le, 2012). This not only predicts their political voting behavior and candidate preferences (Skitka & Bauman, 2008) but also relates to more general tendencies in how individuals relate to others, as indicated by their social dominance orientation, authoritarianism (Federico, Weber, Ergun, & Hunt, 2013), or parenting styles (McAdams et al., 2008).

However, research on this theme also reveals how the moral principles people endorse relate to their life experiences, family roles, and position in society. For instance, exposure to war ( Haskuka, Sunar, & Alp, 2008 ) or abusive/dysfunctional family relations ( Caselles & Milner, 2000 ) impedes moral reasoning. More generally, many studies have shown that the moral judgments people make depend on their age, gender (e.g., Kray & Haselhuhn, 2012 ; Skoe, Cumberland, Eisenberg, Hansen, & Perry, 2002 ), parental status, education, multicultural experiences ( Lin, 2009 ), war experiences, family experiences, or religious status ( Simpson, Piazza, & Rios, 2016 ).

While this work attests to the power and resilience of moral convictions, at the same time, there is an abundance of evidence that people are not very consistent in their moral reasoning. Indeed, it has clearly been demonstrated that moral reasoning also depends on the way a moral dilemma is framed or specific concerns that are (implicitly) primed. Such primes can make salient the monetary cost of their decisions (e.g., Irwin & Baron, 2001 ), the intentions and goals of the actors involved, the harm done as a result of their actions ( Sabini & Monterosso, 2003 ), or specific events in history ( Lv & Huang, 2012 ). But also more subtle and implicit cues can have far-reaching effects for moral reasoning. For instance, the moral acceptability of the same course of action differs depending on whether people are implicitly prompted to focus on their head (vs. their heart; Fetterman & Robinson, 2013 ), on cleanliness ( Zhong, Strejcek, & Sivanathan, 2010 ), on approach versus avoidance ( Broeders, Van Den Bos, Müller, & Ham, 2011 ; Janoff-Bulman, Sheikh, & Hepp, 2009 ; Moore, Stevens, & Conway, 2011 ), on the present versus the future, or on own learning versus the education of others ( Tichy, Johnson, Johnson, & Roseth, 2010 ).

In sum, the accumulated research on moral reasoning has led to two types of conclusions. First, it has been extensively documented that different social roles and life experiences can have a long-term impact on the way people reason about morality and the moral principles they prioritize. Second, more immediate situational cues also affect moral reasoning and moral decisions. Both these conclusions from studies on moral reasoning complement philosophical analyses as well as evolutionary accounts emphasizing the objective survival value of adhering to specific principles or guidelines.

Studies on moral judgments generally attest to the fact that information about morality weighs more heavily in determining overall impressions of others than diagnostic information pertaining to behavioral domains such as competence or sociability (e.g., S. Chen, Ybarra, & Kiefer, 2004 ). This is the case for evaluations of individuals, as well as for groups and organizations. Information about morality is seen as being more predictive of behavior in a range of situations ( Pagliaro, Ellemers, & Barreto, 2011 ) and more likely to reflect on other members of the same group ( Brambilla, 2012 ). However, people find it easy to accept lapses or shortcomings as indicating moral decline, while they require more evidence to be convinced of people’s moral improvement ( Klein & O’Brien, 2016 ). Furthermore, the relative importance people attach to specific features may differ, depending, for instance, on the cultural context (e.g., Chinese vs. Western) in which this is assessed ( F. F. Chen, Jing, Lee, & Bai, 2016 ; X. Chen & Chiu, 2010 ).

Inferences about people’s good intentions—presumably indicating their morality—are often derived from features indicating agreeableness and communality. Individuals are seen as moral when they can make agentic motives compatible with communal motives, for instance, by displaying self-control, honesty, reliability, other-orientedness, and dependability ( Frimer, Walker, Lee, Riches, & Dunlop, 2012 ). Whether this is perceived to be the case also depends on situational cues such as the harm done to others (e.g., Guglielmo & Malle, 2010 ), the benefit to the self ( Inbar, Pizarro, & Cushman, 2012 ), or the perceived intentionality of the behavior that has led to such outcomes (e.g., Greitemeyer & Weiner, 2008 ; Reeder, Kumar, Hesson-McInnis, & Trafimow, 2002 ).

Other target characteristics (such as their social status or their national, religious, cultural, or sexual identity; e.g., Cramwinckel, van den Bos, van Dijk, & Schut, 2016 ), as well as contextual guidelines (e.g., instructing people to focus on the action vs. the person; duties vs. ideals; appearance vs. behavior of the target) may also color the way research participants interpret and value concrete information about specific targets ( Heflick, Goldenberg, Cooper, & Puvia, 2011 ). Even unrelated contextual cues may have such effects, for instance, when information is presented on a black-and-white background ( Zarkadi & Schnall, 2013 ) or when research participants are positively or negatively primed with a specific odor, mood induction, or room temperature (e.g., Schnall et al., 2008 ).

In addition, judgments of other individuals and groups also depend on the physical and psychological closeness of these targets to the self (e.g., Cramwinckel, van Dijk, Scheepers, & van den Bos, 2013; Haidt, Rosenberg, & Hom, 2003). Self-anchoring, self-distancing, and self-justifying effects can all be raised when moral judgments about others can be seen to reflect upon one’s own social class or race, one’s personal convictions, the salience of specific social roles (e.g., as a parent, Eibach, Libby, & Ehrlinger, 2009; as a subordinate, Bauman, Tost, & Ong, 2016), or any group membership that is seen as self-defining (e.g., Iyer, Jetten, & Haslam, 2012). Related concerns can lead people to protect just-world beliefs (Gray & Wegner, 2010) by dehumanizing stigmatized targets (e.g., Cameron, Harris, & Payne, 2016; Riva, Brambilla, & Vaes, 2016), increasing their physical distance from them, pointing to moral failures they or other group members have displayed in the past, or referring to “natural” differences that justify differential treatment (e.g., Kteily, Hodson, & Bruneau, 2016).

In sum, even if people are strongly inclined to evaluate the moral stature of others they encounter, research in this area reveals that the morality of other individuals and groups is largely in the eye of the beholder. In general, people find it easier to acknowledge the moral questionability of specific behaviors, when these are perpetrated by an individual or group that is more distant from the self. Self-protective mechanisms can also lead people to reduce the moral standing of victims of immoral behavior or alleviate the blame placed on perpetrators.

Studies on moral behavior have often addressed the interplay between individual moral guidelines, on one hand, and social norms, on the other. This is examined, for instance, in studies on moral rebels and moral courage—those who stand up for their own principles ( Sonnentag & McDaniel, 2013 )—as well as moral entrepreneurs and people engaged in moral exporting—those who actively seek to convince others of their own moral principles ( Peterson, Smith, Tannenbaum, & Shaw, 2009 ). Research shows that the strength of personal moral beliefs, attitudes, or convictions can make people resilient against social pressures ( Brezina & Piquero, 2007 ; Hornsey, Majkut, Terry, & McKimmie, 2003 ; Langdridge, Sheeran, & Connolly, 2007 ). However, in domains where personal moral convictions are less strong, moral norms (indicated by team atmosphere or principled leadership) can also overrule individual concerns (e.g., Fernandez-Dols et al., 2010 ). At the same time, it has been documented that social pressures can tempt people either to behave less morally (e.g., M. A. Barnett, Sanborn, & Shane, 2005 ) or to display more group-serving (instead of selfish) behavior (e.g., Osswald, Greitemeyer, Fischer, & Frey, 2010 ), depending on what these norms prescribe ( Ellemers, Pagliaro, Barreto, & Leach, 2008 ).

Research has also revealed that once their moral standing is affirmed, people more easily fall prey to “moral licensing” tendencies. This can even happen vicariously. For instance, it has been demonstrated that people are more likely to display prejudice and bias in hiring decisions after having seen that other members of their group have hired an ethnic minority applicant for a vacant position ( Kouchaki, 2011 ). Yet, positive emotional states resulting from immoral behavior (such as “cheater’s high”; Ruedy, Moore, Gino, & Schweitzer, 2013 , or “hubristic pride,” for example, Bureau, Vallerand, Ntoumanis, & Lafreniere, 2013 ) occur only rarely. Instead, most studies show that people find it aversive to realize they have behaved immorally and have documented different compensatory strategies that can be displayed (e.g., Bandura, Caprara, Barbaranelli, Pastorelli, & Regalia, 2001 ). For instance, confronting people with moral lapses (of themselves and others) impairs the recall, cognitive salience, and perceived applicability of moral rules (“moral disengagement”; Bandura, 1999 ; Fiske, 2009 ). When caught in a moral transgression, people emphasize that this behavior does not reflect their true intention or identity ( Conway & Peetz, 2012 ) or speculate that others are likely to do even worse (“moral hypocrisy”; Valdesolo & DeSteno, 2007 ; Valdesolo & DeSteno, 2008 ).

In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.

The intensity of emotional responses to the moral acts of the self and others has been shown to depend on the nature of the situation (importance of the moral dilemma, distance in time, resulting from action vs. inaction; Kedia & Hilton, 2011 ), as well as on specific characteristics of the victim or target of morally questionable acts (e.g., perceived vulnerability, physical proximity; Dijker, 2010 ). These include factors relating to the self (experience of pride; Camacho, Higgins, & Luger, 2003 ), to the social situation (social validation of action perpetrated), or to the victim of the transgression (dubious moral character; Jiang et al., 2011 ). All these situational characteristics may buffer people against the emotional costs of witnessing or perpetrating immoral acts.

Research has further examined the antecedents and implications of specific emotions. This has revealed that disgust can elicit (symbolic) cleansing behaviors ( Gollwitzer & Melzer, 2012 ) and is raised in response to various health cues (e.g., relating to taste sensitivity— Skarlicki, Hoegg, Aquino, & Nadisic, 2013 —sexuality, or pathogens). However, such disgust is not necessarily related to morality ( Tybur et al., 2009 ). Other studies have addressed moral anger, which has been associated with the tendency to aggress against others (protest, Cronin, Reysen, & Branscombe, 2012 ; scapegoating and retribution, Rothschild, Landau, Molina, Branscombe, & Sullivan, 2013 ) or attempts to restore moral order (e.g., Pagano & Huo, 2007 ).

In this literature, guilt and/or shame emerge as self-reflective emotions that uniquely indicate the felt moral implications of actions perpetrated by the self (or others that imply the self, for example, ingroup members). Shame and guilt each have their specific properties and effects (e.g., Sheikh & Janoff-Bulman, 2010 ; Smith, Webster, Parrott, & Eyre, 2002 ). Shame is more clearly associated with the Behavioral Inhibition System, related to public exposure, blushing, and (in problem populations) anxiety and substance abuse. Guilt relates more clearly to the Behavioral Activation System and is related to private beliefs, empathy, and (in problem populations) religious activities. Nevertheless, both shame and guilt have been found to relate specifically to justice violations rather than other types of negative experiences (e.g., Agerström, Björklund, & Carlsson, 2012 ). Furthermore, the experience of guilt and/or shame is associated with endorsing victim compensation and support and reparation efforts (e.g., Pagano & Huo, 2007 ) but does not necessarily elicit other forms of prosocial behavior (e.g., De Hooge, Nelissen, Breugelmans, & Zeelenberg, 2011 ).

In sum, both the intensity and the nature of emotions reported indicate the extent to which people experience situations encountered by themselves and others as having moral implications and as requiring action to enact moral guidelines or redress past injustices. The secondary, uniquely human, and self-reflective emotions of guilt and shame appear to be particularly important in this process.

In this literature, “concern for others,” derived from self-proclaimed levels of agreeableness or communion, is seen to indicate people’s moral character. Accordingly, much of the research on moral self-views has assessed self-proclaimed levels of honesty/humility or warmth/care (contained, for instance, in Lee and Ashton’s (2004) HEXACO-PI or Aquino and Reed’s (2002) “moral identity” scale). Individuals who combine a focus on agency and goal achievement with expressions of communion and care for others are seen as “moral exemplars” (e.g., Frimer, Walker, Dunlop, Lee, & Riches, 2011). When such moral behavior is displayed by others, this can also increase people’s confidence in their own ability to act morally (e.g., Aquino, McFerran, & Laven, 2011).

Different studies have established that self-reported character traits correlate with accounts of delinquency, unethical business decisions, or forgiveness provided by research participants (e.g., Cohen, Panter, Turan, Morse, & Kim, 2013 ). In addition, the moral self-views people report have been found to converge with actual behavioral displays (e.g., cheating vs. helping others) during experimental tasks in the lab (e.g., Stets & Carter, 2011 ). However, results from this research also suggest that people deliberately use such acts to communicate their good moral intentions, for instance, by donating money after lying ( Mulder & Aquino, 2013 ) or demonstrating that they resist pressure from others to behave immorally ( Carter, 2013 ).

Unfortunately, this tendency to self-present as being morally good can also prevent people from acknowledging their moral lapses. Indeed, after behaving in ways that violate moral standards (violence, delinquency, unethical decision making), people have been found to display a range of moral disengagement strategies. These include placing the event at a more distant point in time or describing it in more abstract terms ( Lammers, 2012 ), rationalizing one’s behavior by invoking a more distant moral purpose ( Aquino, Reed, Thau, & Freeman, 2007 ), or dehumanizing those who suffered from it ( Monroe, 2008 ). In a similar vein, actions that call into question the moral integrity and standards of one’s ingroup have been found to invite negative attitudes (prejudice), emotions (outrage), and behaviors (intolerance) directed toward the outgroup (e.g., Täuber & Zomeren, 2013 ).

In sum, this literature suggests that people reflect on their moral character and how they present this in their self-descriptions as well as in acts they can use to convey their moral intentions. However, the available evidence shows this may primarily lead them to preserve moral self-regard instead of making them improve or prevent morally questionable behaviors. Indeed, the focus on communality and concern for others as indicators of moral character may be too broad to provide sufficient guidance on how to act morally in specific situations.

Discussion and Future Directions

The past years have witnessed a marked increase in the interest of (social) psychologists in “morality” as a topic for empirical research. Our bibliometric analysis reveals the increasing maturity of this area of scientific inquiry, in terms of the amount of research effort invested and its relative impact. Yet, overviews that are still often cited are by now outdated in terms of the studies covered (Blasi, 1980, reviewing 71 studies) or have tended to focus on specific issues or research themes (e.g., Bauman et al., 2014).

Observed Trends and Neglected Issues

Substantial knowledge has accumulated about the way people think about morality; however, we know much less about how this affects their moral behavior. We draw this conclusion from the observation that the large majority of the published studies in our review address issues relating to moral reasoning—what people consider right and wrong ways to behave. Furthermore, many researchers have examined the judgments we make about the moral behaviors of other individuals and groups. Of course, these are important research themes in their own right. However, part of the interest of social psychological researchers in the topic of morality stems from the fact that moral reasoning and moral judgments of others are seen to inform the choices people make in their own moral behaviors, as is also visualized in Figure 1. Yet, we see that studies on moral reasoning and moral judgments have tended to focus on a limited number of specific research questions, methodologies, and approaches, which are not clearly connected to each other or to other research themes.

As a result, current insights on moral reasoning mostly pertain to relatively abstract principles (such as “fairness”) that people can subscribe to, as well as individual differences in which moral guidelines they endorse. The concrete implications of these general principles for specific situations remain less considered. Research on moral judgments complements this by addressing people’s situational experiences, for instance, resulting from concrete choices or behaviors displayed by others. However, these more specific judgments are not systematically traced back to the general moral principles that might inform them or the (dis)agreement that may exist about how to prioritize these.

Research on moral behavior and moral self-views has examined a broader range of issues and is less bound to specific research paradigms and approaches. Accordingly, researchers examining these topics have been more successful in connecting different clusters of research—validating the central role assigned to such research questions in Figure 1 . Nevertheless, overall these integrative empirical approaches have received much less interest from researchers examining issues in morality and have remained relatively dispersed. In fact, we were unable to clearly identify a seminal theoretical approach that has guided research on moral self-views. We suspect this may be a side-effect of some highly visible research paradigms and successful measures that are cited and followed up by many researchers.

Imbalance in Research Themes Addressed and Mechanisms Examined

A second conclusion relates to the choices researchers have made in directing their efforts to examine different issues relating to morality. Our classification of this body of research into distinct themes addressed and types of mechanisms examined has allowed us to quantify and characterize these choices. The comparison of studies carried out to address different research themes revealed that a large part of this literature is relatively limited in terms of the questions raised and the type of methodologies that are used. As a result, the concrete value of the detailed knowledge we have accumulated about moral reasoning and moral judgments as antecedent conditions for moral behavior unfortunately has remained hypothetical. That is, emerging insights into the way people think about morality and moral behavior have not systematically been followed through by assessing how broader guidelines and principles actually inform behavior, emotions, and self-views. Instead, these latter types of studies are relatively rare. Similarly, the literature reviewed here yields relatively little insight into the way behavior, emotions, and self-views feed back into the development of people’s moral reasoning over time. Nor does this body of work systematically address how people’s own experiences affect their judgments of others. These process-oriented and integrative questions constitute promising avenues for future research.

Our decision to classify published studies in terms of the level of analysis adopted has additionally revealed that the mechanisms examined (e.g., how the moral principles people subscribe to relate to the moral intentions they report) are mostly located at the intrapersonal level. In addition, there is a considerable body of research that examines interpersonal mechanisms in particular in studies examining how these relate to the impressions we form of others. However, much less research effort has been devoted to examining how people may come to share the same moral values or how members of different groups in society respond to each other’s moral value endorsements. Yet, the studies that adopt such an approach have clearly established that intragroup mechanisms can and do play a role, also in the moral reasoning individuals develop. Furthermore, research has shown that individuals adapt the moral principles they prioritize, depending on group identities and salient concerns these prescribe. Bicultural individuals, for instance, have been found to shift between prioritizing autonomy or community concerns in their moral reasoning, depending on which of their cultural identities is more salient in the situation they encounter ( Fu, Chiu, Morris, & Young, 2007 ).

Because studies taking this type of approach are so rare, questions of when and how people converge toward shared moral views, how they influence each other in adapting their moral convictions, and how social sanctions and rewards are used to make individuals adhere to shared moral norms have largely remained uncharted territory. Yet, these latter types of questions are those that guide the public debate on morality—and are often cited as a source of inspiration by researchers in this area. Similarly, relatively few researchers have addressed intergroup mechanisms, even though their relevance—for instance, for moral reasoning—is revealed in work showing that group memberships define the “moral circles” in which people are afforded or denied deservingness of moral treatment (e.g., Olson, Cheung, Conway, Hutchison, & Hafer, 2011; see also Ellemers, 2017).

The relative neglect of intragroup and intergroup mechanisms in this literature is all the more striking because different theoretical approaches—that are frequently cited by researchers working on morality—emphasize that moral principles are considered so important because they indicate shared notions about “right” and “wrong” that regulate the behavior of individuals. Indeed, prominent approaches to morality commonly acknowledge that general moral principles such as the “golden rule” can be interpreted differently in different contexts or by groups of people who translate these into specific behavioral guidelines (e.g., Churchland, 2011 ; Giner-Sorolla, 2012 ; Greene, 2013 ; Haidt & Graham, 2007 ; Haidt & Kesebir, 2010 ; Harvey & Callan, 2014 ). This is also the key message of the seminal study on moral reasoning by Haidt et al. (1993) . Such group-specific interpretations of the same universal values also help to explain why conflicts about moral issues are so stressful and difficult to resolve (see also Ellemers, 2017 ; Ellemers & Van der Toorn, 2015 ). Yet, researchers have only recently begun to examine these issues more systematically (e.g., Rom & Conway, 2018 ).

Thus, the imbalance observed in research themes addressed and levels of analysis at which relevant mechanisms have been examined reveal an important discrepancy between empirical research on morality and leading theoretical approaches that emphasize the importance of morality for group life and for individuals living together in communities (e.g., Gert, 1988 ; Janoff-Bulman & Carnes, 2013 ; Rai & Fiske, 2011 ; Tooby & Cosmides, 2010 ). As a result, we know a lot about intrapersonal and some about interpersonal considerations relating to morality, but have relatively little insight into the social functions of morality (see also Ellemers & Van den Bos, 2012 ) that also incorporate relevant mechanisms pertaining to intragroup dynamics and intergroup processes.

Key Characteristics of Human Morality Remain Underexamined in Research

A third conclusion emerging from this review is that there is a disjoint between seminal theoretical approaches to human morality and empirical work that is carried out. Our identification of seminal publications revealed that the theoretical perspectives that we have used to derive key characteristics of human morality are also the ones that are frequently cited by researchers in this area. However, closer inspection of the research included in our review reveals that the studies these researchers conducted do not systematically address or reflect the key features characterizing foundational theoretical approaches. This is visible in different ways.

To begin with, the notion that shared identities shape the development of specific moral guidelines, which in turn inform the behavioral regulation of individuals living in social groups, is a key feature identified by different approaches seeking to understand the psychology of morality. Yet, cluster analysis of the studies carried out to examine this reveals that empirical approaches tend to focus either on the identification of general principles and individuals who endorse them or on the impact of specific norms and how these affect the choices people make in concrete realities. However, they mostly do this while neglecting to examine how moral norms pertaining to specific behaviors can be traced to general moral principles. Yet, the ambiguity in translating abstract moral principles into specific behavioral guidelines is where the action is. This is what causes disagreement between individuals or groups endorsing diverging interpretations of the same moral rule. This ambiguity also provides the leeway for people to redeem their moral self after moral transgressions by selectively choosing which specific behaviors are diagnostic for their broader moral intentions and which are not.

Furthermore, the emotional burden of moral experiences and the impact this has on subsequent moral reasoning and moral judgments are strongly emphasized in different perspectives that are seen as influential in this literature (e.g., Blasi, 1980 ; Haidt, 2001 ). Notably, the emotions that are seen as distinctive for human morality (shame and guilt) refer to explicitly self-reflective states. The experience of these particular emotions helps people to identify the moral implications of their judgments and behaviors, and the anticipation of these emotions supports efforts to regulate their behavior accordingly. Here too there is a disjoint between what theoretical perspectives emphasize and what empirical studies examine. That is, across the board, moral emotions constitute the least frequently examined research theme. Furthermore, even the studies that do address moral emotions do not always tap into these uniquely human and self-reflective moral emotions. Instead, there seems to be a preference for research paradigms that focus on the emergence of disgust. While this allows researchers to use implicit measures to assess physical or symbolic distancing of the self from aversive situations, other studies have noted that the stimuli examined in this way may not necessarily have moral overtones. As a result, the added value of such work for understanding the emotional implications of moral situations or charting the role of emotions in the regulation of one’s own moral behavior is limited.

Highly influential approaches that are very frequently cited in the studies reviewed (most notably, Blasi, 1980 ; Haidt, 2001 ) emphasize the importance of connecting “thoughts” and cognitions to “experiences” and actions. Yet, we conclude that the clusters of research that emerge are located in a space where these emerge as opposite extremes. Most studies either address general principles, overall guidelines, or abstract preferences in rule endorsement or focus on concrete experiences and actions, without connecting the two. Furthermore, the role of moral emotions in relation to moral judgments, moral reasoning, moral behaviors, and moral self-views remains underexamined in this literature.

Reliance on Self-Reports Versus Observation of Self-Justifying Tendencies

A fourth conclusion emerging from our review resonates with concerns expressed by Augusto Blasi, more than 35 years ago. That is, he noted that researchers examining moral cognition (including information, norms, attitudes, values, reasoning, and judgments) ultimately aim to understand the role that different elements play in creating moral action. At the same time, he concluded that the designs and measures used in the 71 studies he reviewed actually did not allow researchers to substantially advance their understanding of the issues they aimed to examine and accused them of “intellectual laziness” (p. 9) in failing to provide a clearly articulated theoretical rationale for the relations examined.

In our review examining more than 1,000 empirical studies that were published since, we still see similar concerns emerging. In fact, there is a marked reliance on self-reports, explicit judgments or choices, and self-stated behavioral intentions, and we found very few examples of studies using implicit indicators of moral concerns or (psycho)physiological measures. This is unfortunate, in view of the far-reaching social implications of moral choices and moral behaviors, causing self-presentational concerns and defensive responses to guide the deliberate responses of research participants (see also Ellemers, 2017 ).

Furthermore, the empirical measures generally used largely rely on self-reports of general dispositions or overall preferences and intentions. This does not reflect current theoretical insights on the prevalence of defensive and self-justifying mechanisms in the way people think about the moral behaviors of themselves and others. It is also not in line with the results of empirical studies reviewed here, documenting how strategic self-presentation, biased judgments, and other self-defensive responses can be raised by various types of situational features that may be incidental and unrelated to the moral issue at hand. In light of the empirical evidence demonstrating various types of bias in each of the research themes examined, it is difficult to understand why so many researchers still rely on measures that capture individual differences or general tendencies and assume these have predictive value across situations.

Even though studies documenting factors that may induce biased judgments call into question the predictive value of standardized measures of morality, we do think it is theoretically meaningful to establish these situational variations. The crucial implication of these findings is that seemingly unimportant or irrelevant situational features can have far-reaching implications for real-life moral decisions. This knowledge can be used to redesign relevant conditions, for instance, at work, to support employees who feel they need to blow the whistle ( Keenan, 1995 ) or to help sales persons decide how to deal with customer interests ( Kurland, 1995 ).

Recent Developments and Where to Go From Here

We devote this final section of our review to promising avenues that researchers have started to pursue, which offer concrete examples of how to connect different strands of research and examine additional levels of analysis that may inspire future researchers. Even though we have criticized the lack of integration between the different research themes examined, some of the seminal studies in our review stand out in that they are also frequently cited in another theme than where they were classified. This is the case for the seminal study by Graham et al. (2009) on moral reasoning, the work of Bandura et al. (1996) on moral disengagement, and the work by Leach et al. (2007) on the importance of morality for group identities. This attests to the fact that at least some of the studies reviewed here have successfully connected different themes in research on morality.

This tendency seems to be followed up in some recent studies we found. For instance, several researchers have begun to investigate how general principles in moral reasoning relate to concrete behaviors in specific situations. These include studies relating the endorsement of abstract moral principles to the donations people make to different causes (migrants, medical research, international aid; Nilsson, Erlandsson, & Vastfjall, 2016 ). Similarly, endorsement of general moral principles or values has been related to specific behaviors in experimental games (trust game, thieves game; Clark, Swails, Pontinen, Boverman, Kriz, & Hendricks, 2017 ; Kistler, Thöni, & Welzel, 2017 ). This has yielded more insight into how abstract principles relate to specific behaviors and has demonstrated which principles are relevant in which situations. For instance, actions requiring the exercise of self-control were found to relate to “binding” moral foundations in particular ( Mooijman, Meindl, et al., 2018 ).

Another promising avenue for future research is charted by researchers who have begun to address the role of emotions in guiding other responses relating to morality. This includes work demonstrating how individual differences in emotion regulation affect moral reasoning ( Zhang, Kong, & Li, 2017 ). Furthermore, it has been shown that interventions that alter emotional responses can affect moral behaviors (e.g., Jackson, Gaertner, & Batson, 2016 ; see also Yip & Schweitzer, 2016 ). Others have shown that understanding the experience of guilt and shame in response to harm done to others helps predict subsequent self-forgiving and self-punishing responses ( Griffin, Moloney, Green, et al., 2016 ).

The overreliance on intrapersonal and interpersonal mechanisms in the study of morality has been noted before (see also Ellemers, 2017 ; Ellemers, Pagliaro, & Barreto, 2013 ). Recent research has begun to document a number of intragroup mechanisms that are relevant to increase our understanding of moral behavior. This includes work showing the reluctance of groups to include individuals in particular when their morality is called into question ( Van der Lee, Ellemers, Scheepers, & Rutjens, 2017 ). Recent studies also document the ways in which shared social identities and group-specific moral norms may affect moral reasoning ( Gao, Chen, & Li, 2016 ), affect moral behaviors, and overrule individual convictions as people seek to receive respect from other ingroup members ( Bizumic, Kenny, Iyer, Tanuwira, & Huxey, 2017 ; Mooijman, Hoover, Lin, Ji, & Dehghani, 2018 ).

Depending on the nature of the group and the moral norms these endorse, this can have positive as well as negative implications ( Pulfrey & Butera, 2016 ; Renger, Mommert, Renger, & Simon, 2016 ; Stoeber & Hotham, 2016 ; Stoeber & Yang, 2016 ). The relevance and everyday implications of these phenomena are also documented in studies examining the emergence of moral conformity on social media ( Kelly, Ngo, Chituc, Huettel, & Sinnott-Armstrong, 2017 ) or the way international experiences and exposure to multiple moral norms in different foreign countries can elicit moral relativism ( Lu, Quoidbach, Gino, Chakroff, Maddux, & Galinsky, 2017 ).

Furthermore, the overreliance on U.S. samples and political ideologies is now beginning to be complemented by studies examining how moral concerns may be similar or different across cultural and political contexts (e.g., Nilsson & Strupp-Levitsky, 2016 ). Recent work has compared the moral foundations endorsed by Chinese versus U.S. samples ( Kwan, 2016 ), has examined this among Muslims in Turkey (Yilmaz, Harma, Bahçekapili, & Cesur, 2016), and has made other intercultural comparisons ( Stankov & Lee, 2016a , 2016b ; Sullivan, Stewart, Landau, Liu, Yang, & Diefendorf, 2016 ). This work shows that some moral concerns emerge consistently across different cultural contexts and relates them to the macro-level cultural values and corruption indicators that characterize those contexts ( Mann, Garcia-Rada, Hornuf, Tafurt, & Ariely, 2016 ). However, it has also revealed that different political systems (in Finland, Kivikangas, Lönnquist, & Ravaja, 2017 ), cultural values (in India, Clark, Bauman, Kamble, & Knowles, 2017 ), or relations between social groups (in Lebanon and Morocco, Obeid, Argo, & Ginges, 2017 ) may raise different moral concerns and behaviors than are commonly observed in the United States (see also Haidt et al., 1993 ).

The Paradox of Morality

The increased interest of psychological researchers in issues relating to morality was prompted at least partly by societal developments during the past years. These have raised questions from the general public and made available research funds to address issues relating to civic conduct, ethical leadership, and moral behavior in various professional contexts ranging from finance and sports, to community care and science. Therefore, we think it is relevant to consider how the body of evidence that is currently available speaks to these issues.

A recurring theme in this literature, which also explains some of the difficulties encountered by empirical researchers, relates to what we will refer to as the “paradox of morality.” That is, from all the research reviewed here, it is clear that most people have a strong desire to be moral and to appear moral in the eyes of (important) others. The paradox is that the sincere motivation to do what is considered “right” and the strong aversion to being considered morally deficient can make people untruthful and unreliable as they are reluctant to own up to moral lapses or attempt to compensate for them. Paradoxically too, those who care less about their moral identity may actually be more consistent in their behavior and more accurate in their self-reports as they are less bothered by appearing morally inadequate. As a result, all the research that reveals self-defensive responses when people are unable to live up to their own standards or those of others, or when they are reminded of their moral lapses, implies that there is limited value in relying on people’s self-stated moral principles or moral ideals to predict their real-life behaviors.

On an applied note, this paradox of morality also clarifies some of the difficulties of aiming for moral improvement by confronting people with their morally questionable behaviors. Such criticism undermines people’s moral self-views and likely raises guilt and shame. This in turn elicits self-defensive responses (justifications, victim blaming, moral disengagement) in particular among those who think of themselves as endorsing universal moral guidelines prescribing fairness and care. Furthermore, questioning people’s moral viewpoints easily raises moral outrage and aggression toward others who think differently. This is also visible in studies examining moral rebels and moral courage (those who stand up for their own principles) or moral entrepreneurship and moral exporting (those who actively seek to convince others of their own moral principles). While the behavior of such individuals would seem to deserve praise and admiration as exemplifying morality, it also involves going against other people’s convictions and challenging their values, which is not always welcomed by these others. All these responses stand in the way of behavioral improvement. Instead of focusing on people’s explicit moral choices to make them adapt their behavior, it may therefore be more effective to nudge them toward change by altering goal primes, situational features, or decision frames.

We have noted above that it would be misleading to think that morality can be captured as an individual difference that has predictive value across situations. Yet, this is the conclusion that is often implicitly drawn and also informs many of the attempts to monitor and guard moral behavior in practice. For instance, in many businesses, the standard response to integrity incidents or moral transgressions is to sanction or expel specific individuals and to make newcomers pass assessment tests and take pledges. The research reviewed here suggests that attempts to guard moral behavior, for instance at work, may be more effective when these also take into account contextual features, for instance, by critically assessing organizational norms, team climates, or leadership behaviors that have allowed for such behavior to emerge.

The overreliance on intrapersonal analyses and individual moral judgments easily masks that individual moral standards are defined in relation to group norms. Whether individuals are considered to do what is “good” or “bad” depends on how their moral standards relate to what the group deems (in)appropriate. Indeed, we have seen that what is considered “immoral” behavior by some might be seen as morally adequate or even desirable by others. For instance, collective interests and limits to the circle of care may lead individuals to show loyalty to the moral guidelines of their own group while placing others outside their circle of care. Bolstering people’s sense of community and common identity or appealing to their altruism and empathy may therefore not necessarily resolve moral issues. Instead, this may just as well increase biased decision making or intensify intergroup conflicts on what is morally acceptable behavior. The current emphasis of many studies on individual differences and the focus on finding out how to suppress selfishness or how to avoid cheating may mask such group-level concerns.

During the past years, many researchers have examined questions relating to the psychology of morality. Our main conclusion from the studies reviewed here is that these have yielded insights that are unbalanced, neglect some key features of human morality specified in influential theoretical perspectives, and are not well integrated. The current challenge for theory development and research in morality therefore is to consider the complexity and multifaceted nature of the psychological antecedents and implications of moral behavior and to connect different mechanisms—instead of studying them in isolation.

Supplemental Material

Author Contributions: The division of tasks and responsibilities between the authors was as follows: N.E. designed the study; developed the coding scheme; coded and interpreted studies published from 2000 through 2017; supervised the further data collection, analyses, and preparation of tables and figures; and prepared text for the introduction, method, results, and discussion. J.V.d.T. designed the study, helped develop the coding scheme, coded studies published from 2000 through 2017, and revised text for the introduction, method, results, and discussion. Y.P. collected and interpreted studies published from 2000 through 2013 and prepared the database emerging from the first wave of data collection for further coding and analysis. T.v.L. conducted the bibliometric analyses, prepared figures and statistics reporting these analyses, and prepared text describing the method and results of the bibliometric analyses.

Authors’ Note: This research was made possible by a Netherlands Organization for Scientific Research (NWO) SPINOZA grant and a National Institute of Advanced Studies (NIAS) Fellowship grant awarded to the first author and an NWO RUBICON grant awarded to the second author. We thank Jamie Breukel, Nadia Buiter, Kai van Eekelen, Piet Groot, Miriam Hoffmann-Harnisch, Martine Kloet, Jeanette van der Lee, Marleen van Stokkum, Esmee Veenstra, Melissa Vink, and Erik van Wijk for their assistance in completing the database and preparing materials for the article.

Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.

Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.

Supplemental Material: Supplemental material is available online with this article.

Moral Philosophy

Definition of Moral Philosophy

Moral philosophy, often called ethics, is like a compass for right and wrong actions. Imagine you’re at a fork in the road and each direction leads to a different action. Moral philosophy is your guide, helping you figure out which direction to go.

The first simple definition of moral philosophy is this: it’s a set of tools that help us choose the best path when making decisions. This isn’t just about following rules; it’s about understanding why we feel certain actions are correct and others are not, and how our decisions affect everyone involved.

The second definition is: moral philosophy is about figuring out how to live well together. This means we look at the big picture of what our actions mean and how they can help us create a peaceful world where we treat each other kindly.

Types of Moral Philosophy

There are many ways to think about what is right and wrong. Here are three major types:

  • Consequentialism : This says that the results of what we do are the most important part. It suggests that if the outcome of our actions is good, then the action was also good. Imagine you bake cookies for a friend who’s feeling down, and it cheers them up. This act is seen as good because it made your friend happy.
  • Deontology : This one is focused on following rules, without worrying about the outcome. It’s like saying that you should always tell the truth, even if it might hurt someone’s feelings, because the rule itself is good and must be respected.
  • Virtue Ethics: This approach is all about being a good person. It’s not so much about each action or rule, but about whether you’re honest, brave, and kind. When you make a choice, you think about whether it’s helping you become a better person.

Examples of Moral Philosophy

Here are some real-life situations where moral philosophy comes into play:

  • Consequentialism: If your actions at school lead to everyone getting a longer recess and being happier, consequentialism says that decision was a good one because it led to a great result for many people.
  • Deontology: Let’s say you find a $20 bill on the ground at school. Deontology would tell you to turn it in to the lost and found, because keeping it would be like stealing, and stealing is against the rules.
  • Virtue Ethics: When a new student comes to your school and seems alone, if you decide to befriend them because it’s kind and you want to be a friendly person, that’s virtue ethics guiding your choice.

Why is Moral Philosophy Important?

Moral philosophy is vital because it gives us a framework to think about our decisions and their impacts. Imagine tossing a pebble into a pond. The ripples spread far and wide, just like the effects of our choices. By using moral philosophy, we help to ensure the ripples we make in the world spread kindness and fairness, touching our families, friends, and even strangers in positive ways.

For the average person, moral philosophy helps us figure out how to act in tough situations. It’s like a guidebook for living a good life. Let’s say you’re in a group project and someone isn’t doing their part. Moral philosophy can help you decide the best way to handle it, so the project succeeds, and everyone is treated fairly. It helps us build a world where everyone can succeed and be happy.

Origin of Moral Philosophy

Thousands of years ago, smart people from different parts of the world started talking about the right way to live. Think of people like Confucius in China, the Buddha in India, and philosophers in Greece; they all explored life’s big questions and shared their knowledge. Thanks to their early thoughts on ethics, we still learn from their wisdom on how to be good today.

Controversies in Moral Philosophy

People often disagree on some parts of moral philosophy, and here are a few examples:

  • The fact-value distinction: This is the debate about whether what’s true and what’s important are totally separate, or if they sometimes overlap. Is there a clear-cut difference between hard facts and personal values, or do they influence each other?
  • Moral relativism vs. moral absolutism: Relativists think that what’s right or wrong changes depending on the situation or culture, while absolutists believe there is one true answer to moral questions, no matter the circumstances.
  • The role of emotion in moral decision-making: Some people believe that our feelings should lead us when deciding what’s right or wrong, while others argue that clear, logical thinking should guide us instead.

As new challenges arise with things like technology and environmental issues, moral philosophy keeps changing. We have ongoing conversations that help us continue to learn and improve our understanding.

Related Topics

Moral philosophy is connected to many other subjects. Here are some that share its principles:

  • Political Philosophy : This examines how societies should be governed. Political decisions often involve moral judgments about what is right for the community and the individuals in it.
  • Justice: This concept is all about being fair. It looks at the way people are treated by the law, what is considered just or unjust, and whether everyone has equal opportunities. Moral philosophy plays a big role in how we think about justice.
  • Social Philosophy: This deals with how societies are structured and how people should act within them. It includes thinking about community life, individual responsibilities, and how we can live peacefully side by side, which are all key concerns in moral philosophy as well.

In conclusion, moral philosophy assists us in deeply considering our actions and lives. It guides us towards fairness and goodness, so we can build a world where we all have a chance to flourish. By learning different angles like consequentialism, deontology, and virtue ethics, and thinking about connected subjects like politics and justice, we become better equipped to serve the common good, making thoughtful choices that benefit everyone.

Encyclopedia Britannica


What’s the Difference Between Morality and Ethics?

Generally, the terms ethics and morality are used interchangeably, although a few different communities (academic, legal, or religious, for example) will occasionally make a distinction. In fact, Britannica’s article on ethics treats the term as synonymous with moral philosophy. While understanding that most ethicists (that is, philosophers who study ethics) consider the terms interchangeable, let’s go ahead and dive into these distinctions.

(Read Peter Singer's Britannica entry on ethics.)

Both morality and ethics loosely have to do with distinguishing the difference between “good and bad” or “right and wrong.” Many people think of morality as something that’s personal and normative, whereas ethics is the standards of “good and bad” distinguished by a certain community or social setting. For example, your local community may think adultery is immoral, and you personally may agree with that. However, the distinction can be useful if your local community has no strong feelings about adultery, but you consider adultery immoral on a personal level. By these definitions of the terms, your morality would contradict the ethics of your community. In popular discourse, however, we’ll often use the terms moral and immoral when talking about issues like adultery regardless of whether it’s being discussed in a personal or in a community-based situation. As you can see, the distinction can get a bit tricky.

It’s important to consider how the two terms have been used in different fields so that we can weigh the connotations of each. For example, morality has a Christian connotation for many Westerners, since moral theology is prominent in the church. Similarly, ethics is the term used in conjunction with business, medicine, or law. In these cases, ethics serves as a personal code of conduct for people working in those fields, and the ethics themselves are often highly debated and contentious. These connotations have helped guide the distinctions between morality and ethics.

Ethicists today, however, use the terms interchangeably. If they do want to differentiate morality from ethics, the onus is on the ethicist to state the definitions of both terms. Ultimately, the distinction between the two is as substantial as a line drawn in the sand.




Published online by Cambridge University Press:  04 August 2010

Since the ancients, philosophers, theologians, and political actors have pondered the relationship between the moral realm and the political realm. Complicating the long debate over the intersection of morality and politics are diverse conceptions of fundamental concepts: the right and the good, justice and equality, personal liberty and public interest. Divisions abound, also, about whether politics should be held to a higher moral standard at all, or whether, instead, pragmatic considerations or realpolitik should be the final word. Perhaps the two poles are represented most conspicuously by Aristotle and Machiavelli. For Aristotle, the proper aim of politics is moral virtue: “politics takes the greatest care in making the citizens to be of a certain sort, namely good and capable of noble actions.” Thus, the statesman is a craftsman or scientist who designs a legal system that enshrines universal principles, and the politician's task is to maintain and reform the system when necessary. The science of the political includes more than drafting good laws and institutions, however, since the city-state must create a system of moral education for its citizens. In marked contrast, Machiavelli's prince exalted pragmatism over morality, the maintenance of power over the pursuit of justice. Machiavelli instructed that “a prince, and especially a new prince, cannot observe all those things which are considered good in men, being often obliged, in order to maintain the state, to act against faith, against charity, against humanity, and against religion.”

  • Edited by Ellen Frankel Paul, Fred D. Miller, Jr., and Jeffrey Paul, Bowling Green State University, Ohio
  • Book: Morality and Politics
  • Online publication: 04 August 2010
  • Chapter DOI: https://doi.org/10.1017/CBO9780511573019.001


200 Ethical Topics & Questions to Debate in Essay

Ethical topics and questions are essential for stimulating thoughtful discussions and deepening our understanding of complex moral landscapes. Ethics, the study of what is right and wrong, underpins many aspects of human life and societal functioning. Whether you're crafting an essay or preparing for a debate, delving into ethical issues allows you to explore various perspectives and develop critical thinking skills.

Ethical issues encompass a wide range of dilemmas and conflicts where individuals or societies must choose between competing moral principles. Understanding what ethical issues are involves recognizing situations that challenge our values, behaviors, and decisions. This article provides a thorough guide to ethical topics, offering insights into current ethical issues, and presenting a detailed list of questions and topics to inspire your writing and debates.

Ethical Issues Definition

Ethical issues refer to situations where a decision, action, or policy conflicts with ethical principles or societal norms. These dilemmas often involve a choice between competing values or interests, such as fairness vs. efficiency, privacy vs. security, or individual rights vs. collective good. Ethical issues arise in various fields, including medicine, business, technology, and the environment. They challenge individuals and organizations to consider the moral implications of their actions and to seek solutions that align with ethical standards. Understanding ethical issues requires an analysis of both the potential benefits and the moral costs associated with different courses of action.

⭐ Top 10 Ethical Topics [2024]

  • Climate Change Responsibility
  • Data Privacy in the Digital Age
  • Genetic Engineering
  • Euthanasia and Assisted Suicide
  • Corporate Social Responsibility
  • AI and Automation
  • Animal Rights
  • Freedom of Speech vs. Hate Speech
  • Healthcare Accessibility
  • Human Rights in the Age of Globalization

Ethics Essay Writing Guide

Writing an ethics essay involves more than just presenting facts; it requires a thoughtful analysis of moral principles and their application to real-world scenarios. Understanding ethical topics and what constitutes ethical issues is essential for crafting a compelling essay. Here’s a guide to help you address current ethical issues effectively:

  • Choose a Clear Topic: Select an ethical issue that is both interesting and relevant. Understanding the definition of ethical issues will help you narrow down your choices.
  • Research Thoroughly: Gather information from credible sources to support your arguments. Knowing what ethical issues are and how they are defined can provide a solid foundation for your research.
  • Present Multiple Perspectives: Show an understanding of different viewpoints on the issue. This will demonstrate your grasp of the complexity of current ethical issues.
  • Use Real-world Examples: Illustrate your points with concrete examples. This not only strengthens your arguments but also helps to explain ethical topics in a relatable way.
  • Structure Your Essay: Organize your essay with a clear introduction, body, and conclusion. A well-structured essay makes it easier to present your analysis of ethical issues.
  • Provide a Balanced Argument: Weigh the pros and cons to offer a well-rounded discussion. Addressing various aspects of current ethical issues will make your essay more comprehensive.
  • Conclude Thoughtfully: Summarize your findings and reflect on the broader implications of the issue. This is where you can discuss the impact of ethical issues on society and future considerations.

By following this guide, you will be able to write an ethics essay that not only presents facts but also offers a deep and nuanced analysis of ethical topics.

Selecting the Right Research Topic in Ethics

Choosing the right research topic in ethics can be challenging, but it is crucial for writing an engaging and insightful essay. Here are some tips:

  • Relevance: Ensure the topic is relevant to current societal issues.
  • Interest: Pick a topic that genuinely interests you.
  • Scope: Choose a topic with enough scope for research and debate.
  • Complexity: Aim for a topic that is complex enough to allow for in-depth analysis.
  • Availability of Sources: Make sure there are enough resources available to support your research.

What Style Should an Ethics Essay Be Written In?

When writing an ethics essay, it is essential to adopt a formal and objective style. Clarity and conciseness are paramount, as the essay should avoid unnecessary jargon and overly complex sentences that might obscure the main points. Maintaining objectivity is crucial; presenting arguments without bias ensures that the discussion remains balanced and fair. Proper citations are vital to give credit to sources and uphold academic integrity.

Engaging the reader through a logical flow of ideas is important, as it helps sustain interest and facilitates a better understanding of the ethical topics being discussed. Additionally, the essay should be persuasive, making compelling arguments supported by evidence to effectively convey the analysis of moral issues. By following these guidelines, the essay will not only be informative but also impactful in its examination of ethical dilemmas.

List of Current Ethical Issues

  • The impact of social media on privacy.
  • Ethical considerations in genetic cloning.
  • Balancing national security with individual rights.
  • Privacy concerns in the digital age.
  • The ethics of biohacking.
  • Ethical considerations in space exploration.
  • The ethics of surveillance and data collection by governments and corporations.
  • Ethical issues in the use of facial recognition technology.
  • The ethical implications of autonomous vehicles.
  • The morality of animal testing in scientific research.
  • Ethical concerns in the gig economy.
  • The impact of climate change on ethical business practices.
  • The ethics of consumer data usage by companies.
  • Ethical dilemmas in end-of-life care and assisted suicide.
  • The role of ethics in the development of renewable energy sources.

Ethical Issues in Psychology

  • Confidentiality vs. duty to warn in therapy.
  • Ethical dilemmas in psychological research.
  • The use of placebo in psychological treatment.
  • Ethical issues in the treatment of vulnerable populations.
  • The ethics of involuntary commitment and treatment.
  • Dual relationships and conflicts of interest in therapy.
  • The use of deception in psychological experiments.
  • The ethics of cognitive enhancement drugs.
  • Ethical considerations in online therapy and telepsychology.
  • Cultural competence and ethical practice in psychology.
  • The ethics of forensic psychology and assessment.
  • The impact of social media on mental health and ethical practice.
  • The use of emerging technologies in psychological treatment.
  • Ethical issues in the diagnosis and treatment of mental disorders.
  • The role of ethics in psychological testing and assessment.

Ethical Debate Topics

  • Is capital punishment morally justified?
  • Should organ donation be mandatory?
  • The ethics of artificial intelligence in warfare.
  • Is euthanasia ethically permissible?
  • Should human cloning be allowed?
  • The morality of animal rights vs. human benefit.
  • Is it ethical to use animals for entertainment?
  • Should there be limits on free speech?
  • The ethics of genetic modification in humans.
  • Is it ethical to have mandatory vaccinations?
  • The morality of government surveillance programs.
  • Should assisted reproductive technologies be regulated?
  • The ethics of using performance-enhancing drugs in sports.
  • Should healthcare be considered a human right?
  • The ethical implications of wealth inequality and redistribution.

Medical Ethics Topics

  • Ariel Case Study: a Comprehensive Analysis
  • The Case for and Against Daylight Saving Time
  • Technological Advancements in Medical, Educational & Other Fields
  • The Language of Medicine
  • Medical Ethics: Beneficence and Non-maleficence
  • Overview of What Sonography is
  • The Use of Steroids and HGH in Sports
  • Media and The Scientific Community Treat People Like Tools
  • Informative Speech for Organ Donation
  • Medicine in Our World
  • The Origin of Medical Terminology
  • Preserving Sight: My Journey to Becoming an Optometrist
  • Case of Dr. Eric Poehlman's Ethical Violation
  • Should The NHS Treat Patients with Self-Inflicted Illnesses
  • My Education as a Medical Technologist

Ethics Essay Topics on Business

  • Ethics Report on Panasonic Corporation
  • Case Study on The ACS Code of Morals
  • Differences in Business Ethics Among East Asian Countries
  • Business Ethics in Sports
  • Business Ethics in Different Countries, and Its Importance
  • Selfless Service and Its Impact on Social Change
  • Challenges in Doing Business Across The Border
  • The Importance of Ethics in Advertising
  • Ethical Issues that Businesses Face
  • Profitability of Business Ethics
  • The Law and Morality in Business
  • How Ethnic Variances Affect Worldwide Business
  • The Ethical Practices in The Business Sector in the Modern Economy
  • Key Responsibilities and Code of Ethics in Engineering Profession
  • Analysis of The Code of Ethics in Walmart

Ethics Essay Topics on Environment

  • Understanding The Importance of Keeping Animals Safe
  • The Importance of Treating Animals with Respect
  • CWU and The Issue of Chimpanzee Captivity
  • The Process of Suicidal Reproduction in the Animal World
  • Analysis of The Egg Industry to Understand The Causes of The High Prices in Eggs
  • The Dangers of Zoos
  • Importance for Animals to Be Free from Harm by Humans
  • Should Animals Be Killed for The Benefit of Humans
  • Reasons Why Genetic Engineering Should Be Banned
  • What I Learned in Ethics Class: Environmental Ethics
  • Nanotechnology and Environment
  • Review of The Environmental Protection Act
  • How The Idea of Preservation of Nature Can Benefit from Environmental Ethics
  • The Relation and Controversy Between American Diet and Environmental Ethics
  • Green Technology

Work Ethics Essay Topics

  • The impact of workplace surveillance on employee privacy.
  • Ethical considerations in remote work.
  • Discrimination in the workplace.
  • An Examination of Addiction to Work in The Protestant Work Ethic
  • The Work Ethic of The Millennials
  • My Understanding of The Proper Environment in the Workplace
  • Social Responsibility & Ethics Management Program in Business
  • The Maternity Benefits Act, 1961
  • The Issue of Stealing in The Workplace
  • Chinese Work Management and Business Identity
  • Ethical Issues of Using Social Media at the Workplace
  • The Teleological Ethical Theories
  • Learning Journal on Ethical Conflicts, Environmental Issues, and Social Responsibilities
  • Social Media at Workplace: Ethics and Influence
  • Ethical Issue of Employees Stealing and Whistleblowing

Ethics Essay Topics on Philosophy

  • A Critical Analysis of Ethical Dilemmas in Education and Beyond
  • Overview of What an Ethical Dilemma is
  • The Implications of Exculpatory Language
  • Ethical Dilemmas in End-of-life Decision Making
  • What I Learned in Ethics Class: Integrating Ethics in Aviation
  • Doing What is Right is not Always Popular: Philosophy of Ethics
  • An Analysis of Public Trust and Corporate Ethics
  • Ethical Concerns of Beauty Pageants
  • Simone De Beauvoir’s Contribution to Philosophy and Ethics
  • The Impact on Decision-making and Life Choices
  • Importance and Improvement of Personal Ethics
  • Personal Ethics and Integrity in Our Life
  • Analysis of The Philosophical Concept of Virtue Ethics
  • Understanding Moral Action
  • How to Become a Gentleman
  • A Call for Emphasis on Private Morality and Virtue Teaching
  • A Positive Spin on Ethical Marketing in The Gambling Industry
  • An Overview of The Ethical Dilemma in a Personal Case
  • Bioethical Principles and Professional Responsibilities
  • Ethical Considerations in Counseling Adolescents
  • Ethical Dilemma in College Life
  • Ethical Theories: Deontology and Utilitarianism
  • Issues of Fraud, Ethics, and Regulation in Healthcare
  • Navigating Ethical Dimensions in Education
  • The Ethical Landscape of Advanced Technology
  • Research Paper on The Ethical Issue of Publishing The Pentagon Papers
  • The Trolley Problem: an Ethical Dilemma
  • Analysis of "To The Bitter End" Case Study
  • Ethical Theories: Virtue and Utilitarian Ethics
  • Feminist Ethics: Deconstructing Gender and Morality
  • Is Deadpool a Hero Research Paper
  • My Moral and Ethical Stance
  • The Concept of Ethics and The Pursuit of Happiness
  • The Ethics of Graphic Photojournalism
  • The Quintessence of Justice: a Critical Evaluation of Juror 11's Role
  • The Wolf of Wall Street: Ethics of Greed
  • The Importance of Ethics in Our Daily Life
  • Analysis of The Envy Emotion and My Emotional Norms
  • The Topic of Animal Rights in Relation to The Virtue Theory

Ethics Essay Topics on Science

  • The Cause of Cancer as Illustrated in a Bioethics Study
  • Bioethical Issues Related to Genetic Engineering
  • Ethical Issues in Stem Cell Research
  • The Role of Ethics Committees in Biomedical Research
  • The Legal and Bioethical Aspects of Personalised Medicine Based on Genetic Composition
  • The Ethics of Clinical Trials: Ensuring Informed Consent and Patient Safety
  • Ethical Challenges in Neuroethics: Brain Privacy and Cognitive Liberty
  • Gene Therapy: Ethical Dilemmas and Social Implications
  • Overview of Bioethics: The Trigger of Contentious Moral Topics
  • The Progression of Bioethics and Its Importance
  • The Impact of Artificial Intelligence on Medical Ethics
  • The Drawbacks of Free Healthcare: Economic, Quality, and Access Issues
  • Bioethical Issues in My Sister’s Keeper: Having Your Autonomy Taken to Save Your Sibling
  • The Ethics of Biotechnology in Agriculture: GMOs and Food Safety
  • Ethical Considerations in Organ Donation and Transplantation

List of Ethical Questions for Students

Exploring ethical topics is crucial for students to develop critical thinking and moral reasoning. Here is a comprehensive list of ethical questions for students to discuss and debate. These topics cover a wide range of issues, encouraging thoughtful discussion and deeper understanding.

Good Ethical Questions for Discussion

  • Is it ethical to eat meat?
  • Should parents have the right to genetically modify their children?
  • Is it ever acceptable to lie?
  • Should schools monitor students' social media activity?
  • Is it ethical to use animals in scientific research?
  • Should companies be allowed to patent human genes?
  • Is it right to impose cultural values on others?
  • Should the government regulate internet content?
  • Is it ethical to have designer babies?
  • Should wealthy countries help poorer nations?
  • Is it ethical to keep animals in zoos?
  • Should there be limits to freedom of speech?
  • Is it right to use artificial intelligence in decision-making?
  • Should we prioritize privacy over security?
  • Is it ethical to manipulate emotions through advertising?

Moral Questions to Debate

  • Is genetic modification in humans ethical?
  • Should vaccinations be mandatory?
  • Is government surveillance justified?
  • Is it ethical to use performance-enhancing drugs in sports?
  • Is wealth inequality morally acceptable?
  • Should education be free for everyone?
  • Is it ethical to allow autonomous robots to make life-and-death decisions?

Ethical topics and questions are a rich field for exploration and discussion. By examining these issues, we can better understand the moral principles that guide our actions and decisions. Whether you're writing an essay or preparing for a debate, this comprehensive list of ethical topics and questions will help you engage with complex moral dilemmas and develop your critical thinking skills.



Moral Responsibility

Making judgments about whether a person is morally responsible for their behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.

The judgment that a person is morally responsible for their behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing their behavior as arising, in the right way, from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is one task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and other agents (such as non-human animals and very young children) are generally taken to lack them.

To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that they are morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012, 16–17 and M. Zimmerman 1988, 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good. (See Menges 2017 for an account that emphasizes the independence of blame from judgments about blameworthiness.)

The attention in the philosophical literature given to blame far exceeds that given to praise. One reason for this is that blameworthiness, unlike praiseworthiness, is often taken to involve liability to sanction. Thus, articulating the conditions on blameworthiness may seem the more pressing matter. Perhaps for related reasons, there is a richer language for expressing blame than praise (Watson [1996]2004, 283), and “blame” finds its way into idioms for which there is no ready parallel employing “praise”: compare “ S is to blame for x ” and “ S is to praise for x .” Note, as well, that “holding responsible” is not a neutral expression: it typically arises in blaming contexts (Watson [1996]2004, 284).

Additionally, there may be asymmetries in the contexts in which praise and blame are appropriate: private blame is more familiar than private praise (Coates and Tognazzini 2013b), and while minor wrongs may reasonably earn blame, minimally decent behavior seems insufficient for praise (Eshleman 2014). Finally, the widespread assumption that praiseworthiness and blameworthiness are at least symmetrical in terms of the capacities they require has also been questioned (Nelkin 2008, 2011; Wolf 1980, 1990). Like most work on moral responsibility, this entry will focus largely on the negative side of the phenomenon; for more, see the entry on blame .

In everyday speech, one hears references to “moral responsibility” where the point is to indicate the presence of an obligation. Someone may say that “the United States has a moral responsibility to assist Ukraine,” where this means that the United States ought to adopt certain policies or take certain actions. This entry, however, is concerned not with accounts that specify people’s responsibilities in the sense of obligations, but rather with accounts of whether a person bears the right relation to their actions to be properly held accountable for them.

Moral responsibility should also be distinguished from causal responsibility. We may assign causal responsibility to someone for an outcome that they have caused, and we may also judge the person morally responsible for having caused the outcome. But the powers and capacities that are required for moral responsibility are not identical with an agent’s causal powers, so we cannot always infer moral responsibility from an assignment of causal responsibility. A young child can cause an outcome while failing to fulfill the general requirements on moral responsibility, and even agents who fulfill the general requirements on moral responsibility may explain or defend their behavior in ways that call into question their moral responsibility for outcomes for which they are causally responsible. Suppose that S causes an explosion by flipping a switch: the fact that S had no reason to expect such an outcome may call into question their moral responsibility (or at least their blameworthiness) for the explosion without calling into question their causal contribution to it. (For discussion of moral responsibility for causal outcomes, see §3.5 .)

Having distinguished different senses of “responsibility,” the word will be used in what follows to refer to “moral responsibility” in the sense specified above.

For a long time, the bulk of philosophical work on moral responsibility was conducted in the context of debates about free will and the threat that determinism might pose to free will. A largely unquestioned assumption was that free will is required for moral responsibility, and the central questions had to do with the ingredients of free will and with whether their possession is compatible with determinism. Recently, however, the literature on moral responsibility has addressed issues that are of interest independently of worries about determinism. Much of this entry will deal with these latter aspects of the moral responsibility debate. However, it will be useful to begin with issues at the intersection of concerns about free will and moral responsibility.

1. Freedom, Responsibility, and Determinism

  • 2.1 Forward-Looking Accounts
  • 2.2.1 "Freedom and Resentment"
  • 2.2.2 Criticisms of Strawson's Approach
  • 2.2.3 Resentment and Blame
  • 2.3 Reasons-Responsiveness Views
  • 3.1.1 Attributability versus Accountability
  • 3.1.2 Attributionism
  • 3.1.3 Answerability
  • 3.2 The Moral Competence Condition on Responsibility
  • 3.3 Conversational Approaches to Responsibility
  • 3.4 Standing to Hold Responsible
  • 3.5 Responsibility for Outcomes
  • 3.6 Skepticism about Responsibility
  • 3.7 Moral Luck and Responsibility
  • 3.8 Ultimate Responsibility
  • 3.9 Personal History and Manipulation
  • 3.10 The Epistemic Condition on Responsibility
  • Other Internet Resources
  • Related Entries

What power do responsible agents exercise over their actions? One (partial) answer is that the relevant power is a form of control, and, in particular, a form of control such that the agent could have done otherwise than to perform the action in question. This captures one standard notion of free will, and one of the central issues in debates about free will has been about whether possession of it (free will, in the ability-to-do-otherwise sense) is compatible with causal determinism (or with, for example, divine foreknowledge—see the entry on foreknowledge and free will ).

If causal determinism obtains, then the occurrence of every event (including events involving human deliberation, choice, and action) was made inevitable by—because it was causally necessitated by—the facts about the past (and about the laws of nature) prior to the event. Under these conditions, the facts about the present, and about the future, are uniquely fixed by the facts about the past (and about the laws of nature): given these earlier facts, the present and the future can unfold in only one way. For more, see the entry on causal determinism .

If free will requires the ability to do otherwise, then it is easy to see why free will may be incompatible with causal determinism. One way of getting at this incompatibilist worry is to focus on the way in which performance of a given action by an agent should be up to the agent if they have the sort of free will required for moral responsibility. As the influential Consequence Argument has it (Ginet 1966; van Inwagen 1983, 55–105), the truth of determinism entails that an agent’s actions are not really up to the agent since they are the unavoidable consequences of things over which the agent lacks control. Here is an informal summary of this argument from Peter van Inwagen’s An Essay on Free Will (1983):

If determinism is true, then our acts are the consequences of the laws of nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore, the consequences of these things (including our present acts) are not up to us. (1983: 16)

For an important argument that the Consequence Argument conflates different senses in which the laws of nature are not up to us, see Lewis (1981). For more on incompatibilism, see the entries on free will , arguments for incompatibilism , and incompatibilist (nondeterministic) theories of free will , as well as Clarke (2003).

Compatibilists maintain that free will and moral responsibility are compatible with determinism. Versions of compatibilism have been defended since ancient times. The Stoics—Chrysippus, in particular—argued that the truth of determinism does not entail that human actions are entirely explained by factors external to agents; thus, human actions are not necessarily explained in a way that is incompatible with praise and blame (see Bobzien 1998 and Salles 2005 for Stoic views on freedom and determinism). Similarly, philosophers in the Modern period (such as Hobbes and Hume) distinguished the general way in which our actions are necessitated if determinism is true from the specific instances of necessity sometimes imposed on us by everyday constraints on behavior (e.g., coercive pressures or physical impediments that make it impossible to act as we would like). The difference is that the necessity involved in determinism is compatible with agents acting as they choose: even if S's behavior is causally determined, it may be behavior that S chose to perform. And perhaps the ability that matters for free will (and responsibility) is just the ability to act as one chooses, which seems to require only the absence of external constraints and not the absence of determinism.

This compatibilist tradition was carried into the 20 th century by logical positivists such as Ayer (1954) and Schlick ([1930]1966). Here is how Schlick expressed a central compatibilist insight in 1930 (drawing, in particular, on Hume):

Freedom means the opposite of compulsion; a man is free if he does not act under compulsion , and he is compelled or unfree when he is hindered from without…when he is locked up, or chained, or when someone forces him at the point of a gun to do what otherwise he would not do. (1930 [1966: 59])

Since deterministic causal pressures do not always force one to “do what otherwise he would not do,” freedom—at least of the sort specified by Schlick—is compatible with determinism.

A related compatibilist strategy, influential in the early and mid-20 th century, was to offer a conditional analysis of the ability to do otherwise (Ayer 1954, Moore 1912; for earlier expressions, see Hobbes [1654]1999 and Hume [1748]1978). As noted above, even if determinism is true, agents may often act as they choose; it is also compatible with determinism that an agent who performed act A (on the basis of their choice to do so) might have performed a different action on the condition that the agent had chosen to perform the other action. Even if a person’s actual behavior is causally determined by the actual past, it may be that if the past had been suitably different (if the person’s desires, intentions, and choices had been different), then they would have acted differently. Perhaps this is all that the ability to do otherwise comes to.

However, this compatibilist picture is open to serious objections. It might be granted that an ability to act as one sees fit is valuable, and perhaps related to the type of freedom at issue in the free will debate, but it does not follow that this is all that possession of free will comes to. People who have certain desires as a result of indoctrination, brainwashing, or psychopathology may act as they choose, but their possession of free will and moral responsibility may be questioned. (For more on the relevance of such factors, see §3.2 and §3.9 .) The conditional analysis also seems open to the following counterexample. It might be true that an agent who performs act A would have omitted A if they had so chosen, but it might also be true that the agent in question suffers from an overwhelming compulsion to perform act A . The conditional analysis suggests that the agent in question retains the ability to do otherwise than A , but given their compulsion, it seems clear that they lack this ability (Chisholm 1964, Lehrer 1968, van Inwagen 1983).

Despite the above objections, the compatibilist project described so far has had lasting influence. The fact that determined agents can act as they see fit is still an important inspiration for compatibilists, as is the fact that determined agents may have acted differently in counterfactual circumstances. For more, see the entry on compatibilism . For recent accounts related to and improving upon early compatibilist approaches, see Fara (2008), M. Smith (2003), and Vihvelin (2004); for criticism of these accounts, see Clarke (2009).

Compatibilists have also argued that moral responsibility does not require the ability to do otherwise. If this is right, then determinism would not threaten responsibility by ruling out access to alternatives (though it might threaten responsibility in other ways: see van Inwagen 1983, 182–88 and Fischer and Ravizza 1998, 151–168). In an influential 1969 paper, Harry Frankfurt offers examples meant to show that an agent can be morally responsible for an action even if he could not have done otherwise. Versions of these examples are often called Frankfurt cases or Frankfurt examples. In the basic form of the example, an agent, Jones, considers a certain action. Another agent, Black, would like to see Jones perform this action and, if necessary, Black can make Jones perform it by intervening in Jones’s deliberative processes. However, as things transpire, Black does not intervene in Jones’s decision making since he can see that Jones will perform the action on his own. Black does not intervene to ensure Jones’s action, but he could have and would have had Jones shown some sign that he would not perform the action on his own. Therefore, Jones could not have done otherwise, yet he seems responsible for his behavior since he does it on his own.

There are questions about whether Frankfurt’s example really shows that Jones couldn’t have done otherwise and that he is morally responsible. How can Black be certain whether Jones would perform the action on his own? There seems to be a dilemma here. Perhaps determinism obtains in the universe of the example, and Black sees some sign that indicates the presence of factors that causally ensure that Jones will behave in a particular way. But in this case, incompatibilists are unlikely to grant that Jones is morally responsible since they believe that moral responsibility is incompatible with determinism. On the other hand, perhaps determinism is not true in the universe of the example, but then it is not clear that the example excludes alternatives for Jones: if Jones’s behavior isn’t causally determined, then perhaps he can do otherwise. For objections to Frankfurt’s original example along these lines, see Ginet (1996) and Widerker (1995); for defenses of Frankfurt, see Fischer (2002; 2010); and for refined versions of Frankfurt’s example, meant to clearly deny Jones access to alternatives, see Mele and Robb (1998), Hunt (2000), and Pereboom (2000; 2001, 18–28). For a valuable collection on this topic, see Widerker and McKenna (2006).

In response to such criticisms, Frankfurt has said that his example was intended mainly to draw attention to the fact “that making an action unavoidable is not the same thing as bringing it about that the action is performed” (2006, 340; emphasis in original). In particular, while determinism may make an agent’s action unavoidable, it does not follow that the agent acts only because determinism is true: it may also be true that the agent acts a certain way because they want to. The point of his original example, Frankfurt suggests, was to draw attention to the significance that the actual causes of an agent’s behavior can have independently of whether the agent might have done something else. Frankfurt concludes that “[w]hen a person acts for reasons of his own … the question of whether he could have done something else instead is quite irrelevant” for the purposes of assessing responsibility (2006, 340). A focus on the actual causes that lead to behavior, as well as investigation into when an agent can be said to act on their own reasons, has characterized a great deal of work on responsibility since Frankfurt’s essay.

2. Some Approaches to Moral Responsibility

2.1 The Forward-Looking Approach

Forward-looking approaches to moral responsibility justify responsibility practices by focusing on the beneficial consequences that can be obtained by engaging in these practices. This approach was influential in the earlier parts of the 20th century (as well as before), had fallen out of favor by the closing decades of that century, and has recently been the subject of renewed interest.

Forward-looking perspectives emphasize one of the points discussed in the previous section: an agent’s being subject to determinism does not entail that they are subject to constraints that force them to act independently of their choices. If this is true, then, regardless of the truth of determinism, it may be useful to offer certain incentives to agents—to praise and blame them—in order to encourage them to make certain future choices and thus to secure positive behavioral outcomes.

According to some articulations of the forward-looking approach, to be a responsible agent is simply to be an agent whose motives, choices, and behavior can be shaped in this way. Thus, Schlick argued that

The question of who is responsible is the question concerning the correct point of application of the motive …. in this its meaning is completely exhausted; behind it lurks no mysterious connection between transgression and requital…. It is a matter only of knowing who is to be punished or rewarded, in order that punishment and reward function as such—be able to achieve their goal. ([1930]1966, 61; emphasis in original)

According to Schlick, the goals of punishment and reward have nothing to do with the past: the idea that punishment “is a natural retaliation for past wrong, ought no longer to be defended in cultivated society” ([1930]1966, 60; emphasis in original). Instead, punishment ought to be “concerned only with the institution of causes, of motives of conduct …. Analogously, in the case of reward we are concerned with an incentive” ([1930]1966, 60; emphasis in original).

J. J. C. Smart (1961) also defended a well-known forward-looking approach to responsibility. Smart claimed that to blame someone for their behavior is simply to assess the behavior negatively (to “dispraise” it) while simultaneously ascribing responsibility for the behavior to the agent. And, for Smart, an ascription of responsibility merely involves taking an agent to be such that they would have omitted the behavior if they had been provided with a motive to do so. Whatever sanctions may follow an ascription of responsibility are administered with an eye to giving an agent a motive to refrain from such behavior in the future.

Smart’s approach has its contemporary defenders (Arneson 2003), but many have found it lacking. R. Jay Wallace argues that an approach like Smart’s “leaves out the underlying attitudinal aspect of moral blame” (Wallace 1996, 56, emphasis in original; see the next subsection for more on blaming attitudes). According to Wallace, the attitudes involved in blame are “backward-looking and focused on the individual agent who has done something morally wrong” (Wallace 1996, 56). But a forward-looking approach, with its focus on bringing about desirable outcomes, “is not directed exclusively toward the individual agent who has done something morally wrong, but takes account of anyone else who is susceptible to being influenced by our responses” (Wallace 1996, 56; emphasis added). In exceptional cases, a focus on beneficial outcomes may provide grounds for treating as blameworthy those who are known to be innocent (Smart 1973). This feature of some forward-looking approaches has led to particularly strong criticism.

Recent efforts have been made to develop partially forward-looking accounts of responsibility that evade some of the criticisms mentioned above. These accounts justify our general system of responsibility practices by appeal to its suitability for fostering moral agency and the acquisition of capacities required for such agency. Most notable in this regard is Manuel Vargas’s “agency cultivation model” of responsibility (2013; also see Jefferson 2019 and McGeer 2015). Recent conversational accounts of responsibility (§3.3) also have a forward-looking component insofar as they regard those with whom one might have fruitful moral interactions as candidates for responsibility. Some responsibility skeptics have also emphasized the forward-looking benefits of certain responsibility practices. Derk Pereboom—who rejects desert-based blame—has argued that some conventional blaming practices can be maintained (even after ordinary notions of blameworthiness have been left behind) insofar as these practices are grounded in “non-desert invoking moral desiderata” such as “protection of potential victims, reconciliation to relationships both personal and with the moral community more generally, and moral formation” (2014, 134; also see Caruso 2016, Caruso and Pereboom 2022, Levy 2012, Milam 2016). (For more on skepticism about responsibility, see §3.6 and the entry on skepticism about moral responsibility.)

2.2 The Reactive Attitudes Approach

P. F. Strawson’s 1962 paper, “Freedom and Resentment,” is the inspiration for a great deal of contemporary work on responsibility, especially the work of compatibilists. Strawson focuses on the emotions—the reactive attitudes—that play a fundamental role in our practices of holding one another responsible. He suggests that attending to the logic of these emotional responses yields an account of what it is to be open to praise and blame that need not invoke the incompatibilist’s conception of free will.

Part of the novelty of Strawson’s approach is its emphasis on the “importance that we attach to the attitudes and intentions towards us of other human beings” ([1962]1993, 48) and on “how much it matters to us, whether the actions of other people … reflect attitudes towards us of goodwill, affection, or esteem on the one hand or contempt, indifference, or malevolence on the other” ([1962]1993, 49). For Strawson, our practices of holding others responsible are largely responses to these things: that is, “to the quality of others’ wills towards us” ([1962]1993, 56).

To get a sense of the importance of quality of will for our interpersonal relations, note the difference in your response to one who injures you accidentally as compared to how you respond to one who does you the same injury out of “contemptuous disregard” or “a malevolent wish to injure [you]” (P. Strawson [1962]1993, 49). The second case is likely to arouse a type and intensity of resentment that would not reasonably be felt in the first case. Corresponding points may be made about gratitude: you would likely not have the same feelings of gratitude toward a person who benefits you accidentally as you would toward one who does so out of concern for your welfare.

According to Strawson, the tendency to respond with reactive attitudes to another’s display of good or ill will involves imposing on the other a demand for moral respect and due regard ([1962]1993, 63). Thus, among the circumstances that mollify a person’s negative reactive attitudes are those which show that—perhaps despite initial appearances—the demand for due regard has not been ignored or flouted. When someone explains that the injury they caused you was entirely unforeseen and accidental, they indicate that their regard for your welfare was not insufficient and that they are, therefore, not an appropriate target of the attitudes involved in blame.

An agent who excuses themselves from blame in the above way is not calling into question their status as a generally responsible agent: they are still open to the demand for due regard and liable, in principle, to reactive responses. Other agents, however, may be inapt targets for blame and the reactive emotions precisely because they are not legitimate targets of a demand for regard. In these cases, an agent is not excused from blame but exempted from it: it is not that their behavior is discovered to have been non-malicious, but rather that they are recognized as one of whom better behavior cannot reasonably be demanded. (The widely-used terminology in which the above contrast is drawn—“excuses” versus “exemptions”—is due to Watson [1987]2004).

For Strawson, the most important group of exempt agents includes those who are, at least for a time, significantly impaired for normal interpersonal relationships. These agents may be children, or psychologically impaired like the “schizophrenic” (P. Strawson [1962]1993, 51). Alternatively, exempt agents may simply be “wholly lacking … in moral sense” (P. Strawson [1962]1993, 58), perhaps because they suffered from “peculiarly unfortunate … formative circumstances” (P. Strawson [1962]1993, 52). These agents are not candidates for the range of responses involved in our personal relationships because they do not participate in these relationships in the right way for such responses to be sensibly applied to them. Rather than taking up interpersonally-engaged attitudes (that presuppose a demand for respect) toward exempt agents, we take an objective attitude toward them. Such an agent may be regarded merely as “an object of social policy,” something “to be managed or handled or cured or trained” (P. Strawson [1962]1993, 52).

Strawson’s perspective has an important compatibilist upshot. For one thing, Strawson claims that our “commitment to participation in ordinary interpersonal relationships is … too thoroughgoing and deeply rooted for us to take seriously the thought that” the truth of determinism entails that such relationships do not, or should not, exist ([1962]1993, 54); but being involved in these relationships “precisely is being exposed to the range of reactive attitudes” that constitute our responsibility practices ([1962]1993, 54). So, regardless of the truth of determinism, we cannot give up—not entirely at least—these ways of engaging with one another. Strawson also insists that the truth of determinism would not show that human beings generally occupy excusing or exempting conditions. It would not follow from the truth of determinism “that anyone who caused an injury either was quite simply ignorant of causing it or had acceptably overriding reasons for” doing so (P. Strawson [1962]1993, 53; emphasis in original); nor would it follow “that nobody knows what he’s doing or that everybody’s behaviour is unintelligible in terms of conscious purposes or that everybody lives in a world of delusion or that nobody has a moral sense” (P. Strawson [1962]1993, 59).

Strawson argues that learning that determinism is true would not raise general concerns about our responsibility practices. This is because the truth of determinism would not show that human beings are generally abnormal in a way that would call into question their openness to the reactive attitudes: “it cannot be a consequence of any thesis which is not itself self-contradictory that abnormality is the universal condition” (P. Strawson [1962]1993, 54). But it has been noted that while the truth of determinism might not suggest universal abnormality, it may well show that normal human beings are morally incapacitated in a way that is relevant to our responsibility practices (Russell 1992, 298–301). Strawson’s claims that we are too deeply and naturally committed to our reactive-attitude-involving practices to give them up, and that doing so would irreparably distort our moral lives, have also been questioned (Nelkin 2011, 42–45; G. Strawson 1986, 84–120; Watson [1987]2004, 255–58).

A different objection emphasizes the response-dependence of Strawson’s account: that is, the way it explains an agent’s responsibility in terms of the responses that characterize a given community’s responsibility practices, rather than in terms of independent facts about whether the agent is responsible. This feature of Strawson’s approach invites the following reading:

In Strawson’s view, there is no such independent notion of responsibility that explains the propriety of the reactive attitudes. The explanatory priority is the other way around: It is not that we hold people responsible because they are responsible; rather, the idea (our idea) that we are responsible is to be understood by the practice, which itself is not a matter of holding some propositions to be true, but of expressing our concerns and demands about our treatment of one another. (Watson [1987]2004, 222; emphasis in original; see Bennett 1980 for a related, non-cognitivist interpretation of Strawson’s approach)

Strawson’s approach would be particularly problematic if, as the above reading might suggest, it entails that a group’s responsibility practices are—as they stand and however they stand—beyond criticism simply because they are that group’s practices (Fischer and Ravizza 1993, 18).

But there is something to be said from the other side of the debate. It may seem obvious that people are appropriately held responsible only if there are independent facts about their responsibility status. But as Wallace argues, it can be difficult “to make sense of the idea of a prior and thoroughly independent realm of moral responsibility facts” that is separate from our practices and yet to which our practices must answer (1996, 88). For Wallace, giving up on practice-independent responsibility facts doesn’t mean giving up on facts about responsibility; rather, “we must interpret the relevant facts [about responsibility] as somehow dependent on our practices of holding people responsible” (1996, 89). Such an interpretation requires an investigation into our practices, and what emerges most conspicuously, for Wallace, is the degree to which our responsibility practices are organized around a fundamental commitment to fairness (1996, 101). Wallace develops this commitment to norms of fairness into an account of the conditions under which people are appropriately held morally responsible (1996, 103–109). (For a more recent defense of the response-dependent approach to responsibility, see Shoemaker 2017b; for criticism of such approaches, see Todd 2016.)

Due to Strawson’s influence, philosophers often now think of blameworthiness as centrally involving an agent’s being an appropriate object of certain emotions, particularly resentment. (For accounts that focus instead on the appropriateness of guilt, see Carlsson 2017, Clarke 2016, and Duggan 2018, as well as some of the essays in Carlsson 2022).

Emotions seem to have, in some way or other, a representational component, and whether an emotion is fitting in a given context can be assessed, at least in part, in terms of its representational accuracy. So, for example, the emotion of fear may represent its object as dangerous and an episode of fear may be fitting if the object of that emotion is in fact dangerous. (For more, see the entry on emotion.) It is possible, then, to give an account of blameworthiness in terms of the fittingness of resentment, which will involve giving an account of how resentment represents its object. Recent efforts along these lines include Graham (2014), Rosen (2015), and Strabbing (2019), all of whom take resentment to involve certain thoughts, and the fittingness of resentment to depend on the accuracy of these thoughts. As Rosen puts it, “[f]or X to be morally blameworthy for A just is for it to be appropriate to resent X for A, or in other words, for the thoughts implicit in resentment … to be true” (2015, 72). See D’Arms (2022) for criticism of Rosen’s approach. D’Arms and his co-author Jacobson (2023) hold that emotional fittingness is generally not a matter of some thought being true, it is rather a matter of correct appraisal, though they do conceive of resentment as involving certain thoughts since it is a cognitive “sharpening” of a more basic emotion kind such as anger (2023, 109 note 6).

For Graham, the thought involved in resentment is that the object of blame “has violated a moral requirement of respect” (2014, 408); for Rosen, it is that “[i]n doing A, X showed an objectionable pattern of concern” (2015, 77); for Strabbing, “the following thought partly constitutes resentment: in doing A, S expressed insufficient good will” (2019, 3127). But Rosen and Strabbing find additional thoughts to also be part of resentment. For Rosen, resentment involves not just the thought that another has acted with an objectionable pattern of concern, it also includes “the retributive thought” that the other deserves to suffer for acting as they did (2015, 83; emphasis in original). This will rule out resentment and blame in the case of an agent who violates a moral requirement but who “lacked the capacity to recognize and respond to the reasons for complying with it” since it would be, Rosen claims, unfair to sanction such an agent (2015, 84). (See Wallace 1996 and Watson [1987]2004 for other accounts that impose a fairness condition on resentment in view of its supposed sanctioning nature.) Strabbing argues that resentment is constituted not just by the thought that another showed insufficient good will but also by the thought that the other “could have acted with a better quality of will” (2019, 3129). Again, this will make resentment unfitting in the case of some agents who fail to show proper concern for others.

There is disagreement about whether wrongdoers who faultlessly acquire a commitment to flawed moral values—perhaps as a result of cultural context—are open to blame (for more, see §3.2, §3.10). These wrongdoers may behave permissibly according to their own culturally-supported values, yet they may also act with an objectionable quality of will. Rosen’s and Strabbing’s accounts would explain why resentment might be inappropriate in the case of such wrongdoers: it may be unfair to sanction them or to expect them to act with a better quality of will. On the other hand, if the cognitive content of resentment is narrower than Rosen and Strabbing suggest—if, for example, it involves merely an attribution of ill will—then resentment may be fitting in some of these cases. Alternatively, it may be possible to distinguish between varieties of resentment: there may be a resentment-like emotion partly constituted by relatively narrow cognitive content (i.e., the thought that another acted with ill will), and a distinct resentment-like emotion partly constituted by the broader cognitive content suggested by Rosen and Strabbing. In this case, the wrongdoers in question may be open to a type of resentment that represents them simply as wrongdoers, but not to a more complex type of resentment; see Hieronymi (2014) and Talbert (2014) for suggestions like this.

2.3 The Reasons-Responsiveness Approach

As noted in §1, a lasting influence of Frankfurt’s work was to draw attention to the actual causes of agents’ behavior, and particularly to whether an agent acted for their own reasons. Reasons-responsiveness approaches have been particularly attentive to these issues. These approaches ground responsibility by reference to agents’ capacities for being appropriately sensitive to the rational considerations that bear on their actions. Interpreted broadly, reasons-responsiveness approaches include a diverse collection of views: Brink and Nelkin (2013), Fischer and Ravizza (1998), McKenna (2013), Nelkin (2011), Sartorio (2016), Wallace (1996), and Wolf (1990). Fischer and Ravizza’s Responsibility and Control (1998) is the most influential articulation of this approach.

Fischer and Ravizza take Frankfurt cases (§1) to show that access to alternatives is not necessary for moral responsibility. Rather, what is required is “guidance control,” which is manifested when an agent guides their behavior in a particular direction, and regardless of whether it was open to them to guide their behavior differently (Fischer and Ravizza 1998, 29–34).

If a person’s behavior is brought about by hypnosis or genuinely irresistible urges, then they may not be morally responsible for their behavior because they do not reflectively guide it in the way required for responsibility (Fischer and Ravizza 1998, 35). More specifically, an agent in the above circumstances is not likely to be responsible because he “is not responsive to reasons—his behavior would be the same, no matter what reasons there were” (Fischer and Ravizza 1998, 37). Thus, Fischer and Ravizza characterize possession of guidance control as dependent on responsiveness to reasons. In particular, guidance control depends on whether the psychological mechanism that issues in an agent’s behavior is responsive to reasons. (Guidance control also requires that an agent owns the mechanism on which they act. According to Fischer and Ravizza, this requires placing historical conditions on responsibility; see §3.9.)

Fischer and Ravizza’s focus on mechanisms is motivated by the following reasoning. In a Frankfurt case, an agent is responsible for an action even though their action is ensured by external factors. But the presence of these external factors means that the agent in a Frankfurt case would have acted the same no matter what reasons they were confronted with. So, the responsible agent in a Frankfurt scenario is not responsive to reasons. Fischer and Ravizza’s solution to this problem is to argue that while the agent in a Frankfurt case may not be responsive to reasons, the agent’s mechanism—“the process that leads to the relevant upshot [i.e., the agent’s action]”—may well be responsive to reasons (1998, 38). In other words, the agent’s generally-specified psychological mechanism might have responded (under counterfactual conditions) to considerations in favor of omitting the action that the agent performed. Fischer and Ravizza thus conclude that “relatively clear cases of moral responsibility”—those in which an agent is not hypnotized, etc.—are distinguished by the fact that “an agent exhibits guidance control of an action insofar as the mechanism that actually issues in the action is his own, reasons-responsive mechanism” (1998, 39).

But how responsive to reasons does an agent’s mechanism need to be? Fischer and Ravizza argue that moderate (as opposed to strong or weak) reasons responsiveness is required for guidance control (1998, 69–85). A mechanism that is moderately responsive to reasons may not be receptive to every sufficient reason to act in a certain way, but it will exhibit “an understandable pattern of (actual and hypothetical) reasons-receptivity” (Fischer and Ravizza 1998, 71; emphasis in original). Such a pattern will indicate that an agent understands “how reasons fit together” and that, for example, “acceptance of one reason as sufficient implies that a stronger reason must also be sufficient” (Fischer and Ravizza 1998, 71). In addition, the desired pattern of regular receptivity to reasons will include receptivity to a range of moral considerations (Fischer and Ravizza 1998, 77; see Todd and Tognazzini 2008 for criticism of Fischer and Ravizza’s articulation of this condition). This will rule out attributing moral responsibility to non-moral agents.

Fischer and Ravizza’s account has generated a great deal of attention and criticism. Some critics focus on the contrast Fischer and Ravizza draw between the capacity for receptivity to reasons and the capacity for reactivity to reasons (McKenna 2005, Mele 2006a, Watson 2001). Others are dissatisfied with their focus on the powers of mechanisms as opposed to agents. This has led some authors to develop agent-based reasons-responsiveness accounts that address the concerns that led Fischer and Ravizza to their mechanism-based approach (Brink and Nelkin 2013, McKenna 2013, Sartorio 2016).

3. Contemporary Debates

3.1 The “Faces” of Responsibility

Do our responsibility practices accommodate distinct forms of moral responsibility? Interest in this question stems from a debate between Susan Wolf and Gary Watson. Among other things, Wolf’s important 1990 book, Freedom Within Reason, offers a critical discussion of “Real Self” theories of responsibility. On these views, a person is responsible for behavior that is attributable to their real self, and “an agent’s behavior is attributable to the agent’s real self … if she is at liberty (or able) both to govern her behavior on the basis of her will and to govern her will on the basis of her valuational system” (Wolf 1990, 33). A responsible agent is, therefore, not simply moved by their strongest desires; rather, they are moved by desires that the agent endorses insofar as the desires are in conformity either with the agent’s values or with their higher-order desires. Wolf’s central example of a Real Self View is Watson (1975). (In an earlier paper, Wolf 1987 characterizes Watson 1975, Frankfurt 1971, and Taylor 1976 as offering “deep self views.” For more on real-self/deep-self views, see §3.9; for a recent presentation of a real-self view, see Sripada 2016.)

According to Wolf, Real Self views can explain why people acting under the influence of hypnosis or compulsive desires are not responsible (1990, 33). Since these agents are unable to govern their behavior on the basis of their valuational systems, they are alienated from their behavior in a way that undermines responsibility. But for Wolf it is a mark against Real Self views that they are silent on the topic of how agents came to be the way they are. An agent’s real self might be the product of a traumatic upbringing, and Wolf argues that this would give us reason to question the “agent’s responsibility for her real self” and thus her responsibility for the present behavior that issues from that self (1990, 37; emphasis in original). For an account of an agent with such an upbringing, see Wolf’s (1987) fictional example of JoJo; see Watson ([1987]2004) for a related discussion of the convicted murderer Robert Alton Harris. (For discussion of JoJo, see §3.2; for discussion of the relevance of personal history for present responsibility, see §3.9.)

Wolf suggests that when a person’s real self is the product of childhood trauma (or similar factors), then that person is potentially responsible for their behavior only in a superficial sense that merely attributes bad actions to the agent’s real self (1990, 37–40). However, Wolf argues that ascriptions of moral responsibility go deeper than such attributions can reach:

When … we consider an individual worthy of blame or of praise, we are not merely judging the moral quality of the event with which the individual is so intimately associated; we are judging the moral quality of the individual herself in some more focused, noninstrumental, and seemingly more serious way. (1990, 41)

This deeper form of assessment requires more than that an agent is “able to form her actions on the basis of her values,” it also requires that “she is able to form her values on the basis of what is True and Good” (Wolf 1990, 75). This latter ability may be limited in an agent whose real self is the product of pressures (such as a traumatic upbringing) that have impaired their moral competence. (For more on moral competence, see §3.2.)

In his response to Wolf, Watson ([1996]2004) agrees that some approaches to responsibility—i.e., self-disclosure views (a phrase Watson borrows from Benson 1987)—focus narrowly on whether behavior is attributable to an agent. But Watson denies that these attributions constitute a merely superficial form of assessment. Behavior that is attributable to an agent because it issues from their valuational system often discloses something interpersonally and morally significant about the agent’s “fundamental evaluative orientation” (Watson [1996]2004, 271). Thus, ascriptions of responsibility in this responsibility-as-attributability sense are “central to ethical life and ethical appraisal” (Watson [1996]2004, 263).

However, Watson agrees with Wolf that there is more to responsibility than attributing actions to agents. In addition, we hold agents responsible for their behavior, which “is not just a matter of the relation of an individual to her behavior” (Watson [1996]2004, 262). When we hold responsible, we also “demand … certain conduct from one another and respond adversely to one another’s failures to comply with these demands” (Watson [1996]2004, 262). The moral demands, and potential for adverse treatment, associated with holding others responsible are part of our accountability (as opposed to attributability) practices, and these features of accountability raise issues of fairness that do not arise in the context of determining whether behavior is attributable to an agent (Watson [1996]2004, 273; also see material in §2.2.3). Therefore, conditions may apply to accountability that do not apply to attributability: perhaps “accountability blame” should be—as Wolf suggested—moderated in the case of an agent whose “squalid circumstances made it overwhelmingly difficult to develop a respect for the standards to which we would hold him accountable” (Watson [1996]2004, 281).

So, on Watson’s account, there is responsibility-as-attributability, and when an agent satisfies the conditions on this form of responsibility, behavior is properly attributed to the agent as reflecting morally important features of the agent’s self. But there is also responsibility-as-accountability, and when an agent satisfies the conditions on this form of responsibility, which requires more than the correct attribution of behavior, they can be held accountable for that behavior in the ways that characterize moral blame.

It has become common for the views of several authors to be described (with varying degrees of accuracy) as instances of “attributionism”; see Levy (2005) for the first use of this term. These authors include Adams (1985), Arpaly (2003), Hieronymi (2004), Scanlon (1998, 2008), Sher (2006, 2009), A. Smith (2005, 2008), Schlossberger (2021), and Talbert (2012a). Attributionists take moral responsibility assessments to be concerned with whether an action (omission, character trait, or belief) is attributable to an agent for the purposes of moral assessment, where this usually means that the action (or omission, etc.) reflects the agent’s “judgment sensitive attitudes” (Scanlon 1998), “evaluative judgments” (A. Smith 2005), or, more generally, the agent’s “moral personality” (Hieronymi 2008).

Attributionism resembles the self-disclosure views mentioned by Watson (see the previous subsection) insofar as both focus on the way that a responsible agent’s behavior discloses morally significant features of the agent’s self. However, attributionists are interested in more than specifying the conditions for what Watson calls responsibility-as-attributability. Attributionists take themselves to give conditions for holding agents responsible in Watson’s accountability sense. (See the previous subsection for the distinction between accountability and attributability.)

According to attributionism, fulfillment of attributability conditions is sufficient for holding agents accountable for their behavior. This means that attributionism rejects conditions on moral responsibility that would excuse agents if their characters were shaped under adverse conditions (Scanlon 1998, 278–85), or if the thing for which the agent is blamed was not under their control (Sher 2006b and 2009, A. Smith 2005), or if the agent can’t be expected to recognize the moral status of their behavior (Scanlon 1998, 287–290; Talbert 2012a). Attributionists reject these conditions on responsibility because morally significant behavior is attributable to agents that do not fulfill them. Attributionists have also argued that blame may profitably be understood as a form of moral protest (Hieronymi 2001, A. Smith 2013, Talbert 2012a); part of the appeal of this move is that moral protests may be legitimate in cases in which the above conditions are not met.

Some argue that attributionists are wrong to reject the conditions on responsibility mentioned in the last paragraph (Levy 2005, 2011; Shoemaker 2011, 2015; Watson 2011). It has also been argued that the attributionist account of blame is too close to mere negative appraisal (Levy 2005; Wallace 1996, 80–1; Watson 2002). In addition, Scanlon (2008) has been criticized for failing to take negative emotions such as resentment to be central to the phenomenon of blame (Wallace 2011, Wolf 2011; the criticism could also be applied to Sher 2006). For overviews of attributionism, see Schlossberger (2021) and Talbert (2022).

Building on the distinction between attributability and accountability ( §3.1.1 ), David Shoemaker (2011 and 2015) introduces a third form of responsibility: answerability. On Shoemaker’s view, attributability-responsibility assessments respond to facts about an agent’s character, accountability-responsibility responds to an agent’s degree of regard for others, and answerability-responsibility responds to an agent’s evaluative judgments. A. Smith (2015) and Hieronymi (2008 and 2014) use “answerability” to refer to a view more like the attributionist perspective described in the previous subsection, and Pereboom (2014) has used the term to indicate a form of responsibility more congenial to responsibility skeptics.

Possession of moral competence—the ability to recognize and respond to moral considerations—is often taken to be a condition on moral responsibility. Wolf’s (1987) story of JoJo illustrates this proposal. JoJo was raised by an evil dictator and becomes the same sort of sadistic tyrant that his father was. JoJo is happy to be the sort of person that he is, and he is moved by precisely the desires (e.g., to imprison and torture his subjects) that he wants to be moved by. Thus, JoJo fulfills important conditions on responsibility (see, in particular, the discussion of structural accounts of responsibility in §3.9). However, Wolf argues that it may be unfair to hold JoJo responsible for his objectionable behavior.

JoJo’s upbringing plays an important role in Wolf’s argument, but only because it left JoJo unable to appreciate the wrongfulness of his behavior. It is JoJo’s impaired moral competence that does the real excusing work, and similar conclusions of non-responsibility should be drawn about others whom we think “could not help but be mistaken about their [bad] values” (Wolf 1987, 57).

Many join Wolf in arguing that impaired moral competence (perhaps on account of one’s upbringing or other environmental factors) undermines moral responsibility (Benson 2001, Fischer and Ravizza 1998, Fricker 2010, Levy 2003, Russell 1995 and 2004, Wallace 1996, Watson [1987]2004). Part of what motivates this conclusion is the thought that it can be unreasonable to expect morally-impaired agents to avoid wrongful behavior, and that it is therefore unfair to expose these agents to the harm of moral blame (also see §2.2.3 and §3.1.1 ). For detailed development of the moral competence requirement on responsibility in terms of considerations of fairness, see Wallace (1996); also see Kelly (2013), Levy (2009), and Watson ([1987]2004). For rejection of the claim that blame is unfair in the case of morally-impaired agents, see several of the defenders of attributionism mentioned in §3.1.2 .

The moral competence condition on responsibility can also be motivated by the suggestion that impaired agents are not able to commit wrongs that have the sort of moral significance to which blame would be an appropriate response. While morally-impaired agents can fail to show appropriate respect for others, these failures do not necessarily constitute the kind of flouting of moral norms that grounds blame (Watson [1987]2004, 234). In other words, a failure to respect others is not always an instance of blame-grounding disrespect for others, since the latter (but not the former) requires the ability to comprehend the norms that one violates (Levy 2007, Shoemaker 2011; for a reply, see Talbert 2012b).

Conversational theories of responsibility construe elements of our responsibility practices as moves in a moral conversation.

Several prominent versions of the conversational approach develop P. F. Strawson’s suggestion ( §2.2.1 ) that the negative reactive attitudes involved in blame are expressions of a demand for moral regard. Considerations about moral competence ( §3.2 ) are relevant here. Watson argues that a demand “presumes,” as a condition on the intelligibility of expressing it, “understanding on the part of the object of the demand” ([1987]2004, 230). Therefore, since, “[t]he reactive attitudes are incipiently forms of communication,” they are intelligibly expressed “only on the assumption that the other can comprehend the message,” and since the message is a moral one, “blaming and praising those with diminished moral understanding loses its ‘point’” (Watson [1987]2004, 230; see Watson 2011 for a modification of his original proposal). Wallace argues, similarly, that since responsibility practices are internal to moral relationships that are “defined by the successful exchange of moral criticism and justification…. It will be reasonable to hold accountable only someone who is at least a candidate for this kind of exchange of criticism and justification” (1996, 164).

Michael McKenna’s Conversation and Responsibility (2012) offers the most developed conversational analysis of responsibility. For McKenna, the “moral responsibility exchange” occurs in stages: an initial “moral contribution” of morally salient behavior; the “moral address” of, e.g., blame that responds to the moral contribution; the “moral account” in which the first contributor responds to moral address with, e.g., apology; and so on (2012, 89). Like Wallace and Watson, McKenna notes the way in which a morally-impaired agent will find it difficult “to appreciate the challenges put to her by those who hold [her] morally responsible,” but he also argues that a sufficiently impaired agent cannot even make the first move in a moral conversation (2012, 78). Thus, a morally-impaired agent’s responsibility is called into question not only because they are unable to respond appropriately to moral demands, but also because “she is incapable of acting from a will with a moral quality that could be a candidate for assessment from the standpoint of holding responsible” (McKenna 2012, 78). This is related to Levy’s and Shoemaker’s contention (§3.2) that impairments of moral competence can leave an agent unable to express the type of ill will to which blame responds. By contrast, Watson (2011) allows that significant moral impairment is compatible with the ability to perform blame-relevant wrongdoing, even if such impairment undermines the wrongdoer’s moral accountability for their actions.

For another important account of responsibility in broadly conversational terms, see Shoemaker’s discussion of the sort of moral anger involved in holding others accountable for their behavior (2015, 87–117). For additional defenses and articulations of the conversational approach to responsibility, see Darwall (2006), Fricker (2016), and Macnamara (2015).

It was suggested above that blame may amount to the expression of a moral demand. Macnamara (2013) argues, to the contrary, that blame is not helpfully construed in such terms, and that the prospects for construing praise as a demand are even worse. Macnamara suggests that we should interpret both blame and praise as ways of recognizing the moral significance of behavior, and as calling on the blamed and the praised to express similar recognitions of the quality of their actions. In successful cases, this will involve the target of blame being subject to feelings of guilt or remorse, and the target of praise being subject to feelings of self-approbation. Similarly, Telech (2021) interprets praise not as issuing a demand but rather as issuing an invitation to the praiseworthy person to accept moral credit by jointly (i.e., with the praiser) valuing what was creditworthy in their action.

A number of philosophers have recently investigated the conditions under which one may lack the standing to hold another person morally responsible. With respect to blame, the thought is that a blamer can, for one reason or another, lack the authority to blame even if the one they blame is blameworthy. There is disagreement about whether the authority just mentioned amounts to a right that permits one to blame or whether it also involves a normative power to issue a demand for some appropriate response (e.g., an apology). With respect to the first possibility, standingless blame is pro tanto impermissible because one lacks the right to blame; with respect to the second possibility, standingless blame fails to generate imperatives for the blamee. (For the distinction just mentioned, see Fritz and Miller 2022; for accounts of the normative power involved in this context, see Edwards 2019 and Piovarchy 2020). There is also uncertainty in the literature about whether lack of standing should inhibit only overt blaming responses or whether private blame—which may amount only to a blamer’s being subject to otherwise fitting emotional responses (see §2.2.3 )— can also be ruled out on grounds of lack of standing.

Several conditions on standing to blame have been proposed, but most attention has been given to two: the no-meddling condition (where one has standing to blame only if blame would not amount to an inappropriate intrusion into the affairs of others—see McKiernan 2016 and Seim 2019) and the non-hypocrisy condition (where one has standing to blame only if they can do so non-hypocritically). Of these two conditions, the second has received more attention.

In a case of hypocritical blame, one blames another for violating a norm that they themselves have unrepentantly violated. Wallace (2010) argues that the hypocritical blamer is open to a distinct moral objection that undermines their standing to blame. The basis for this objection is that the hypocritical blamer denies “the presumption of the equal standing of persons” (Wallace 2010, 330). This presumption—constitutive, Wallace argues, of the moral practice in which the hypocritical blamer is engaged—is denied because the hypocritical blamer takes themselves to remain insulated from blame yet does not take the similarly-morally-positioned target of their blame to enjoy the same protection. (Wallace takes the hypocrite to lack standing not just for expressions of blame but also for the private experience of blaming emotions.)

Fritz and Miller (2018) say that the hypocritical blamer has a “differential blaming disposition”: they are disposed to blame another but not themselves, where there is no morally relevant difference that would justify this. This makes hypocritical blame unfair, which provides “a moral reason that counts against blaming” in contexts of hypocrisy (Fritz and Miller 2018, 122). (It could just as well be concluded that the hypocritical blamer has moral reason to blame more rather than less: that is, they have reason to extend their blame to themselves. A hypocritical blamer may regain standing to blame in this way; see Fritz and Miller 2018 and Todd 2019.) For Fritz and Miller, the unfairness of a differential blaming disposition accounts for what is objectionable in hypocritical blame. To motivate the conclusion that the hypocritical blamer lacks standing to blame, they argue that our right to blame others is grounded in the fact that persons are morally equal. Since “hypocrisy involves at least an implicit rejection of the equality of persons” (Fritz and Miller 2018, 125), the hypocritical blamer rejects the very thing that would ground their right to blame, so they lack standing to blame.

Todd (2019) objects to the preceding accounts, arguing that “we cannot derive the non-hypocrisy condition from facts about the equality of persons” (2019, 371). Against Fritz and Miller, Todd argues that reliance on the equality of persons gives an unwelcome result: it entails that a merely inconsistent blamer lacks standing to blame. If A is disposed, for no good reason, to differentially blame B and C , then A has a differential blaming disposition. So does A , like the hypocritical blamer, lose standing to blame B and C ? For his own part, Todd suggests that we may not be able to derive the non-hypocrisy condition from anything more basic (such as considerations about rights or equality), but perhaps we can at least give a partially unifying account of what lack of standing to blame involves. Failure to meet an important subset of standing conditions involves, Todd argues, a blaming agent’s own lack of sufficient commitment to the moral values that the agent blames others for failing to sufficiently respect. For other defenses of this “commitment” view, see Lippert-Rasmussen 2020, Riedener 2019, and Rossi 2018.

In arguing against the non-hypocrisy condition, Bell (2013) notes that “people may … evince a wide variety of moral faults through their blame: they can show meanness, pettiness, stinginess, arrogance, and so on” (2013, 275). But since the arrogant blamer does not clearly lack standing to blame, perhaps we need not conclude that the hypocritical blamer lacks such standing. After all, some of the aims of blame—educating the blamer or providing them with motivation to avoid further wrongdoing—are obtainable even if the one who blames does so hypocritically (Bell 2013, 275). See Fritz and Miller (2018) for a reply to Bell on these points.

King (2019) is also skeptical about a standing condition on blame. He argues (i) that the prospects are dim for giving a plausible account of the right on which standing to blame is supposed to rest, and (ii) that we can appeal to something other than standing to account for what goes wrong in cases of hypocritical and meddling blame. In both cases, the objectionable blamer simply has reason to not blame; rather, they ought to attend to something else (to their own business in the meddling case, to their own faults in the hypocrisy case).

Standing conditions may also apply to praise. Telech (2021) notes that one who lacks an appropriate commitment to the values that a praiseworthy person respects may not be correctly positioned to offer praise: the praiseworthy person may reasonably reject such a praiser’s invitation to accept moral credit (2021, 172). Jeppsson and Brandenburg (2022) argue that hypocritical praise may fail to respect the equality of persons: If A praises B for a type of action that A is not committed to performing, this may indicate that A holds B to a higher standard than the one to which A holds themselves. And what if A is partly responsible for B having to exert themselves in a praiseworthy way? Here, B may rightly ask of A , “Who are you to praise me ?” (Jeppsson and Brandenburg 2022, 671; emphasis in original). Finally, Lippert-Rasmussen (2021) has argued that a person may lack standing to praise themselves when they do so hypocritically—that is, when they would not praise another on the same grounds that they praise themselves.

It’s widely held that moral agents can be responsible not just for actions but also for the causal outcomes of their actions. This can be accounted for by appeal to derivative responsibility: an agent’s responsibility for an outcome may derive from their responsibility for a causally related action. Responsibility for outcomes also involves an epistemic condition: the responsible agent must have been aware of—or at least it must be that they could have and should have been aware of—the likely consequences of their actions. (The last point is related to the material in §3.10.) Carolina Sartorio collects these elements in her Principle of Derivative Responsibility: “If an agent is responsible for X, X causes Y, and the relevant epistemic conditions for responsibility obtain, then the agent is also responsible for Y” (2016, 76). Blameworthiness for outcomes can perhaps be accounted for in a related way: if an agent fulfills the relevant causal and epistemic conditions on responsibility with respect to some outcome, and they fulfill those conditions in a way that makes them blameworthy, then the agent is blameworthy for the outcome. For proposals along these lines, see Sartorio’s Principle of Derivative Blameworthiness (2016, 77) as well as Björnsson (2017b) and Gunnemyr and Touborg (2023).

If an agent can be responsible for an outcome in virtue of some earlier action, can they also be responsible for an outcome in virtue of an omission? But what are omissions? Are they constituted by other actions that an agent performs, or are omissions simply absences? In the latter case, it may be difficult to see how omissions—being absences—can enter into causal relations with events such as outcomes. But even if omissions are not, strictly speaking, causes, they may still be related to outcomes in a way that is sufficient to support responsibility: when someone fails to act, it may be quite pertinent that an outcome occurs that would not have occurred had the agent not omitted the action in question. For development of this idea, see Clarke (2014, Chapter 2) and Sartorio (2016, Chapter 2) as well as the authors they cite, particularly Dowe (2000). For another important account of responsibility for omissions, see Fischer and Ravizza (1998, Chapter 5). Clarke (2014) offers a valuable treatment of many issues associated with omissions; also see the essays in Nelkin and Rickless (2017a).

If responsibility for outcomes partly depends on the obtaining of causal (or related) relationships, then factors that affect judgments about causation may also affect judgments about moral responsibility. For example, if different theories of causation yield different answers to the question of whether an agent caused an outcome, they may also yield different answers to questions about the agent’s responsibility for the outcome (Bernstein 2017). And in cases of group causation, it may be that the addition or subtraction of causal contributors will affect judgments about the degree to which any individual in the group caused the outcome; again, a corresponding effect on judgments about individual responsibility should be expected. (See Bernstein 2017 and Sartorio 2015 for the last point; both authors note that a form of moral luck may be in play here since whether an agent is part of a larger or a smaller group of causal contributors may be beyond the agent’s control; regarding moral luck, see §3.7.) There may also be cases in which it is simply indeterminate what an agent has caused, and judgments about responsibility in these cases may likewise be indeterminate (Bernstein 2016).

In contrast to the tenor of the discussion so far, Kutz (2000) argues that founding responsibility on causal connections can—at least in cases of group agency—lead to counterintuitive results. Kutz’s central example is the Allied bombing campaign that destroyed the German city of Dresden in WWII (2000, 116–24). Far more bombs and bombers were used in the raid than were required to destroy the city, and each bomber pilot might plausibly claim that their causal contribution made no difference to that outcome. Kutz argues that, for the purposes of assessing individual moral accountability, we should refer not to individual causal contributions but rather to the pilots’ overlapping intentions and attitudes that led them to participate in the raid on Dresden.

Lawson (2013) develops an account similar to Kutz’s; Petersson (2013) objects to Kutz and defends the importance of individual causal contributions for assessing responsibility. Sinnott-Armstrong (2005) and Nefsky (2017) are other important investigations of the problem of how to assess non-difference-making causal contributions. Nefsky argues that an individual can make non-superfluous contributions to preventing or bringing about an outcome even if their contributions do not decide whether the outcome occurs. Gunnemyr and Touborg’s (2023) emphasis on the way that individual, non-difference-making causal contributions may increase or decrease the “security” of an outcome is also relevant here. Kaiserman (2024) applies a view developed in Kaiserman (2016) to cases like Kutz’s, arguing that an agent can partly contribute to an outcome even if there is no identifiable part of the outcome that they caused.

Positing responsibility for outcomes may involve a commitment to outcome moral luck ( §3.7 ) because while an agent may control their action, whether that action leads to a certain outcome is typically not entirely within the agent’s control. Skepticism about outcome moral luck may thus lead to skepticism about responsibility and blameworthiness for outcomes. Perhaps agents are never responsible for outcomes but only for their action-explaining motives and intentions, or for exercising their will in a certain way. The same may be true of blameworthiness. Andrew Khoury argues that “the only things that one can be blameworthy for are those things that make one blameworthy,” and for Khoury, it is only the moral quality of our “willings,” and never the outcomes to which these willings may lead, that can make us blameworthy (Khoury 2018, 1363). Also see Graham (2014) and (2017) for important contributions in this vein.

If moral responsibility requires free will and free will requires a type of access to alternatives that is not compatible with determinism (see §1 ), then it follows that if determinism is true, no one is ever morally responsible for their behavior. The above reasoning, and the skeptical conclusion it reaches about responsibility, is endorsed by the hard determinist perspective on free will and responsibility, which was defended historically by Spinoza and d’Holbach (among others) and more recently by Honderich (2002). But given that determinism may well be false, contemporary skeptics about responsibility more often pursue a hard incompatibilist line of argument according to which the kind of free will required for desert-based (as opposed to forward-looking, see §2.1 ) moral responsibility is incompatible with the truth or falsity of determinism (Pereboom 2001, 2014).

Discussion of skeptical positions that do not depend on the truth of determinism can be found in each of the four subsections below. For additional skeptical accounts, see Smilansky (2000) and Waller (2011); also see the entry on skepticism about moral responsibility.

A person is subject to moral luck if factors that are not under that person’s control affect the moral assessments to which they are open (Nagel [1976]1979; also see Williams [1976]1981 and the entry on moral luck).

Can luck affect moral responsibility? Consider an unsuccessful assassin who shoots at their target but misses because their bullet is deflected by a passing bird. This assassin has good outcome moral luck. Because of factors beyond their control, their moral record is better than it might have been: they are not a murderer and not morally responsible for causing anyone’s death. One might think, in addition, that an unsuccessful assassin is less blameworthy than a successful assassin with whom they are otherwise identical, and that the reason for this is just that the successful assassin intentionally killed someone while the unsuccessful assassin did not. (For important recent defenses of moral luck, see Hanna 2014 and Hartman 2017.)

On the other hand, one might think that if the two assassins are identical in terms of their values, goals, intentions, and motivations, then the addition of a bit of luck to the unsuccessful assassin’s story cannot ground a deep contrast between the two in terms of their moral responsibility. One way to sustain this position is to argue that moral responsibility is a function solely of internal features of agents, such as their motives and intentions (Graham 2014 and Khoury 2018; also see §3.5; see Enoch and Marmor 2007 for the main arguments against moral luck). Of course, the successful assassin is responsible for something (killing a person) for which the unsuccessful assassin is not, but perhaps both are responsible—and presumably blameworthy—to the same degree insofar as it was true of both that they aimed to kill, and that they did so for the same reasons and with the same commitment toward bringing about that outcome (M. Zimmerman 2002 and 2015).

But now consider a different would-be assassin who does not even try to kill anyone, but only because their circumstances did not favor this option. This would-be assassin is willing to kill under favorable circumstances (so they may have had good circumstantial moral luck since they were not in those circumstances). Perhaps the degree of responsibility attributed to the successful and unsuccessful assassins described in the previous paragraph depends not so much on the fact that they both tried to kill as on the fact that they were both willing to kill, and the would-be assassin may share the same degree of responsibility since they share the same willingness to kill. But an account that focuses on what agents would be willing to do under counterfactual circumstances is likely to generate unintuitive conclusions about responsibility since many agents who are typically judged blameless might willingly perform terrible actions under the right circumstances. (M. Zimmerman 2002 and 2015 does not shy away from this consequence, but critics—Hanna 2014, Hartman 2017—have made much of it; see Peels 2015 for a position related to Zimmerman’s that may avoid the unintuitive consequence just mentioned.)

Once luck is taken fully into account, there is reason to worry that responsibility may be generally undermined. Consider constitutive moral luck: luck in how one is constituted in terms of the “inclinations, capacities, and temperament” one finds within oneself (Nagel [1976]1979, 28). Facts about a person’s inclinations, capacities, and temperament explain much—if not all—of that person’s behavior, and if the facts that explain why a person acts as they do are a result of good or bad luck, then perhaps it is unfair to hold them responsible for their behavior. And as Nagel notes, once the full sweep of the various kinds of luck comes into view, “[t]he area of genuine agency” may shrink to nothing since our actions and their consequences “result from the combined influence of factors, antecedent and posterior to action, that are not within the agent’s control” ([1976]1979, 35). If this is right, then perhaps, “nothing remains which can be ascribed to the responsible self, and we are left with nothing but a … sequence of events, which can be deplored or celebrated, but not blamed or praised” (Nagel [1976]1979, 37).

Nagel doesn’t fully embrace a skeptical conclusion about responsibility on the above grounds, but others have done so, most notably, Neil Levy (2011). According to Levy’s “hard luck view,” the encompassing nature of moral luck means “that there are no desert-entailing differences between moral agents” (2011, 10). There are differences between agents in terms of their characters and the good or bad actions and outcomes that they produce, but Levy’s point is that, given the influence of luck in generating these differences, they don’t provide a sound basis for differential treatment of people in terms of moral praise and blame. (See Russell 2017 for a compatibilist account that leads to a variety of pessimism, though not skepticism, on the basis of the concerns about moral luck.)

Galen Strawson’s Basic Argument concludes that “we cannot be truly or ultimately morally responsible for our actions” (1994, 5). (Since the argument targets “ultimate” responsibility, it does not necessarily exclude other forms, such as forward-looking responsibility [ §2.1 ] and, on some understandings, responsibility-as-attributability [ §3.1.1 ].) The argument begins by noting that agents make the choices they do because of what seems choiceworthy to them. (This is related to the discussion of constitutive moral luck in §3.7 .) So, in order to be responsible for their choices, agents must be responsible for the fact that certain things seem choiceworthy to them. But how can agents be responsible for these prior facts about themselves? Wouldn’t this require a prior choice on the part of the agent, one that resulted in their present disposition to see certain ends as choiceworthy? But this prior choice would itself be something for which the agent would be responsible only if the agent is also responsible for the fact that the prior choice seemed choiceworthy to them. A regress looms here, and Strawson claims that it cannot be stopped except by positing an initial act of self-creation on the responsible agent’s part (G. Strawson 1994, 5, 15). But self-creation is impossible, so no one is ever ultimately responsible for their behavior.

A number of replies to this argument are possible. One might simply deny that how a person came to be the way they are matters for present responsibility: perhaps all we need to know in order to judge a person’s responsibility are facts about their present constitution and about how that constitution is related to the person’s present behavior. (For views like this, see the discussion of attributionism [ §3.1.2 ] and the discussion of non-historical accounts of responsibility in the next subsection). Alternatively, one might think that while personal history matters for moral responsibility, Strawson’s argument sets the bar too high (see Fischer 2006; for a reply, see Levy 2011, 5). Perhaps what is needed is not literal self-creation, but simply an ability to enact changes in oneself so as to acquire responsibility for the self that results from these changes (Clarke 2005). A picture along these lines can be found in Aristotle’s suggestion (in Book III of the Nicomachean Ethics ) that one can be responsible for being a careless person if one’s present state of carelessness is the result of earlier choices that one made (also see Moody-Adams 1990).

Roughly in this Aristotelian vein, Robert Kane offers an incompatibilist account of how an agent can be ultimately responsible for their actions (1996 and 2007). On Kane’s view, for an agent “to be ultimately responsible for [a] choice, the agent must be at least in part responsible by virtue of choices or actions voluntarily performed in the past for having the character and motives he or she now has” (2007, 14; emphasis in original). This position may appear to be open to the regress concerns presented in Strawson’s argument above, but Kane thinks a regress is avoided in cases in which a person’s character-forming choices are undetermined. Since these undetermined choices will have no sufficient causes, there is no relevant prior cause for which the agent must be responsible, so there is no regress problem (Kane 2007, 15–16; see Pereboom 2001, 47–50 for criticism.)

Of particular interest to Kane are potential character-forming choices that occur “when we are torn between competing visions of what we should do or become” (2007, 26). In such cases, if a person sees reasons in favor of either choice that they might make, and the choice that they make is undetermined, then whichever choice they make will have been chosen for their own reasons. According to Kane, when an agent makes this kind of choice, they shape their own character, and since the agent’s choice is not determined by prior causal factors, they are responsible for that choice, for the character shaped by it, and for the character-determined choices that the agent may make in the future.

Accounts such as Levy’s (2011) and G. Strawson’s (1994), described in the two preceding subsections, argue that a person’s present responsibility can depend on facts about the way that person came to be as they are. But non-historical views, such as attributionism ( §3.1.2 ) and the views that Susan Wolf calls “Real Self” theories ( §3.1.1 ), reject this contention. Real Self accounts are sometimes referred to as “structural” or “hierarchical” theories. By whatever name, the basic idea is that an agent is morally responsible insofar as their will has the right structure: in particular, there needs to be an appropriate relationship between the desires that actually move an agent and that agent’s values, or between the desires that move an agent and that agent’s higher-order desires, the latter of which are the agent’s reflective preferences about which desires should move them. (For approaches along these lines, see Dworkin 1970; Frankfurt 1971, 1987; and Watson 1975.)

Harry Frankfurt’s comparison between a willing drug addict and an unwilling addict illustrates important features of his version of the structural approach to responsibility. Both of Frankfurt’s addicts strongly desire to take the drug to which they are addicted and these first-order desires will ultimately move both addicts to take the drug. But the addicts have different higher-order perspectives on their first-order desire to take the drug. The willing addict endorses and identifies with his addictive desire, but the unwilling addict repudiates his addictive desire to such an extent that, when it ends up being effective, Frankfurt says that this addict is “helplessly violated by his own desires” (1971, 12). The willing addict has a kind of freedom that the unwilling addict lacks: they may both act on the desire to take the drug, but insofar as the willing addict is moved by a desire that he endorses, he acts freely in a way that the unwilling addict does not (Frankfurt 1971, 19). A related conclusion about responsibility may be drawn: perhaps the unwilling addict’s addictive desire is alien to him in such a way that his responsibility for acting on it is called into question (for a recent defense of this conclusion, see Sripada 2017).

Frankfurt assumes that an agent’s higher-order desires have the authority to speak for the agent—they reveal (or constitute) the agent’s “real self,” to use Wolf’s language (1990). But if higher-order desires are invoked out of a concern that an agent’s lower-order desires may not speak for the agent, why won’t the same worry recur with respect to higher-order desires? When ascending through the orders of desires, why stop at any particular point? Why not think that appeal to a still higher order is always necessary to reveal where an agent stands? See Watson (1975) for this objection, which partly motivates Watson—in his articulation of a structural approach—to focus on whether an agent’s desires conform with their values , rather than with their higher-order desires.

Even if one agrees with Frankfurt about the structural elements required for responsibility, one might wonder how an agent’s will came to have its particular structure. An objection to Frankfurt’s view notes that the relevant structure might have been put in place by factors that intuitively undermine responsibility, in which case the presence of the relevant structure is not sufficient for responsibility (Fischer and Ravizza 1998, 196–201; Locke 1975). Fischer and Ravizza argue that “[i]f the mesh [between higher- and lower-order desires] were produced by … brainwashing or subliminal advertising … we would not hold the agent morally responsible for his behavior” because the psychological mechanism that produced the behavior would not be, “in an important intuitive sense, the agent’s own ” (1998, 197; emphasis in original). In response to this type of worry, Fischer and Ravizza argue that responsibility has a historical component, which they attempt to capture with their account of how agents can “take responsibility” for the psychological mechanism that produces their behavior (1998, 207–239). (For criticism of Fischer and Ravizza’s account of taking responsibility, see Levy 2011, 103–106 and Pereboom 2001, 120–22; for elaboration and defense of Fischer and Ravizza’s account, see Fischer 2004; for quite different accounts of taking responsibility, see Enoch 2012; Mason 2019, 179–207; and Wolf 2001. For work on the general significance of personal histories for responsibility, see Christman 1991, Vargas 2006, and D. Zimmerman 2003.)

Part of Fischer and Ravizza’s motivation for developing their account of “taking responsibility” was to ensure that agents who have been manipulated in certain ways do not count as responsible on their view. Several examples and arguments featuring the sort of manipulation that worries Fischer and Ravizza have played important roles in the recent literature on responsibility. One of these is Alfred Mele’s Beth/Ann example (1995, 2006b), which emphasizes the difficulties faced by accounts of responsibility that eschew historical conditions. Ann has acquired her preferences and values in the normal way, but Beth is manipulated by a team of neuroscientists so that she now has preferences and values that are identical to Ann’s. After the manipulation, Beth reflectively endorses her new values. Such endorsement might be a sign of the self-governance associated with responsibility, but Mele argues that Beth, unlike Ann, exhibits merely “ersatz self-government” since Beth’s new values were imposed on her (1995, 155). And if other kinds of personal histories similarly undermine an agent’s ability to authentically govern their behavior, then agents with these histories will not be morally responsible. For replies to Mele and general insights into manipulation cases, see Arpaly (2003), King (2013), and Todd (2011); for discussion of issues about personal identity that arise in manipulation cases, see Khoury (2013), Matheson (2014), and Shoemaker (2012).

One can take a hard line in Beth’s case (McKenna 2004). That is, one might note that while Beth acquired her new values in a strange way, everyone acquires their values in ways that are not fully under their control. Indeed, following Galen Strawson’s (1994) line of argument (described in §3.8 ), it might be noted that no one has ultimate control over their values, and even if normal agents have some capacity to address and alter their values, the dispositional factors that govern use of this capacity ultimately result from factors beyond agents’ control. Perhaps, then, Beth is not so easily distinguished from normal agents; perhaps she is just as responsible as they are. But this reasoning can cut both ways: instead of showing that Beth is assimilated into the class of normal, responsible agents, it might show that normal agents are assimilated into the class of non-responsible agents. Derk Pereboom’s four-case argument reasons along these lines (1995, 2001, 2007, 2014). (The “zygote argument” is also relevant here; see Mele 1995, 2006b, and 2008.)

Pereboom’s argument presents four scenarios involving Plum in which Plum kills White while satisfying the conditions on moral responsibility most often proposed by compatibilists (and described in earlier sections of this entry). In Case 1, Plum is “created by neuroscientists, who … manipulate him directly through the use of radio-like technology” (Pereboom 2001, 112). These scientists cause Plum’s reasoning to take a certain path that culminates in Plum deciding to kill White. Pereboom believes that Plum is clearly not responsible for killing White in Case 1 since his behavior was determined by the neuroscientists. In Cases 2 and 3, Plum is causally determined to undertake the same reasoning process as in Case 1, but in Case 2 Plum is merely “programmed” to do so by neuroscientists, and in Case 3 Plum’s reasoning is the result of socio-cultural influences that determine his character. In Case 4, Plum is a normal human being in a causally deterministic universe, and he decides to kill White in the same way as in the previous cases.

Pereboom claims that there is no relevant difference between Cases 1, 2, and 3, so judgments about Plum’s responsibility should be the same in these cases. Plum is not responsible in these cases because his behavior is causally determined by forces beyond his control (Pereboom 2001, 116). But then, Pereboom argues, we should conclude that Plum is not responsible in Case 4 since causal determinism is the defining feature of that case, and the same conclusion should apply to anyone living in a causally deterministic universe.

A possible reply to Pereboom is that the manipulation to which Plum is subjected in Case 1 undermines his responsibility for some other reason besides the fact that it causally determines his behavior. This would stop the generalization of non-responsibility from Case 1 to the subsequent cases. (See Demetriou (Mickelson) 2010, Fischer 2004, Mele 2005; for a response, see Matheson 2016; Pereboom addresses this concern in his 2014 presentation of the argument; also see Shabo 2010). Alternatively, it might be argued, on compatibilist grounds, that Plum is responsible in Case 4 and that this conclusion should be extended to the earlier cases since Plum fulfills the same compatibilist conditions on responsibility in those cases (McKenna 2008).

The four-case argument attempts to show that if determinism is true, then we cannot be the sources of our actions in the way required for moral responsibility. It is, therefore, an argument for incompatibilism rather than for skepticism about moral responsibility. But in combination with Pereboom’s argument that we lack the sort of free will required for responsibility even if determinism is false (2001, 38–88; 2014, 30–70), the four-case argument has emerged as an important motivation for skepticism about responsibility.

There has been a recent surge in interest in the epistemic condition on responsibility (as opposed to the freedom or control condition that is at the center of the free will debate).

Sometimes agents act in ignorance of the bad consequences of their actions, and sometimes their ignorance excuses them from blame. But in other cases, an agent’s ignorance does not excuse them. How can we distinguish the cases where ignorance excuses from those in which it does not? One proposal is that ignorance fails to excuse when the ignorance is itself something for which the agent is to blame. And one proposal for when ignorance is blameworthy is that it issues from a blameworthy benighting act in which an agent culpably impairs, or fails to improve, their epistemic position (H. Smith 1983). In such a case, the agent’s ignorance seems to be their own fault, so it cannot excuse them.

But when is a benighting act blameworthy? Several philosophers, such as Levy (2011), Rosen (2004), and M. Zimmerman (1997), have suggested that agents are culpable for benighting acts only when they perform them knowingly. The idea is that ignorance for which one is blameworthy, and that leads to blameworthy unwitting wrongdoing, must have its source in knowing wrongful behavior. So, if someone unwittingly does something wrong, then that person will be blameworthy only if we can explain their lack of knowledge (their “unwittingness”) by reference to something else that the agent knowingly and wrongfully did. Thus, Rosen concludes that “ the only possible locus of original responsibility [for a later unwitting act] is an akratic act …. a knowing sin” (2004, 307; emphasis in original). Similarly, Michael Zimmerman argues that “all culpability can be traced to culpability that involves lack of ignorance, that is, that involves a belief on the agent’s part that he or she is doing something morally wrong” (1997, 418). (In certain structural respects, the argument here resembles Galen Strawson’s skeptical argument in §3.8.)

The above reasoning may apply not just to cases in which a person is unaware of the consequences of their action, but also to cases in which a person is unaware of the moral status of their behavior. A slaveowner, for example, might think that slaveholding is permissible, and so, on the account considered here, they will be blameworthy only if they are culpable for their ignorance about the moral status of slavery, which will require that they ignored evidence about its moral status while knowing that this is something that they should not do (Rosen 2003 and 2004).

These reflections can give rise to a couple of forms of skepticism about moral responsibility (and particularly about blameworthiness). One might endorse a form of epistemic skepticism on the grounds that we rarely have insight into whether a wrongdoer knowingly acted wrongly at some suitable point in the history of a given action (Rosen 2004). Alternatively, or in addition, one might endorse a more substantive form of skepticism on the grounds that a great many normal wrongdoers don’t exhibit the sort of knowing wrongdoing supposedly required for responsibility. Perhaps very many wrongdoers don’t know that they are wrongdoers, and their ignorance on this score is not their fault since it doesn’t arise from an earlier instance of knowing wrongdoing. In this case, very many ordinary wrongdoers may fail to be responsible for their behavior. (For skeptical conclusions along these lines, see M. Zimmerman 1997 and Levy 2011.)

There is more to the epistemic dimension of responsibility than what is contained in the above skeptical argument, but the argument does bring out much of what is of interest in this domain. For one thing, it prominently relies on a tracing strategy. This strategy is used in accounts that feature a person who does not, at the time of action, fulfill control or knowledge conditions on responsibility, but who nonetheless seems responsible for their behavior. In such a case, the agent’s responsibility may be grounded in the fact that their failure to fulfill certain conditions on responsibility is traceable to earlier actions undertaken by the agent when they did fulfill these conditions (also see the discussion of derivative responsibility in §3.5 ). For example, a person may be so intoxicated that they lack control over, or awareness of, their behavior, and yet it may still be appropriate to hold them responsible for their intoxicated behavior insofar as they freely intoxicated themselves. The tracing strategy plays an important role in many accounts of responsibility (see, e.g., Fischer and Ravizza 1998, 49–51), but it has also been subjected to important criticisms (see Vargas 2005; for a reply see Fischer and Tognazzini 2009; for more on tracing, see Khoury 2012, King 2011, and Shabo 2015).

Various strategies for rejecting the above skeptical argument also illustrate stances one can take on the relationship between knowledge and responsibility. These strategies typically involve rejecting the claim that knowing wrongdoing is fundamental to blameworthiness. It has, for example, been argued that it is often morally blameworthy to perform an action when one is merely uncertain whether the action is wrong (see Guerrero 2007; also see Nelkin and Rickless 2017b and Robichaud 2014). Another strategy would be to argue that blameworthiness can be grounded in cases of morally ignorant wrongdoing if it is reasonable to expect the wrongdoer to have avoided their moral ignorance, and particularly if their ignorance is itself caused by the agent’s own epistemic and moral vices (FitzPatrick 2008 and 2017). Relatedly, it might be argued that one who is unaware that they do wrong is blameworthy if they possessed relevant capacities for avoiding their ignorance; this approach may be particularly promising in cases in which an agent’s lack of moral awareness stems from a failure to remember their moral duties (Clarke 2014, 2017 and Sher 2006, 2009; also see Rudy-Hiller 2017). Finally, it might simply be claimed that morally ignorant wrongdoers can harbor, and express through their behavior, objectionable attitudes or qualities of will that suffice for blameworthiness (Arpaly 2003, Björnsson 2017a, Harman 2011, Mason 2015). This approach may be most promising in cases in which a wrongdoer is aware of the material outcomes of their conduct but unaware of the fact that they do wrong in bringing about those outcomes.

For more, see the entry on the epistemic condition for moral responsibility as well as the essays in Robichaud and Wieland (2017).

  • Adams, Robert Merrihew, 1985, “Involuntary Sins”, The Philosophical Review , 94(1): 3–31. doi:10.2307/2184713
  • Aristotle, 1999, Nicomachean Ethics , T. Irwin (ed. and trans.), Indianapolis: Hackett.
  • Arneson, Richard, 2003, “The Smart Theory of Moral Responsibility and Desert”, in Serena Olsaretti (ed.), Desert and Justice , Oxford: Clarendon Press, pp. 233–258.
  • Arpaly, Nomy, 2003, Unprincipled Virtue: An Inquiry Into Moral Agency , Oxford: Oxford University Press. doi:10.1093/0195152042.001.0001
  • Ayer, A. J., 1954, “Freedom and Necessity”, in his Philosophical Essays , London: MacMillan, pp. 271–284.
  • Bell, Macalester, 2013, “The Standing to Blame: A Critique”, in Coates and Tognazzini 2013b: 263–281.
  • Bennett, Jonathan, 1980, “Accountability”, in Zak van Straaten (ed.), Philosophical Subjects: Essays Presented to P. F. Strawson , Oxford: Oxford University Press, pp. 59–80.
  • Benson, Paul, 1987, “Freedom and Value”, The Journal of Philosophy , 84(9): 465–486. doi:10.2307/2027060
  • –––, 2001, “Culture and Responsibility: A Reply to Moody-Adams”, Journal of Social Philosophy , 32(4): 610–620. doi:10.1111/0047-2786.0011
  • Bernstein, Sara, 2016, “Causal and Moral Indeterminacy”, Ratio , 29: 434–447.
  • –––, 2017, “Causal Proportions and Moral Responsibility”, in Shoemaker 2017a: 164–182.
  • Björnsson, Gunnar, 2017a, “Explaining Away Epistemic Skepticism about Culpability”, in Shoemaker, 2017a: 141–162.
  • –––, 2017b, “Explaining (Away) the Epistemic Condition on Moral Responsibility”, in Robichaud and Wieland 2017: 146–62.
  • Bobzien, Susanne, 1998, Determinism and Freedom in Stoic Philosophy , Oxford: Oxford University Press. doi:10.1093/0199247676.001.0001
  • Brink, David O. and Dana K. Nelkin, 2013, “Fairness and the Architecture of Responsibility”, in Shoemaker 2013: 284–314. doi:10.1093/acprof:oso/9780199694853.003.0013
  • Carlsson, Andreas Brekke, 2017, “Blameworthiness as Deserved Guilt”, Journal of Ethics , 21: 89–115.
  • –––, (ed.), 2022, Self Blame and Moral Responsibility , Cambridge: Cambridge University Press.
  • Caruso, Gregg D., 2016, “Free Will Skepticism and Criminal Behavior: A Public Health-Quarantine Model (Presidential Address)”, Southwest Philosophy Review , 32(1): 25–48. doi:10.5840/swphilreview20163214
  • Caruso, Gregg and Derk Pereboom, 2022, Moral Responsibility Reconsidered , Cambridge: Cambridge University Press.
  • Chisholm, Roderick, 1964, “Human Freedom and the Self”, The Lindley Lecture, Department of Philosophy, University of Kansas. Reprinted in Gary Watson (ed.), Free Will , second edition, New York: Oxford University Press, 2003, pp. 26–37.
  • Christman, John, 1991, “Autonomy and Personal History”, Canadian Journal of Philosophy , 21(1): 1–24. doi:10.1080/00455091.1991.10717234
  • Clarke, Randolph, 2003, Libertarian Accounts of Free Will , New York: Oxford University Press. doi:10.1093/019515987X.001.0001
  • –––, 2005, “On an Argument for the Impossibility of Moral Responsibility”, Midwest Studies in Philosophy , 29: 13–24. doi:10.1111/j.1475-4975.2005.00103.x
  • –––, 2009, “Dispositions, Abilities to Act, and Free Will: The New Dispositionalism”, Mind , 118(470): 323–351. doi:10.1093/mind/fzp034
  • –––, 2014, Omissions: Agency, Metaphysics, and Responsibility , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199347520.001.0001
  • –––, 2016, “Moral Responsibility, Guilt, and Retributivism”, Journal of Ethics , 20: 121–137.
  • –––, 2017, “Blameworthiness and Unwitting Omissions”, in Nelkin and Rickless 2017a: 63–83.
  • Clarke, Randolph, Michael McKenna, and Angela Smith (eds.), 2015, The Nature of Moral Responsibility , New York: Oxford University Press.
  • Coates, D. Justin and Neal A. Tognazzini, 2013a, “The Contours of Blame”, in Coates and Tognazzini 2013b: 3–26. doi:10.1093/acprof:oso/9780199860821.003.0001
  • ––– (eds.), 2013b, Blame: Its Nature and Norms , New York: Oxford University Press. doi:10.1093/acprof:oso/9780199860821.001.0001
  • D’Arms, Justin, 2022, “Fitting Emotions”, in C. Howard and R. A. Rowland (eds.), Fittingness: Essays in the Philosophy of Normativity , New York: Oxford University Press, pp. 105–129.
  • D’Arms, Justin and Daniel Jacobson, 2023, Rational Sentimentalism , New York: Oxford University Press.
  • Darwall, Stephen, 2006, The Second-Person Standpoint: Morality, Respect, and Accountability , Cambridge, MA: Harvard University Press.
  • Demetriou (Mickelson), Kristin, 2010, “The Soft-Line Solution to Pereboom’s Four-Case Argument”, Australasian Journal of Philosophy , 88(4): 595–617. doi:10.1080/00048400903382691
  • Dowe, Phil, 2000, Physical Causation , Cambridge: Cambridge University Press.
  • Duggan, A. P., 2018, “Moral Responsibility as Guiltworthiness”, Ethical Theory and Moral Practice , 21: 291–309.
  • Dworkin, Gerald, 1970, “Acting Freely”, Noûs , 4(4): 367–383. doi:10.2307/2214680
  • Edwards, James, 2019, “Standing to Hold Responsible”, Journal of Moral Philosophy , 16: 437–462.
  • Enoch, David, 2012, “Being Responsible, Taking Responsibility, and Penumbral Agency”, in Luck, Value, and Commitment: Themes From the Ethics of Bernard Williams , Ulrike Heuer and Gerald Lang (eds.), Oxford: Oxford University Press, 95–132. doi:10.1093/acprof:oso/9780199599325.003.0005
  • Enoch, David and Andrei Marmor, 2007, “The Case Against Moral Luck”, Law and Philosophy , 26(4): 405–436. doi:10.1007/s10982-006-9001-3
  • Eshleman, Andrew, 2014, “Worthy of Praise: Responsibility and Better-than-Minimally-Decent Agency”, in Shoemaker and Tognazzini 2014: 216–242.
  • Fara, Michael, 2008, “Masked Abilities and Compatibilism”, Mind , 117(468): 843–865. doi:10.1093/mind/fzn078
  • Fine, Cordelia and Jeanette Kennett, 2004, “Mental Impairment, Moral Understanding and Criminal Responsibility: Psychopathy and the Purposes of Punishment”, International Journal of Law and Psychiatry , 27(5): 425–443. doi:10.1016/j.ijlp.2004.06.005
  • Fischer, John Martin, 2002, “Frankfurt-Style Compatibilism”, in Contours of Agency: Essays on Themes from Harry Frankfurt , Sarah Buss and Lee Overton (eds.), Cambridge MA: MIT Press, pp. 1–26.
  • –––, 2004, “Responsibility and Manipulation”, The Journal of Ethics , 8(2): 145–177. doi:10.1023/B:JOET.0000018773.97209.84
  • –––, 2006, “The Cards That Are Dealt You”, The Journal of Ethics , 10(1–2): 107–129. doi:10.1007/s10892-005-4594-6
  • –––, 2010, “The Frankfurt Cases: The Moral of the Stories”, The Philosophical Review , 119(3): 315–336. doi:10.1215/00318108-2010-002
  • Fischer, John Martin, Robert Kane, Derk Pereboom, and Manuel Vargas (eds.), 2007, Four Views on Free Will , (Great Debates in Philosophy), Oxford: Blackwell.
  • Fischer, John Martin and Mark Ravizza, 1993a, “Introduction”, in Fischer and Ravizza 1993b: 1–41.
  • –––, (eds.), 1993b, Perspectives on Moral Responsibility , Ithaca, NY: Cornell University Press.
  • –––, 1998, Responsibility and Control: A Theory of Moral Responsibility , Cambridge: Cambridge University Press. doi:10.1017/CBO9780511814594
  • Fischer, John Martin and Neal A. Tognazzini, 2009, “The Truth about Tracing”, Noûs , 43(3): 531–556. doi:10.1111/j.1468-0068.2009.00717.x
  • FitzPatrick, William J., 2008, “Moral Responsibility and Normative Ignorance: Answering a New Skeptical Challenge”, Ethics , 118(4): 589–613. doi:10.1086/589532
  • –––, 2017, “Unwitting Wrongdoing, Reasonable Expectations, and Blameworthiness”, in Robichaud and Wieland 2017: 29–46.
  • Frankfurt, Harry G., 1969, “Alternate Possibilities and Moral Responsibility”, The Journal of Philosophy , 66(23): 829–839. doi:10.2307/2023833
  • –––, 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy , 68(1): 5–20. doi:10.2307/2024717
  • –––, 1987, “Identification and Wholeheartedness”, in Schoeman 1987: 27–45. doi:10.1017/CBO9780511625411.002
  • –––, 2006, “Some Thoughts Concerning PAP”, in Moral Responsibility and Alternative Possibilities: Essays on the Importance of Alternative Possibilities , David Widerker and Michael McKenna (eds.), Burlington, VT: Ashgate, pp. 339–445.
  • Fricker, Miranda, 2010, “The Relativism of Blame and Williams’s Relativism of Distance”, Aristotelian Society Supplementary Volume , 84: 151–177. doi:10.1111/j.1467-8349.2010.00190.x
  • –––, 2016, “What’s the Point of Blame? A Paradigm Based Explanation”, Noûs , 50(1): 165–183. doi:10.1111/nous.12067
  • Fritz, Kyle G. and Daniel Miller, 2018, “Hypocrisy and the Standing to Blame”, Pacific Philosophical Quarterly , 99: 118–139.
  • –––, 2022, “A Standing Asymmetry Between Blame and Forgiveness”, Ethics , 132: 759–786.
  • Ginet, Carl, 1966, “Might We Have No Choice?”, in Freedom and Determinism , Keith Lehrer (ed.), New York: Random House, pp. 87–104.
  • –––, 1996, “In Defense of the Principle of Alternative Possibilities: Why I Don’t Find Frankfurt’s Argument Convincing”, Philosophical Perspectives , 10: 403–417.
  • Graham, Peter, 2014, “A Sketch of a Theory of Moral Blameworthiness”, Philosophy and Phenomenological Research , 88: 388–409.
  • –––, 2017, “The Epistemic Condition on Moral Blameworthiness: A Theoretical Epiphenomenon”, in Robichaud and Wieland 2017: 163–79.
  • Guerrero, Alexander A., 2007, “Don’t Know, Don’t Kill: Moral Ignorance, Culpability, and Caution”, Philosophical Studies , 136(1): 59–97. doi:10.1007/s11098-007-9143-7
  • Gunnemyr, Mattias and Caroline Torpe Touborg, 2023, “You Just Didn’t Care Enough: Quality of Will, Causation, and Blameworthiness for Actions, Omissions, and Outcomes”, Journal of Ethics and Social Philosophy , 24: 1–35.
  • Hanna, Nathan, 2014, “Moral Luck Defended”, Noûs , 48(4): 683–698. doi:10.1111/j.1468-0068.2012.00869.x
  • Harman, Elizabeth, 2011, “Does Moral Ignorance Exculpate?”, Ratio , 24(4): 443–468. doi:10.1111/j.1467-9329.2011.00511.x
  • Hartman, Robert J., 2017, In Defense of Moral Luck: Why Luck Often Affects Praiseworthiness and Blameworthiness , New York: Routledge.
  • Hieronymi, Pamela, 2001, “Articulating an Uncompromising Forgiveness”, Philosophy and Phenomenological Research , 62(3): 529–555. doi:10.1111/j.1933-1592.2001.tb00073.x
  • –––, 2004, “The Force and Fairness of Blame”, Philosophical Perspectives , 18(1): 115–148. doi:10.1111/j.1520-8583.2004.00023.x
  • –––, 2008, “Responsibility for Believing”, Synthese , 161(3): 357–373. doi:10.1007/s11229-006-9089-x
  • –––, 2014, “Reflection and Responsibility”, Philosophy & Public Affairs , 42(1): 3–41. doi:10.1111/papa.12024
  • Hobbes, Thomas, 1654 [1999], Of Liberty and Necessity , Reprinted in Hobbes and Bramhall on Liberty and Necessity , Vera Chappell (ed.), Cambridge: Cambridge University Press, pp. 15–42.
  • Honderich, Ted, 2002, How Free Are You?: The Determinism Problem , Oxford: Oxford University Press.
  • Hume, David, 1748 [1978], An Enquiry Concerning Human Understanding , P. H. Nidditch (ed.), Oxford: Oxford University Press.
  • Hunt, David P., 2000, “Moral Responsibility and Unavoidable Action”, Philosophical Studies , 97(2): 195–227. doi:10.1023/A:1018331202006
  • Jefferson, Anneli, 2019, “Instrumentalism about Moral Responsibility Revisited”, The Philosophical Quarterly , 69(276): 555–573. doi:10.1093/pq/pqy062
  • Jeppsson, Sofia and Daphne Brandenburg, 2022, “Patronizing Praise”, The Journal of Ethics , 26: 663–682.
  • Kaiserman, Alex, 2016, “Causal Contribution”, Proceedings of the Aristotelian Society , 116: 387–394.
  • –––, 2024, “Responsibility and Causation”, in M. Kiener (ed.), The Routledge Handbook of Philosophy of Responsibility , New York: Routledge, pp. 164–176.
  • Kane, Robert, 1996, The Significance of Free Will , New York: Oxford University Press. doi:10.1093/0195126564.001.0001
  • –––, 2007, “Libertarianism”, in Fischer, Kane, Pereboom, and Vargas 2007: 5–43.

Acknowledgments

I would like to thank Derk Pereboom and Daniel Miller for their helpful comments on drafts of this entry.

Copyright © 2024 by Matthew Talbert <Matthew.Talbert@fil.lu.se>



Guest Essay

The Moral Limits of Bankruptcy Law


By Melissa B. Jacoby

Ms. Jacoby is the author of the forthcoming book “Unjust Debts: How Our Bankruptcy System Makes America More Unequal.”

When Purdue Pharma filed for Chapter 11 bankruptcy in 2019, it had over a billion dollars in the bank and owed no money to lenders. But it also had the Sacklers, its owners, who were eager to put behind them allegations that they played a leading role in the national opioid epidemic.

The United States Supreme Court is now considering whether the bankruptcy system should have given this wealthy family a permanent shield against civil liability. But there is a bigger question at stake, too: Why is a company with no lenders turning to the federal bankruptcy system in response to accusations of harm and misconduct?

The maker of OxyContin is one in a long line of companies that have turned Chapter 11 into a legal Swiss Army knife, tackling problems that are a mismatch for its rules. Managing costly and sprawling litigation through bankruptcy can be well intentioned. But Chapter 11 was designed around the goal of helping financially distressed businesses restructure loans and other contract obligations.

If companies instead turn to bankruptcy to permanently and comprehensively cap liability for wrongdoing — the objective not only of Purdue Pharma but also of many other entities over recent decades — they can shortchange the rights of individuals seeking accountability for corporate coverups of toxic products and other wrongdoing. And in a country that relies on lawsuits and the civil justice system to deter corporate malfeasance, permanently capping liability using a procedure focused primarily on debt and money could be making us less safe.

In 1978, a bipartisan group of lawmakers enacted sweeping reforms to American bankruptcy law. To enhance economic value and keep viable businesses alive for the benefit of workers and other stakeholders, these changes gave companies more protection and control in bankruptcy. This new bankruptcy code also made it easier to alter the legal rights of creditors during and after bankruptcy without their consent.

To provide more sweeping protection to a distressed but viable company, the new bankruptcy laws also expanded the definition of “creditor” to include people allegedly injured by the business. Yet the rules governing Chapter 11 were drafted primarily with loans and contracts, not large numbers of harmed individuals, in mind.



The potential of AI technology has been percolating in the background for years. But when ChatGPT, the AI chatbot, began grabbing headlines in early 2023, it put generative AI in the spotlight. This guide is your go-to manual for generative AI, covering its benefits, limits, use cases, prospects and much more.

Amanda Hetler, Senior Editor

What is ChatGPT?

ChatGPT is an artificial intelligence (AI) chatbot that uses natural language processing to create humanlike conversational dialogue. The language model can respond to questions and compose various written content, including articles, social media posts, essays, code and emails.


ChatGPT is a form of generative AI -- a tool that lets users enter prompts to receive humanlike images, text or videos that are created by AI.

ChatGPT is similar to the automated chat services found on customer service websites, as people can ask it questions or request clarification of its replies. The GPT stands for "Generative Pre-trained Transformer," which refers to how ChatGPT processes requests and formulates responses. ChatGPT is trained with reinforcement learning through human feedback, using reward models that rank the best responses; this feedback fine-tunes the model to improve future responses.

Who created ChatGPT?

OpenAI -- an artificial intelligence research company -- created ChatGPT and launched the tool in November 2022. OpenAI was founded in 2015 by a group of entrepreneurs and researchers including Elon Musk and Sam Altman, and it is backed by several investors, with Microsoft the most notable. OpenAI also created Dall-E, an AI text-to-art generator.

How does ChatGPT work?

ChatGPT works through its Generative Pre-trained Transformer, which uses specialized algorithms to find patterns within data sequences. ChatGPT originally used the GPT-3 large language model, a neural network machine learning model and the third generation of Generative Pre-trained Transformer. The transformer pulls from a significant amount of data to formulate a response.


ChatGPT now uses the GPT-3.5 model, which includes a fine-tuning process for its algorithm. ChatGPT Plus uses GPT-4, which offers a faster response time and internet plugins. GPT-4 can also handle more complex tasks than previous models, such as describing photos, generating captions for images and creating detailed responses up to 25,000 words long.

ChatGPT uses deep learning, a subset of machine learning, to produce humanlike text through transformer neural networks. The transformer predicts text -- including the next word, sentence or paragraph -- based on the typical sequences in its training data.
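The next-word prediction described above can be illustrated with a toy model. This is only a sketch: a real transformer learns statistical patterns over billions of tokens with a neural network, while the bigram counter below simply tallies which word follows which in a tiny sample corpus.

```python
from collections import Counter, defaultdict

# Tiny "training corpus" standing in for the web-scale text ChatGPT saw.
corpus = "the cat sat on the mat and the cat slept".split()

# Count, for each word, which words follow it and how often.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Greedily return the most frequent follower of `word`."""
    return following[word].most_common(1)[0][0]

# "cat" follows "the" twice in this corpus, "mat" only once.
print(predict_next("the"))
```

Chaining such predictions word by word is, in grossly simplified form, how the model extends a prompt into a sentence or paragraph.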

Training begins with generic data, then moves to more tailored data for a specific task. ChatGPT was trained with online text to learn the human language, and then it used transcripts to learn the basics of conversations.

Human trainers provide conversations and rank the responses. These reward models help determine the best answers. To keep training the chatbot, users can upvote or downvote its response by clicking on thumbs-up or thumbs-down icons beside the answer. Users can also provide additional written feedback to improve and fine-tune future dialogue.
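That ranking step can be sketched as follows. The `reward` function here is a hypothetical stand-in: in the real system, the reward model is itself a neural network trained on human rankings, not a hand-written heuristic.

```python
# Hypothetical sketch: a reward model scores candidate responses, and the
# highest-scoring one is preferred. The heuristic below (favor longer
# answers that give a reason) is illustrative only.
def reward(response: str) -> float:
    score = float(len(response.split()))
    if "because" in response:
        score += 5.0
    return score

candidates = [
    "Yes.",
    "Yes, because regular feedback helps the model improve.",
]

# Pick the candidate the (toy) reward model ranks highest.
best = max(candidates, key=reward)
print(best)
```

During training, the model's weights are then adjusted so that highly ranked responses become more likely; the thumbs-up and thumbs-down icons feed the same kind of signal back in.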

What kinds of questions can users ask ChatGPT?

Users can ask ChatGPT a variety of questions, from the simple to the complex, such as, "What is the meaning of life?" or "What year did New York become a state?" ChatGPT is proficient with STEM disciplines and can debug or write code. There is no restriction on the types of questions users can ask. However, ChatGPT's training data only extends through 2021, so it has no knowledge of events after that year. And because it is a conversational chatbot, users can ask for more information or ask it to try again when it generates text.

How are people using ChatGPT?

ChatGPT is versatile and can be used for more than human conversations. People have used ChatGPT to do the following:

  • Code computer programs and check for bugs in code.
  • Compose music.
  • Draft emails.
  • Summarize articles, podcasts or presentations.
  • Script social media posts.
  • Create titles for articles.
  • Solve math problems.
  • Discover keywords for search engine optimization.
  • Create articles, blog posts and quizzes for websites.
  • Reword existing content for a different medium, such as a presentation transcript for a blog post.
  • Formulate product descriptions.
  • Play games.
  • Assist with job searches, including writing resumes and cover letters.
  • Ask trivia questions.
  • Describe complex topics more simply.
  • Write video scripts.
  • Research markets for products.
  • Generate art.

Unlike some other chatbots, ChatGPT remembers earlier questions in a conversation and can continue it in a more fluid manner.


What are the benefits of ChatGPT?

Businesses and users are still exploring the benefits of ChatGPT as the program continues to evolve. Some benefits include the following:

  • Efficiency. AI-powered chatbots can handle routine and repetitive tasks, which can free up employees to focus on more complex and strategic responsibilities.
  • Cost savings. Using AI chatbots can be more cost-effective than hiring and training additional employees.
  • Improved content quality. Writers can use ChatGPT to fix grammatical or contextual errors or to brainstorm ideas for content. Employees can take ordinary text and ask ChatGPT to improve its language or add expressions.
  • Education and training. ChatGPT can explain complex topics, serving as a virtual tutor. Users can also ask for guides and for clarification of its responses.
  • Better response time. ChatGPT provides instant responses, which reduces wait times for users seeking assistance.
  • Increased availability. AI models are available around the clock to provide continuous support and assistance.
  • Multilingual support. ChatGPT can communicate in multiple languages or provide translations for businesses with global audiences.
  • Personalization. AI chatbots can tailor responses to the user's preferences and behaviors based on previous interactions.
  • Scalability. ChatGPT can handle many users simultaneously, which is beneficial for applications with high user engagement.
  • Natural language understanding. ChatGPT understands and generates humanlike text, so it is useful for tasks such as generating content, answering questions, engaging in conversations and providing explanations.
  • Digital accessibility. ChatGPT and other AI chatbots can assist individuals with disabilities by providing text-based interactions, which can be easier to navigate than other interfaces.

What are the limitations of ChatGPT? How accurate is it?

Some limitations of ChatGPT include the following:

  • It does not fully understand the complexity of human language. ChatGPT is trained to generate words based on input. Because of this, responses might seem shallow and lack true insight.
  • Lack of knowledge of data and events after 2021. The training data ends with 2021 content, so ChatGPT can provide incorrect or outdated information. If ChatGPT does not fully understand the query, it might also provide an inaccurate response. ChatGPT is still being trained, so giving feedback when an answer is incorrect is recommended.
  • Responses can sound machinelike and unnatural. Because ChatGPT predicts the next word, it can overuse words such as "the" or "and." People still need to review and edit content to make it flow more naturally, like human writing.
  • It summarizes but does not cite sources. ChatGPT does not provide analysis or insight into any data or statistics; it might provide statistics but no real commentary on what they mean or how they relate to the topic.
  • It cannot understand sarcasm and irony. ChatGPT is trained on a data set of text, so figurative language that depends on tone or context can be misread.
  • It might focus on the wrong part of a question and not be able to shift. For example, if you ask ChatGPT, "Does a horse make a good pet based on its size?" and then ask it, "What about a cat?" ChatGPT might focus solely on the size of the animal versus giving information about having the animal as a pet. ChatGPT is not divergent and cannot shift its answer to cover multiple questions in a single response.

Learn more about the pros and cons of AI-generated content.

What are the ethical concerns associated with ChatGPT?

While ChatGPT can be helpful for some tasks, there are some ethical concerns that depend on how it is used, including bias, lack of privacy and security, and cheating in education and work.

Plagiarism and deceitful use

ChatGPT can be used unethically in ways such as cheating, impersonation or spreading misinformation due to its humanlike capabilities. Educators have brought up concerns about students using ChatGPT to cheat, plagiarize and write papers. CNET made the news when it used ChatGPT to create articles that were filled with errors.

To help prevent cheating and plagiarizing, OpenAI announced an AI text classifier to distinguish between human- and AI-generated text. However, after six months of availability, OpenAI pulled the tool due to a "low rate of accuracy."

There are online tools, such as Copyleaks or Writing.com, to classify how likely it is that text was written by a person versus being AI-generated. OpenAI plans to add a watermark to longer text pieces to help identify AI-generated content.

Because ChatGPT can write code, it also presents a problem for cybersecurity. Threat actors can use ChatGPT to help create malware. An update addressed the issue of creating malware by stopping the request, but threat actors might find ways around OpenAI's safety protocol.

ChatGPT can also be used to impersonate a person by training it to copy someone's writing and language style. The chatbot could then impersonate a trusted person to collect sensitive information or spread disinformation .

Bias in training data

One of the biggest ethical concerns with ChatGPT is bias in its training data . If the data the model pulls from is biased, that bias is reflected in the model's output. ChatGPT also does not inherently recognize language that might be offensive or discriminatory. Training data needs to be reviewed to avoid perpetuating bias, and including diverse, representative material helps control bias and produce more accurate results.

Replacing jobs and human interaction

As technology advances, ChatGPT might automate certain tasks that are typically completed by humans, such as data entry and processing, customer service, and translation support. People are worried that it could replace their jobs, so it's important to consider ChatGPT and AI's effect on workers.

Rather than replacing workers, ChatGPT can be used to support job functions and create new job opportunities. For example, lawyers can use ChatGPT to create summaries of case notes and draft contracts or agreements. And copywriters can use ChatGPT for article outlines and headline ideas.

Privacy issues

ChatGPT generates text based on user input, so prompts that contain sensitive information could expose it. The model's output can also be used to track and profile individuals, because it collects information from a prompt and can associate that information with the user's phone number and email. This information is then stored indefinitely.

How can you access ChatGPT?

To access ChatGPT, create an OpenAI account. Go to chat.openai.com and then select "Sign Up" and enter an email address, or use a Google or Microsoft account to log in.

After signing up, type a prompt or question in the message box on the ChatGPT homepage. Users can then do the following:

  • Enter a different prompt for a new query or ask for clarification.
  • Regenerate the response.
  • Share the response.
  • Like or dislike the response with the thumbs-up or thumbs-down option.
  • Copy the response.

What to do if ChatGPT is at capacity

Even though ChatGPT can handle numerous users at a time, it reaches maximum capacity occasionally when there is an overload. This usually happens during peak hours, such as early in the morning or in the evening, depending on the time zone.

If it is at capacity, try again at a different time or refresh the browser. Another option is to upgrade to ChatGPT Plus, a paid subscription that typically remains available even during high-demand periods.

Is ChatGPT free?

ChatGPT is available for free through OpenAI's website. Users need to register for a free OpenAI account. There is also an option to upgrade to ChatGPT Plus for access to GPT-4, faster responses, no blackout windows and unlimited availability. ChatGPT Plus also gives priority access to new features for a subscription rate of $20 per month.

Without a subscription, there are limitations. The most notable limitation of the free version is access to ChatGPT when the program is at capacity. The Plus membership gives unlimited access to avoid capacity blackouts.

What are the alternatives to ChatGPT?

Because of ChatGPT's popularity, it is often unavailable due to capacity issues. Google announced Bard in response to ChatGPT; Bard draws information directly from the internet through a Google search to provide the latest information.

Microsoft added ChatGPT functionality to Bing, giving the internet search engine a chat mode for users. Bing's implementation is less limited because its training is current and does not end with 2021 data and events.

There are other text generator alternatives to ChatGPT, including the following:

  • Article Forge.
  • DeepL Write.
  • Google Bard.
  • Magic Write.
  • Open Assistant.
  • Peppertype.
  • Perplexity AI.

Coding alternatives for ChatGPT include the following:

  • Amazon CodeWhisperer.
  • CodeStarter.
  • Ghostwriter.
  • GitHub Copilot.
  • Mutable.ai.
  • OpenAI Codex.

Learn more about various AI content generators .

ChatGPT updates

In August 2023, OpenAI announced an enterprise version of ChatGPT. The enterprise version offers the higher-speed GPT-4 model with a longer context window , customization options and data analysis. This model of ChatGPT does not share data outside the organization.

In September 2023, OpenAI announced a new update that allows ChatGPT to speak and recognize images. Users can upload pictures of what they have in their refrigerator and ChatGPT will provide ideas for dinner. Users can engage to get step-by-step recipes with ingredients they already have. People can also use ChatGPT to ask questions about photos -- such as landmarks -- and engage in conversation to learn facts and history.

Users can also use voice to engage with ChatGPT and speak to it like other voice assistants . People can have conversations to request stories, ask trivia questions or request jokes among other options.

The voice update will be available on apps for both iOS and Android. Users just need to opt in through their settings. Images will be available on all platforms -- including apps and ChatGPT's website.

In November 2023, OpenAI announced the rollout of GPTs, which let users customize their own version of ChatGPT for a specific use case. For example, a user could create a GPT that only scripts social media posts, checks for bugs in code, or formulates product descriptions. The user can input instructions and knowledge files in the GPT builder to give the custom GPT context. OpenAI also announced the GPT store, which will let users share and monetize their custom bots.

In December 2023, OpenAI partnered with Axel Springer to train its AI models on news reporting. ChatGPT users will see summaries of news stories from Bild and Welt, Business Insider and Politico as part of this deal. This agreement gives ChatGPT more current information in its chatbot answers and gives users another way to access news stories. OpenAI also announced an agreement with the Associated Press to use the news reporting archive for chatbot responses.

Continue Reading About ChatGPT

  • 12 key benefits of AI for business
  • GitHub Copilot vs. ChatGPT: How do they compare?
  • Exploring GPT-3 architecture
  • How to detect AI-generated content
  • ChatGPT vs. GPT: How are they different?

What is generative AI?


In the months and years since ChatGPT burst on the scene in November 2022, generative AI (gen AI) has come a long way. Every month sees the launch of new tools, rules, or iterative technological advancements. While many have reacted to ChatGPT (and AI and machine learning more broadly) with fear, machine learning clearly has the potential for good. In the years since its wide deployment, machine learning has demonstrated impact in a number of industries, accomplishing things like medical imaging analysis  and high-resolution weather forecasts. A 2022 McKinsey survey shows that AI adoption has more than doubled  over the past five years, and investment in AI is increasing apace. It’s clear that generative AI tools like ChatGPT (the GPT stands for generative pretrained transformer) and image generator DALL-E (its name a mashup of the surrealist artist Salvador Dalí and the lovable Pixar robot WALL-E) have the potential to change how a range of jobs are performed. The full scope of that impact, though, is still unknown—as are the risks.

Get to know and directly engage with McKinsey's senior experts on generative AI

Aamer Baig is a senior partner in McKinsey’s Chicago office;  Lareina Yee  is a senior partner in the Bay Area office; and senior partners  Alex Singla  and Alexander Sukharevsky , global leaders of QuantumBlack, AI by McKinsey, are based in the Chicago and London offices, respectively.

Still, organizations of all stripes have raced to incorporate gen AI tools into their business models, looking to capture a piece of a sizable prize. McKinsey research indicates that gen AI applications stand to add up to $4.4 trillion  to the global economy—annually. Indeed, it seems possible that within the next three years, anything in the technology, media, and telecommunications space not connected to AI will be considered obsolete or ineffective .

But before all that value can be raked in, we need to get a few things straight: What is gen AI, how was it developed, and what does it mean for people and organizations? Read on to get the download.


Learn more about QuantumBlack , AI by McKinsey.


What’s the difference between machine learning and artificial intelligence?

About QuantumBlack, AI by McKinsey

QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.

Artificial intelligence is pretty much just what it sounds like—the practice of getting machines to mimic human intelligence to perform tasks. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are customer service chatbots that pop up to help you navigate websites.

Machine learning is a type of artificial intelligence. Through machine learning, practitioners develop models that can “learn” from data patterns without human direction. The unmanageably huge volume and complexity of data (unmanageable by humans, anyway) that is now being generated has increased machine learning’s potential , as well as the need for it.
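As a toy illustration of "learning from data patterns without human direction," the sketch below (in Python, with a made-up data set) recovers a hidden linear rule from examples alone via gradient descent; nobody writes the rule into the program:

```python
# A toy illustration of learning from data: fit y = w*x + b to examples
# generated by the hidden rule y = 2x + 1 using gradient descent.
# The rule is never written into the model; it is recovered from data.
data = [(x, 2 * x + 1) for x in range(-5, 6)]

w, b = 0.0, 0.0   # the model starts knowing nothing
lr = 0.01         # learning rate
for _ in range(2000):
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y              # prediction error
        grad_w += 2 * err * x / len(data)  # d(mean squared error)/dw
        grad_b += 2 * err / len(data)      # d(mean squared error)/db
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # recovers roughly 2.0 and 1.0
```

Real machine learning models work at vastly larger scale, but the core loop, adjusting parameters to reduce error on observed data, is the same idea.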

What are the main types of machine learning models?

Machine learning is founded on a number of building blocks, starting with classical statistical techniques  developed between the 18th and 20th centuries for small data sets. In the 1930s and 1940s, the pioneers of computing—including theoretical mathematician Alan Turing—began working on the basic techniques for machine learning. But these techniques were limited to laboratories until the late 1970s, when scientists first developed computers powerful enough to mount them.

Until recently, machine learning was largely limited to predictive models, used to observe and classify patterns in content. For example, a classic machine learning problem is to start with an image or several images of, say, adorable cats. The program would then identify patterns among the images, and then scrutinize random images for ones that would match the adorable cat pattern. Generative AI was a breakthrough. Rather than simply perceive and classify a photo of a cat, machine learning is now able to create an image or text description of a cat on demand.


How do text-based machine learning models work? How are they trained?

ChatGPT may be getting all the headlines now, but it’s not the first text-based machine learning model to make a splash. OpenAI’s GPT-3 and Google’s BERT both launched in recent years to some fanfare. But before ChatGPT, which by most accounts works pretty well most of the time (though it’s still being evaluated), AI chatbots didn’t always get the best reviews. GPT-3 is “by turns super impressive and super disappointing,” said New York Times tech reporter Cade Metz in a video where he and food writer Priya Krishna asked GPT-3 to write recipes for a (rather disastrous) Thanksgiving dinner .

The first machine learning models to work with text were trained by humans to classify various inputs according to labels set by researchers. One example would be a model trained to label social media  posts as either positive or negative. This type of training is known as supervised learning because a human is in charge of “teaching” the model what to do.
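A minimal sketch of that supervised setup, assuming a tiny hand-labeled data set and a deliberately naive word-counting classifier (an illustration only, not any production model):

```python
# Supervised learning in miniature: humans provide the labels, and the
# "model" learns word-label associations from those labeled examples.
from collections import Counter

train = [
    ("i love this product", "positive"),
    ("what a great day", "positive"),
    ("this is terrible", "negative"),
    ("i hate waiting", "negative"),
]

# "Training": count how often each word appears under each label.
counts = {"positive": Counter(), "negative": Counter()}
for text, label in train:
    counts[label].update(text.split())

def classify(text):
    # Score each label by how many of its training words appear.
    scores = {
        label: sum(c[w] for w in text.split())
        for label, c in counts.items()
    }
    return max(scores, key=scores.get)

print(classify("i love a great day"))        # positive
print(classify("i hate this terrible day"))  # negative
```

The key property is that a human decided what "positive" and "negative" mean by labeling the examples; the program only generalizes from those labels.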

The next generation of text-based machine learning models relies on what’s known as self-supervised learning. This type of training involves feeding a model a massive amount of text so it becomes able to generate predictions. For example, some models can predict, based on a few words, how a sentence will end. With the right amount of sample text—say, a broad swath of the internet—these text models become quite accurate. We’re seeing just how accurate with the success of tools like ChatGPT.
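Self-supervised next-word prediction can be sketched in miniature with a bigram counter, where the "label" for each word is simply the word that follows it in the text, so no human annotation is needed (a toy stand-in, not how GPT models are actually built):

```python
# Self-supervised learning in miniature: the training signal comes from
# the text itself -- each word's "label" is the word that follows it.
from collections import defaultdict, Counter

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Train: for every word, count which words were observed to follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    # Predict the most frequently observed continuation.
    return following[word].most_common(1)[0][0]

print(predict_next("sat"))  # on
print(predict_next("on"))   # the
```

Large language models replace these raw counts with learned neural representations over enormous corpora, but the training objective, predict the next token from context, is the same in spirit.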

What does it take to build a generative AI model?

Building a generative AI model has for the most part been a major undertaking, to the extent that only a few well-resourced tech heavyweights have made an attempt . OpenAI, the company behind ChatGPT, former GPT models, and DALL-E, has billions in funding from bold-face-name donors. DeepMind is a subsidiary of Alphabet, the parent company of Google, and even Meta has dipped a toe into the generative AI model pool with its Make-A-Video product. These companies employ some of the world’s best computer scientists and engineers.

But it’s not just talent. When you’re asking a model to train using nearly the entire internet, it’s going to cost you. OpenAI hasn’t released exact costs, but estimates indicate that GPT-3 was trained on around 45 terabytes of text data—that’s about one million feet of bookshelf space, or a quarter of the entire Library of Congress—at an estimated cost of several million dollars. These aren’t resources your garden-variety start-up can access.

What kinds of output can a generative AI model produce?

As you may have noticed above, outputs from generative AI models can be indistinguishable from human-generated content, or they can seem a little uncanny. The results depend on the quality of the model—as we’ve seen, ChatGPT’s outputs so far appear superior to those of its predecessors—and the match between the model and the use case, or input.

ChatGPT can produce what one commentator called a “ solid A- ” essay comparing theories of nationalism from Benedict Anderson and Ernest Gellner—in ten seconds. It also produced an already famous passage describing how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. Image-generating AI models like DALL-E 2 can create strange, beautiful images on demand, like a Raphael painting of a Madonna and child, eating pizza . Other generative AI models can produce code, video, audio, or business simulations .

But the outputs aren’t always accurate—or appropriate. When Priya Krishna asked DALL-E 2 to come up with an image for Thanksgiving dinner, it produced a scene where the turkey was garnished with whole limes, set next to a bowl of what appeared to be guacamole. For its part, ChatGPT seems to have trouble counting, or solving basic algebra problems—or, indeed, overcoming the sexist and racist bias that lurks in the undercurrents of the internet and society more broadly.

Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly massive—as noted, GPT-3 was trained on 45 terabytes of text data—the models can appear to be “creative” when producing outputs. What’s more, the models usually have random elements, which means they can produce a variety of outputs from one input request—making them seem even more lifelike.
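Those random elements can be illustrated with temperature-based sampling over hypothetical next-token scores (the token names and scores below are invented for illustration); instead of always emitting the single highest-scoring token, the model samples from a probability distribution, so one input can yield many outputs:

```python
import math
import random

# Hypothetical model scores ("logits") for three candidate next tokens.
logits = {"cat": 2.0, "dog": 1.5, "ferret": 0.5}

def softmax(scores, temperature=1.0):
    # Convert raw scores into a probability distribution; higher
    # temperature flattens it, making unlikely tokens more probable.
    exps = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: v / total for tok, v in exps.items()}

probs = softmax(logits, temperature=1.0)
tokens, weights = zip(*probs.items())

random.seed(0)  # fixed seed so the demo is repeatable
samples = [random.choices(tokens, weights)[0] for _ in range(5)]
print(samples)  # a mix of tokens, not five identical picks
```

Because each draw is random, repeated runs of the same prompt produce different sequences, which is part of why generative outputs can seem lifelike.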

What kinds of problems can a generative AI model solve?

The opportunity for businesses is clear. Generative AI tools can produce a wide variety of credible writing in seconds, then respond to criticism to make the writing more fit for purpose. This has implications for a wide variety of industries, from IT and software organizations that can benefit from the instantaneous, largely correct code generated by AI models to organizations in need of marketing copy. In short, any organization that needs to produce clear written materials potentially stands to benefit. Organizations can also use generative AI to create more technical materials, such as higher-resolution versions of medical images. And with the time and resources saved here, organizations can pursue new business opportunities and the chance to create more value.

We’ve seen that developing a generative AI model is so resource intensive that it is out of the question for all but the biggest and best-resourced companies. Companies looking to put generative AI to work can either use a model out of the box or fine-tune one to perform a specific task. If you need to prepare slides according to a specific style, for example, you could ask the model to “learn” how headlines are normally written based on the data in the slides, then feed it slide data and ask it to write appropriate headlines.

What are the limitations of AI models? How can these potentially be overcome?

Because they are so new, we have yet to see the long tail effect of generative AI models. This means there are some inherent risks  involved in using them—some known and some unknown.

The outputs generative AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and myriad other biases of the internet and society more generally) and can be manipulated to enable unethical or criminal activity. For example, ChatGPT won’t give you instructions on how to hotwire a car, but if you say you need to hotwire a car to save a baby, the algorithm is happy to comply. Organizations that rely on generative AI models should reckon with reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.

These risks can be mitigated, however, in a few ways. For one, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf generative AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases. Organizations should also keep a human in the loop (that is, to make sure a real human checks the output of a generative AI model before it is published or used) and avoid using generative AI models for critical decisions, such as those involving significant resources or human welfare.
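The human-in-the-loop step can be sketched as a simple review gate, where `generate_draft` is a hypothetical stand-in for a model call and nothing reaches publication without explicit human approval:

```python
# Human-in-the-loop sketch: generated drafts wait in a review queue,
# and only an explicit human approval moves a draft to publication.
def generate_draft(prompt):
    # Hypothetical stand-in for a generative model call.
    return f"Draft copy for: {prompt}"

review_queue = [generate_draft(p) for p in ["spring sale", "new feature"]]
published = []

def human_review(draft, approved):
    # The gate: unapproved drafts never reach the published list.
    if approved:
        published.append(draft)
    return approved

human_review(review_queue[0], approved=True)
human_review(review_queue[1], approved=False)

print(published)  # only the approved draft
```

In practice the "approval" would be a real editorial or compliance workflow; the point is simply that model output is treated as a draft, never as a finished product.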

It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities  is likely to change rapidly in coming weeks, months, and years. New use cases are being tested monthly, and new models are likely to be developed in the coming years. As generative AI becomes increasingly, and seamlessly, incorporated into business, society, and our personal lives, we can also expect a new regulatory climate  to take shape. As organizations begin experimenting—and creating value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.

Articles referenced include:

  • " Implementing generative AI with speed and safety ,” March 13, 2024, Oliver Bevan, Michael Chui , Ida Kristensen , Brittany Presten, and Lareina Yee
  • “ Beyond the hype: Capturing the potential of AI and gen AI in tech, media, and telecom ,” February 22, 2024, Venkat Atluri , Peter Dahlström , Brendan Gaffey , Víctor García de la Torre, Noshir Kaka , Tomás Lajous , Alex Singla , Alex Sukharevsky , Andrea Travasoni , and Benjamim Vieira
  • “ As gen AI advances, regulators—and risk functions—rush to keep pace ,” December 21, 2023, Andreas Kremer, Angela Luget, Daniel Mikkelsen , Henning Soller , Malin Strandell-Jansson, and Sheila Zingg
  • “ The economic potential of generative AI: The next productivity frontier ,” June 14, 2023, Michael Chui , Eric Hazan , Roger Roberts , Alex Singla , Kate Smaje , Alex Sukharevsky , Lareina Yee , and Rodney Zemmel
  • “ What every CEO should know about generative AI ,” May 12, 2023, Michael Chui , Roger Roberts , Tanya Rodchenko, Alex Singla , Alex Sukharevsky , Lareina Yee , and Delphine Zurkiya
  • “ Exploring opportunities in the generative AI value chain ,” April 26, 2023, Tobias Härlin, Gardar Björnsson Rova , Alex Singla , Oleg Sokolov, and Alex Sukharevsky
  • “ The state of AI in 2022—and a half decade in review ,” December 6, 2022,  Michael Chui ,  Bryce Hall ,  Helen Mayhew , Alex Singla , and Alex Sukharevsky
  • “ McKinsey Technology Trends Outlook 2023 ,” July 20, 2023,  Michael Chui , Mena Issler,  Roger Roberts , and  Lareina Yee  
  • “ An executive’s guide to AI ,” Michael Chui , Vishnu Kamalnath, and Brian McCarthy
  • “ What AI can and can’t do (yet) for your business ,” January 11, 2018,  Michael Chui , James Manyika , and Mehdi Miremadi

This article was updated in April 2024; it was originally published in January 2023.



COMMENTS

  1. The Definition of Morality

    The topic of this entry is not—at least directly—moral theory; rather, it is the definition of morality. Moral theories are large and complex things; definitions are not. The question of the definition of morality is the question of identifying the target of moral theorizing. Identifying this target enables us to see different moral theories as attempting to capture the very same thing.

  2. Morality: Definition, Theories, and Examples

    Morality refers to the set of standards that enable people to live cooperatively in groups. It's what societies determine to be "right" and "acceptable." Sometimes, acting in a moral manner means individuals must sacrifice their own short-term interests to benefit society.

  3. Kant's Moral Philosophy

    1. Aims and Methods of Moral Philosophy. The most basic aim of moral philosophy, and so also of the Groundwork, is, in Kant's view, to "seek out" the foundational principle of a "metaphysics of morals," which Kant understands as a system of a priori moral principles that apply the CI to human persons in all times and cultures. Kant pursues this project through the first two chapters ...

  4. Morality

    Morality (from Latin moralitas 'manner, character, proper behavior') is the differentiation of intentions, decisions and actions between those that are distinguished as proper (right) and those that are improper (wrong). [1] Morality can be a body of standards or principles derived from a code of conduct from a particular philosophy, religion ...

  5. Ethics

    The term ethics may refer to the philosophical study of the concepts of moral right and wrong and moral good and bad, to any philosophical theory of what is morally right and wrong or morally good and bad, and to any system or code of moral rules, principles, or values. The last may be associated with particular religions, cultures, professions, or virtually any other group that is at least ...

  6. Moral Philosophy

    Moral Philosophy. Moral philosophy is the branch of philosophy that contemplates what is right and wrong. It explores the nature of morality and examines how people should live their lives in relation to others. Moral philosophy has three branches. One branch, meta-ethics , investigates big picture questions such as, "What is morality ...

  7. Morality

    Morality is the moral beliefs and practices of a culture, community, or religion, or a code or system of moral rules, principles, or values. The conceptual foundations and rational consistency of such standards are the subject matter of the philosophical discipline of ethics, also known as moral philosophy.

  8. Ethics vs. Morals: What's the Difference?

    In general, morals are considered guidelines that affect individuals, and ethics are considered guideposts for entire larger groups or communities. Ethics are also more culturally based than morals. For example, the seven morals listed earlier transcend cultures, but there are certain rules, especially those in predominantly religious nations ...

  9. 5.1: Moral Philosophy

    5.1.1 The Language of Ethics. Ethics is about values, what is right and wrong, or better or worse. Ethics makes claims, or judgments, that establish values. Evaluative claims are referred to as normative, or prescriptive, claims. Normative claims tell us, or affirm, what ought to be the case.

  10. Ethics and Morality

    To put it simply, ethics represents the moral code that guides a person's choices and behaviors throughout their life. The idea of a moral code extends beyond the ...

  11. Morals

    Morals. Morals are the prevailing standards of behavior that enable people to live cooperatively in groups. Moral refers to what societies sanction as right and acceptable. Most people tend to act morally and follow societal guidelines. Morality often requires that people sacrifice their own short-term interests for the benefit of society.

  12. The Psychology of Morality: A Review and Analysis of Empirical Studies

    Morality and Social Order. Moral principles indicate what is a "good," "virtuous," "just," "right," or "ethical" way for humans to behave (Haidt, 2012; Haidt & Kesebir, 2010; Turiel, 2006). Moral guidelines ("do no harm") can induce individuals to display behavior that has no obvious instrumental use or no direct value for them, for instance, when they show empathy ...

  13. Moral Philosophy: Explanation and Examples

    Definition of Moral Philosophy. Moral philosophy, often called ethics, is like a compass for right and wrong actions. Imagine you're at a fork in the road and each direction leads to a different action. Moral philosophy is your guide, helping you figure out which direction to go. The first simple definition of moral philosophy is this: it's a set of tools that help us choose the best path ...

  14. The Definition of Morality

    The term "morality" can be used either descriptively, to refer to a code of conduct put forward by a society or some other group (such as a religion), or accepted by an individual for her own behavior; or normatively, to refer to a code of conduct that, given specified conditions, would be put forward by all rational persons.

  15. What's the Difference Between Morality and Ethics?

    Both morality and ethics loosely have to do with distinguishing the difference between "good and bad" or "right and wrong." Many people think of morality as something that's personal and normative, whereas ethics is the standards of "good and bad" distinguished by a certain community or social setting. For example, your local ...

  16. Introduction

    Summary. Since the ancients, philosophers, theologians, and political actors have pondered the relationship between the moral realm and the political realm. Complicating the long debate over the intersection of morality and politics are diverse conceptions of fundamental concepts: the right and the good, justice and equality, personal liberty ...

  17. 200 Ethical Topics for Your Essay by GradesFixer

    Ethical Issues Definition. Ethical issues refer to situations where a decision, action, or policy conflicts with ethical principles or societal norms. These dilemmas often involve a choice between competing values or interests, such as fairness vs. efficiency, privacy vs. security, or individual rights vs. collective good.

  18. Integrity: What it is and Why it is Important

    Integrity is about "moral" norms and values, those that refer to what is right or wrong, good or bad. These features also imply a general consensus that applies to everyone in the same circumstances; that is what makes moral values and norms "valid." In sum, morality and ethics refer to what is right or wrong, good or bad.

  19. What is Morality? Essay

    Morality, in its basic definition, is the knowledge of what is right and what is wrong. In Joan Didion's essay, "On Morality," she uses examples to show how morality is used to justify actions and decisions by people. She explains that morality can have a profound effect on the decisions that people choose to make.

  20. Moral Responsibility

    Possession of moral competence—the ability to recognize and respond to moral considerations—is often taken to be a condition on moral responsibility. Wolf's (1987) story of JoJo illustrates this proposal. JoJo was raised by an evil dictator and becomes the same sort of sadistic tyrant that his father was.

  22. The Moral Limits of Bankruptcy Law

    Guest essay, June 4, 2024. ... the new bankruptcy laws also expanded the definition of "creditor" to include people allegedly injured by the business. Yet ...

  23. The Psychology of Morality: A Review and Analysis of Empirical Studies

    Morality indicates what is the "right" and "wrong" way to behave, for instance, that one should be fair and not unfair to others (Haidt & Kesebir, 2010). This is considered of interest to explain the social behavior of individuals living together in groups. Results from animal studies (e.g., de Waal, 1996) or insights into universal justice principles (e.g., Greenberg & Cropanzano ...
