Prosocial Moral Reasoning Measure (PROM)
Accounting Specific Defining Issues Test (ADIT)
Revised Moral Authority Scale (MAS-R)
Moral/Conventional Distinction Task
Moral Emotions Task
Moral Judgment Test (MJT)
Note. A single publication can contain multiple scales.
An overview of the research designs that were coded in this way (see Supplementary Table A, final column) first reveals that a substantial proportion of these studies (185 of 419 studies examined; 44%) used correlational designs to examine, for instance, which traits people associate with particular targets or how self-reported beliefs, convictions, principles, or norms relate to self-stated intentions. Of the studies using an experimental design, a substantial number (91 studies; about 22%) examined the impact of some situational prime intended to activate specific goals, rules, or experiences. Furthermore, a substantial number of studies examined the impact of manipulating specific target characteristics (51 studies; 12%) or moral concerns (51 studies; 12%). However, experimental studies examining the impact of specific social norms (31 studies; 7%) or a group-based participant identity were relatively rare (four studies; less than 1%). This suggests that the socially shared nature of moral guidelines is not systematically addressed in this body of research.
The types of responses typically examined in these studies can be captured by looking in more detail at the nature of the scales, tests, tasks, and questionnaires that were used. Our manual content analysis yielded 38 different scales, tests, tasks, and questionnaires that were used in 91 of the 419 studies examined (see Table 1). We clustered these according to their nature and intent, which yielded four distinct categories. First, we found seven different measures (used in 27 studies; 30%) that rely on hypothetical moral dilemmas, where people have to weigh different moral principles against each other (e.g., stealing from one person to help another person) and indicate what should be done in these situations. Second, we found 11 additional measures (used in 12 studies; 13%) consisting of lists of traits or behaviors (e.g., honesty, helpfulness) that can be used to indicate the general character/personality type of the self or a known other (friend, family member). Here, we included measures such as the HEXACO Personality Inventory (HEXACO-PI; Lee & Ashton, 2004) and the moral identity scale (Aquino & Reed, 2002). Third, we found 11 different measures (used in 31 studies; 34%) that assess the endorsement of abstract moral rules (e.g., “do no harm”). A representative example is the Moral Foundations Questionnaire (Graham et al., 2011), which distinguishes between statements indicating concern for “individualizing” principles (harm/care, fairness) and “binding” principles (loyalty, authority, purity). Fourth, we found nine different measures (used in 20 studies; 22%) aiming to capture people’s positions on specific moral issues (e.g., “it is important to tell the truth”; “it is ok for employees to take home a few office supplies”). We also included in this category different lists of behaviors (for instance, the Morally Debatable Behaviors Scale [MDBS]; Katz, Santman, & Lonero, 1994) that focus on the endorsement of behaviors considered relevant to morality (e.g., corruption, violence, discrimination, or misrepresentation).
Importantly, all four clusters of measures we found rely on self-reported preferences and stated character traits or intentions, describing overall tendencies and general behavioral guidelines. However, it is less evident that such measures can be used to understand how people will actually behave in real-life situations, where they may have to choose which of several competing guidelines to apply or where it is unclear how the general principles they endorse translate to a specific act or decision in that context.
Our manual coding of the different dependent measures that were used (see Supplementary Table B, final column) reveals that the majority of measures aimed to capture either general moral principles that people endorse (72 of 445 measures coded; 16%) or their moral evaluations of specific individuals, groups, or companies (72 measures; 16%). In addition, a substantial proportion of studies examined people’s positions on specific issues, such as abortion, gossiping, or specific political convictions (61 measures; 14%). Substantial numbers of measures assessed the perceived implications of one’s moral principles (48 measures; 11%) or the willingness to be cooperative or truthful in hypothetical situations (44 measures; 10%). Notably, only a relatively small proportion of measures actually tried to capture cooperative or cheating behavior in experimental or real-life situations (51 measures; 12%). Similarly, empathy with others and moral emotions such as guilt, shame, and disgust were assessed in 15% (67) of the measures that were coded. Thus, the majority of measures used focus on “thoughts” relating to morality, as these capture abstract principles, overall judgments, or hypothetical intentions, while much less attention has been devoted to examining behavioral displays or emotions characterizing the actual “experiences” people have in relation to these “thoughts.”
Thus, this initial examination of empirical evidence available in studies on morality published from 2000 through 2013 suggests that the three key theoretical principles we have extracted from relevant theoretical perspectives on morality are not systematically reflected in the research that has been carried out. Instead, it seems that “moral tendencies” are typically defined independently of the social context, specific norms, or the identity of others who may be affected by the (im)moral behavior. Furthermore, general and self-reported tendencies or preferences are often taken at face value without testing them against actual behavioral displays or emotional experiences. Finally, empirical studies have prioritized the examination of all kinds of “thoughts” relating to morality over attempts to connect these to actual moral “experiences.” Together, these observations reveal a mismatch between the empirical approach that is typically taken and leading theoretical perspectives, which emphasize the socially shared nature of moral guidelines, the self-justifying nature of moral reasoning, and the importance of emotional experiences.
As others have noted before us (e.g., Abend, 2013), this initial assessment of studies carried out suggests that the empirical breadth of past morality research is constrained in that some approaches appear to be favored at the expense of others. Studies often rely on highly artificial paradigms or scenarios (Chadwick, Bromgard, Bromgard, & Trafimow, 2006; Eriksson, Strimling, Andersson, & Lindholm, 2017). They examine hypothetical reasoning or focus on a few specific decisions or actions that may rarely present themselves in everyday life, such as deciding about the course of a runaway train (Bauman, McGraw, Bartels, & Warren, 2014; Graham, 2014) or eating one’s dog (Haidt, Koller, & Dias, 1993; Mooijman & Van Dijk, 2015). This does not capture the wide variety of contexts in which moral choices have to be made (for instance, whether or not to sell a subprime mortgage to achieve individual performance targets), and it is not evident whether and how this limits the conclusions that can be drawn from such work (for similar critiques, see Crone & Laham, 2017; Graham, 2014; Hofmann et al., 2014; Lovett, Jordan, & Wiltermuth, 2015).
Our conclusion so far is that researchers in social psychology have displayed a considerable interest in examining topics relating to morality. However, it is not self-evident how the multitude of research topics and issues that are addressed in this literature can be organized. This is why we set out to organize the available research in this area into a limited set of meaningful categories by content-analyzing the publications we found to identify studies examining similar research questions. In the “Method” section, we provide a detailed explanation of the procedure and criteria we used to develop our coding scheme and to classify studies as relating to one of five research themes we extracted in this way. We now consider the nature of the research questions addressed within each of these themes and the rationales typically provided to study them, to specify how different research questions that are examined are seen to relate to each other. We visualize these hypothesized relations in Figure 1.
Figure 1. The psychology of morality: connections between five research themes.
Researchers in this literature commonly cite the ambition to predict, explain, and influence Moral Behavior as their focal guideline for having an interest in examining some aspect of morality (see also Ellemers, 2017). We therefore place research questions relating to this theme at the center of Figure 1. Questions about behavioral displays that convey the moral tendencies of individuals or groups fall under this research theme. These include research questions that address implicit indicators of moral preferences or cooperative choices, as well as more deliberate displays of helping, cheating, or standing up for one’s principles.
Many researchers claim to address the likely antecedents of such moral behaviors that are located in the individual as well as in the (social) environment. Here, we include research questions relating to Moral Reasoning, which can reflect the application of abstract moral principles as well as specific life experiences or religious and political identities that people use to locate themselves in the world (e.g., Cushman, 2013). This work addresses moral standards people can adhere to, for instance, in the decision guidelines they adopt or in the way they respond to moral dilemmas or evaluate specific scenarios.
We classify research questions as referring to Moral Judgments when these address the dispositions and behaviors of other individuals, groups, or companies in terms of their morality. These are considered as relevant indicators of the reasons why and conditions under which people are likely to display moral behavior. Research questions addressed under this theme consider the characteristics and actions of other individuals and groups as examples of behavior to be followed or avoided or as a source of information to extract social norms and guidelines for one’s own behavior (e.g., Weiner, Osborne, & Rudolph, 2011).
We distinguish between these two clusters to be able to separate questions addressing the process of moral reasoning (to infer relevant decision rules) from questions relating to the outcome in the form of moral judgments (of the actions and character of others). However, the connecting arrow in Figure 1 indicates that these two types of research questions are often discussed in relation to each other, in line with Haidt’s (2001) reasoning that these are interrelated mechanisms and that moral decision rules can prescribe how certain individuals should be judged, just as person judgments can determine which decision rules are relevant in interacting with them.
We proceed by considering research questions that relate to the psychological implications of moral behavior. The immediate affective implications of one’s behavior, and how this reveals one’s moral reasoning as well as one’s judgments of others, are addressed in questions relating to Moral Emotions (Sheikh, 2014). These are the emotional responses that are seen to characterize moral situations and are commonly used to diagnose the moral implications of different events. Questions we classified under this research theme typically address feelings of guilt and shame that people experience with regard to their own behavior, or outrage and disgust in response to the moral transgressions of others.
Finally, we consider research questions addressing self-reflective and self-justifying tendencies associated with moral behavior. Studies aiming to investigate the moral virtue people afford to themselves and the groups they belong to, and the mechanisms they use for moral self-protection, are relevant for Moral Self-Views. Under this research theme, we subsume research questions that address the mechanisms people use to maintain self-consistency and think of themselves as moral persons, even when they realize that their behavior is not in line with their moral principles (see also Bandura, 1999).
Even though research questions often consider moral emotions and moral self-views as outcomes of moral behaviors and theorize about the factors preceding these behaviors, this does not imply that emotions and self-views are seen as the final end-states in this process. Instead, many publications refer to these mechanisms of interest as being iterative and assume that prior behaviors, emotions, and self-views also define the feedback cycles that help shape and develop subsequent reasoning and judgments of (self-relevant) others, which are important for future behavior. The feedback arrows in Figure 1 indicate this.
Our main goal in specifying how different types of research questions can be organized according to their thematic focus in this way is to offer a structure that can help monitor and compare the empirical approaches that are typically used to advance existing insights into different areas of interest. The relations depicted in Figure 1 represent the reasoning commonly provided to motivate the interest in different types of research questions. The location of the different themes in this figure clarifies how these are commonly seen to connect to each other and visualizes the (sometimes implicit) assumptions made about the way findings from different studies might be combined and should lead to cumulative insights. In the sections that follow, we will examine the empirical approaches used to address each of these clusters of research questions to specify the ways in which results from different types of studies actually complement each other and to identify remaining gaps in the empirical literature.
An important feature of our approach is that we do not delineate research questions in terms of the specific moral concerns, guidelines, principles, or behaviors they address. Instead, we take a functionalist perspective in considering which mechanisms relevant to people’s thoughts and experiences relating to morality are examined to draw together the empirical evidence that is available. For each of the research themes described above, we therefore consider the empirical approaches that have been taken by identifying the nature of relevant functions or mechanisms that have been examined. This will help document the evidence that is available to support the notion that morality matters for the way people think about themselves, interact with others, live and work together in groups, and relate to other groups in society. In considering the different functions morality may have, we distinguish between four levels at which mechanisms in social psychology are generally studied (see also Ellemers, 2017; Ellemers & Van den Bos, 2012).
All the ways in which people consider, think, and reason by themselves to determine what is morally right refer to intrapersonal mechanisms. Even if these considerations are elicited by social norms or reflect the behavior observed in others, it is important to assess the extent to which they emerge as guiding principles for individuals to be used in their further reasoning, for their judgments of the self and others, for their behavioral displays, or for the emotions they experience. Thus, such intrapersonal mechanisms are relevant for questions relating to each of the five research themes we examine.
The ways in which people relate to others, respond to their moral behaviors, and connect to them tap into interpersonal mechanisms. Again we note that such mechanisms are relevant for research questions in all five research themes, as relations with others can inform the way people reason about morality, the way they judge other individuals or groups, the way they behave, as well as the emotions they experience and the self-views they have.
The role of moral concerns in defining group norms, the tendency of individuals to conform to such norms, and their resulting inclusion versus exclusion from the group all indicate intragroup mechanisms relevant to morality. Considering how groups influence individuals is relevant for our understanding of the way people reason about morality and the way they judge others. It also helps us understand the moral behavior individuals are likely to display (for instance, in public vs. private situations), the emotions they experience in response to the transgression of specific moral rules by themselves or different others, and the self-views they develop about their morality.
The tendency for social groups to endorse specific moral guidelines as a way to define their distinct identity, disagreements between groups about the nature or implications of important values, or moral concerns that stem from conflicts between groups in society all refer to intergroup mechanisms relevant to morality. Here too, examination of such mechanisms is relevant to research questions in each of the five research themes we distinguish. These may inform the tendency to interpret the prescription to be “fair” differently, depending on the identity of the recipients of such fairness, which helps understand people’s moral reasoning and the way they judge the morality of others. Intergroup relations may also help understand the tendency to behave differently toward members of different groups, as well as the emotions and self-views relating to such behaviors.
In sum, we argue that each of these four levels of analysis offers potentially relevant approaches to understand the mechanisms that can shape people’s moral concerns and their judgments of others. Mechanisms at all four levels can also affect moral behavior and have important implications for the emotions people experience and the self-views they hold. Reviewing whether and how empirical research has addressed relevant mechanisms at these four levels thus offers a better understanding of how morality operates in the social regulation of individual behavior (see also Carnes, Lickel, & Janoff-Bulman, 2015; Ellemers, 2017; Janoff-Bulman & Carnes, 2013).
The functionalist perspective we have outlined above is central to how we conceptualize morality in this review. We built a database containing research that is relevant for this review by including all studies in which the authors indicated their research design or measures to speak to issues relating to morality. Thus, we do not limit ourselves to the examination of specific guidelines or behaviors as representing key features of morality, but consider the broad range of situations that can be interpreted in terms of their moral implications (see also Blasi, 1980). We argue that many different principles or behaviors can acquire moral overtones, and our main interest is to examine what happens when these are considered as indicating the morally “right” versus “wrong” way to behave in a particular situation. We think this latter aspect reflects the essence of theoretical accounts that have emphasized the ways in which morality and moral judgments regulate the behavior of individuals living in groups (Rai & Fiske, 2011; Tooby & Cosmides, 2010). As indicated above, this implies that—given the abstract nature of universal moral values—the specific behavior that is seen as moral can shift, depending on the social context (Haidt & Graham, 2007; Haidt & Kesebir, 2010; Rai & Fiske, 2011), as well as the relevant norms or features that characterize distinct social groups (Giner-Sorolla, 2012; Greene, 2013). Shared moral standards go beyond other behavioral norms in that they are used to define whether an individual can be considered a virtuous and “proper” group member, with social exclusion as the ultimate sanction (Tooby & Cosmides, 2010; see also Ellemers & Van den Bos, 2012). In the remainder of this review, we will examine the empirical approaches to examining morality in social psychology from this functionalist perspective:
By considering the empirical literature in this way, we seek to determine whether and how relevant theoretical perspectives on human morality and the types of research questions they raise are reflected in empirical studies carried out. In doing this, we will assess to what extent this work addresses the role of shared identities in the development of moral guidelines, takes into account the limits of self-reported individual dispositions as proxies for moral behaviors, and considers the interplay between moral principles, guidelines, and convictions as “thoughts,” on one hand, and actual behaviors and emotions as “experiences,” on the other.
The data collection was carried out entirely online using the WoS engine. Information was derived from three databases: the Science Citation Index Expanded (SCI-EXPANDED, 1945-present), the Social Sciences Citation Index (SSCI, 1956-present), and the Arts & Humanities Citation Index (A&HCI, 1975-present). These database choices were determined by user account access. The category criterion was set to “Psychology Social.” The search query was “moral*,” so that the results listed all empirical and review articles featuring a word beginning with “moral” in the source’s title, keywords, or abstract.
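As a minimal sketch of how such a wildcard filter operates (the field names and matching details here are our own illustration, not the actual WoS implementation):

```python
import re

# Illustrative sketch only: a wildcard query such as "moral*" matches any
# word beginning with the stem "moral" (moral, morals, morality, ...) in a
# record's title, keywords, or abstract. The record fields are hypothetical.
def matches_query(record, stem="moral"):
    pattern = re.compile(r"\b" + re.escape(stem) + r"\w*", re.IGNORECASE)
    fields = (record.get("title", ""),
              record.get("keywords", ""),
              record.get("abstract", ""))
    return any(pattern.search(field) for field in fields)

hit = {"title": "Morality and group norms", "keywords": "", "abstract": ""}
miss = {"title": "Ethics of care", "keywords": "", "abstract": ""}
matches_query(hit)   # True: "Morality" begins with the stem "moral"
matches_query(miss)  # False: no word beginning with "moral"
```

Note that such a stem query is deliberately broad: it retrieves "moral," "morals," and "morality" alike, which is why the results were subsequently screened by hand.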
The publications initially found in this way were manually screened to determine whether they should be included in our review of empirical studies on morality. Criteria to include a publication in the set accordingly were (a) that it was an English-language publication, (b) that it had been published in a peer-reviewed journal, (c) that it contained an original report of qualitative or quantitative empirical data (either in a correlational or an experimental design), and (d) that it contained a manipulation or a measure that the authors indicated as relevant to morality.
The complete set of studies examined here was collected in three waves (see Appendix 1, in Supplementary materials). Each wave consisted of an electronic search using the procedure and inclusion criteria detailed above. The publications that came up in the electronic search were first screened to remove any review or theory papers that did not report original data. The empirical publications that were retained were assessed for relevance to our research question by checking whether the study or studies reported actually included a manipulation or measure that was identified by the authors as relating to morality.
The initial search was done in 2014 and included all publications that had appeared in 2000 through 2013, of which 419 met our inclusion criteria. A second wave of data collection was carried out in 2016 and 2017 to add two more years of empirical publications that had appeared in 2014 and 2015. This yielded 221 additional publications that were included in the set. The data collection was completed with a third wave conducted in 2018. Here, the same procedure was used to add 275 empirical studies that had been published in 2016 and 2017. In this third wave, we also searched for publications that had appeared before 2000 and were listed in WoS. This yielded 372 additional studies published from 1940 through 1999. Together, these three waves of data collection yielded a total of 1,278 studies on morality published from 1940 through 2017 that we collected for this review (see Appendix 2, in Supplementary materials).
We note that complete records of main publication details are only available from 1981 onward, and complete full-text records of publications in WoS are only available from 1996 onward. This is why statistical trends analyses will only be conducted for studies published from 1981 onward, and full bibliometric analyses can only be carried out for the main body of 989 studies on morality published from 1996 through 2017 for which complete publication details are digitally available.
Coding procedure and interrater reliability.
During the first wave of data collection, a coding scheme was jointly developed by the two first authors. Different coders used this scheme to code groups of publications in different waves of data collection. Classifications were decided by determining the main prediction examined and by inspecting the study design and measures that were used. In each phase of data coding, ambiguous cases were flagged, and publication details were further examined and discussed with other coders to reach a joint decision on the most appropriate classification. Each time this occurred, the coding scheme was further specified.
After completion of the third wave of data collection, interrater reliability was determined for the full database included in this review. The codes assigned by five different coders in the first and second waves of data collection, and by six additional coders in the third wave, were checked by this second group of six coders. An online random number generator was used to randomly select 20 entries from each of six subsets of the years examined (1940 through 2017), each containing about 200 publications. This resulted in 120 entries (roughly 10% of all publications included) sampled to assess interrater reliability. Each group of 20 entries was then assigned to a second coder and coded in an empty file. Only after completing the 20 entries did the second coder compare their codings with the original codings. The overall interrater agreement was good. For the levels of analysis at which morality was examined, coders were in agreement for 84% of the entries coded. When determining how to classify the main research question under one of the research themes, coders agreed on 84.3% of the entries.
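The sampling and agreement computation described above can be sketched as follows (the subset contents, entry identifiers, and example codes are hypothetical; the actual selection used an online random number generator):

```python
import random

random.seed(42)  # fixed seed for reproducibility of this sketch only

# Six hypothetical subsets of ~200 entry IDs each, spanning 1940-2017.
subsets = [list(range(i * 200, (i + 1) * 200)) for i in range(6)]

# Draw 20 entries at random from each subset: 6 x 20 = 120 entries,
# roughly 10% of the full database, earmarked for double coding.
reliability_sample = [entry for ids in subsets
                      for entry in random.sample(ids, 20)]
print(len(reliability_sample))  # 120

# Simple percent agreement between two coders' theme classifications.
def percent_agreement(codes_a, codes_b):
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100 * matches / len(codes_a)

first = ["reasoning", "behavior", "judgment", "emotions"]
second = ["reasoning", "behavior", "self-views", "emotions"]
print(percent_agreement(first, second))  # 75.0
```

Percent agreement, as used here, is the simplest interrater statistic; it does not correct for chance agreement the way Cohen's kappa does.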
For each entry, we inspected the study design and measures that were used to assess the level at which the mechanism under investigation was located. We distinguish four levels, which mirror the categories that are commonly used to characterize different types of mechanisms addressed in social psychological theory (e.g., in textbooks): (a) research on intrapersonal mechanisms, which studies how a single individual considers, evaluates, or makes decisions about rules, objects, situations, and courses of action; (b) research on interpersonal mechanisms, which examines how individuals perceive, evaluate, and interact with other individuals; (c) research on intragroup mechanisms, investigating how people perceive, evaluate, and respond to norms or behaviors displayed by other members of the same group, work or sports team, religious community, or organization; and (d) research on intergroup mechanisms, focusing on how people perceive, evaluate, and interact with members of different cultural, ethnic, or national groups. We also include here research that explicitly aims to examine how members of distinct groups differ from each other in how they consider morality.
Interrater agreement was 74% for intrapersonal mechanisms, 83% for interpersonal mechanisms, 92% for intragroup mechanisms, and 88% for intergroup mechanisms.
For each entry, we determined the main goal of the research question that was addressed. At the first wave of data collection, the first two authors listed all the keywords provided by the authors of studies included and decided how these could be classified into the five research themes we distinguish in our model. We used this as a starting point to develop our coding scheme, in which ambiguities were resolved through deliberation, as specified above. In this case, coders were instructed to choose a single theme that represented the main focus of the research question in each of the entries included (which could contain multiple studies). Cases where coders thought multiple research themes might be relevant were flagged and further studied and discussed with other coders to determine the primary focus of the research question. Interrater agreement was 68% for moral reasoning, 89% for moral behavior, 84% for moral judgment, 87% for moral self-views, and 95% for moral emotions.
Here, we included all research questions that try to capture the moral guidelines people endorse. These include questions about what people consider to be morally right by considering their ideas of what “good” people are generally like or questions about what guidelines people endorse to indicate what a moral person should do. Some researchers aim to examine which choices people think should be made in hypothetical dilemmas and vignettes, asking about people’s positions on specific issues (e.g., gay adoption, killing bugs for science), or wish to assess which values are guiding principles in their life (e.g., fairness, purity). Under this theme, we also classified research questions aiming to examine how moral choices and decisions may differ, depending on specific concerns or situational goals that are activated implicitly (e.g., clean vs. dirty environment) or explicitly (e.g., long-term vs. short-term implications). We note that some of the research questions we included under this theme are labeled by their authors as being about “moral judgment,” as they use this term more broadly than we do. However, in our delineation of the different types of research questions—and in our coding scheme for the five thematic clusters we distinguish—we reserve the term moral judgments for a specific set of research questions, which address the way in which people judge the morality of another individual or group. Research questions investigating people’s judgments about the general morality of a particular decision or course of action—which capture one’s own moral guidelines—fall under the theme of “moral reasoning” in our coding scheme.
Under this research theme, we classify all research questions addressing ways in which we evaluate the morality of other individuals or groups. We include research questions examining how the general character of specific individuals is evaluated in terms of perceived closeness of the target to the self or overall positivity/negativity of the target (e.g., in terms of likeability, familiarity, or attractiveness). We also consider under this theme research questions aiming to uncover how people assign moral traits (honesty etc.) or moral responsibility to the individual for the behavior described (guilty, intentionally inflicting harm, deserving of punishment). Similarly, we include research questions addressing the judgments of group targets (existing social groups, companies, communities) in terms of overall positivity/negativity, specific moral traits (e.g., trustworthiness), negative emotions raised, or implicit moral judgments implied in lexical decisions. In this cluster, we also consider research questions addressing the perceived severity of the behaviors described, whether people think these merit punishment, or the level of empathy versus dehumanization people experience toward the victims of moral transgressions.
Here, we include research questions addressing self-reported past behavior or behavioral intentions, as well as reports of (un)cooperative behavior in real life (e.g., volunteering, donating money, helping, forgiving, citizenship) or deceitful behavior in experimental contexts (e.g., cheating, lying, stealing, gossiping). We also include questions addressing implicit indicators of moral behavior (e.g., word completion tendencies, speech pattern analysis, handwipe choices). Research questions under this theme consider these behavioral reports as expressing internalized personal norms, convictions, or beliefs, in relation to indicators of “moral atmosphere,” descriptive or injunctive team or group norms, family rules, or moral role models. We also include under this theme research questions that address moral behavior in relation to situational concerns (e.g., moral rule reminders, cognitive depletion) or specific virtues (e.g., care vs. courage).
This theme includes research questions in which emotions are considered in response to recollections of real-life events, behaviors, and dilemmas, including significant historical or political events. We also include research questions examining whether such emotions, once evoked through experimental procedures or situational primes (e.g., pleasant or abhorrent pictures, odors, faces, or transgressive scenarios), can induce participants to display morally questionable behavior (e.g., in a computer game or in response to a provocation by a confederate). Research questions addressing the emotional responses people experience in relation to morally relevant issues or situations (guilt, shame, outrage, disgust) are also included under this theme.
We classified under this research theme all research questions that address the way different aspects of people’s self-views relate to each other (e.g., personality characteristics with self-stated inclinations to display moral behavior), as well as research questions addressing the way experimentally induced behavioral primes, reminders of past (individual or group level) moral transgressions, or the moral superiority of others relate to people’s self-views. This research theme includes research questions addressing personality inventories or trait lists of moral characteristics (e.g., honesty, fairness), as well as self-stated moral motivations or moral ideals (e.g., do no harm) that participants can either explicitly claim as self-defining or implicitly endorse (as assessed through implicit associations with the self or response times). In addition, we include questions addressing the stated willingness to display moral or immoral behavior (e.g., lie, cheat, help others, donate money or blood), which is also used to indicate the occurrence of moral justifications or moral disengagement to maintain a moral self-view.
Temporal trends and impact development.
The data on relevant publications included in this review were linked to the bibliometric WoS database present at the Centre for Science and Technology Studies (CWTS) at Leiden University ( Moed, De Bruin, & Van Leeuwen, 1995 ; Van Leeuwen, 2013 ; Waltman, Van Eck, Van Leeuwen, Visser, & Van Raan, 2011a , 2011b ). At the time these analyses were prepared, the CWTS in-house database contained relevant indicators for records covering the period 1981 through 2017 (see Appendix 3, in Supplementary materials ).
We identified two types of seminal publications. First, we assessed which (theoretical or empirical) publications outside our set (excluding methodological publications) are most frequently cited in the publications we examined. Second, we determined which of the empirical publications within our set have received an outstanding number of citations, within the field of morality research, as well as in the wider environment (the general WoS database).
In both cases, the analysis of seminal papers was conducted in three steps. First, we detected publications that were highly cited within this set of studies on morality and recorded in which research theme they were located. Second, within each research theme, we focused on the top 25 most highly cited publications from outside the set and—reflecting the smaller number of publications to choose from—the top 10 most highly cited publications within the set of studies on morality. We then identified how many citations these had received in the publications included in this review to determine a top three of seminal papers outside this set and a top three of seminal papers within this set, for each of the five research themes represented. We also examined how frequently these seminal papers were cited in the wider context of the whole WoS database.
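The three-step selection procedure described above amounts to filtering papers by a citation threshold and ranking them within each theme. A minimal sketch in Python; all paper labels and counts below are illustrative placeholders, not the review's actual data:

```python
from collections import defaultdict

# Hypothetical records: (research theme, paper label, citations within the data set).
records = [
    ("moral reasoning", "Haidt (2001)", 59),
    ("moral reasoning", "Jost et al. (2003)", 36),
    ("moral reasoning", "Greene et al. (2001)", 35),
    ("moral reasoning", "Other (1999)", 8),
]

MIN_CITATIONS = 10  # threshold the review uses for "seminal" status

# Step 1: keep only papers highly cited within the set, grouped by theme.
by_theme = defaultdict(list)
for theme, paper, cites in records:
    if cites >= MIN_CITATIONS:
        by_theme[theme].append((cites, paper))

# Steps 2-3: rank by within-set citations and keep the top three per theme.
top3 = {theme: [paper for _, paper in sorted(papers, reverse=True)[:3]]
        for theme, papers in by_theme.items()}
```

The same ranking could instead be keyed on WoS citation counts, which is how Table 3 orders the within-set papers.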
We used VOSviewer as a tool ( Van Eck & Waltman, 2010 , 2014 , 2018 ) for mapping and clustering ( Waltman, Van Eck, & Noyons, 2010 ) to visualize the content structure in the descriptions of empirical research on morality that we selected for this review. The analysis determines co-occurrences of so-called noun phrase groups in the titles and abstracts of the publications included in the analysis. Because full records of titles and abstracts are only available for studies published from 1996 onward, this analysis could only be conducted for the set of studies published from 1996 through 2017. Co-occurrences of noun phrase groups are indicated as clusters in a two-dimensional space where (a) closeness (vs. distance) between words indicates their relatedness, (b) larger font size of terms generally indicates a higher frequency of occurrence, and (c) shared color codes indicate stronger interrelations. We use these clusters to indicate the empirical approaches described in the titles and abstracts of studies included in this review and relate these to the different types of research questions we classified into five themes.
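At its core, the term-mapping step rests on counting how often noun phrases co-occur within the same title or abstract. A minimal sketch of such a co-occurrence count, using invented term sets; VOSviewer's own phrase extraction, layout, and clustering algorithms are considerably more involved:

```python
from collections import Counter
from itertools import combinations

# Illustrative term sets, one per publication; in the review, noun phrases
# were extracted from titles and abstracts by VOSviewer.
abstracts = [
    {"moral judgment", "disgust", "emotion"},
    {"moral judgment", "fairness"},
    {"disgust", "emotion", "purity"},
]

# Count each unordered pair of terms appearing together in one publication.
cooccurrence = Counter()
for terms in abstracts:
    for pair in combinations(sorted(terms), 2):
        cooccurrence[pair] += 1
```

Pairs with high counts end up close together on the map; clustering such a co-occurrence network is what yields the colored groups described below.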
When we compare trends in publication rates over time, we see that publications in social psychology have increased from about 1,500 per year in 1981 to 4,000 per year since 2014. The absolute numbers of publications on morality included in our review are much lower: here, we found 10 publications per year in 1981, increasing to over 100 per year since 2014. Thus, the absolute number of publications on morality research remains relatively small compared with the whole field of social psychology. Yet the increase is much steeper for publications on morality when both trends are indexed relative to the number observed in 1981 (see Figure 2 ). The regression coefficient is considerably larger for publications on morality (0.27) than for publications in social psychology (0.04). The R² further indicates that a linear trend explains 85% of the overall increase observed in publications in social psychology, while the trend in studies on morality is less well captured by a linear equation (R² = .54). Indeed, the increase in the number of publications on morality from 2005 onward is much steeper than before, with a regression coefficient of 1.22 and an R² for this linear trend of .90.
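The indexing and trend statistics reported here can be reproduced mechanically: each yearly count is divided by its 1981 value, and an ordinary least-squares line fitted to the indexed series yields the regression coefficient and R². A sketch with synthetic counts (the actual yearly series is the one underlying Figure 2):

```python
import numpy as np

years = np.arange(1981, 2018)

# Synthetic yearly publication counts, roughly rising from 10 to 110 with noise;
# purely illustrative, not the review's data.
rng = np.random.default_rng(0)
morality = np.linspace(10, 110, len(years)) + rng.normal(0, 5, len(years))

# Index the trend relative to the 1981 value, as in Figure 2.
indexed = morality / morality[0]

# Linear fit: slope is the regression coefficient reported in the text.
slope, intercept = np.polyfit(years, indexed, 1)
fitted = slope * years + intercept
r2 = 1 - np.sum((indexed - fitted) ** 2) / np.sum((indexed - np.mean(indexed)) ** 2)
```

Fitting the same model to the post-2005 subseries alone is what produces the separate, steeper coefficient mentioned above.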
Indexed trends and regression coefficients for social psychology as a field and morality as a specialism, WoS, 1981-2017.
Note. WoS = Web of Science.
When we assess the impact of the studies on morality included in our review, we see that the average impact of these publications, the average impact of the journals in which they are published, and the percentage of top-cited publications have gone up consistently (see Figure 3 ). These field-normalized scores show that the impact of studies on morality has been clearly above the field average since 2005. At the same time, there is a steady decrease in the percentage of uncited papers, as well as in the proportion of self-citations, and increasing collaboration between authors from different countries (see supplementary materials ).
Trends in impact scores in morality, WoS, 1981-2017, indicating the average normalized number of citations (excluding self-citations; mncs), the average normalized citation score of the journals in which these papers are published (mnjs), and the proportion of papers belonging to the top 10% in the field where they were published (pp_top_perc).
When we distinguish between the types of research questions addressed, this reveals that across the board, there is a disproportionate interest in research questions relating to moral reasoning (χ² = 502.19, df = 4, p < .001). In fact, this is the most frequently examined research theme throughout the period examined, yielding between 35 and 60 publications per year during the past few years. Research questions relating to moral judgments were initially examined less frequently, but from 2013 onward, with 30 to 40 publications per year, this research theme has approached the level of research activity seen for moral reasoning. The steady stream of publications examining questions relating to moral behavior peaked around 2014, when more than 30 publications were devoted to this research theme, but it has subsequently dropped to roughly 20 publications per year. Publications on research questions relating to moral emotions and moral self-views have increased during the past few years; however, these themes remain relatively less examined overall, with around 10 publications per year addressing each of them. When we compare how these themes developed after the interest of researchers in examining morality increased so rapidly from 2005 onward, these differential trends are clearly visible. During this period, the number of studies addressing moral reasoning increases more quickly than studies on moral judgments, followed in decreasing order by moral behavior, moral self-views, and moral emotions (see Figure 4 ).
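The chi-square statistic reported here compares the observed publication counts per theme against an expected distribution, presumably a uniform one across the five themes. A sketch with hypothetical counts (the actual counts are those underlying the reported χ² = 502.19):

```python
# Hypothetical publication counts per research theme; illustrative only.
counts = {
    "moral reasoning": 520,
    "moral judgments": 310,
    "moral behavior": 230,
    "moral emotions": 120,
    "moral self-views": 110,
}

observed = list(counts.values())
expected = sum(observed) / len(observed)  # uniform expectation across the 5 themes

# Pearson chi-square goodness-of-fit statistic, df = number of categories - 1.
chi_sq = sum((o - expected) ** 2 / expected for o in observed)
df = len(observed) - 1
```

With real counts, the statistic would be compared against the chi-square distribution with df = 4 to obtain the reported p value (e.g., via `scipy.stats.chi2.sf`).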
Comparative trends in the development of research themes in morality research, 2005-2017.
In a similar vein, we assessed trends visible in the intrapersonal, interpersonal, intragroup, and intergroup levels of mechanisms examined in the studies included in our review. Overall, the interest in these different types of mechanisms is not distributed evenly (χ² = 688.43, df = 3, p < .001). Most of the studies included in this review have addressed intrapersonal mechanisms relating to morality, and the relative preference for examining mechanisms relevant to morality at the intrapersonal level has only increased during the past years. The number of studies examining intrapersonal mechanisms since 2005 shows a steep linear trend that accounts for the majority of the variance observed (regression coefficient: 6.35, R² = .78). Although interpersonal mechanisms were initially examined less often, the increased research interest in morality since 2005 is also visible in the number of studies that have addressed such mechanisms (regression coefficient: 3.09, R² = .85). However, across the board, the examination of intragroup mechanisms remains relatively rare in this literature, with fewer than 10 studies per year addressing such issues. Here, the regression coefficient is much lower (0.59), and the linear trend fits the observed variance less well (R² = .64). The examination of intergroup mechanisms is only slightly more popular, and a linear trend (with a regression coefficient of 0.76) does not capture its development very well (R² = .25).
When we assess this per research theme (see Figure 5 ), we see that the strong emphasis on intrapersonal mechanisms visible across all research themes is less pronounced in research questions addressing moral judgments (χ² = 249.48, df = 12, p < .001). In research on moral judgments, the interest in interpersonal mechanisms is much larger; in fact, this research theme accounts for the majority of the studies in our review that examine interpersonal mechanisms. Interest in intragroup mechanisms is rare across the board and is perhaps most clearly visible in research questions relating to moral behavior. Interest in intergroup mechanisms is relatively small, but more or less the same across the five research themes we examined.
Number of studies addressing mechanisms at different levels of analysis, specified per research theme, 1940-2017.
In the seminal publications outside the set (see Table 2 ), one publication comes up as a top three seminal paper in more than one research theme. This is the publication by Haidt (2001) in which he develops his theory on moral intuition. Clearly, this publication has been highly influential in developing this area of research. It has also been extremely well cited in the WoS database more generally and can be seen as an important development that prompted the increased interest in research on morality during the past 10 to 15 years. However, besides this one paper, there is no overlap between the five research themes in the top three seminal publications that characterize them. This substantiates our reasoning that different clusters of research questions can be distinguished and underlines the validity of the criteria we used to classify the studies reviewed into these five themes.
Top-three Seminal Papers for Each Research Theme, Published Outside the Set.
Rank in research theme | Number of citations in data set | Authors | Journal | Title | Publication year | Number of citations in WoS | mncs | mnjs |
---|---|---|---|---|---|---|---|---|
Moral reasoning | ||||||||
1 | 59 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1994 | 52.59 | 10.37 | |
2 | 36 | Jost, J. T., Glaser, J., Kruglanski, A. W., & Sulloway, F. J. | Political conservatism as motivated social cognition | 2003 | 1238 | 34.77 | 9.55 | |
3 | 35 | Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. | An fMRI investigation of emotional engagement in moral judgment | 2001 | 1360 | 32.18 | 13.44 | |
Moral judgments | ||||||||
1 | 41 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1994 | 52.59 | 10.37 | |
2 | 29 | Fiske, S. T., Cuddy, A. J. C., & Glick, P. | Universal dimensions of social cognition: Warmth and competence | 2007 | 790 | 22.25 | 5.44 | |
3 | 20 | Gray, K., Young, L., & Waytz, A. | Mind perception is the essence of morality | 2012 | 154 | 10.97 | 4.95 | 
Moral behavior | ||||||||
1 | 29 | Mazar, N., Amir, O., & Ariely, D. | The dishonesty of honest people: A theory of self-concept maintenance | 2008 | 543 | 23.69 | 2.16 | |
2 | 24 | Blasi, A. | Bridging moral cognition and moral action: A critical review of the literature | 1980 | 594 | 24.15 | 7.41 | |
3 | 17 | Ajzen, I. | The theory of planned behavior | 1991 | 14495 | 327.35 | 8.49 | |
Moral emotions | ||||||||
1 | 17 | Tangney, J. P., Stuewig, J., & Mashek, D. J. | Moral emotions and moral behavior | 2007 | 605 | 21.51 | 13.71 | |
2 | 16 | Baumeister, R. F., Stillwell, A. M., & Heatherton, T. F. | Guilt: An interpersonal approach | 1994 | 631 | 20.55 | 10.69 | |
3 | 14 | Tangney, J. P., Miller, R. S., Flicker, L., & Barlow, D. H. | Are shame, guilt and embarrassment distinct emotions? | 1996 | 460 | 7.02 | 3.42 | 
Moral self-views | ||||||||
1 | 11 | Zhong, C. B., & Liljenquist, K. | Washing away your sins: Threatened morality and physical cleansing | 2006 | 323 | 9.69 | 10.26 | |
2 | 10 | Haidt, J. | The emotional dog and its rational tail: A social intuitionist approach to moral judgment | 2001 | 1994 | 52.59 | 10.37 |
Note. The rank order within each theme is specified according to the number of citations within the data set examined, which does not always correspond to the total number of citations in WoS. We consider publications as seminal to research on morality when they attract at least 10 citations within the data set examined. As a result of this criterion, we only identified two external papers that were seminal to research on moral self-views. WoS = Web of Science.
Going through the five themes and their top three seminal papers additionally revealed that there are two empirical studies that have been highly influential in this literature. These are not included in our set because they were not published in a psychology journal and hence did not meet our inclusion criteria. In fact, part of the appeal in citing the fMRI study by Greene et al. (2001) in research on moral reasoning or the physical cleansing study by Zhong and Liljenquist (2006) in research on moral self-views may be that these were published in the extremely coveted journal Science —which is not a regular outlet for researchers in social psychology. Indeed, there has been some concern that these high-visibility publications—and the media attention they attracted—have led multiple researchers to adopt this same methodology for further studies, perhaps hoping to achieve similar success ( Bauman et al., 2014 ; Graham, 2014 ; Mooijman & Van Dijk, 2015 ). The drawback of this publication strategy is that it may have led many researchers to continue examining different conditions affecting trolley dilemma and handwipe choices, instead of broadening their investigations to other issues relating to morality ( Hofmann et al., 2014 ; Lovett et al., 2015 ).
In the research on moral reasoning , besides Haidt’s (2001) theory on moral intuition and the fMRI study by Greene et al. (2001) discussed above, the third highly cited review paper addresses political ideologies. This publication by Jost, Glaser, Kruglanski, and Sulloway (2003) reports a meta-analysis examining how individual differences (e.g., authoritarianism, need for closure) correlate with conservative ideologies across 88 research samples in 12 countries. The relationship between moral reasoning and political ideologies is also an important topic in empirical work in this research theme. Indeed, the empirical publication that is most often cited in the WoS database (see Table 3 ) reports a series of studies that connects the primacy of different moral foundations (e.g., fairness, harm, authority) to liberal versus conservative political views of specific individuals ( Graham, Haidt, & Nosek, 2009 ). The high visibility and impact of the work of Jonathan Haidt and his collaborators in research on moral reasoning are further evidenced by the other two empirical publications that come up as most highly cited in our review of this research theme. These report data used for the development and validation of the Moral Foundations Questionnaire ( Graham et al., 2011 ) and research revealing cultural differences in the issues people consider moral and the way they respond to them ( Haidt et al., 1993 ).
Top-three Seminal Papers, Published Within the Set, for Each Research Theme.
Rank in research theme | Number of citations in data set | Authors | Journal | Title | Publication year | Number of citations in WoS | mncs | mnjs |
---|---|---|---|---|---|---|---|---|
Moral reasoning | ||||||||
1 | 129 | Graham, J., Haidt., J., & Nosek, B. A. | Liberals and conservatives rely on different sets of moral foundations | 2009 | 671 | 32.03 | 3.11 | |
2 | 51 | Haidt, J., Koller, S. H., & Dias, M. G. | Affect, culture and morality, or is it wrong to eat your dog? | 1993 | 447 | 11.54 | 3.76 | 
3 | 95 | Graham, J., Nosek, B. A., Haidt, J., Iyer, R., Koleva, S., & Ditto, P. H. | Mapping the moral domain | 2011 | 354 | 23.22 | 3.14 | |
Moral judgments | ||||||||
1 | 45 | Schnall, S., Haidt, J., Clore, G. L., & Jordan, A. H. | Disgust as embodied moral judgment | 2008 | 384 | 16.13 | 1.77 | |
2 | 24 | Reeder, G. D., & Spores, J. M. | The attribution of morality | 1983 | 109 | 2.51 | 2.52 | |
3 | 31 | Goodwin, G. P., Piazza, J., & Rozin, P. | Moral character predominates in person perception and evaluation | 2014 | 80 | 13.36 | 2.43 | 
Moral behavior | ||||||||
1 | 54 | Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. | Mechanisms of moral disengagement in the exercise of moral agency | 1996 | 527 | 11.62 | 3.63 | |
2 | 30 | Monin, B., & Miller, D. T. | Moral credentials and the expression of prejudice | 2001 | 294 | 5.98 | 3.24 | |
3 | 20 | Gino, F., Schweitzer, M. E., Mead, N. L., & Ariely, D. | Unable to resist temptation: How self-control depletion promotes unethical behavior | 2011 | 161 | 12.24 | 1.98 | |
Moral emotions | ||||||||
1 | 52 | Rozin, P., Lowery, L., Imada, S., & Haidt, J. | The CAD triad hypothesis: A mapping between three moral emotions (contempt, anger, disgust) and three moral codes (community, autonomy, divinity) | 1999 | 484 | 11.19 | 3.52 | |
2 | 17 | Tybur, J. M., Lieberman, D., & Griskevicius, V. | Microbes, mating, and morality: Individual differences in three functional domains of disgust | 2009 | 225 | 10.29 | 3.11 | |
3 | 26 | Horberg, E. J., Oveis, C., Keltner, D., & Cohen, A. B. | Disgust and the moralization of purity | 2009 | 144 | 6.88 | 3.11 | |
Moral self-views | ||||||||
1 | 96 | Aquino, K., & Reed, A. | The self-importance of moral identity | 2002 | 561 | 12.00 | 3.01 | |
2 | 63 | Leach, C. W., Ellemers, N., & Barreto, M. | Group virtue: The importance of morality (vs. competence and sociability) in the positive evaluation of in-groups | 2007 | 233 | 7.26 | 3.09 | |
3 | 15 | Ford, M. R., & Lowery, C. R. | Gender differences in moral reasoning: A comparison of the use of justice and care orientations | 1986 | 87 | 1.40 | 3.49 |
Note. The rank order within each theme is specified according to the total number of citations in WoS, which does not always correspond to the number of citations within the data set examined. WoS = Web of Science.
Research on moral judgments essentially examines the assignment of good versus bad intentions to others, for instance, based on their observed behaviors. An influential theoretical model guiding work in this area argues that people’s perceived intentions and abilities form two key dimensions in social impression formation ( Fiske, Cuddy, & Glick, 2007 ). In addition, many researchers in this area have referred to the work of Gray et al. (2012 , see Table 2 ) who consider the intentional perpetration of interpersonal harm—which requires the assignment of mental capacities to others—as a hallmark of human morality. Among the empirical studies examining these issues, the classic research by Reeder and Spores (1983) , which examines how situational information affects the perceived morality of individual actors, has become a seminal publication. A more recent study highly cited within this research theme was conducted by G. P. Goodwin, Piazza, and Rozin (2014) on the primacy of morality in person perception (see Table 3 ). The influence of Haidt’s (2001) seminal publication on moral intuition in this research theme is visible in a frequently cited study by Haidt and colleagues on the role of disgust as a form of embodied moral judgment ( Schnall et al., 2008 ; see Table 3 ).
In moral behavior , the most highly cited theory papers emphasize the connection between conceptualizations of the moral self and displays of moral behavior. In addition to the classic review paper arguing for this connection ( Blasi, 1980 ), many studies in this research theme refer to the different strategies people can use to maintain their self-concept of being a moral person, even if they are not immune to moral lapses ( Mazar et al., 2008 ). Seminal studies within this research theme reveal the implications of the connection between moral self-views and moral behaviors, which is in line with relations between research themes visualized in Figure 1 . Accordingly, the most frequently cited publications reveal that even well-meaning individuals can display unethical behavior as their self-control becomes depleted ( Gino, Schweitzer, Mead, & Ariely, 2011 ). In addition, research elucidates the different strategies people can use to disengage from their moral lapses ( Bandura et al., 1996 ). The possible implications are demonstrated empirically, for instance, in work showing that people freely express prejudice once they have established their moral credentials ( Monin & Miller, 2001 ).
In the research theme on moral emotions , the most highly cited theory papers focus on the experience of guilt and shame as relevant self-condemning emotions, indicating how people reflect upon and experience moral transgressions associated with the self . These exemplify the social implications of moral behavior and are generally considered uniquely diagnostic for human morality ( Baumeister, Stillwell, & Heatherton, 1994 ; Tangney, Miller, Flicker, & Barlow, 1996 ; Tangney et al., 2007 ). However, the most highly cited empirical publications drawing from these theoretical perspectives all address disgust as a response, indicating that other individuals or situational contexts are considered impure and should be avoided ( Horberg, Oveis, Keltner, & Cohen, 2009 ; Rozin, Lowery, Imada, & Haidt, 1999 ; Tybur, Lieberman, & Griskevicius, 2009 ).
Finally, the studies on moral self-views comprise a relatively small and dispersed research theme, which is not characterized by a specific theoretical perspective. This is also exemplified by the fact that we only found two papers external to the set that met our criteria for being considered seminal. Researchers working on this theme most often cite the study of Zhong and Liljenquist (2006) , suggesting that people engage in symbolic acts of physical cleansing to alleviate threats to their moral self-image. In addition, the seminal paper by Haidt (2001) is frequently cited by publications in this research theme. Empirical publications on moral self-views that have attracted many citations also from outside the morality literature include a validation study of the moral identity scale ( Aquino & Reed, 2002 ), a series of studies documenting the importance of morality for people’s group-based identities ( Leach, Ellemers, & Barreto, 2007 ), and a classic study on gender differences in moral self-views ( Ford & Lowery, 1986 ).
We examined the interrelations and clusters of research approaches in the studies reviewed, on the basis of titles and abstracts for 989 studies in our set, published in 1996 through 2017 (see Figure 6 ). The first cluster, containing 107 interrelated terms (indicated in red— Experiments and actions ), contains studies examining a variety of actions and their consequences in experimental research. The second cluster contains 70 terms (indicated in orange— Individual and group differences ) capturing studies on personality and individual differences as well as differences between social groups in correlational research. The third cluster connects 48 terms (indicated in pink— Rule endorsement ) referring to studies on justice and fairness, authority, and moral foundations. The fourth cluster contains 26 terms (indicated in turquoise— Harm perpetrated ) indicating responses to violation and harm. The fifth cluster contains seven terms (indicated in purple— Norms and intentions ) referring to norms and deliberate intentions in planned behavior.
Publications on morality, 1996-2017.
Note. Clustering and interrelations based on content analysis of publication titles and abstracts.
These clusters help us characterize the studies conducted within each of the research themes we distinguish in this review. We assess this by examining overlay “heat maps” that indicate the density of studies within each research theme (ranging from blue for low density to yellow for high density), projected onto the clusters of research approaches outlined above (see supplementary materials ).
The overlay map for research on moral reasoning connects clusters of research relating to individual and group differences (orange) and rule endorsement (pink). However, studies on moral reasoning have largely neglected to examine how such reasoning relates to actions in experimental contexts (red), harm perpetrated (turquoise), or norms and intentions (purple). Studies on moral judgments by contrast mainly involve experiments and examine actions (red) as well as harm perpetrated (turquoise). However, research addressing questions on moral judgments has been less concerned about examining individual and group differences (orange), rule endorsement (pink), or norms and intentions (purple). Research on moral behavior has most frequently addressed norms and intentions (purple), and to a lesser extent experiments and actions (red) and individual and group differences (orange). Researchers in this area have not systematically examined rule endorsement (pink) or harm perpetrated (turquoise). The research on moral emotions is mostly carried out in relation to harm perpetrated (turquoise), which is examined in terms of experiments and actions (red), rather than individual and group differences (orange). Rule endorsement (pink) and norms and intentions (purple) are rarely taken into account. The research on moral self-views connects approaches addressing individual and group differences (orange), experiments and actions (red), and harm perpetrated (turquoise), but is less concerned with rule endorsement (pink) or norms and intentions (purple).
The quantitative analyses reported above have allowed us to specify the overall characteristics of the studies included in our review, in terms of their most influential publications as well as most frequently used research approaches. We will now consider how the nature of the research questions addressed in the studies reviewed and the empirical approaches that were used affect current insights on the psychology of morality.
This is by far the most popular research theme in the empirical literature on morality, and this preference has only intensified over the years. Research based on Haidt and Graham’s (2007) moral foundations theory has established that conservatives in the United States are more likely to show support for civil rights restrictions ( Crowson & DeBacker, 2008 ), to have a prevention focus ( Cornwell & Higgins, 2013 ), and to perceive moral clarity ( Schlenker, Chambers, & Le, 2012 ) than liberals. This not only predicts their political voting behavior and candidate preferences ( Skitka & Bauman, 2008 ) but also relates to more general tendencies in how individuals relate to others, as indicated by their social dominance orientation, authoritarianism ( Federico, Weber, Ergun, & Hunt, 2013 ), or parenting styles ( McAdams et al., 2008 ).
However, research on this theme also reveals how the moral principles people endorse relate to their life experiences, family roles, and position in society. For instance, exposure to war ( Haskuka, Sunar, & Alp, 2008 ) or abusive/dysfunctional family relations ( Caselles & Milner, 2000 ) impedes moral reasoning. More generally, many studies have shown that the moral judgments people make depend on their age, gender (e.g., Kray & Haselhuhn, 2012 ; Skoe, Cumberland, Eisenberg, Hansen, & Perry, 2002 ), parental status, education, multicultural experiences ( Lin, 2009 ), war experiences, family experiences, or religious status ( Simpson, Piazza, & Rios, 2016 ).
While this work attests to the power and resilience of moral convictions, at the same time, there is an abundance of evidence that people are not very consistent in their moral reasoning. Indeed, it has clearly been demonstrated that moral reasoning also depends on the way a moral dilemma is framed or on specific concerns that are (implicitly) primed. Such primes can make salient the monetary cost of people’s decisions (e.g., Irwin & Baron, 2001 ), the intentions and goals of the actors involved, the harm done as a result of their actions ( Sabini & Monterosso, 2003 ), or specific events in history ( Lv & Huang, 2012 ). More subtle and implicit cues can also have far-reaching effects on moral reasoning. For instance, the moral acceptability of the same course of action differs depending on whether people are implicitly prompted to focus on their head (vs. their heart; Fetterman & Robinson, 2013 ), on cleanliness ( Zhong, Strejcek, & Sivanathan, 2010 ), on approach versus avoidance ( Broeders, Van Den Bos, Müller, & Ham, 2011 ; Janoff-Bulman, Sheikh, & Hepp, 2009 ; Moore, Stevens, & Conway, 2011 ), on the present versus the future, or on own learning versus the education of others ( Tichy, Johnson, Johnson, & Roseth, 2010 ).
In sum, the accumulated research on moral reasoning has led to two types of conclusions. First, it has been extensively documented that different social roles and life experiences can have a long-term impact on the way people reason about morality and the moral principles they prioritize. Second, more immediate situational cues also affect moral reasoning and moral decisions. Both these conclusions from studies on moral reasoning complement philosophical analyses as well as evolutionary accounts emphasizing the objective survival value of adhering to specific principles or guidelines.
Studies on moral judgments generally attest to the fact that information about morality weighs more heavily in determining overall impressions of others than diagnostic information pertaining to behavioral domains such as competence or sociability (e.g., S. Chen, Ybarra, & Kiefer, 2004 ). This is the case for evaluations of individuals, as well as for groups and organizations. Information about morality is seen as being more predictive of behavior in a range of situations ( Pagliaro, Ellemers, & Barreto, 2011 ) and more likely to reflect on other members of the same group ( Brambilla, 2012 ). However, people find it easy to accept lapses or shortcomings as indicating moral decline, while they require more evidence to be convinced of people’s moral improvement ( Klein & O’Brien, 2016 ). Furthermore, the relative importance people attach to specific features may differ, depending, for instance, on the cultural context (e.g., Chinese vs. Western) in which this is assessed ( F. F. Chen, Jing, Lee, & Bai, 2016 ; X. Chen & Chiu, 2010 ).
Inferences about people’s good intentions—presumably indicating their morality—are often derived from features indicating agreeableness and communality. Individuals are seen as moral when they can make agentic motives compatible with communal motives, for instance, by displaying self-control, honesty, reliability, other-orientedness, and dependability ( Frimer, Walker, Lee, Riches, & Dunlop, 2012 ). Whether this is perceived to be the case also depends on situational cues such as the harm done to others (e.g., Guglielmo & Malle, 2010 ), the benefit to the self ( Inbar, Pizarro, & Cushman, 2012 ), or the perceived intentionality of the behavior that has led to such outcomes (e.g., Greitemeyer & Weiner, 2008 ; Reeder, Kumar, Hesson-McInnis, & Trafimow, 2002 ).
Other target characteristics (such as their social status or their national, religious, cultural, or sexual identity; e.g., Cramwinckel, van den Bos, van Dijk, & Schut, 2016 ), as well as contextual guidelines (e.g., instructing people to focus on the action vs. the person; duties vs. ideals; appearance vs. behavior of the target) may also color the way research participants interpret and value concrete information about specific targets ( Heflick, Goldenberg, Cooper, & Puvia, 2011 ). Even unrelated contextual cues may have such effects, for instance, when information is presented on a black-and-white background ( Zarkadi & Schnall, 2013 ) or when research participants are positively or negatively primed with a specific odor, mood induction, or room temperature (e.g., Schnall et al., 2008 ).
In addition, judgments of other individuals and groups also depend on the physical and psychological closeness of these targets to the self (e.g., Cramwinckel, van Dijk, Scheepers, & van den Bos, 2013; Haidt, Rosenberg, & Hom, 2003). Self-anchoring, self-distancing, and self-justifying effects can all arise when moral judgments about others can be seen to reflect upon one’s own social class or race, one’s personal convictions, the salience of specific social roles (e.g., as a parent, Eibach, Libby, & Ehrlinger, 2009; as a subordinate, Bauman, Tost, & Ong, 2016), or any group membership that is seen as self-defining (e.g., Iyer, Jetten, & Haslam, 2012). Related concerns can lead people to protect just-world beliefs (Gray & Wegner, 2010) by dehumanizing stigmatized targets (e.g., Cameron, Harris, & Payne, 2016; Riva, Brambilla, & Vaes, 2016), increasing their physical distance from them, pointing to moral failures they or other group members have displayed in the past, or referring to “natural” differences that justify differential treatment (e.g., Kteily, Hodson, & Bruneau, 2016).
In sum, even though people are strongly inclined to evaluate the moral stature of others they encounter, research in this area reveals that the morality of other individuals and groups is largely in the eye of the beholder. In general, people find it easier to acknowledge the moral questionability of specific behaviors when these are perpetrated by an individual or group that is more distant from the self. Self-protective mechanisms can also lead people to diminish the moral standing of victims of immoral behavior or to alleviate the blame placed on perpetrators.
Studies on moral behavior have often addressed the interplay between individual moral guidelines, on one hand, and social norms, on the other. This is examined, for instance, in studies on moral rebels and moral courage—those who stand up for their own principles ( Sonnentag & McDaniel, 2013 )—as well as moral entrepreneurs and people engaged in moral exporting—those who actively seek to convince others of their own moral principles ( Peterson, Smith, Tannenbaum, & Shaw, 2009 ). Research shows that the strength of personal moral beliefs, attitudes, or convictions can make people resilient against social pressures ( Brezina & Piquero, 2007 ; Hornsey, Majkut, Terry, & McKimmie, 2003 ; Langdridge, Sheeran, & Connolly, 2007 ). However, in domains where personal moral convictions are less strong, moral norms (indicated by team atmosphere or principled leadership) can also overrule individual concerns (e.g., Fernandez-Dols et al., 2010 ). At the same time, it has been documented that social pressures can tempt people either to behave less morally (e.g., M. A. Barnett, Sanborn, & Shane, 2005 ) or to display more group-serving (instead of selfish) behavior (e.g., Osswald, Greitemeyer, Fischer, & Frey, 2010 ), depending on what these norms prescribe ( Ellemers, Pagliaro, Barreto, & Leach, 2008 ).
Research has also revealed that once their moral standing is affirmed, people more easily fall prey to “moral licensing” tendencies. This can even happen vicariously. For instance, it has been demonstrated that people are more likely to display prejudice and bias in hiring decisions after having seen that other members of their group have hired an ethnic minority applicant for a vacant position (Kouchaki, 2011). Yet, positive emotional states resulting from immoral behavior (such as the “cheater’s high,” Ruedy, Moore, Gino, & Schweitzer, 2013, or “hubristic pride,” e.g., Bureau, Vallerand, Ntoumanis, & Lafreniere, 2013) occur only rarely. Instead, most studies show that people find it aversive to realize they have behaved immorally, and this research has documented different compensatory strategies that can be displayed (e.g., Bandura, Caprara, Barbaranelli, Pastorelli, & Regalia, 2001). For instance, confronting people with moral lapses (their own or those of others) impairs the recall, cognitive salience, and perceived applicability of moral rules (“moral disengagement”; Bandura, 1999; Fiske, 2009). When caught in a moral transgression, people emphasize that this behavior does not reflect their true intention or identity (Conway & Peetz, 2012) or speculate that others are likely to do even worse (“moral hypocrisy”; Valdesolo & DeSteno, 2007; Valdesolo & DeSteno, 2008).
In sum, research on moral behavior demonstrates that people can be highly motivated to behave morally. Yet, personal convictions, social rules and normative pressures from others, or motivational lapses may all induce behavior that is not considered moral by others and invite self-justifying responses to maintain moral self-views.
The intensity of emotional responses to the moral acts of the self and others has been shown to depend on the nature of the situation (importance of the moral dilemma, distance in time, resulting from action vs. inaction; Kedia & Hilton, 2011 ), as well as on specific characteristics of the victim or target of morally questionable acts (e.g., perceived vulnerability, physical proximity; Dijker, 2010 ). These include factors relating to the self (experience of pride; Camacho, Higgins, & Luger, 2003 ), to the social situation (social validation of action perpetrated), or to the victim of the transgression (dubious moral character; Jiang et al., 2011 ). All these situational characteristics may buffer people against the emotional costs of witnessing or perpetrating immoral acts.
Research has further examined the antecedents and implications of specific emotions. This has revealed that disgust can elicit (symbolic) cleansing behaviors (Gollwitzer & Melzer, 2012) and is evoked in response to various health-related cues (e.g., cues relating to taste sensitivity, Skarlicki, Hoegg, Aquino, & Nadisic, 2013, sexuality, or pathogens). However, such disgust is not necessarily related to morality (Tybur et al., 2009). Other studies have addressed moral anger, which has been associated with the tendency to aggress against others (protest, Cronin, Reysen, & Branscombe, 2012; scapegoating and retribution, Rothschild, Landau, Molina, Branscombe, & Sullivan, 2013) or with attempts to restore the moral order (e.g., Pagano & Huo, 2007).
In this literature, guilt and/or shame emerge as self-reflective emotions that uniquely indicate the felt moral implications of actions perpetrated by the self (or others that imply the self, for example, ingroup members). Shame and guilt each have their specific properties and effects (e.g., Sheikh & Janoff-Bulman, 2010 ; Smith, Webster, Parrott, & Eyre, 2002 ). Shame is more clearly associated with the Behavioral Inhibition System, related to public exposure, blushing, and (in problem populations) anxiety and substance abuse. Guilt relates more clearly to the Behavioral Activation System and is related to private beliefs, empathy, and (in problem populations) religious activities. Nevertheless, both shame and guilt have been found to relate specifically to justice violations rather than other types of negative experiences (e.g., Agerström, Björklund, & Carlsson, 2012 ). Furthermore, the experience of guilt and/or shame is associated with endorsing victim compensation and support and reparation efforts (e.g., Pagano & Huo, 2007 ) but does not necessarily elicit other forms of prosocial behavior (e.g., De Hooge, Nelissen, Breugelmans, & Zeelenberg, 2011 ).
In sum, both the intensity and the nature of emotions reported indicate the extent to which people experience situations encountered by themselves and others as having moral implications and as requiring action to enact moral guidelines or redress past injustices. The secondary, uniquely human, and self-reflective emotions of guilt and shame appear to be particularly important in this process.
In this literature, “concern for others,” derived from self-proclaimed levels of agreeableness or communion, is seen to indicate people’s moral character. Accordingly, much of the research on moral self-views has assessed self-proclaimed levels of honesty/humility or warmth/care (contained, for instance, in Lee and Ashton’s (2004) HEXACO-PI or Aquino and Reed’s (2002) “moral identity” scale). Individuals who combine a focus on agency and goal achievement with expressions of communion and care for others are seen as “moral exemplars” (e.g., Frimer, Walker, Dunlop, Lee, & Riches, 2011). When such moral behavior is displayed by others, this can also increase people’s confidence in their own ability to act morally (e.g., Aquino, McFerran, & Laven, 2011).
Different studies have established that self-reported character traits correlate with accounts of delinquency, unethical business decisions, or forgiveness provided by research participants (e.g., Cohen, Panter, Turan, Morse, & Kim, 2013 ). In addition, the moral self-views people report have been found to converge with actual behavioral displays (e.g., cheating vs. helping others) during experimental tasks in the lab (e.g., Stets & Carter, 2011 ). However, results from this research also suggest that people deliberately use such acts to communicate their good moral intentions, for instance, by donating money after lying ( Mulder & Aquino, 2013 ) or demonstrating that they resist pressure from others to behave immorally ( Carter, 2013 ).
Unfortunately, this tendency to self-present as being morally good can also prevent people from acknowledging their moral lapses. Indeed, after behaving in ways that violate moral standards (violence, delinquency, unethical decision making), people have been found to display a range of moral disengagement strategies. These include placing the event at a more distant point in time or describing it in more abstract terms ( Lammers, 2012 ), rationalizing one’s behavior by invoking a more distant moral purpose ( Aquino, Reed, Thau, & Freeman, 2007 ), or dehumanizing those who suffered from it ( Monroe, 2008 ). In a similar vein, actions that call into question the moral integrity and standards of one’s ingroup have been found to invite negative attitudes (prejudice), emotions (outrage), and behaviors (intolerance) directed toward the outgroup (e.g., Täuber & Zomeren, 2013 ).
In sum, this literature suggests that people reflect on their moral character and how they present this in their self-descriptions as well as in acts they can use to convey their moral intentions. However, the available evidence shows this may primarily lead them to preserve moral self-regard instead of making them improve or prevent morally questionable behaviors. Indeed, the focus on communality and concern for others as indicators of moral character may be too broad to provide sufficient guidance on how to act morally in specific situations.
The past years have witnessed a marked increase in the interest of (social) psychologists in “morality” as a topic for empirical research. Our bibliometric analysis reveals the increasing maturity of this area of scientific inquiry, in terms of amount of research effort invested and relative impact. Yet, overviews that are still often cited are by now outdated in terms of the studies covered ( Blasi, 1980 , reviewing 71 studies) or have tended to focus on specific issues or research themes (e.g., Bauman et al., 2014 ).
Substantial knowledge has accumulated about the way people think about morality; however, we know much less about how this affects their moral behavior . We draw this conclusion based on the observation that by far most of the published studies in our review have addressed issues relating to moral reasoning—what people consider right and wrong ways to behave. Furthermore, many researchers have examined the judgments we make about the moral behaviors of other individuals and groups. Of course, these are important research themes in their own right. However, part of the interest of social psychological researchers in the topic of morality stems from the fact that moral reasoning and moral judgments of others are seen to inform the choices people make in their own moral behaviors, as is also visualized in Figure 1 . Yet, we see that studies on moral reasoning and moral judgments have tended to focus on a limited number of specific research questions, methodologies, and approaches, which are not clearly connected to each other or to other research themes.
As a result, current insights on moral reasoning mostly pertain to relatively abstract principles (such as “fairness”) that people can subscribe to, as well as individual differences in which moral guidelines they endorse. The concrete implications of these general principles for specific situations remain less considered. Research on moral judgments complements this by addressing people’s situational experiences, for instance, resulting from concrete choices or behaviors displayed by others. However, these more specific judgments are not systematically traced back to the general moral principles that might inform them or the (dis)agreement that may exist about how to prioritize these.
Research on moral behavior and moral self-views has examined a broader range of issues and is less bound to specific research paradigms and approaches. Accordingly, researchers examining these topics have been more successful in connecting different clusters of research—validating the central role assigned to such research questions in Figure 1 . Nevertheless, overall these integrative empirical approaches have received much less interest from researchers examining issues in morality and have remained relatively dispersed. In fact, we were unable to clearly identify a seminal theoretical approach that has guided research on moral self-views. We suspect this may be a side-effect of some highly visible research paradigms and successful measures that are cited and followed up by many researchers.
A second conclusion relates to the choices researchers have made in directing their efforts to examine different issues relating to morality. Our classification of this body of research into distinct themes addressed and types of mechanisms examined has allowed us to quantify and characterize these choices. The comparison of studies carried out to address different research themes revealed that a large part of this literature is relatively limited in terms of the questions raised and the type of methodologies that are used. As a result, the concrete value of the detailed knowledge we have accumulated about moral reasoning and moral judgments as antecedent conditions for moral behavior unfortunately has remained hypothetical. That is, emerging insights into the way people think about morality and moral behavior have not systematically been followed through by assessing how broader guidelines and principles actually inform behavior, emotions, and self-views. Instead, these latter types of studies are relatively rare. Similarly, the literature reviewed here yields relatively little insight into the way behavior, emotions, and self-views feed back into the development of people’s moral reasoning over time. Nor does this body of work systematically address how people’s own experiences affect their judgments of others. These process-oriented and integrative questions constitute promising avenues for future research.
Our decision to classify published studies in terms of the level of analysis adopted has additionally revealed that the mechanisms examined (e.g., how the moral principles people subscribe to relate to the moral intentions they report) are mostly located at the intrapersonal level. In addition, there is a considerable body of research that examines interpersonal mechanisms, in particular in studies of how moral considerations shape the impressions we form of others. However, much less research effort has been devoted to examining how people may come to share the same moral values or how members of different groups in society respond to each other’s moral value endorsements. Yet, the studies that adopt such an approach have clearly established that intragroup mechanisms can and do play a role, also in the moral reasoning individuals develop. Furthermore, research has shown that individuals adapt the moral principles they prioritize, depending on group identities and the salient concerns these prescribe. Bicultural individuals, for instance, have been found to shift between prioritizing autonomy or community concerns in their moral reasoning, depending on which of their cultural identities is more salient in the situation they encounter (Fu, Chiu, Morris, & Young, 2007).
Because studies taking this type of approach are so rare, questions about when and how people converge toward shared moral views, how they influence each other in adapting their moral convictions, and how social sanctions and rewards are used to make individuals adhere to shared moral norms have largely remained uncharted territory. Yet, these are precisely the questions that guide the public debate on morality—and are often cited as a source of inspiration by researchers in this area. Similarly, relatively few researchers have addressed intergroup mechanisms, even though their relevance—for instance, for moral reasoning—is revealed in work showing that group memberships define the “moral circles” in which people are afforded or denied deservingness of moral treatment (e.g., Olson, Cheung, Conway, Hutchison, & Hafer, 2011; see also Ellemers, 2017).
The relative neglect of intragroup and intergroup mechanisms in this literature is all the more striking because different theoretical approaches—that are frequently cited by researchers working on morality—emphasize that moral principles are considered so important because they indicate shared notions about “right” and “wrong” that regulate the behavior of individuals. Indeed, prominent approaches to morality commonly acknowledge that general moral principles such as the “golden rule” can be interpreted differently in different contexts or by groups of people who translate these into specific behavioral guidelines (e.g., Churchland, 2011 ; Giner-Sorolla, 2012 ; Greene, 2013 ; Haidt & Graham, 2007 ; Haidt & Kesebir, 2010 ; Harvey & Callan, 2014 ). This is also the key message of the seminal study on moral reasoning by Haidt et al. (1993) . Such group-specific interpretations of the same universal values also help to explain why conflicts about moral issues are so stressful and difficult to resolve (see also Ellemers, 2017 ; Ellemers & Van der Toorn, 2015 ). Yet, researchers have only recently begun to examine these issues more systematically (e.g., Rom & Conway, 2018 ).
Thus, the imbalance observed in the research themes addressed and in the levels of analysis at which relevant mechanisms have been examined reveals an important discrepancy between empirical research on morality and leading theoretical approaches that emphasize the importance of morality for group life and for individuals living together in communities (e.g., Gert, 1988; Janoff-Bulman & Carnes, 2013; Rai & Fiske, 2011; Tooby & Cosmides, 2010). As a result, we know a lot about intrapersonal considerations relating to morality and something about interpersonal ones, but we have relatively little insight into the social functions of morality (see also Ellemers & Van den Bos, 2012), which also incorporate relevant mechanisms pertaining to intragroup dynamics and intergroup processes.
A third conclusion emerging from this review is that there is a disconnect between seminal theoretical approaches to human morality and the empirical work that is carried out. Our identification of seminal publications revealed that the theoretical perspectives we have used to derive key characteristics of human morality are also the ones that are frequently cited by researchers in this area. However, closer inspection of the research included in our review reveals that the studies these researchers conducted do not systematically address or reflect the key features characterizing foundational theoretical approaches. This is visible in different ways.
To begin with, the notion that shared identities shape the development of specific moral guidelines, which in turn inform the behavioral regulation of individuals living in social groups, is a key feature identified by different approaches seeking to understand the psychology of morality. Yet, our cluster analysis of the studies carried out to examine this reveals that empirical approaches tend to focus either on the identification of general principles and the individuals who endorse them or on the impact of specific norms and how these affect the choices people make in concrete realities. They mostly do this, however, while neglecting to examine how moral norms pertaining to specific behaviors can be traced back to general moral principles. Indeed, the ambiguity in translating abstract moral principles into specific behavioral guidelines is precisely where the action is. This is what causes disagreement between individuals or groups endorsing diverging interpretations of the same moral rule. This ambiguity also provides the leeway for people to redeem their moral self after moral transgressions by selectively choosing which specific behaviors are diagnostic of their broader moral intentions and which are not.
Furthermore, the emotional burden of moral experiences and the impact this has on subsequent moral reasoning and moral judgments are strongly emphasized in different perspectives that are seen as influential in this literature (e.g., Blasi, 1980; Haidt, 2001). Notably, the emotions that are seen as distinctive for human morality (shame and guilt) refer to explicitly self-reflective states. The experience of these particular emotions helps people to identify the moral implications of their judgments and behaviors, and the anticipation of these emotions supports efforts to regulate their behavior accordingly. Here, too, there is a disconnect between what theoretical perspectives emphasize and what empirical studies examine. That is, across the board, moral emotions constitute the least frequently examined research theme. Furthermore, even the studies that do address moral emotions do not always tap into these uniquely human and self-reflective moral emotions. Instead, there seems to be a preference for research paradigms that focus on the emergence of disgust. While this allows researchers to use implicit measures to assess physical or symbolic distancing of the self from aversive situations, other studies have noted that the stimuli examined in this way may not necessarily have moral overtones. As a result, the added value of such work for understanding the emotional implications of moral situations or charting the role of emotions in the regulation of one’s own moral behavior is limited.
Highly influential approaches that are very frequently cited in the studies reviewed (most notably, Blasi, 1980 ; Haidt, 2001 ) emphasize the importance of connecting “thoughts” and cognitions to “experiences” and actions. Yet, we conclude that the clusters of research that emerge are located in a space where these emerge as opposite extremes. Most studies either address general principles, overall guidelines, or abstract preferences in rule endorsement or focus on concrete experiences and actions, without connecting the two. Furthermore, the role of moral emotions in relation to moral judgments, moral reasoning, moral behaviors, and moral self-views remains underexamined in this literature.
A fourth conclusion emerging from our review resonates with concerns expressed by Augusto Blasi, more than 35 years ago. That is, he noted that researchers examining moral cognition (including information, norms, attitudes, values, reasoning, and judgments) ultimately aim to understand the role that different elements play in creating moral action. At the same time, he concluded that the designs and measures used in the 71 studies he reviewed actually did not allow researchers to substantially advance their understanding of the issues they aimed to examine and accused them of “intellectual laziness” (p. 9) in failing to provide a clearly articulated theoretical rationale for relations examined.
In our review examining more than 1,000 empirical studies that have been published since, we still see similar concerns emerging. In fact, there is a marked reliance on self-reports, explicit judgments or choices, and self-stated behavioral intentions, and we found very few examples of studies using implicit indicators of moral concerns or (psycho)physiological measures. This is unfortunate: given the far-reaching social implications of moral choices and moral behaviors, self-presentational concerns and defensive responses are likely to guide the deliberate responses of research participants (see also Ellemers, 2017).
Furthermore, the empirical measures generally used largely rely on self-reports of general dispositions or overall preferences and intentions. This does not reflect current theoretical insights on the prevalence of defensive and self-justifying mechanisms in the way people think about the moral behaviors of themselves and others. It is also not in line with the results of the empirical studies reviewed here, documenting how strategic self-presentation, biased judgments, and other self-defensive responses can be triggered by various types of situational features that may be incidental and unrelated to the moral issue at hand. In light of the empirical evidence demonstrating various types of bias in each of the research themes examined, it is difficult to understand why so many researchers still rely on measures that capture individual differences or general tendencies and assume these have predictive value across situations.
Even though studies documenting factors that may induce biased judgments call into question the predictive value of standardized measures of morality, we do think it is theoretically meaningful to establish these situational variations. The crucial implication of these findings is that seemingly unimportant or irrelevant situational features can have far-reaching implications for real-life moral decisions. This knowledge can be used to redesign relevant conditions, for instance, at work, to support employees who feel they need to blow the whistle ( Keenan, 1995 ) or to help sales persons decide how to deal with customer interests ( Kurland, 1995 ).
We devote this final section of our review to promising avenues that researchers have started to pursue, which offer concrete examples of how to connect different strands of research and examine additional levels of analysis, and which may inspire future researchers. Even though we have criticized the lack of integration between the different research themes examined, some of the seminal studies in our review stand out in that they are also frequently cited in themes other than the one in which they were classified. This is the case for the seminal study by Graham et al. (2009) on moral reasoning, the work of Bandura et al. (1996) on moral disengagement, and the work by Leach et al. (2007) on the importance of morality for group identities. This attests to the fact that at least some of the studies reviewed here have successfully connected different themes in research on morality.
This tendency seems to be followed up in some recent studies we found. For instance, several researchers have begun to investigate how general principles in moral reasoning relate to concrete behaviors in specific situations. These include studies relating the endorsement of abstract moral principles to the donations people make to different causes (migrants, medical research, international aid; Nilsson, Erlandsson, & Vastfjall, 2016). Similarly, endorsement of general moral principles or values has been related to specific behaviors in experimental games (trust game, thieves game; Clark, Swails, Pontinen, Boverman, Kriz, & Hendricks, 2017; Kistler, Thöni, & Welzel, 2017). This has yielded more insight into how abstract principles relate to specific behaviors and has demonstrated which principles are relevant in which situations. For instance, actions requiring the exercise of self-control were found to relate to “binding” moral foundations in particular (Mooijman, Meindl, et al., 2018).
Another promising avenue for future research is charted by researchers who have begun to address the role of emotions in guiding other responses relating to morality. This includes work demonstrating how individual differences in emotion regulation affect moral reasoning ( Zhang, Kong, & Li, 2017 ). Furthermore, it has been shown that interventions that alter emotional responses can affect moral behaviors (e.g., Jackson, Gaertner, & Batson, 2016 ; see also Yip & Schweitzer, 2016 ). Others have shown that understanding the experience of guilt and shame in response to harm done to others helps predict subsequent self-forgiving and self-punishing responses ( Griffin, Moloney, Green, et al., 2016 ).
The overreliance on intrapersonal and interpersonal mechanisms in the study of morality has been noted before (see also Ellemers, 2017; Ellemers, Pagliaro, & Barreto, 2013). Recent research has begun to document a number of intragroup mechanisms that are relevant to increasing our understanding of moral behavior. This includes work showing that groups are particularly reluctant to include individuals whose morality is called into question (Van der Lee, Ellemers, Scheepers, & Rutjens, 2017). Recent studies also document the ways in which shared social identities and group-specific moral norms may affect moral reasoning (Gao, Chen, & Li, 2016), affect moral behaviors, and overrule individual convictions as people seek to receive respect from other ingroup members (Bizumic, Kenny, Iyer, Tanuwira, & Huxey, 2017; Mooijman, Hoover, Lin, Ji, & Dehghani, 2018).
Depending on the nature of the group and the moral norms it endorses, this can have positive as well as negative implications (Pulfrey & Butera, 2016; Renger, Mommert, Renger, & Simon, 2016; Stoeber & Hotham, 2016; Stoeber & Yang, 2016). The relevance and everyday implications of these phenomena are also documented in studies examining the emergence of moral conformity on social media (Kelly, Ngo, Chituc, Huettel, & Sinnott-Armstrong, 2017) or the way international experiences and exposure to multiple moral norms in different foreign countries can elicit moral relativism (Lu, Quoidbach, Gino, Chakroff, Maddux, & Galinsky, 2017).
Furthermore, the overreliance on U.S. samples and political ideologies is now beginning to be complemented by studies examining how moral concerns may be similar or different across cultural and political contexts (e.g., Nilsson & Strupp-Levitsky, 2016 ). Recent work has compared the moral foundations endorsed by Chinese versus U.S. samples ( Kwan, 2016 ), has examined these foundations among Muslims in Turkey (Yilmaz, Harma, Bahçekapili, & Cesur, 2016), and has made other intercultural comparisons ( Stankov & Lee, 2016a , 2016b ; Sullivan, Stewart, Landau, Liu, Yang, & Diefendorf, 2016 ). This work shows that some moral concerns emerge consistently across different cultural contexts and relates them to the macro-level cultural values and corruption indicators that characterize these contexts ( Mann, Garcia-Rada, Hornuf, Tafurt, & Ariely, 2016 ). However, it has also revealed that different political systems (in Finland, Kivikangas, Lönnquist, & Ravaja, 2017 ), cultural values (in India, Clark, Bauman, Kamble, & Knowles, 2017 ), or relations between social groups (in Lebanon and Morocco, Obeid, Argo, & Ginges, 2017 ) may raise different moral concerns and behaviors than are commonly observed in the United States (see also Haidt et al., 1993 ).
The increased interest of psychological researchers in issues relating to morality was prompted at least partly by societal developments during the past years. These have raised questions from the general public and made available research funds to address issues relating to civic conduct, ethical leadership, and moral behavior in various professional contexts ranging from finance and sports, to community care and science. Therefore, we think it is relevant to consider how the body of evidence that is currently available speaks to these issues.
A recurring theme in this literature, which also explains some of the difficulties encountered by empirical researchers, relates to what we will refer to as the “paradox of morality.” That is, from all the research reviewed here, it is clear that most people have a strong desire to be moral and to appear moral in the eyes of (important) others. The paradox is that the sincere motivation to do what is considered “right” and the strong aversion to being considered morally deficient can make people untruthful and unreliable as they are reluctant to own up to moral lapses or attempt to compensate for them. Paradoxically too, those who care less about their moral identity may actually be more consistent in their behavior and more accurate in their self-reports as they are less bothered by appearing morally inadequate. As a result, all the research that reveals self-defensive responses when people are unable to live up to their own standards or those of others, or when they are reminded of their moral lapses, implies that there is limited value in relying on people’s self-stated moral principles or moral ideals to predict their real-life behaviors.
On an applied note, this paradox of morality also clarifies some of the difficulties of aiming for moral improvement by confronting people with their morally questionable behaviors. Such criticism undermines people’s moral self-views and likely raises guilt and shame. This in turn elicits self-defensive responses (justifications, victim blaming, moral disengagement) in particular among those who think of themselves as endorsing universal moral guidelines prescribing fairness and care. Furthermore, questioning people’s moral viewpoints easily raises moral outrage and aggression toward others who think differently. This is also visible in studies examining moral rebels and moral courage (those who stand up for their own principles) or moral entrepreneurship and moral exporting (those who actively seek to convince others of their own moral principles). While the behavior of such individuals would seem to deserve praise and admiration as exemplifying morality, it also involves going against other people’s convictions and challenging their values, which is not always welcomed by these others. All these responses stand in the way of behavioral improvement. Instead of focusing on people’s explicit moral choices to make them adapt their behavior, it may therefore be more effective to nudge them toward change by altering goal primes, situational features, or decision frames.
We have noted above that it would be misleading to think that morality can be captured as an individual difference that has predictive value across situations. Yet, this is the conclusion that is often implicitly drawn and also informs many of the attempts to monitor and guard moral behavior in practice. For instance, in many businesses, the standard response to integrity incidents or moral transgressions is to sanction or expel specific individuals and to make newcomers pass assessment tests and take pledges. The research reviewed here suggests that attempts to guard moral behavior, for instance at work, may be more effective when these also take into account contextual features, for instance, by critically assessing organizational norms, team climates, or leadership behaviors that have allowed for such behavior to emerge.
The overreliance on intrapersonal analyses and individual moral judgments easily masks that individual moral standards are defined in relation to group norms. Whether individuals are considered to do what is “good” or “bad” depends on how their moral standards relate to what the group deems (in)appropriate. Indeed, we have seen that what is considered “immoral” behavior by some might be seen as morally adequate or even desirable by others. For instance, collective interests may lead individuals to show loyalty to the moral guidelines of their own group while placing others outside their circle of care. Bolstering people’s sense of community and common identity or appealing to their altruism and empathy may therefore not necessarily resolve moral issues. Instead, this may just as well increase biased decision making or intensify intergroup conflicts on what is morally acceptable behavior. The current emphasis of many studies on individual differences and the focus on finding out how to suppress selfishness or how to avoid cheating may mask such group-level concerns.
During the past years, many researchers have examined questions relating to the psychology of morality. Our main conclusion from the studies reviewed here is that these have yielded insights that are unbalanced, neglect some key features of human morality specified in influential theoretical perspectives, and are not well integrated. The current challenge for theory development and research in morality therefore is to consider the complexity and multifaceted nature of the psychological antecedents and implications of moral behavior and to connect different mechanisms—instead of studying them in isolation.
Author Contributions: The division of tasks and responsibilities between the authors was as follows: N.E. designed the study; developed the coding scheme; coded and interpreted studies published from 2000 through 2017; supervised the further data collection, analyses, and preparation of tables and figures; and prepared text for the introduction, method, results, and discussion. J.V.d.T. designed the study, helped develop the coding scheme, coded studies published from 2000 through 2017, and revised text for the introduction, method, results, and discussion. Y.P. collected and interpreted studies published from 2000 through 2013 and prepared the database emerging from the first wave of data collection for further coding and analysis. T.v.L. conducted the bibliometric analyses, prepared figures and statistics reporting these analyses, and prepared text describing the method and results of the bibliometric analyses.
Authors’ Note: This research was made possible by a Netherlands Organization for Scientific Research (NWO) SPINOZA grant and a National Institute of Advanced Studies (NIAS) Fellowship grant awarded to the first author and an NWO RUBICON grant awarded to the second author. We thank Jamie Breukel, Nadia Buiter, Kai van Eekelen, Piet Groot, Miriam Hoffmann-Harnisch, Martine Kloet, Jeanette van der Lee, Marleen van Stokkum, Esmee Veenstra, Melissa Vink, and Erik van Wijk for their assistance in completing the database and preparing materials for the article.
Declaration of Conflicting Interests: The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding: The author(s) received no financial support for the research, authorship, and/or publication of this article.
Supplemental Material: Supplemental material is available online with this article.
Definition of moral philosophy.
Moral philosophy, often called ethics, is like a compass for right and wrong actions. Imagine you’re at a fork in the road and each direction leads to a different action. Moral philosophy is your guide, helping you figure out which direction to go.
The first simple definition of moral philosophy is this: it’s a set of tools that help us choose the best path when making decisions. It isn’t just about following rules; it’s about understanding why we feel certain actions are correct and others are not, and how our decisions affect everyone involved.
The second definition is: moral philosophy is about figuring out how to live well together. This means we look at the big picture of what our actions mean and how they can help us create a peaceful world where we treat each other kindly.
There are many ways to think about what is right and wrong. Three major types are consequentialism (judging actions by their outcomes), deontology (judging actions by rules and duties), and virtue ethics (judging actions by the character they reflect).
Here are some real-life situations where moral philosophy comes into play:
Moral philosophy is vital because it gives us a framework to think about our decisions and their impacts. Imagine tossing a pebble into a pond. The ripples spread far and wide, just like the effects of our choices. By using moral philosophy, we help to ensure the ripples we make in the world spread kindness and fairness, touching our families, friends, and even strangers in positive ways.
For the average person, moral philosophy helps us figure out how to act in tough situations. It’s like a guidebook for living a good life. Let’s say you’re in a group project and someone isn’t doing their part. Moral philosophy can help you decide the best way to handle it, so the project succeeds, and everyone is treated fairly. It helps us build a world where everyone can succeed and be happy.
Thousands of years ago, smart people from different parts of the world started talking about the right way to live. Think of people like Confucius in China, the Buddha in India, and philosophers in Greece; they all explored life’s big questions and shared their knowledge. Thanks to their early thoughts on ethics, we still learn from their wisdom on how to be good today.
People often disagree on some parts of moral philosophy, and here are a few examples:
As new challenges arise with things like technology and environmental issues, moral philosophy keeps changing. We have ongoing conversations that help us continue to learn and improve our understanding.
Moral philosophy is connected to many other subjects. Here are some that share its principles:
In conclusion, moral philosophy assists us in deeply considering our actions and lives. It guides us towards fairness and goodness, so we can build a world where we all have a chance to flourish. By learning different angles like consequentialism, deontology, and virtue ethics, and thinking about connected subjects like politics and justice, we become better equipped to serve the common good, making thoughtful choices that benefit everyone.
Generally, the terms ethics and morality are used interchangeably, although a few different communities (academic, legal, or religious, for example) will occasionally make a distinction. In fact, Britannica’s article on ethics considers the terms to be the same as moral philosophy. While understanding that most ethicists (that is, philosophers who study ethics) consider the terms interchangeable, let’s go ahead and dive into these distinctions.
Both morality and ethics loosely have to do with distinguishing the difference between “good and bad” or “right and wrong.” Many people think of morality as something that’s personal and normative, whereas ethics is the standards of “good and bad” distinguished by a certain community or social setting. For example, your local community may think adultery is immoral, and you personally may agree with that. However, the distinction can be useful if your local community has no strong feelings about adultery, but you consider adultery immoral on a personal level. By these definitions of the terms, your morality would contradict the ethics of your community. In popular discourse, however, we’ll often use the terms moral and immoral when talking about issues like adultery regardless of whether it’s being discussed in a personal or in a community-based situation. As you can see, the distinction can get a bit tricky.
It’s important to consider how the two terms have been used in discourse in different fields so that we can consider the connotations of both terms. For example, morality has a Christian connotation to many Westerners, since moral theology is prominent in the church. Similarly, ethics is the term used in conjunction with business , medicine, or law . In these cases, ethics serves as a personal code of conduct for people working in those fields, and the ethics themselves are often highly debated and contentious. These connotations have helped guide the distinctions between morality and ethics.
Ethicists today, however, use the terms interchangeably. If they do want to differentiate morality from ethics , the onus is on the ethicist to state the definitions of both terms. Ultimately, the distinction between the two is as substantial as a line drawn in the sand.
Published online by Cambridge University Press: 04 August 2010
Since the ancients, philosophers, theologians, and political actors have pondered the relationship between the moral realm and the political realm. Complicating the long debate over the intersection of morality and politics are diverse conceptions of fundamental concepts: the right and the good, justice and equality, personal liberty and public interest. Divisions abound, also, about whether politics should be held to a higher moral standard at all, or whether, instead, pragmatic considerations or realpolitik should be the final word. Perhaps the two poles are represented most conspicuously by Aristotle and Machiavelli. For Aristotle, the proper aim of politics is moral virtue: “politics takes the greatest care in making the citizens to be of a certain sort, namely good and capable of noble actions.” Thus, the statesman is a craftsman or scientist who designs a legal system that enshrines universal principles, and the politician's task is to maintain and reform the system when necessary. The science of the political includes more than drafting good laws and institutions, however, since the city-state must create a system of moral education for its citizens. In marked contrast, Machiavelli's prince exalted pragmatism over morality, the maintenance of power over the pursuit of justice. Machiavelli instructed that “a prince, and especially a new prince, cannot observe all those things which are considered good in men, being often obliged, in order to maintain the state, to act against faith, against charity, against humanity, and against religion.”
Ethical topics and questions are essential for stimulating thoughtful discussions and deepening our understanding of complex moral landscapes. Ethics, the study of what is right and wrong, underpins many aspects of human life and societal functioning. Whether you're crafting an essay or preparing for a debate, delving into ethical issues allows you to explore various perspectives and develop critical thinking skills.
Ethical issues encompass a wide range of dilemmas and conflicts where individuals or societies must choose between competing moral principles. Understanding what are ethical issues involves recognizing situations that challenge our values, behaviors, and decisions. This article provides a thorough guide to ethical topics, offering insights into current ethical issues, and presenting a detailed list of questions and topics to inspire your writing and debates.
Ethical issues refer to situations where a decision, action, or policy conflicts with ethical principles or societal norms. These dilemmas often involve a choice between competing values or interests, such as fairness vs. efficiency, privacy vs. security, or individual rights vs. collective good. Ethical issues arise in various fields, including medicine, business, technology, and the environment. They challenge individuals and organizations to consider the moral implications of their actions and to seek solutions that align with ethical standards. Understanding ethical issues requires an analysis of both the potential benefits and the moral costs associated with different courses of action.
Writing an ethics essay involves more than just presenting facts; it requires a thoughtful analysis of moral principles and their application to real-world scenarios. Understanding ethical topics and what constitutes ethical issues is essential for crafting a compelling essay. Here’s a guide to help you address current ethical issues effectively:
By following this guide, you will be able to write an ethics essay that not only presents facts but also offers a deep and nuanced analysis of ethical topics.
Choosing the right research topic in ethics can be challenging, but it is crucial for writing an engaging and insightful essay. Here are some tips:
When writing an ethics essay, it is essential to adopt a formal and objective style. Clarity and conciseness are paramount, as the essay should avoid unnecessary jargon and overly complex sentences that might obscure the main points. Maintaining objectivity is crucial; presenting arguments without bias ensures that the discussion remains balanced and fair. Proper citations are vital to give credit to sources and uphold academic integrity.
Engaging the reader through a logical flow of ideas is important, as it helps sustain interest and facilitates a better understanding of the ethical topics being discussed. Additionally, the essay should be persuasive, making compelling arguments supported by evidence to effectively convey the analysis of moral issues. By following these guidelines, the essay will not only be informative but also impactful in its examination of ethical dilemmas.
Exploring ethical topics is crucial for students to develop critical thinking and moral reasoning. Here is a comprehensive list of ethical questions for students to discuss and debate. These topics cover a wide range of issues, encouraging thoughtful discussion and deeper understanding.
Ethical topics and questions are a rich field for exploration and discussion. By examining these issues, we can better understand the moral principles that guide our actions and decisions. Whether you're writing an essay or preparing for a debate, this comprehensive list of ethical topics and questions will help you engage with complex moral dilemmas and develop your critical thinking skills.
Making judgments about whether a person is morally responsible for their behavior, and holding others and ourselves responsible for actions and the consequences of actions, is a fundamental and familiar part of our moral practices and our interpersonal relationships.
The judgment that a person is morally responsible for their behavior involves—at least to a first approximation—attributing certain powers and capacities to that person, and viewing their behavior as arising, in the right way, from the fact that the person has, and has exercised, these powers and capacities. Whatever the correct account of the powers and capacities at issue (and canvassing different accounts is one task of this entry), their possession qualifies an agent as morally responsible in a general sense: that is, as one who may be morally responsible for particular exercises of agency. Normal adult human beings may possess the powers and capacities in question, and other agents (such as non-human animals and very young children) are generally taken to lack them.
To hold someone responsible involves—again, to a first approximation—responding to that person in ways that are made appropriate by the judgment that they are morally responsible. These responses often constitute instances of moral praise or moral blame (though there may be reason to allow for morally responsible behavior that is neither praiseworthy nor blameworthy: see McKenna 2012, 16–17 and M. Zimmerman 1988, 61–62). Blame is a response that may follow on the judgment that a person is morally responsible for behavior that is wrong or bad, and praise is a response that may follow on the judgment that a person is morally responsible for behavior that is right or good. (See Menges 2017 for an account that emphasizes the independence of blame from judgments about blameworthiness.)
The attention in the philosophical literature given to blame far exceeds that given to praise. One reason for this is that blameworthiness, unlike praiseworthiness, is often taken to involve liability to sanction. Thus, articulating the conditions on blameworthiness may seem the more pressing matter. Perhaps for related reasons, there is a richer language for expressing blame than praise (Watson [1996]2004, 283), and “blame” finds its way into idioms for which there is no ready parallel employing “praise”: compare “ S is to blame for x ” and “ S is to praise for x .” Note, as well, that “holding responsible” is not a neutral expression: it typically arises in blaming contexts (Watson [1996]2004, 284).
Additionally, there may be asymmetries in the contexts in which praise and blame are appropriate: private blame is more familiar than private praise (Coates and Tognazzini 2013b), and while minor wrongs may reasonably earn blame, minimally decent behavior seems insufficient for praise (Eshleman 2014). Finally, the widespread assumption that praiseworthiness and blameworthiness are at least symmetrical in terms of the capacities they require has also been questioned (Nelkin 2008, 2011; Wolf 1980, 1990). Like most work on moral responsibility, this entry will focus largely on the negative side of the phenomenon; for more, see the entry on blame .
In everyday speech, one hears references to “moral responsibility” where the point is to indicate the presence of an obligation. Someone may say that “the United States has a moral responsibility to assist Ukraine,” where this means that the United States ought to adopt certain policies or take certain actions. This entry, however, is concerned not with accounts that specify people’s responsibilities in the sense of obligations, but rather with accounts of whether a person bears the right relation to their actions to be properly held accountable for them.
Moral responsibility should also be distinguished from causal responsibility. We may assign causal responsibility to someone for an outcome that they have caused, and we may also judge the person morally responsible for having caused the outcome. But the powers and capacities that are required for moral responsibility are not identical with an agent’s causal powers, so we cannot always infer moral responsibility from an assignment of causal responsibility. A young child can cause an outcome while failing to fulfill the general requirements on moral responsibility, and even agents who fulfill the general requirements on moral responsibility may explain or defend their behavior in ways that call into question their moral responsibility for outcomes for which they are causally responsible. Suppose that S causes an explosion by flipping a switch: the fact that S had no reason to expect such an outcome may call into question their moral responsibility (or at least their blameworthiness) for the explosion without calling into question their causal contribution to it. (For discussion of moral responsibility for causal outcomes, see §3.5 .)
Having distinguished different senses of “responsibility,” the word will be used in what follows to refer to “moral responsibility” in the sense specified above.
For a long time, the bulk of philosophical work on moral responsibility was conducted in the context of debates about free will and the threat that determinism might pose to free will. A largely unquestioned assumption was that free will is required for moral responsibility, and the central questions had to do with the ingredients of free will and with whether their possession is compatible with determinism. Recently, however, the literature on moral responsibility has addressed issues that are of interest independently of worries about determinism. Much of this entry will deal with these latter aspects of the moral responsibility debate. However, it will be useful to begin with issues at the intersection of concerns about free will and moral responsibility.
What power do responsible agents exercise over their actions? One (partial) answer is that the relevant power is a form of control, and, in particular, a form of control such that the agent could have done otherwise than to perform the action in question. This captures one standard notion of free will, and one of the central issues in debates about free will has been about whether possession of it (free will, in the ability-to-do-otherwise sense) is compatible with causal determinism (or with, for example, divine foreknowledge—see the entry on foreknowledge and free will ).
If causal determinism obtains, then the occurrence of every event (including events involving human deliberation, choice, and action) was made inevitable by—because it was causally necessitated by—the facts about the past (and about the laws of nature) prior to the event. Under these conditions, the facts about the present, and about the future, are uniquely fixed by the facts about the past (and about the laws of nature): given these earlier facts, the present and the future can unfold in only one way. For more, see the entry on causal determinism .
If free will requires the ability to do otherwise, then it is easy to see why free will may be incompatible with causal determinism. One way of getting at this incompatibilist worry is to focus on the way in which performance of a given action by an agent should be up to the agent if they have the sort of free will required for moral responsibility. As the influential Consequence Argument has it (Ginet 1966; van Inwagen 1983, 55–105), the truth of determinism entails that an agent’s actions are not really up to the agent since they are the unavoidable consequences of things over which the agent lacks control. Here is an informal summary of this argument from Peter van Inwagen’s An Essay on Free Will (1983):
If determinism is true, then our acts are the consequences of the laws of nature and events in the remote past. But it is not up to us what went on before we were born, and neither is it up to us what the laws of nature are. Therefore, the consequences of these things (including our present acts) are not up to us. (1983: 16)
For an important argument that the Consequence Argument conflates different senses in which the laws of nature are not up to us, see Lewis (1981). For more on incompatibilism, see the entries on free will , arguments for incompatibilism , and incompatibilist (nondeterministic) theories of free will , as well as Clarke (2003).
Compatibilists maintain that free will and moral responsibility are compatible with determinism. Versions of compatibilism have been defended since ancient times. The Stoics—Chrysippus, in particular—argued that the truth of determinism does not entail that human actions are entirely explained by factors external to agents; thus, human actions are not necessarily explained in a way that is incompatible with praise and blame (see Bobzien 1998 and Salles 2005 for Stoic views on freedom and determinism). Similarly, philosophers in the Modern period (such as Hobbes and Hume) distinguished the general way in which our actions are necessitated if determinism is true from the specific instances of necessity sometimes imposed on us by everyday constraints on behavior (e.g., coercive pressures or physical impediments that make it impossible to act as we would like). The difference is that the necessity involved in determinism is compatible with agents acting as they choose: even if S ’s behavior is causally determined, it may be behavior that S chose to perform. And perhaps the ability that matters for free will (and responsibility) is just the ability to act as one chooses, which seems to require only the absence of external constraints and not the absence of determinism.
This compatibilist tradition was carried into the 20 th century by logical positivists such as Ayer (1954) and Schlick ([1930]1966). Here is how Schlick expressed a central compatibilist insight in 1930 (drawing, in particular, on Hume):
Freedom means the opposite of compulsion; a man is free if he does not act under compulsion , and he is compelled or unfree when he is hindered from without…when he is locked up, or chained, or when someone forces him at the point of a gun to do what otherwise he would not do. (1930 [1966: 59])
Since deterministic causal pressures do not always force one to “do what otherwise he would not do,” freedom—at least of the sort specified by Schlick—is compatible with determinism.
A related compatibilist strategy, influential in the early and mid-20th century, was to offer a conditional analysis of the ability to do otherwise (Ayer 1954, Moore 1912; for earlier expressions, see Hobbes [1654]1999 and Hume [1748]1978). As noted above, even if determinism is true, agents may often act as they choose; it is also compatible with determinism that an agent who performed act A (on the basis of their choice to do so) might have performed a different action on the condition that the agent had chosen to perform the other action. Even if a person’s actual behavior is causally determined by the actual past, it may be that if the past had been suitably different (if the person’s desires, intentions, and choices had been different), then they would have acted differently. Perhaps this is all that the ability to do otherwise comes to.
However, this compatibilist picture is open to serious objections. It might be granted that an ability to act as one sees fit is valuable, and perhaps related to the type of freedom at issue in the free will debate, but it does not follow that this is all that possession of free will comes to. People who have certain desires as a result of indoctrination, brainwashing, or psychopathology may act as they choose, but their possession of free will and moral responsibility may be questioned. (For more on the relevance of such factors, see §3.2 and §3.9.) The conditional analysis also seems open to the following counterexample. It might be true that an agent who performs act A would have omitted A if they had so chosen, but it might also be true that the agent in question suffers from an overwhelming compulsion to perform act A. The conditional analysis suggests that the agent in question retains the ability to do otherwise than A, but given their compulsion, it seems clear that they lack this ability (Chisholm 1964, Lehrer 1968, van Inwagen 1983).
Despite the above objections, the compatibilist project described so far has had lasting influence. The fact that determined agents can act as they see fit is still an important inspiration for compatibilists, as is the fact that determined agents may have acted differently in counterfactual circumstances. For more, see the entry on compatibilism. For recent accounts related to and improving upon early compatibilist approaches, see Fara (2008), M. Smith (2003), and Vihvelin (2004); for criticism of these accounts, see Clarke (2009).
Compatibilists have also argued that moral responsibility does not require the ability to do otherwise. If this is right, then determinism would not threaten responsibility by ruling out access to alternatives (though it might threaten responsibility in other ways: see van Inwagen 1983, 182–88 and Fischer and Ravizza 1998, 151–168). In an influential 1969 paper, Harry Frankfurt offers examples meant to show that an agent can be morally responsible for an action even if he could not have done otherwise. Versions of these examples are often called Frankfurt cases or Frankfurt examples. In the basic form of the example, an agent, Jones, considers a certain action. Another agent, Black, would like to see Jones perform this action and, if necessary, Black can make Jones perform it by intervening in Jones’s deliberative processes. However, as things transpire, Black does not intervene in Jones’s decision making since he can see that Jones will perform the action on his own. Black does not intervene to ensure Jones’s action, but he could have and would have had Jones shown some sign that he would not perform the action on his own. Therefore, Jones could not have done otherwise, yet he seems responsible for his behavior since he does it on his own.
There are questions about whether Frankfurt’s example really shows that Jones couldn’t have done otherwise and that he is morally responsible. How can Black be certain whether Jones would perform the action on his own? There seems to be a dilemma here. Perhaps determinism obtains in the universe of the example, and Black sees some sign that indicates the presence of factors that causally ensure that Jones will behave in a particular way. But in this case, incompatibilists are unlikely to grant that Jones is morally responsible since they believe that moral responsibility is incompatible with determinism. On the other hand, perhaps determinism is not true in the universe of the example, but then it is not clear that the example excludes alternatives for Jones: if Jones’s behavior isn’t causally determined, then perhaps he can do otherwise. For objections to Frankfurt’s original example along these lines, see Ginet (1996) and Widerker (1995); for defenses of Frankfurt, see Fischer (2002; 2010); and for refined versions of Frankfurt’s example, meant to clearly deny Jones access to alternatives, see Mele and Robb (1998), Hunt (2000), and Pereboom (2000; 2001, 18–28). For a valuable collection on this topic, see Widerker and McKenna (2006).
In response to such criticisms, Frankfurt has said that his example was intended mainly to draw attention to the fact “that making an action unavoidable is not the same thing as bringing it about that the action is performed” (2006, 340; emphasis in original). In particular, while determinism may make an agent’s action unavoidable, it does not follow that the agent acts only because determinism is true: it may also be true that the agent acts a certain way because they want to. The point of his original example, Frankfurt suggests, was to draw attention to the significance that the actual causes of an agent’s behavior can have independently of whether the agent might have done something else. Frankfurt concludes that “[w]hen a person acts for reasons of his own … the question of whether he could have done something else instead is quite irrelevant” for the purposes of assessing responsibility (2006, 340). A focus on the actual causes that lead to behavior, as well as investigation into when an agent can be said to act on their own reasons, has characterized a great deal of work on responsibility since Frankfurt’s essay.
Forward-looking approaches to moral responsibility justify responsibility practices by focusing on the beneficial consequences that can be obtained by engaging in these practices. This approach was influential in the earlier parts of the 20th century (as well as before), had fallen out of favor by the closing decades of that century, and has recently been the subject of renewed interest.
Forward-looking perspectives emphasize one of the points discussed in the previous section: an agent’s being subject to determinism does not entail that they are subject to constraints that force them to act independently of their choices. If this is true, then, regardless of the truth of determinism, it may be useful to offer certain incentives to agents—to praise and blame them—in order to encourage them to make certain future choices and thus to secure positive behavioral outcomes.
According to some articulations of the forward-looking approach, to be a responsible agent is simply to be an agent whose motives, choices, and behavior can be shaped in this way. Thus, Schlick argued that
The question of who is responsible is the question concerning the correct point of application of the motive …. in this its meaning is completely exhausted; behind it lurks no mysterious connection between transgression and requital…. It is a matter only of knowing who is to be punished or rewarded, in order that punishment and reward function as such—be able to achieve their goal. (1930 [1966: 61]; emphasis in original)
According to Schlick, the goals of punishment and reward have nothing to do with the past: the idea that punishment “is a natural retaliation for past wrong, ought no longer to be defended in cultivated society” ([1930]1966, 60; emphasis in original). Instead, punishment ought to be “concerned only with the institution of causes, of motives of conduct …. Analogously, in the case of reward we are concerned with an incentive” ([1930]1966, 60; emphasis in original).
J. J. C. Smart (1961) also defended a well-known forward-looking approach to responsibility. Smart claimed that to blame someone for their behavior is simply to assess the behavior negatively (to “dispraise” it) while simultaneously ascribing responsibility for the behavior to the agent. And, for Smart, an ascription of responsibility merely involves taking an agent to be such that they would have omitted the behavior if they had been provided with a motive to do so. Whatever sanctions may follow an ascription of responsibility are administered with an eye to giving an agent a motive to refrain from such behavior in the future.
Smart’s approach has its contemporary defenders (Arneson 2003), but many have found it lacking. R. Jay Wallace argues that an approach like Smart’s “leaves out the underlying attitudinal aspect of moral blame” (Wallace 1996, 56, emphasis in original; see the next subsection for more on blaming attitudes). According to Wallace, the attitudes involved in blame are “backward-looking and focused on the individual agent who has done something morally wrong” (Wallace 1996, 56). But a forward-looking approach, with its focus on bringing about desirable outcomes, “is not directed exclusively toward the individual agent who has done something morally wrong, but takes account of anyone else who is susceptible to being influenced by our responses” (Wallace 1996, 56; emphasis added). In exceptional cases, a focus on beneficial outcomes may provide grounds for treating as blameworthy those who are known to be innocent (Smart 1973). This feature of some forward-looking approaches has led to particularly strong criticism.
Recent efforts have been made to develop partially forward-looking accounts of responsibility that evade some of the criticisms mentioned above. These accounts justify our general system of responsibility practices by appeal to its suitability for fostering moral agency and the acquisition of capacities required for such agency. Most notable in this regard is Manuel Vargas’s “agency cultivation model” of responsibility (2013; also see Jefferson 2019 and McGeer 2015). Recent conversational accounts of responsibility (§3.3) also have a forward-looking component insofar as they regard those with whom one might have fruitful moral interactions as candidates for responsibility. Some responsibility skeptics have also emphasized the forward-looking benefits of certain responsibility practices. Derk Pereboom—who rejects desert-based blame—has argued that some conventional blaming practices can be maintained (even after ordinary notions of blameworthiness have been left behind) insofar as these practices are grounded in “non-desert invoking moral desiderata” such as “protection of potential victims, reconciliation to relationships both personal and with the moral community more generally, and moral formation” (2014, 134; also see Caruso 2016, Caruso and Pereboom 2022, Levy 2012, Milam 2016). (For more on skepticism about responsibility, see §3.6 and the entry on skepticism about moral responsibility.)
P. F. Strawson’s 1962 paper, “Freedom and Resentment,” is the inspiration for a great deal of contemporary work on responsibility, especially the work of compatibilists. Strawson focuses on the emotions—the reactive attitudes—that play a fundamental role in our practices of holding one another responsible. He suggests that attending to the logic of these emotional responses yields an account of what it is to be open to praise and blame that need not invoke the incompatibilist’s conception of free will.
Part of the novelty of Strawson’s approach is its emphasis on the “importance that we attach to the attitudes and intentions towards us of other human beings” ([1962]1993, 48) and on “how much it matters to us, whether the actions of other people … reflect attitudes towards us of goodwill, affection, or esteem on the one hand or contempt, indifference, or malevolence on the other” ([1962]1993, 49). For Strawson, our practices of holding others responsible are largely responses to these things: that is, “to the quality of others’ wills towards us” ([1962]1993, 56).
To get a sense of the importance of quality of will for our interpersonal relations, note the difference in your response to one who injures you accidentally as compared to how you respond to one who does you the same injury out of “contemptuous disregard” or “a malevolent wish to injure [you]” (P. Strawson [1962]1993, 49). The second case is likely to arouse a type and intensity of resentment that would not reasonably be felt in the first case. Corresponding points may be made about gratitude: you would likely not have the same feelings of gratitude toward a person who benefits you accidentally as you would toward one who does so out of concern for your welfare.
According to Strawson, the tendency to respond with reactive attitudes to another’s display of good or ill will involves imposing on the other a demand for moral respect and due regard ([1962]1993, 63). Thus, among the circumstances that mollify a person’s negative reactive attitudes are those which show that—perhaps despite initial appearances—the demand for due regard has not been ignored or flouted. When someone explains that the injury they caused you was entirely unforeseen and accidental, they indicate that their regard for your welfare was not insufficient and that they are, therefore, not an appropriate target of the attitudes involved in blame.
An agent who excuses themselves from blame in the above way is not calling into question their status as a generally responsible agent: they are still open to the demand for due regard and liable, in principle, to reactive responses. Other agents, however, may be inapt targets for blame and the reactive emotions precisely because they are not legitimate targets of a demand for regard. In these cases, an agent is not excused from blame, they are exempted from it: it is not that their behavior is discovered to have been non-malicious, but rather that they are recognized as one of whom better behavior cannot reasonably be demanded. (The widely-used terminology in which the above contrast is drawn—“excuses” versus “exemptions”—is due to Watson [1987]2004).
For Strawson, the most important group of exempt agents includes those who are, at least for a time, significantly impaired for normal interpersonal relationships. These agents may be children, or psychologically impaired like the “schizophrenic” (P. Strawson [1962]1993, 51). Alternatively, exempt agents may simply be “wholly lacking … in moral sense” (P. Strawson [1962]1993, 58), perhaps because they suffered from “peculiarly unfortunate … formative circumstances” (P. Strawson [1962]1993, 52). These agents are not candidates for the range of responses involved in our personal relationships because they do not participate in these relationships in the right way for such responses to be sensibly applied to them. Rather than taking up interpersonally-engaged attitudes (that presuppose a demand for respect) toward exempt agents, we take an objective attitude toward them. Such an agent may be regarded merely as “an object of social policy,” something “to be managed or handled or cured or trained” (P. Strawson [1962]1993, 52).
Strawson’s perspective has an important compatibilist upshot. For one thing, Strawson claims that our “commitment to participation in ordinary interpersonal relationships is … too thoroughgoing and deeply rooted for us to take seriously the thought that” the truth of determinism entails that such relationships do not, or should not, exist ([1962]1993, 54); but being involved in these relationships “precisely is being exposed to the range of reactive attitudes” that constitute our responsibility practices ([1962]1993, 54). So, regardless of the truth of determinism, we cannot give up—not entirely at least—these ways of engaging with one another. Strawson also insists that the truth of determinism would not show that human beings generally occupy excusing or exempting conditions. It would not follow from the truth of determinism “that anyone who caused an injury either was quite simply ignorant of causing it or had acceptably overriding reasons for” doing so (P. Strawson [1962]1993, 53; emphasis in original); nor would it follow “that nobody knows what he’s doing or that everybody’s behaviour is unintelligible in terms of conscious purposes or that everybody lives in a world of delusion or that nobody has a moral sense” (P. Strawson [1962]1993, 59).
Strawson argues that learning that determinism is true would not raise general concerns about our responsibility practices. This is because the truth of determinism would not show that human beings are generally abnormal in a way that would call into question their openness to the reactive attitudes: “it cannot be a consequence of any thesis which is not itself self-contradictory that abnormality is the universal condition” (P. Strawson [1962]1993, 54). But it has been noted that while the truth of determinism might not suggest universal abnormality, it may well show that normal human beings are morally incapacitated in a way that is relevant to our responsibility practices (Russell 1992, 298–301). Strawson’s claims that we are too deeply and naturally committed to our reactive-attitude-involving practices to give them up, and that doing so would irreparably distort our moral lives, have also been questioned (Nelkin 2011, 42–45; G. Strawson 1986, 84–120; Watson [1987]2004, 255–58).
A different objection emphasizes the response-dependence of Strawson’s account: that is, the way it explains an agent’s responsibility in terms of the responses that characterize a given community’s responsibility practices, rather than in terms of independent facts about whether the agent is responsible. This feature of Strawson’s approach invites the following reading:
In Strawson’s view, there is no such independent notion of responsibility that explains the propriety of the reactive attitudes. The explanatory priority is the other way around: It is not that we hold people responsible because they are responsible; rather, the idea (our idea) that we are responsible is to be understood by the practice, which itself is not a matter of holding some propositions to be true, but of expressing our concerns and demands about our treatment of one another. (Watson [1987]2004, 222; emphasis in original; see Bennett 1980 for a related, non-cognitivist interpretation of Strawson’s approach)
Strawson’s approach would be particularly problematic if, as the above reading might suggest, it entails that a group’s responsibility practices are—as they stand and however they stand—beyond criticism simply because they are that group’s practices (Fischer and Ravizza 1993, 18).
But there is something to be said from the other side of the debate. It may seem obvious that people are appropriately held responsible only if there are independent facts about their responsibility status. But as Wallace argues, it can be difficult “to make sense of the idea of a prior and thoroughly independent realm of moral responsibility facts” that is separate from our practices and yet to which our practices must answer (1996, 88). For Wallace, giving up on practice-independent responsibility facts doesn’t mean giving up on facts about responsibility; rather, “we must interpret the relevant facts [about responsibility] as somehow dependent on our practices of holding people responsible” (1996, 89). Such an interpretation requires an investigation into our practices, and what emerges most conspicuously, for Wallace, is the degree to which our responsibility practices are organized around a fundamental commitment to fairness (1996, 101). Wallace develops this commitment to norms of fairness into an account of the conditions under which people are appropriately held morally responsible (1996, 103–109). (For a more recent defense of the response-dependent approach to responsibility, see Shoemaker 2017b; for criticism of such approaches, see Todd 2016.)
Due to Strawson’s influence, philosophers often now think of blameworthiness as centrally involving an agent’s being an appropriate object of certain emotions, particularly resentment. (For accounts that focus instead on the appropriateness of guilt, see Carlsson 2017, Clarke 2016, and Duggan 2018, as well as some of the essays in Carlsson 2022).
Emotions seem to have, in some way or other, a representational component, and whether an emotion is fitting in a given context can be assessed, at least in part, in terms of its representational accuracy. So, for example, the emotion of fear may represent its object as dangerous and an episode of fear may be fitting if the object of that emotion is in fact dangerous. (For more, see the entry on emotion.) It is possible, then, to give an account of blameworthiness in terms of the fittingness of resentment, which will involve giving an account of how resentment represents its object. Recent efforts along these lines include Graham (2014), Rosen (2015), and Strabbing (2019), all of whom take resentment to involve certain thoughts, and the fittingness of resentment to depend on the accuracy of these thoughts. As Rosen puts it, “[f]or X to be morally blameworthy for A just is for it to be appropriate to resent X for A, or in other words, for the thoughts implicit in resentment … to be true” (2015, 72). See D’Arms (2022) for criticism of Rosen’s approach. D’Arms and his co-author Jacobson (2023) hold that emotional fittingness is generally not a matter of some thought being true but rather a matter of correct appraisal, though they do conceive of resentment as involving certain thoughts since it is a cognitive “sharpening” of a more basic emotion kind such as anger (2023, 109 note 6).
For Graham, the thought involved in resentment is that the object of blame “has violated a moral requirement of respect” (2014, 408); for Rosen, it is that “[i]n doing A, X showed an objectionable pattern of concern” (2015, 77); for Strabbing, “the following thought partly constitutes resentment: in doing A, S expressed insufficient good will” (2019, 3127). But Rosen and Strabbing find additional thoughts to also be part of resentment. For Rosen, resentment involves not just the thought that another has acted with an objectionable pattern of concern; it also includes “the retributive thought” that the other deserves to suffer for acting as they did (2015, 83; emphasis in original). This will rule out resentment and blame in the case of an agent who violates a moral requirement but who “lacked the capacity to recognize and respond to the reasons for complying with it” since it would be, Rosen claims, unfair to sanction such an agent (2015, 84). (See Wallace 1996 and Watson [1987]2004 for other accounts that impose a fairness condition on resentment in view of its supposed sanctioning nature.) Strabbing argues that resentment is constituted not just by the thought that another showed insufficient good will but also by the thought that the other “could have acted with a better quality of will” (2019, 3129). Again, this will make resentment unfitting in the case of some agents who fail to show proper concern for others.
There is disagreement about whether wrongdoers who faultlessly acquire a commitment to flawed moral values—perhaps as a result of cultural context—are open to blame (for more, see §3.2, §3.10). These wrongdoers may behave permissibly according to their own culturally-supported values, yet they may also act with an objectionable quality of will. Rosen’s and Strabbing’s accounts would explain why resentment might be inappropriate in the case of such wrongdoers: it may be unfair to sanction them or to expect them to act with a better quality of will. On the other hand, if the cognitive content of resentment is narrower than Rosen and Strabbing suggest—if, for example, it involves merely an attribution of ill will—then resentment may be fitting in some of these cases. Alternatively, it may be possible to distinguish between varieties of resentment: there may be a resentment-like emotion partly constituted by relatively narrow cognitive content (i.e., the thought that another acted with ill will), and a distinct resentment-like emotion partly constituted by the broader cognitive content suggested by Rosen and Strabbing. In this case, the wrongdoers in question may be open to a type of resentment that represents them simply as wrongdoers, but not to a more complex type of resentment; see Hieronymi (2014) and Talbert (2014) for suggestions like this.
As noted in §1, a lasting influence of Frankfurt’s work was to draw attention to the actual causes of agents’ behavior, and particularly to whether an agent acted for their own reasons. Reasons-responsiveness approaches have been particularly attentive to these issues. These approaches ground responsibility by reference to agents’ capacities for being appropriately sensitive to the rational considerations that bear on their actions. Interpreted broadly, reasons-responsiveness approaches include a diverse collection of views: Brink and Nelkin (2013), Fischer and Ravizza (1998), McKenna (2013), Nelkin (2011), Sartorio (2016), Wallace (1996), and Wolf (1990). Fischer and Ravizza’s Responsibility and Control (1998) is the most influential articulation of this approach.
Fischer and Ravizza take Frankfurt cases (§1) to show that access to alternatives is not necessary for moral responsibility. Rather, what is required is “guidance control,” which is manifested when an agent guides their behavior in a particular direction, and regardless of whether it was open to them to guide their behavior differently (Fischer and Ravizza 1998, 29–34).
If a person’s behavior is brought about by hypnosis or genuinely irresistible urges, then they may not be morally responsible for their behavior because they do not reflectively guide it in the way required for responsibility (Fischer and Ravizza 1998, 35). More specifically, an agent in the above circumstances is not likely to be responsible because he “is not responsive to reasons—his behavior would be the same, no matter what reasons there were” (Fischer and Ravizza 1998, 37). Thus, Fischer and Ravizza characterize possession of guidance control as dependent on responsiveness to reasons. In particular, guidance control depends on whether the psychological mechanism that issues in an agent’s behavior is responsive to reasons. (Guidance control also requires that an agent owns the mechanism on which they act. According to Fischer and Ravizza, this requires placing historical conditions on responsibility; see §3.9.)
Fischer and Ravizza’s focus on mechanisms is motivated by the following reasoning. In a Frankfurt case, an agent is responsible for an action even though their action is ensured by external factors. But the presence of these external factors means that the agent in a Frankfurt case would have acted the same no matter what reasons they were confronted with. So, the responsible agent in a Frankfurt scenario is not responsive to reasons. Fischer and Ravizza’s solution to this problem is to argue that while the agent in a Frankfurt case may not be responsive to reasons, the agent’s mechanism—“the process that leads to the relevant upshot [i.e., the agent’s action]”—may well be responsive to reasons (1998, 38). In other words, the agent’s generally-specified psychological mechanism might have responded (under counterfactual conditions) to considerations in favor of omitting the action that the agent performed. Fischer and Ravizza thus conclude that “relatively clear cases of moral responsibility”—those in which an agent is not hypnotized, etc.—are distinguished by the fact that “an agent exhibits guidance control of an action insofar as the mechanism that actually issues in the action is his own, reasons-responsive mechanism” (1998, 39).
But how responsive to reasons does an agent’s mechanism need to be? Fischer and Ravizza argue that moderate (as opposed to strong or weak) reasons responsiveness is required for guidance control (1998, 69–85). A mechanism that is moderately responsive to reasons may not be receptive to every sufficient reason to act in a certain way, but it will exhibit “an understandable pattern of (actual and hypothetical) reasons-receptivity” (Fischer and Ravizza 1998, 71; emphasis in original). Such a pattern will indicate that an agent understands “how reasons fit together” and that, for example, “acceptance of one reason as sufficient implies that a stronger reason must also be sufficient” (Fischer and Ravizza 1998, 71). In addition, the desired pattern of regular receptivity to reasons will include receptivity to a range of moral considerations (Fischer and Ravizza 1998, 77; see Todd and Tognazzini 2008 for criticism of Fischer and Ravizza’s articulation of this condition). This will rule out attributing moral responsibility to non-moral agents.
Fischer and Ravizza’s account has generated a great deal of attention and criticism. Some critics focus on the contrast Fischer and Ravizza draw between the capacity for receptivity to reasons and the capacity for reactivity to reasons (McKenna 2005, Mele 2006a, Watson 2001). Others are dissatisfied with their focus on the powers of mechanisms as opposed to agents. This has led some authors to develop agent-based reasons-responsiveness accounts that address the concerns that led Fischer and Ravizza to their mechanism-based approach (Brink and Nelkin 2013, McKenna 2013, Sartorio 2016).
3.1 The “Faces” of Responsibility
Do our responsibility practices accommodate distinct forms of moral responsibility? Interest in this question stems from a debate between Susan Wolf and Gary Watson. Among other things, Wolf’s important 1990 book, Freedom Within Reason, offers a critical discussion of “Real Self” theories of responsibility. On these views, a person is responsible for behavior that is attributable to their real self, and “an agent’s behavior is attributable to the agent’s real self … if she is at liberty (or able) both to govern her behavior on the basis of her will and to govern her will on the basis of her valuational system” (Wolf 1990, 33). A responsible agent is, therefore, not simply moved by their strongest desires; rather, they are moved by desires that the agent endorses insofar as the desires are in conformity either with the agent’s values or with their higher-order desires. Wolf’s central example of a Real Self View is Watson (1975). (In an earlier paper, Wolf 1987 characterizes Watson 1975, Frankfurt 1971, and Taylor 1976 as offering “deep self views.” For more on real-self/deep-self views, see §3.9; for a recent presentation of a real-self view, see Sripada 2016.)
According to Wolf, Real Self views can explain why people acting under the influence of hypnosis or compulsive desires are not responsible (1990, 33). Since these agents are unable to govern their behavior on the basis of their valuational systems, they are alienated from their behavior in a way that undermines responsibility. But for Wolf it is a mark against Real Self views that they are silent on the topic of how agents came to be the way they are. An agent’s real self might be the product of a traumatic upbringing, and Wolf argues that this would give us reason to question the “agent’s responsibility for her real self” and thus her responsibility for the present behavior that issues from that self (1990, 37; emphasis in original). For an account of an agent with such an upbringing, see Wolf’s (1987) fictional example of JoJo; see Watson ([1987]2004) for a related discussion of the convicted murderer Robert Alton Harris. (For discussion of JoJo, see §3.2; for discussion of the relevance of personal history for present responsibility, see §3.9.)
Wolf suggests that when a person’s real self is the product of childhood trauma (or similar factors), then that person is potentially responsible for their behavior only in a superficial sense that merely attributes bad actions to the agent’s real self (1990, 37–40). However, Wolf argues that ascriptions of moral responsibility go deeper than such attributions can reach:
When … we consider an individual worthy of blame or of praise, we are not merely judging the moral quality of the event with which the individual is so intimately associated; we are judging the moral quality of the individual herself in some more focused, noninstrumental, and seemingly more serious way. (1990, 41)
This deeper form of assessment requires more than that an agent is “able to form her actions on the basis of her values”; it also requires that “she is able to form her values on the basis of what is True and Good” (Wolf 1990, 75). This latter ability may be limited in an agent whose real self is the product of pressures (such as a traumatic upbringing) that have impaired their moral competence. (For more on moral competence, see §3.2 .)
In his response to Wolf, Watson ([1996]2004) agrees that some approaches to responsibility—i.e., self-disclosure views (a phrase Watson borrows from Benson 1987)—focus narrowly on whether behavior is attributable to an agent. But Watson denies that these attributions constitute a merely superficial form of assessment. Behavior that is attributable to an agent because it issues from their valuational system often discloses something interpersonally and morally significant about the agent’s “fundamental evaluative orientation” (Watson [1996]2004, 271). Thus, ascriptions of responsibility in this responsibility-as-attributability sense are “central to ethical life and ethical appraisal” (Watson [1996]2004, 263).
However, Watson agrees with Wolf that there is more to responsibility than attributing actions to agents. In addition, we hold agents responsible for their behavior, which “is not just a matter of the relation of an individual to her behavior” (Watson [1996]2004, 262). When we hold responsible, we also “demand … certain conduct from one another and respond adversely to one another’s failures to comply with these demands” (Watson [1996]2004, 262). The moral demands, and potential for adverse treatment, associated with holding others responsible are part of our accountability (as opposed to attributability) practices, and these features of accountability raise issues of fairness that do not arise in the context of determining whether behavior is attributable to an agent (Watson [1996]2004, 273; also see material in §2.2.3 ). Therefore, conditions may apply to accountability that do not apply to attributability: perhaps “accountability blame” should be—as Wolf suggested—moderated in the case of an agent whose “squalid circumstances made it overwhelmingly difficult to develop a respect for the standards to which we would hold him accountable” (Watson [1996]2004, 281).
So, on Watson’s account, there is responsibility-as-attributability, and when an agent satisfies the conditions on this form of responsibility, behavior is properly attributed to the agent as reflecting morally important features of the agent’s self. But there is also responsibility-as-accountability, and when an agent satisfies the conditions on this form of responsibility, which requires more than the correct attribution of behavior, they can be held accountable for that behavior in the ways that characterize moral blame.
It has become common for the views of several authors to be described (with varying degrees of accuracy) as instances of “attributionism”; see Levy (2005) for the first use of this term. These authors include Adams (1985), Arpaly (2003), Hieronymi (2004), Scanlon (1998, 2008), Sher (2006, 2009), A. Smith (2005, 2008), Schlossberger (2021), and Talbert (2012a). Attributionists take moral responsibility assessments to be concerned with whether an action (omission, character trait, or belief) is attributable to an agent for the purposes of moral assessment, where this usually means that the action (or omission, etc.) reflects the agent’s “judgment sensitive attitudes” (Scanlon 1998), “evaluative judgments” (A. Smith 2005), or, more generally, the agent’s “moral personality” (Hieronymi 2008).
Attributionism resembles the self-disclosure views mentioned by Watson (see the previous subsection) insofar as both focus on the way that a responsible agent’s behavior discloses morally significant features of the agent’s self. However, attributionists are interested in more than specifying the conditions for what Watson calls responsibility-as-attributability. Attributionists take themselves to give conditions for holding agents responsible in Watson’s accountability sense. (See the previous subsection for the distinction between accountability and attributability.)
According to attributionism, fulfillment of attributability conditions is sufficient for holding agents accountable for their behavior. This means that attributionism rejects conditions on moral responsibility that would excuse agents if their characters were shaped under adverse conditions (Scanlon 1998, 278–85), or if the thing for which the agent is blamed was not under their control (Sher 2006b and 2009, A. Smith 2005), or if the agent can’t be expected to recognize the moral status of their behavior (Scanlon 1998, 287–290; Talbert 2012a). Attributionists reject these conditions on responsibility because morally significant behavior is attributable to agents who do not fulfill them. Attributionists have also argued that blame may profitably be understood as a form of moral protest (Hieronymi 2001, A. Smith 2013, Talbert 2012a); part of the appeal of this move is that moral protests may be legitimate in cases in which the above conditions are not met.
Some argue that attributionists are wrong to reject the conditions on responsibility mentioned in the last paragraph (Levy 2005, 2011; Shoemaker 2011, 2015; Watson 2011). It has also been argued that the attributionist account of blame is too close to mere negative appraisal (Levy 2005; Wallace 1996, 80–1; Watson 2002). In addition, Scanlon (2008) has been criticized for failing to take negative emotions such as resentment to be central to the phenomenon of blame (Wallace 2011, Wolf 2011; the criticism could also be applied to Sher 2006). For overviews of attributionism, see Schlossberger (2021) and Talbert (2022).
Building on the distinction between attributability and accountability ( §3.1.1 ), David Shoemaker (2011 and 2015) introduces a third form of responsibility: answerability. On Shoemaker’s view, attributability-responsibility assessments respond to facts about an agent’s character, accountability-responsibility responds to an agent’s degree of regard for others, and answerability-responsibility responds to an agent’s evaluative judgments. A. Smith (2015) and Hieronymi (2008 and 2014) use “answerability” to refer to a view more like the attributionist perspective described in the previous subsection, and Pereboom (2014) has used the term to indicate a form of responsibility more congenial to responsibility skeptics.
Possession of moral competence—the ability to recognize and respond to moral considerations—is often taken to be a condition on moral responsibility. Wolf’s (1987) story of JoJo illustrates this proposal. JoJo was raised by an evil dictator and becomes the same sort of sadistic tyrant that his father was. JoJo is happy to be the sort of person that he is, and he is moved by precisely the desires (e.g., to imprison and torture his subjects) that he wants to be moved by. Thus, JoJo fulfills important conditions on responsibility (see, in particular, the discussion of structural accounts of responsibility in §3.9 ); however, Wolf argues that it may be unfair to hold JoJo responsible for his objectionable behavior.
JoJo’s upbringing plays an important role in Wolf’s argument, but only because it left JoJo unable to appreciate the wrongfulness of his behavior. It is JoJo’s impaired moral competence that does the real excusing work, and similar conclusions of non-responsibility should be drawn about others whom we think “could not help but be mistaken about their [bad] values” (Wolf 1987, 57).
Many join Wolf in arguing that impaired moral competence (perhaps on account of one’s upbringing or other environmental factors) undermines moral responsibility (Benson 2001, Fischer and Ravizza 1998, Fricker 2010, Levy 2003, Russell 1995 and 2004, Wallace 1996, Watson [1987]2004). Part of what motivates this conclusion is the thought that it can be unreasonable to expect morally-impaired agents to avoid wrongful behavior, and that it is therefore unfair to expose these agents to the harm of moral blame (also see §2.2.3 and §3.1.1 ). For detailed development of the moral competence requirement on responsibility in terms of considerations of fairness, see Wallace (1996); also see Kelly (2013), Levy (2009), and Watson ([1987]2004). For rejection of the claim that blame is unfair in the case of morally-impaired agents, see several of the defenders of attributionism mentioned in §3.1.2 .
The moral competence condition on responsibility can also be motivated by the suggestion that impaired agents are not able to commit wrongs that have the sort of moral significance to which blame would be an appropriate response. While morally-impaired agents can fail to show appropriate respect for others, these failures do not necessarily constitute the kind of flouting of moral norms that grounds blame (Watson [1987]2004, 234). In other words, a failure to respect others is not always an instance of blame-grounding disrespect for others, since the latter (but not the former) requires the ability to comprehend the norms that one violates (Levy 2007, Shoemaker 2011; for a reply, see Talbert 2012b).
Conversational theories of responsibility construe elements of our responsibility practices as moves in a moral conversation.
Several prominent versions of the conversational approach develop P. F. Strawson’s suggestion ( §2.2.1 ) that the negative reactive attitudes involved in blame are expressions of a demand for moral regard. Considerations about moral competence ( §3.2 ) are relevant here. Watson argues that a demand “presumes,” as a condition on the intelligibility of expressing it, “understanding on the part of the object of the demand” ([1987]2004, 230). Therefore, since, “[t]he reactive attitudes are incipiently forms of communication,” they are intelligibly expressed “only on the assumption that the other can comprehend the message,” and since the message is a moral one, “blaming and praising those with diminished moral understanding loses its ‘point’” (Watson [1987]2004, 230; see Watson 2011 for a modification of his original proposal). Wallace argues, similarly, that since responsibility practices are internal to moral relationships that are “defined by the successful exchange of moral criticism and justification…. It will be reasonable to hold accountable only someone who is at least a candidate for this kind of exchange of criticism and justification” (1996, 164).
Michael McKenna’s Conversation and Responsibility (2012) offers the most developed conversational analysis of responsibility. For McKenna, the “moral responsibility exchange” occurs in stages: an initial “moral contribution” of morally salient behavior; the “moral address” of, e.g., blame that responds to the moral contribution; the “moral account” in which the first contributor responds to moral address with, e.g., apology; and so on (2012, 89). Like Wallace and Watson, McKenna notes the way in which a morally-impaired agent will find it difficult “to appreciate the challenges put to her by those who hold [her] morally responsible,” but he also argues that a sufficiently impaired agent cannot even make the first move in a moral conversation (2012, 78). Thus, a morally-impaired agent’s responsibility is called into question not only because they are unable to respond appropriately to moral demands, but also because “she is incapable of acting from a will with a moral quality that could be a candidate for assessment from the standpoint of holding responsible” (McKenna 2012, 78). This is related to Levy’s and Shoemaker’s contention ( §3.2 ) that impairments of moral competence can leave an agent unable to express the type of ill will to which blame responds. By contrast, Watson (2011) allows that significant moral impairment is compatible with the ability to perform blame-relevant wrongdoing, even if such impairment undermines the wrongdoer’s moral accountability for their actions.
For another important account of responsibility in broadly conversational terms, see Shoemaker’s discussion of the sort of moral anger involved in holding others accountable for their behavior (2015, 87–117). For additional defenses and articulations of the conversational approach to responsibility, see Darwall (2006), Fricker (2016), and Macnamara (2015).
It was suggested above that blame may amount to the expression of a moral demand. Macnamara (2013) argues, to the contrary, that blame is not helpfully construed in such terms, and that the prospects for construing praise as a demand are even worse. Macnamara suggests that we should interpret both blame and praise as ways of recognizing the moral significance of behavior, and as calling on the blamed and the praised to express similar recognitions of the quality of their actions. In successful cases, this will involve the target of blame being subject to feelings of guilt or remorse, and the target of praise being subject to feelings of self-approbation. Similarly, Telech (2021) interprets praise not as issuing a demand but rather as issuing an invitation to the praiseworthy person to accept moral credit by jointly (i.e., with the praiser) valuing what was creditworthy in their action.
A number of philosophers have recently investigated the conditions under which one may lack the standing to hold another person morally responsible. With respect to blame, the thought is that a blamer can, for one reason or another, lack the authority to blame even if the one they blame is blameworthy. There is disagreement about whether the authority just mentioned amounts to a right that permits one to blame or whether it also involves a normative power to issue a demand for some appropriate response (e.g., an apology). With respect to the first possibility, standingless blame is pro tanto impermissible because one lacks the right to blame; with respect to the second possibility, standingless blame fails to generate imperatives for the blamee. (For the distinction just mentioned, see Fritz and Miller 2022; for accounts of the normative power involved in this context, see Edwards 2019 and Piovarchy 2020). There is also uncertainty in the literature about whether lack of standing should inhibit only overt blaming responses or whether private blame—which may amount only to a blamer’s being subject to otherwise fitting emotional responses (see §2.2.3 )— can also be ruled out on grounds of lack of standing.
Several conditions on standing to blame have been proposed, but most attention has been given to two: the no-meddling condition (where one has standing to blame only if blame would not amount to an inappropriate intrusion into the affairs of others—see McKiernan 2016 and Seim 2019) and the non-hypocrisy condition (where one has standing to blame only if they can do so non-hypocritically). Of these two conditions, the second has received more attention.
In a case of hypocritical blame, one blames another for violating a norm that they themselves have unrepentantly violated. Wallace (2010) argues that the hypocritical blamer is open to a distinct moral objection that undermines their standing to blame. The basis for this objection is that the hypocritical blamer denies “the presumption of the equal standing of persons” (Wallace 2010, 330). This presumption—constitutive, Wallace argues, of the moral practice in which the hypocritical blamer is engaged—is denied because the hypocritical blamer takes themselves to remain insulated from blame yet does not take the similarly-morally-positioned target of their blame to enjoy the same protection. (Wallace takes the hypocrite to lack standing not just for expressions of blame but also for the private experience of blaming emotions.)
Fritz and Miller (2018) say that the hypocritical blamer has a “differential blaming disposition”: they are disposed to blame another but not themselves, where there is no morally relevant difference that would justify this. This makes hypocritical blame unfair, which provides “a moral reason that counts against blaming” in contexts of hypocrisy (Fritz and Miller 2018, 122). (It could just as well be concluded that the hypocritical blamer has moral reason to blame more rather than less: that is, they have reason to extend their blame to themselves. A hypocritical blamer may regain standing to blame in this way; see Fritz and Miller 2018 and Todd 2019.) For Fritz and Miller, the unfairness of a differential blaming disposition accounts for what is objectionable in hypocritical blame. To motivate the conclusion that the hypocritical blamer lacks standing to blame, they argue that our right to blame others is grounded in the fact that persons are morally equal. Since “hypocrisy involves at least an implicit rejection of the equality of persons” (Fritz and Miller 2018, 125), the hypocritical blamer rejects the very thing that would ground their right to blame, so they lack standing to blame.
Todd (2019) objects to the preceding accounts, arguing that “we cannot derive the non-hypocrisy condition from facts about the equality of persons” (2019, 371). Against Fritz and Miller, Todd argues that reliance on the equality of persons gives an unwelcome result: it entails that a merely inconsistent blamer lacks standing to blame. If A is disposed, for no good reason, to differentially blame B and C , then A has a differential blaming disposition. So does A , like the hypocritical blamer, lose standing to blame B and C ? For his own part, Todd suggests that we may not be able to derive the non-hypocrisy condition from anything more basic (such as considerations about rights or equality), but perhaps we can at least give a partially unifying account of what lack of standing to blame involves. Failure to meet an important subset of standing conditions involves, Todd argues, a blaming agent’s own lack of sufficient commitment to the moral values that the agent blames others for failing to sufficiently respect. For other defenses of this “commitment” view, see Lippert-Rasmussen 2020, Riedener 2019, and Rossi 2018.
In arguing against the non-hypocrisy condition, Bell (2013) notes that “people may … evince a wide variety of moral faults through their blame: they can show meanness, pettiness, stinginess, arrogance, and so on” (2013, 275). But since the arrogant blamer does not clearly lack standing to blame, perhaps we need not conclude that the hypocritical blamer lacks such standing. After all, some of the aims of blame—educating the blamer or providing them with motivation to avoid further wrongdoing—are obtainable even if the one who blames does so hypocritically (Bell 2013, 275). See Fritz and Miller (2018) for a reply to Bell on these points.
King (2019) is also skeptical about a standing condition on blame. He argues (i) that the prospects are dim for giving a plausible account of the right on which standing to blame is supposed to rest, and (ii) that we can appeal to something other than standing to account for what goes wrong in cases of hypocritical and meddling blame. In both cases, the objectionable blamer simply has reason not to blame; rather, they ought to attend to something else (to their own business in the meddling case, to their own faults in the hypocrisy case).
Standing conditions may also apply to praise. Telech (2021) notes that one who lacks an appropriate commitment to the values that a praiseworthy person respects may not be correctly positioned to offer praise: the praiseworthy person may reasonably reject such a praiser’s invitation to accept moral credit (2021, 172). Jeppsson and Brandenburg (2022) argue that hypocritical praise may fail to respect the equality of persons: If A praises B for a type of action that A is not committed to performing, this may indicate that A holds B to a higher standard than the one to which A holds themselves. And what if A is partly responsible for B having to exert themselves in a praiseworthy way? Here, B may rightly ask of A, “Who are you to praise me?” (Jeppsson and Brandenburg 2022, 671; emphasis in original). Finally, Lippert-Rasmussen (2021) has argued that a person may lack standing to praise themselves when they do so hypocritically—that is, when they would not praise another on the same grounds that they praise themselves.
It’s widely held that moral agents can be responsible not just for actions but also for the causal outcomes of their actions. This can be accounted for by appeal to derivative responsibility : an agent’s responsibility for an outcome may derive from their responsibility for a causally related action. Responsibility for outcomes also involves an epistemic condition: the responsible agent must have been aware of—or at least it must be that they could have and should have been aware of—the likely consequences of their actions. (The last point is related to the material in §3.10 .) Carolina Sartorio collects these elements in her Principle of Derivative Responsibility : “If an agent is responsible for X, X causes Y, and the relevant epistemic conditions for responsibility obtain, then the agent is also responsible for Y” (2016, 76). Blameworthiness for outcomes can perhaps be accounted for in a related way: if an agent fulfills the relevant causal and epistemic conditions on responsibility with respect to some outcome, and they fulfill those conditions in a way that makes them blameworthy, then the agent is blameworthy for the outcome. For proposals along these lines, see Sartorio’s Principle of Derivative Blameworthiness (2016, 77) as well as Björnsson (2017b) and Gunnemyr and Touborg (2023).
If an agent can be responsible for an outcome in virtue of some earlier action, can they also be responsible for an outcome in virtue of an omission? But what are omissions? Are they constituted by other actions that an agent performs, or are omissions simply absences? In the latter case, it may be difficult to see how omissions—being absences—can enter into causal relations with events such as outcomes. But even if omissions are not, strictly speaking, causes, they may still be related to outcomes in a way that is sufficient to support responsibility: when someone fails to act, it may be quite pertinent that an outcome occurs that would not have occurred had the agent not omitted the action in question. For development of this idea, see Clarke (2014, Chapter 2) and Sartorio (2016, Chapter 2) as well as the authors they cite, particularly Dowe (2000). For another important account of responsibility for omissions, see Fischer and Ravizza (1998, Chapter 5). Clarke (2014) offers a valuable treatment of many issues associated with omissions; also see the essays in Nelkin and Rickless (2017a).
If responsibility for outcomes partly depends on the obtaining of causal (or related) relationships, then factors that affect judgments about causation may also affect judgments about moral responsibility. For example, if different theories of causation yield different answers to the question of whether an agent caused an outcome, they may also yield different answers to questions about the agent’s responsibility for the outcome (Bernstein 2017). And in cases of group causation, it may be that the addition or subtraction of causal contributors will affect judgments about the degree to which any individual in the group caused the outcome; again, a corresponding effect on judgments about individual responsibility should be expected. (See Bernstein 2017 and Sartorio 2015 for the last point; both authors note that a form of moral luck may be in play here since whether an agent is part of a larger or a smaller group of causal contributors may be beyond the agent’s control; regarding moral luck, see §3.7 .) There may also be cases in which it is simply indeterminate what an agent has caused, and judgments about responsibility in these cases may likewise be indeterminate (Bernstein 2016).
In contrast to the tenor of the discussion so far, Kutz (2000) argues that founding responsibility on causal connections can—at least in cases of group agency—lead to counterintuitive results. Kutz’s central example is the Allied bombing campaign that destroyed the German city of Dresden in WWII (2000, 116–24). Far more bombs and bombers were used in the raid than were required to destroy the city, and each bomber pilot might plausibly claim that their causal contribution made no difference to that outcome. Kutz argues that, for the purposes of assessing individual moral accountability, we should refer not to individual causal contributions but rather to the pilots’ overlapping intentions and attitudes that led them to participate in the raid on Dresden.
Lawson (2013) develops an account similar to Kutz’s; Petersson (2013) objects to Kutz and defends the importance of individual causal contributions for assessing responsibility. Sinnott-Armstrong (2005) and Nefsky (2017) are other important investigations of the problem of how to assess non-difference-making causal contributions. Nefsky argues that an individual can make non-superfluous contributions to preventing or bringing about an outcome even if their contributions do not decide whether the outcome occurs. Gunnemyr and Touborg’s (2023) emphasis on the way that individual, non-difference-making causal contributions may increase or decrease the “security” of an outcome is also relevant here. Kaiserman (2024) applies a view developed in Kaiserman (2016) to cases like Kutz’s, arguing that an agent can partly contribute to an outcome even if there is no identifiable part of the outcome that they caused.
Positing responsibility for outcomes may involve a commitment to outcome moral luck ( §3.7 ) because while an agent may control their action, whether that action leads to a certain outcome is typically not entirely within the agent’s control. Skepticism about outcome moral luck may thus lead to skepticism about responsibility and blameworthiness for outcomes. Perhaps agents are never responsible for outcomes but only for their action-explaining motives and intentions, or for exercising their will in a certain way. The same may be true of blameworthiness. Andrew Khoury argues that “the only things that one can be blameworthy for are those things that make one blameworthy,” and for Khoury, it is only the moral quality of our “willings,” and never the outcomes to which these willings may lead, that can make us blameworthy (Khoury 2018, 1363). Also see Graham (2014) and (2017) for important contributions in this vein.
If moral responsibility requires free will and free will requires a type of access to alternatives that is not compatible with determinism (see §1 ), then it follows that if determinism is true, no one is ever morally responsible for their behavior. The above reasoning, and the skeptical conclusion it reaches about responsibility, is endorsed by the hard determinist perspective on free will and responsibility, which was defended historically by Spinoza and d’Holbach (among others) and more recently by Honderich (2002). But given that determinism may well be false, contemporary skeptics about responsibility more often pursue a hard incompatibilist line of argument according to which the kind of free will required for desert-based (as opposed to forward-looking, see §2.1 ) moral responsibility is incompatible with the truth or falsity of determinism (Pereboom 2001, 2014).
Discussion of skeptical positions that do not depend on the truth of determinism can be found in each of the four subsections below. For additional skeptical accounts, see Smilansky (2000) and Waller (2011); also see the entry on skepticism about moral responsibility .
A person is subject to moral luck if factors that are not under that person’s control affect the moral assessments to which they are open (Nagel [1976]1979; also see Williams [1976]1981 and the entry on moral luck .)
Can luck affect moral responsibility? Consider an unsuccessful assassin who shoots at their target but misses because their bullet is deflected by a passing bird. This assassin has good outcome moral luck . Because of factors beyond their control, their moral record is better than it might have been: they are not a murderer and not morally responsible for causing anyone’s death. One might think, in addition, that an unsuccessful assassin is less blameworthy than a successful assassin with whom they are otherwise identical, and that the reason for this is just that the successful assassin intentionally killed someone while the unsuccessful assassin did not. (For important recent defenses of moral luck, see Hanna 2014 and Hartman 2017.)
On the other hand, one might think that if the two assassins are identical in terms of their values, goals, intentions, and motivations, then the addition of a bit of luck to the unsuccessful assassin’s story cannot ground a deep contrast between the two in terms of their moral responsibility. One way to sustain this position is to argue that moral responsibility is a function solely of internal features of agents, such as their motives and intentions (Graham 2014 and Khoury 2018; also see §3.5 ; see Enoch and Marmor 2007 for the main arguments against moral luck). Of course, the successful assassin is responsible for something (killing a person) for which the unsuccessful assassin is not, but perhaps both are responsible—and presumably blameworthy—to the same degree insofar as it was true of both that they aimed to kill, and that they did so for the same reasons and with the same commitment toward bringing about that outcome (M. Zimmerman 2002 and 2015).
But now consider a different would-be assassin who does not even try to kill anyone, but only because their circumstances did not favor this option. This would-be assassin is willing to kill under favorable circumstances (so they may have had good circumstantial moral luck since they were not in those circumstances). Perhaps the degree of responsibility attributed to the successful and unsuccessful assassins described in the previous paragraph depends not so much on the fact that they both tried to kill as on the fact that they were both willing to kill, and the would-be assassin may share the same degree of responsibility since they share the same willingness to kill. But an account that focuses on what agents would be willing to do under counterfactual circumstances is likely to generate unintuitive conclusions about responsibility since many agents who are typically judged blameless might willingly perform terrible actions under the right circumstances. (M. Zimmerman 2002 and 2015 does not shy away from this consequence, but critics—Hanna 2014, Hartman 2017—have made much of it; see Peels 2015 for a position related to Zimmerman’s that may avoid the unintuitive consequence just mentioned.)
Once luck is taken fully into account, there is reason to worry that responsibility may be generally undermined. Consider constitutive moral luck: luck in how one is constituted in terms of the “inclinations, capacities, and temperament” one finds within oneself (Nagel [1976]1979, 28). Facts about a person’s inclinations, capacities, and temperament explain much—if not all—of that person’s behavior, and if the facts that explain why a person acts as they do are a result of good or bad luck, then perhaps it is unfair to hold them responsible for their behavior. And as Nagel notes, once the full sweep of the various kinds of luck comes into view, “[t]he area of genuine agency” may shrink to nothing since our actions and their consequences “result from the combined influence of factors, antecedent and posterior to action, that are not within the agent’s control” ([1976]1979, 35). If this is right, then perhaps, “nothing remains which can be ascribed to the responsible self, and we are left with nothing but a … sequence of events, which can be deplored or celebrated, but not blamed or praised” (Nagel [1976]1979, 37).
Nagel doesn’t fully embrace a skeptical conclusion about responsibility on the above grounds, but others have done so, most notably, Neil Levy (2011). According to Levy’s “hard luck view,” the encompassing nature of moral luck means “that there are no desert-entailing differences between moral agents” (2011, 10). There are differences between agents in terms of their characters and the good or bad actions and outcomes that they produce, but Levy’s point is that, given the influence of luck in generating these differences, they don’t provide a sound basis for differential treatment of people in terms of moral praise and blame. (See Russell 2017 for a compatibilist account that leads to a variety of pessimism, though not skepticism, on the basis of the concerns about moral luck.)
Galen Strawson’s Basic Argument concludes that “we cannot be truly or ultimately morally responsible for our actions” (1994, 5). (Since the argument targets “ultimate” responsibility, it does not necessarily exclude other forms, such as forward-looking responsibility [ §2.1 ] and, on some understandings, responsibility-as-attributability [ §3.1.1 ].) The argument begins by noting that agents make the choices they do because of what seems choiceworthy to them. (This is related to the discussion of constitutive moral luck in §3.7 .) So, in order to be responsible for their choices, agents must be responsible for the fact that certain things seem choiceworthy to them. But how can agents be responsible for these prior facts about themselves? Wouldn’t this require a prior choice on the part of the agent, one that resulted in their present disposition to see certain ends as choiceworthy? But this prior choice would itself be something for which the agent would be responsible only if the agent is also responsible for the fact that the prior choice seemed choiceworthy to them. A regress looms here, and Strawson claims that it cannot be stopped except by positing an initial act of self-creation on the responsible agent’s part (G. Strawson 1994, 5, 15). But self-creation is impossible, so no one is ever ultimately responsible for their behavior.
A number of replies to this argument are possible. One might simply deny that how a person came to be the way they are matters for present responsibility: perhaps all we need to know in order to judge a person’s responsibility are facts about their present constitution and about how that constitution is related to the person’s present behavior. (For views like this, see the discussion of attributionism [ §3.1.2 ] and the discussion of non-historical accounts of responsibility in the next subsection). Alternatively, one might think that while personal history matters for moral responsibility, Strawson’s argument sets the bar too high (see Fischer 2006; for a reply, see Levy 2011, 5). Perhaps what is needed is not literal self-creation, but simply an ability to enact changes in oneself so as to acquire responsibility for the self that results from these changes (Clarke 2005). A picture along these lines can be found in Aristotle’s suggestion (in Book III of the Nicomachean Ethics ) that one can be responsible for being a careless person if one’s present state of carelessness is the result of earlier choices that one made (also see Moody-Adams 1990).
Roughly in this Aristotelian vein, Robert Kane offers an incompatibilist account of how an agent can be ultimately responsible for their actions (1996 and 2007). On Kane’s view, for an agent “to be ultimately responsible for [a] choice, the agent must be at least in part responsible by virtue of choices or actions voluntarily performed in the past for having the character and motives he or she now has” (2007, 14; emphasis in original). This position may appear to be open to the regress concerns presented in Strawson’s argument above, but Kane thinks a regress is avoided in cases in which a person’s character-forming choices are undetermined. Since these undetermined choices will have no sufficient causes, there is no relevant prior cause for which the agent must be responsible, so there is no regress problem (Kane 2007, 15–16; see Pereboom 2001, 47–50 for criticism).
Of particular interest to Kane are potential character-forming choices that occur “when we are torn between competing visions of what we should do or become” (2007, 26). In such cases, if a person sees reasons in favor of either choice that they might make, and the choice that they make is undetermined, then whichever choice they make will have been chosen for their own reasons. According to Kane, when an agent makes this kind of choice, they shape their own character, and since the agent’s choice is not determined by prior causal factors, they are responsible for that choice, for the character shaped by it, and for the character-determined choices that the agent may make in the future.
Accounts such as Levy’s (2011) and G. Strawson’s (1994), described in the two preceding subsections, argue that a person’s present responsibility can depend on facts about the way that person came to be as they are. But non-historical views, such as attributionism ( §3.1.2 ) and the views that Susan Wolf calls “Real Self” theories ( §3.1.1 ), reject this contention. Real Self accounts are sometimes referred to as “structural” or “hierarchical” theories. By whatever name, the basic idea is that an agent is morally responsible insofar as their will has the right structure: in particular, there needs to be an appropriate relationship between the desires that actually move an agent and that agent’s values, or between the desires that move an agent and that agent’s higher-order desires, the latter of which are the agent’s reflective preferences about which desires should move them. (For approaches along these lines, see Dworkin 1987; Frankfurt 1971, 1987; and Watson 1975.)
Harry Frankfurt’s comparison between a willing drug addict and an unwilling addict illustrates important features of his version of the structural approach to responsibility. Both of Frankfurt’s addicts strongly desire to take the drug to which they are addicted and these first-order desires will ultimately move both addicts to take the drug. But the addicts have different higher-order perspectives on their first-order desire to take the drug. The willing addict endorses and identifies with his addictive desire, but the unwilling addict repudiates his addictive desire to such an extent that, when it ends up being effective, Frankfurt says that this addict is “helplessly violated by his own desires” (1971, 12). The willing addict has a kind of freedom that the unwilling addict lacks: they may both act on the desire to take the drug, but insofar as the willing addict is moved by a desire that he endorses, he acts freely in a way that the unwilling addict does not (Frankfurt 1971, 19). A related conclusion about responsibility may be drawn: perhaps the unwilling addict’s addictive desire is alien to him in such a way that his responsibility for acting on it is called into question (for a recent defense of this conclusion, see Sripada 2017).
Frankfurt assumes that an agent’s higher-order desires have the authority to speak for the agent—they reveal (or constitute) the agent’s “real self,” to use Wolf’s language (1990). But if higher-order desires are invoked out of a concern that an agent’s lower-order desires may not speak for the agent, why won’t the same worry recur with respect to higher-order desires? When ascending through the orders of desires, why stop at any particular point? Why not think that appeal to a still higher order is always necessary to reveal where an agent stands? See Watson (1975) for this objection, which partly motivates Watson—in his articulation of a structural approach—to focus on whether an agent’s desires conform with their values , rather than with their higher-order desires.
Even if one agrees with Frankfurt about the structural elements required for responsibility, one might wonder how an agent’s will came to have its particular structure. An objection to Frankfurt’s view notes that the relevant structure might have been put in place by factors that intuitively undermine responsibility, in which case the presence of the relevant structure is not sufficient for responsibility (Fischer and Ravizza 1998, 196–201; Locke 1975). Fischer and Ravizza argue that “[i]f the mesh [between higher- and lower-order desires] were produced by … brainwashing or subliminal advertising … we would not hold the agent morally responsible for his behavior” because the psychological mechanism that produced the behavior would not be, “in an important intuitive sense, the agent’s own ” (1998, 197; emphasis in original). In response to this type of worry, Fischer and Ravizza argue that responsibility has a historical component, which they attempt to capture with their account of how agents can “take responsibility” for the psychological mechanism that produces their behavior (1998, 207–239). (For criticism of Fischer and Ravizza’s account of taking responsibility, see Levy 2011, 103–106 and Pereboom 2001, 120–22; for elaboration and defense of Fischer and Ravizza’s account, see Fischer 2004; for quite different accounts of taking responsibility, see Enoch 2012; Mason 2019, 179–207; and Wolf 2001. For work on the general significance of personal histories for responsibility, see Christman 1991, Vargas 2006, and D. Zimmerman 2003.)
Part of Fischer and Ravizza’s motivation for developing their account of “taking responsibility” was to ensure that agents who have been manipulated in certain ways do not count as responsible on their view. Several examples and arguments featuring the sort of manipulation that worries Fischer and Ravizza have played important roles in the recent literature on responsibility. One of these is Alfred Mele’s Beth/Ann example (1995, 2006b), which emphasizes the difficulties faced by accounts of responsibility that eschew historical conditions. Ann has acquired her preferences and values in the normal way, but Beth is manipulated by a team of neuroscientists so that she now has preferences and values that are identical to Ann’s. After the manipulation, Beth reflectively endorses her new values. Such endorsement might be a sign of the self-governance associated with responsibility, but Mele argues that Beth, unlike Ann, exhibits merely “ersatz self-government” since Beth’s new values were imposed on her (1995, 155). And if other kinds of personal histories similarly undermine an agent’s ability to authentically govern their behavior, then agents with these histories will not be morally responsible. For replies to Mele and general insights into manipulation cases, see Arpaly (2003), King (2013), and Todd (2011); for discussion of issues about personal identity that arise in manipulation cases, see Khoury (2013), Matheson (2014), and Shoemaker (2012).
One can take a hard line in Beth’s case (McKenna 2004). That is, one might note that while Beth acquired her new values in a strange way, everyone acquires their values in ways that are not fully under their control. Indeed, following Galen Strawson’s (1994) line of argument (described in §3.8 ), it might be noted that no one has ultimate control over their values, and even if normal agents have some capacity to address and alter their values, the dispositional factors that govern use of this capacity ultimately result from factors beyond agents’ control. Perhaps, then, Beth is not so easily distinguished from normal agents; perhaps she is just as responsible as they are. But this reasoning can cut both ways: instead of showing that Beth is assimilated into the class of normal, responsible agents, it might show that normal agents are assimilated into the class of non-responsible agents. Derk Pereboom’s four-case argument reasons along these lines (1995, 2001, 2007, 2014). (The “zygote argument” is also relevant here; see Mele 1995, 2006b, and 2008.)
Pereboom’s argument presents four scenarios involving Plum in which Plum kills White while satisfying the conditions on moral responsibility most often proposed by compatibilists (and described in earlier sections of this entry). In Case 1, Plum is “created by neuroscientists, who … manipulate him directly through the use of radio-like technology” (Pereboom 2001, 112). These scientists cause Plum’s reasoning to take a certain path that culminates in Plum deciding to kill White. Pereboom believes that Plum is clearly not responsible for killing White in Case 1 since his behavior was determined by the neuroscientists. In Cases 2 and 3, Plum is causally determined to undertake the same reasoning process as in Case 1, but in Case 2 Plum is merely “programmed” to do so by neuroscientists, and in Case 3 Plum’s reasoning is the result of socio-cultural influences that determine his character. In Case 4, Plum is a normal human being in a causally deterministic universe, and he decides to kill White in the same way as in the previous cases.
Pereboom claims that there is no relevant difference between Cases 1, 2, and 3, so judgments about Plum’s responsibility should be the same in these cases. Plum is not responsible in these cases because his behavior is causally determined by forces beyond his control (Pereboom 2001, 116). But then, Pereboom argues, we should conclude that Plum is not responsible in Case 4 since causal determinism is the defining feature of that case, and the same conclusion should apply to anyone living in a causally deterministic universe.
A possible reply to Pereboom is that the manipulation to which Plum is subjected in Case 1 undermines his responsibility for some other reason besides the fact that it causally determines his behavior. This would stop the generalization of non-responsibility from Case 1 to the subsequent cases. (See Demetriou (Mickelson) 2010, Fischer 2004, Mele 2005; for a response, see Matheson 2016; Pereboom addresses this concern in his 2014 presentation of the argument; also see Shabo 2010). Alternatively, it might be argued, on compatibilist grounds, that Plum is responsible in Case 4 and that this conclusion should be extended to the earlier cases since Plum fulfills the same compatibilist conditions on responsibility in those cases (McKenna 2008).
The four-case argument attempts to show that if determinism is true, then we cannot be the sources of our actions in the way required for moral responsibility. It is, therefore, an argument for incompatibilism rather than for skepticism about moral responsibility. But in combination with Pereboom’s argument that we lack the sort of free will required for responsibility even if determinism is false (2001, 38–88; 2014, 30–70), the four-case argument has emerged as an important motivation for skepticism about responsibility.
There has been a recent surge in interest in the epistemic condition on responsibility (as opposed to the freedom or control condition that is at the center of the free will debate).
Sometimes agents act in ignorance of the bad consequences of their actions, and sometimes their ignorance excuses them from blame. But in other cases, an agent’s ignorance does not excuse them. How can we distinguish the cases where ignorance excuses from those in which it does not? One proposal is that ignorance fails to excuse when the ignorance is itself something for which the agent is to blame. And one proposal for when ignorance is blameworthy is that it issues from a blameworthy benighting act in which an agent culpably impairs, or fails to improve, their epistemic position (H. Smith 1983). In such a case, the agent’s ignorance seems to be their own fault, so it cannot excuse them.
But when is a benighting act blameworthy? Several philosophers, such as Levy (2011), Rosen (2004), and M. Zimmerman (1997), have suggested that agents are culpable for benighting acts only when they perform them knowingly. The idea is that ignorance for which one is blameworthy, and that leads to blameworthy unwitting wrongdoing, must have its source in knowing wrongful behavior. So, if someone unwittingly does something wrong, then that person will be blameworthy only if we can explain their lack of knowledge (their “unwittingness”) by reference to something else that the agent knowingly and wrongfully did. Thus, Rosen concludes that “the only possible locus of original responsibility [for a later unwitting act] is an akratic act … a knowing sin” (2004, 307; emphasis in original). Similarly, Michael Zimmerman argues that “all culpability can be traced to culpability that involves lack of ignorance, that is, that involves a belief on the agent’s part that he or she is doing something morally wrong” (1997, 418). (In certain structural respects, the argument here resembles Galen Strawson’s skeptical argument in §3.8.)
The above reasoning may apply not just to cases in which a person is unaware of the consequences of their action, but also to cases in which a person is unaware of the moral status of their behavior. A slaveowner, for example, might think that slaveholding is permissible, and so, on the account considered here, they will be blameworthy only if they are culpable for their ignorance about the moral status of slavery, which will require that they ignored evidence about its moral status while knowing that this is something that they should not do (Rosen 2003 and 2004).
These reflections can give rise to a couple of forms of skepticism about moral responsibility (and particularly about blameworthiness). One might endorse a form of epistemic skepticism on the grounds that we rarely have insight into whether a wrongdoer knowingly acted wrongly at some suitable point in the history of a given action (Rosen 2004). Alternatively, or in addition, one might endorse a more substantive form of skepticism on the grounds that a great many normal wrongdoers don’t exhibit the sort of knowing wrongdoing supposedly required for responsibility. Perhaps very many wrongdoers don’t know that they are wrongdoers, and their ignorance on this score is not their fault since it doesn’t arise from an earlier instance of knowing wrongdoing. In this case, very many ordinary wrongdoers may fail to be responsible for their behavior. (For skeptical conclusions along these lines, see M. Zimmerman 1997 and Levy 2011.)
There is more to the epistemic dimension of responsibility than what is contained in the above skeptical argument, but the argument does bring out a lot of what is of interest in this domain. For one thing, it prominently relies on a tracing strategy. This strategy is used in accounts that feature a person who does not, at the time of action, fulfill control or knowledge conditions on responsibility, but who nonetheless seems responsible for their behavior. In such a case, the agent’s responsibility may be grounded in the fact that their failure to fulfill certain conditions on responsibility is traceable to earlier actions undertaken by the agent when they did fulfill these conditions (also see the discussion of derivative responsibility in §3.5 ). For example, a person may be so intoxicated that they lack control over, or awareness of, their behavior, and yet it may still be appropriate to hold them responsible for their intoxicated behavior insofar as they freely intoxicated themselves. The tracing strategy plays an important role in many accounts of responsibility (see, e.g., Fischer and Ravizza 1998, 49–51), but it has also been subjected to important criticisms (see Vargas 2005; for a reply see Fischer and Tognazzini 2009; for more on tracing, see Khoury 2012, King 2011, and Shabo 2015).
Various strategies for rejecting the above skeptical argument also illustrate stances one can take on the relationship between knowledge and responsibility. These strategies typically involve rejecting the claim that knowing wrongdoing is fundamental to blameworthiness. It has, for example, been argued that it is often morally blameworthy to perform an action when one is merely uncertain whether the action is wrong (see Guerrero 2007; also see Nelkin and Rickless 2017b and Robichaud 2014). Another strategy would be to argue that blameworthiness can be grounded in cases of morally ignorant wrongdoing if it is reasonable to expect the wrongdoer to have avoided their moral ignorance, and particularly if their ignorance is itself caused by the agent’s own epistemic and moral vices (FitzPatrick 2008 and 2017). Relatedly, it might be argued that one who is unaware that they do wrong is blameworthy if they possessed relevant capacities for avoiding their ignorance; this approach may be particularly promising in cases in which an agent’s lack of moral awareness stems from a failure to remember their moral duties (Clarke 2014, 2017 and Sher 2006, 2009; also see Rudy-Hiller 2017). Finally, it might simply be claimed that morally ignorant wrongdoers can harbor, and express through their behavior, objectionable attitudes or qualities of will that suffice for blameworthiness (Arpaly 2003, Björnsson 2017a, Harman 2011, Mason 2015). This approach may be most promising in cases in which a wrongdoer is aware of the material outcomes of their conduct but unaware of the fact that they do wrong in bringing about those outcomes.
For more, see the entry on the epistemic condition for moral responsibility as well as the essays in Robichaud and Wieland (2017).
I would like to thank Derk Pereboom and Daniel Miller for their helpful comments on drafts of this entry.
Copyright © 2024 by Matthew Talbert <Matthew.Talbert@fil.lu.se>
The Stanford Encyclopedia of Philosophy is copyright © 2024 by The Metaphysics Research Lab , Department of Philosophy, Stanford University
Library of Congress Catalog Data: ISSN 1095-5054
Guest Essay
By Melissa B. Jacoby
Ms. Jacoby is the author of the forthcoming book “Unjust Debts: How Our Bankruptcy System Makes America More Unequal.”
When Purdue Pharma filed for Chapter 11 bankruptcy in 2019 , it had over a billion dollars in the bank and owed no money to lenders. But it also had the Sacklers, its owners, who were eager to put behind them allegations that they played a leading role in the national opioid epidemic.
The United States Supreme Court is now considering whether the bankruptcy system should have given this wealthy family a permanent shield against civil liability. But there is a bigger question at stake, too: Why is a company with no lenders turning to the federal bankruptcy system in response to accusations of harm and misconduct?
The maker of OxyContin is one in a long line of companies that have turned Chapter 11 into a legal Swiss Army knife, tackling problems that are a mismatch for its rules. Managing costly and sprawling litigation through bankruptcy can be well intentioned. But Chapter 11 was designed around the goal of helping financially distressed businesses restructure loans and other contract obligations.
If companies instead turn to bankruptcy to permanently and comprehensively cap liability for wrongdoing — the objective not only of Purdue Pharma but also of many other entities over recent decades — they can shortchange the rights of individuals seeking accountability for corporate coverups of toxic products and other wrongdoing. And in a country that relies on lawsuits and the civil justice system to deter corporate malfeasance, permanently capping liability using a procedure focused primarily on debt and money could be making us less safe.
In 1978, a bipartisan group of lawmakers enacted sweeping reforms to American bankruptcy law. To enhance economic value and keep viable businesses alive for the benefit of workers and other stakeholders, these changes gave companies more protection and control in bankruptcy. This new bankruptcy code also made it easier to alter the legal rights of creditors during and after bankruptcy without their consent.
To provide more sweeping protection to a distressed but viable company, the new bankruptcy laws also expanded the definition of “creditor” to include people allegedly injured by the business. Yet the rules governing Chapter 11 were drafted primarily with loans and contracts, not large numbers of harmed individuals, in mind.
ChatGPT is an artificial intelligence (AI) chatbot that uses natural language processing to create humanlike conversational dialogue. The language model can respond to questions and compose various written content, including articles, social media posts, essays, code and emails.
ChatGPT is a form of generative AI -- a tool that lets users enter prompts to receive humanlike images, text or videos that are created by AI.
ChatGPT is similar to the automated chat services found on customer service websites, as people can ask it questions or request clarification of its replies. The GPT stands for "Generative Pre-trained Transformer," which refers to how ChatGPT processes requests and formulates responses. ChatGPT was trained with reinforcement learning from human feedback, in which reward models rank candidate responses; this feedback is used to fine-tune the model and improve future responses.
OpenAI -- an artificial intelligence research company -- created ChatGPT and launched the tool in November 2022. OpenAI was founded in 2015 by a group of entrepreneurs and researchers including Elon Musk and Sam Altman, and is backed by several investors, with Microsoft being the most notable. OpenAI also created Dall-E , an AI text-to-art generator.
ChatGPT works through its Generative Pre-trained Transformer, which uses specialized algorithms to find patterns within data sequences. ChatGPT originally used the GPT-3 large language model, a neural network machine learning model and the third generation of Generative Pre-trained Transformer. The transformer pulls from a significant amount of data to formulate a response.
ChatGPT now uses the GPT-3.5 model, a fine-tuned successor to GPT-3. ChatGPT Plus uses GPT-4 , which offers a faster response time and internet plugins. GPT-4 can also handle more complex tasks than previous models, such as describing photos, generating captions for images and creating more detailed responses of up to 25,000 words.
ChatGPT uses deep learning , a subset of machine learning, to produce humanlike text through transformer neural networks . The transformer predicts text -- including the next word, sentence or paragraph -- based on the typical sequences in its training data.
Training begins with generic data, then moves to more tailored data for a specific task. ChatGPT was trained with online text to learn the human language, and then it used transcripts to learn the basics of conversations.
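The core idea of predicting the next word from typical training sequences can be illustrated with a grossly simplified sketch. The toy model below counts which word follows which in a training text and predicts the most frequent continuation -- a bigram frequency table rather than a neural network, and the function names and corpus here are invented for illustration:

```python
from collections import defaultdict, Counter

def train_bigram_model(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran on the grass"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often in this corpus
print(predict_next(model, "sat"))  # "on"
```

A real transformer conditions on the entire preceding context (not just one word) and learns weighted representations rather than raw counts, but the objective -- pick the likeliest continuation given what came before -- is the same.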
Human trainers provide conversations and rank the responses. These reward models help determine the best answers. To keep training the chatbot, users can upvote or downvote its response by clicking on thumbs-up or thumbs-down icons beside the answer. Users can also provide additional written feedback to improve and fine-tune future dialogue.
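The ranking step described above can be caricatured with a toy scorer. This is an illustrative sketch only -- the function name and data shape are invented here, and a real reward model is a trained neural network that scores responses, not a vote tally:

```python
def best_response(feedback):
    """Pick the candidate response with the highest net score (up minus down).

    Toy stand-in for a reward model: it only illustrates the idea of
    ranking candidate answers by human preference signals.
    """
    scores = {resp: votes["up"] - votes["down"] for resp, votes in feedback.items()}
    return max(scores, key=scores.get)

feedback = {
    "Answer A": {"up": 14, "down": 3},   # net 11
    "Answer B": {"up": 9, "down": 1},    # net 8
    "Answer C": {"up": 20, "down": 15},  # net 5
}
print(best_response(feedback))  # prints "Answer A"
```

In actual training, these preference signals are used to fit a reward model, which in turn steers the language model via reinforcement learning rather than selecting among fixed answers.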
Users can ask ChatGPT a variety of questions, from simple ones to more complex ones such as, "What is the meaning of life?" or "What year did New York become a state?" ChatGPT is proficient with STEM disciplines and can debug or write code. There is no limitation to the types of questions to ask ChatGPT. However, ChatGPT was trained on data only up to 2021, so it has no knowledge of events after that year. And since it is a conversational chatbot, users can ask for more information or ask it to try again when generating text.
ChatGPT is versatile and can be used for more than human conversations. People have used ChatGPT to do the following:
Unlike other chatbots, ChatGPT can remember various questions to continue the conversation in a more fluid manner.
Businesses and users are still exploring the benefits of ChatGPT as the program continues to evolve. Some benefits include the following:
Some limitations of ChatGPT include the following:
Learn more about the pros and cons of AI-generated content .
While ChatGPT can be helpful for some tasks, there are some ethical concerns that depend on how it is used, including bias , lack of privacy and security, and cheating in education and work.
ChatGPT can be used unethically in ways such as cheating, impersonation or spreading misinformation due to its humanlike capabilities. Educators have raised concerns about students using ChatGPT to cheat, plagiarize and write papers. CNET made news when it used ChatGPT to create articles that were filled with errors.
To help prevent cheating and plagiarizing, OpenAI announced an AI text classifier to distinguish between human- and AI-generated text. However, after six months of availability, OpenAI pulled the tool due to a "low rate of accuracy."
There are online tools, such as Copyleaks or Writing.com, that estimate how likely it is that text was written by a person rather than generated by AI. OpenAI plans to add a watermark to longer text pieces to help identify AI-generated content.
Because ChatGPT can write code, it also presents a problem for cybersecurity. Threat actors can use ChatGPT to help create malware. An update addressed the issue of creating malware by stopping the request, but threat actors might find ways around OpenAI's safety protocol.
ChatGPT can also be used to impersonate a person by training it to copy someone's writing and language style. The chatbot could then impersonate a trusted person to collect sensitive information or spread disinformation .
One of the biggest ethical concerns with ChatGPT is bias in its training data . If the data the model pulls from contains bias, that bias is reflected in the model's output. ChatGPT also cannot reliably recognize language that is offensive or discriminatory. Training data needs to be reviewed to avoid perpetuating bias, and including diverse and representative material can help control bias and produce more accurate results.
As technology advances, ChatGPT might automate certain tasks that are typically completed by humans, such as data entry and processing, customer service, and translation support. People are worried that it could replace their jobs, so it's important to consider ChatGPT and AI's effect on workers.
Rather than replacing workers, ChatGPT can be used to support job functions and create new job opportunities, helping to avoid loss of employment. For example, lawyers can use ChatGPT to create summaries of case notes and draft contracts or agreements. And copywriters can use ChatGPT for article outlines and headline ideas.
ChatGPT generates text based on user input, so prompts could reveal sensitive information. That information can also be used to track and profile individuals, since data collected from a prompt can be associated with the user's phone number and email address. The information is then stored indefinitely.
To access ChatGPT, create an OpenAI account. Go to chat.openai.com and then select "Sign Up" and enter an email address, or use a Google or Microsoft account to log in.
After signing up, type a prompt or question in the message box on the ChatGPT homepage to start a conversation.
Even though ChatGPT can handle numerous users at a time, it reaches maximum capacity occasionally when there is an overload. This usually happens during peak hours, such as early in the morning or in the evening, depending on the time zone.
If it is at capacity, try again at a different time or refresh the browser. Another option is to upgrade to ChatGPT Plus, a paid subscription that remains available even during high-demand periods.
ChatGPT is available for free through OpenAI's website. Users need to register for a free OpenAI account. There is also an option to upgrade to ChatGPT Plus for access to GPT-4, faster responses, no blackout windows and unlimited availability. ChatGPT Plus also gives priority access to new features for a subscription rate of $20 per month.
Without a subscription, there are limitations; most notably, free users can be locked out when the program is at capacity. The Plus membership provides unlimited access and avoids these capacity blackouts.
Because of ChatGPT's popularity, it is often unavailable due to capacity issues. Google announced Bard in response to ChatGPT. Google Bard will draw information directly from the internet through a Google search to provide the latest information.
Microsoft added ChatGPT functionality to Bing, giving the internet search engine a chat mode for users. The ChatGPT functionality in Bing is less limited because its training is up to date, rather than ending with 2021 data and events.
There are other text generator alternatives to ChatGPT, including the following:
Coding alternatives for ChatGPT include the following:
In August 2023, OpenAI announced an enterprise version of ChatGPT. The enterprise version offers the higher-speed GPT-4 model with a longer context window , customization options and data analysis. This model of ChatGPT does not share data outside the organization.
In September 2023, OpenAI announced an update that lets ChatGPT speak and recognize images. Users can upload a picture of what they have in their refrigerator, and ChatGPT will suggest dinner ideas and provide step-by-step recipes using those ingredients. People can also ask ChatGPT questions about photos -- such as landmarks -- and engage in conversation to learn facts and history.
Users can also use voice to engage with ChatGPT and speak to it like other voice assistants . People can have conversations to request stories, ask trivia questions or request jokes among other options.
The voice update will be available in the apps for both iOS and Android; users just need to opt in through their settings. Images will be available on all platforms -- including the apps and ChatGPT's website.
In November 2023, OpenAI announced the rollout of GPTs, which let users customize their own version of ChatGPT for a specific use case. For example, a user could create a GPT that only scripts social media posts, checks for bugs in code, or formulates product descriptions. The user can input instructions and knowledge files in the GPT builder to give the custom GPT context. OpenAI also announced the GPT store, which will let users share and monetize their custom bots.
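The ingredients of a custom GPT described above can be pictured as a small configuration object. This is a hypothetical sketch: the field names and the way the builder combines instructions and knowledge files are illustrative assumptions, not OpenAI's actual API.

```python
# Hypothetical sketch of what a custom GPT combines: instructions plus
# knowledge files that give the model context. Field names here are
# illustrative, not OpenAI's API.
custom_gpt = {
    "name": "Social Post Scripter",
    "instructions": (
        "You only write social media posts. Keep them under 280 "
        "characters and match the brand voice in the style guide."
    ),
    "knowledge_files": ["style_guide.pdf", "past_posts.csv"],
}

def build_system_context(gpt_config):
    # The builder effectively prepends the instructions and file context
    # to every conversation with the custom GPT.
    files = ", ".join(gpt_config["knowledge_files"])
    return f"{gpt_config['instructions']} (Reference files: {files})"

print(build_system_context(custom_gpt))
```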
In December 2023, OpenAI partnered with Axel Springer to train its AI models on news reporting. ChatGPT users will see summaries of news stories from Bild and Welt, Business Insider and Politico as part of this deal. This agreement gives ChatGPT more current information in its chatbot answers and gives users another way to access news stories. OpenAI also announced an agreement with the Associated Press to use the news reporting archive for chatbot responses.
In the months and years since ChatGPT burst on the scene in November 2022, generative AI (gen AI) has come a long way. Every month sees the launch of new tools, rules, or iterative technological advancements. While many have reacted to ChatGPT (and AI and machine learning more broadly) with fear, machine learning clearly has the potential for good. In the years since its wide deployment, machine learning has demonstrated impact in a number of industries, accomplishing things like medical imaging analysis and high-resolution weather forecasts. A 2022 McKinsey survey shows that AI adoption has more than doubled over the past five years, and investment in AI is increasing apace. It’s clear that generative AI tools like ChatGPT (the GPT stands for generative pretrained transformer) and image generator DALL-E (its name a mashup of the surrealist artist Salvador Dalí and the lovable Pixar robot WALL-E) have the potential to change how a range of jobs are performed. The full scope of that impact, though, is still unknown—as are the risks.
Aamer Baig is a senior partner in McKinsey’s Chicago office; Lareina Yee is a senior partner in the Bay Area office; and senior partners Alex Singla and Alexander Sukharevsky , global leaders of QuantumBlack, AI by McKinsey, are based in the Chicago and London offices, respectively.
Still, organizations of all stripes have raced to incorporate gen AI tools into their business models, looking to capture a piece of a sizable prize. McKinsey research indicates that gen AI applications stand to add up to $4.4 trillion to the global economy—annually. Indeed, it seems possible that within the next three years, anything in the technology, media, and telecommunications space not connected to AI will be considered obsolete or ineffective.
But before all that value can be raked in, we need to get a few things straight: What is gen AI, how was it developed, and what does it mean for people and organizations? Read on to get the download.
What’s the difference between machine learning and artificial intelligence?

About QuantumBlack, AI by McKinsey
QuantumBlack, McKinsey’s AI arm, helps companies transform using the power of technology, technical expertise, and industry experts. With thousands of practitioners at QuantumBlack (data engineers, data scientists, product managers, designers, and software engineers) and McKinsey (industry and domain experts), we are working to solve the world’s most important AI challenges. QuantumBlack Labs is our center of technology development and client innovation, which has been driving cutting-edge advancements and developments in AI through locations across the globe.
Artificial intelligence is pretty much just what it sounds like—the practice of getting machines to mimic human intelligence to perform tasks. You’ve probably interacted with AI even if you don’t realize it—voice assistants like Siri and Alexa are founded on AI technology, as are customer service chatbots that pop up to help you navigate websites.
Machine learning is a type of artificial intelligence. Through machine learning, practitioners develop artificial intelligence through models that can “learn” from data patterns without human direction. The unmanageably huge volume and complexity of data (unmanageable by humans, anyway) that is now being generated has increased machine learning’s potential , as well as the need for it.
Machine learning is founded on a number of building blocks, starting with classical statistical techniques developed between the 18th and 20th centuries for small data sets. In the 1930s and 1940s, the pioneers of computing—including theoretical mathematician Alan Turing—began working on the basic techniques for machine learning. But these techniques were limited to laboratories until the late 1970s, when scientists first developed computers powerful enough to mount them.
Until recently, machine learning was largely limited to predictive models, used to observe and classify patterns in content. For example, a classic machine learning problem is to start with an image or several images of, say, adorable cats. The program would identify patterns among those images, then scrutinize random images for ones that match the adorable-cat pattern. Generative AI was a breakthrough: rather than simply perceiving and classifying a photo of a cat, machine learning can now create an image or text description of a cat on demand.
How do text-based machine learning models work? How are they trained?
ChatGPT may be getting all the headlines now, but it’s not the first text-based machine learning model to make a splash. OpenAI’s GPT-3 and Google’s BERT both launched in recent years to some fanfare. But before ChatGPT, which by most accounts works pretty well most of the time (though it’s still being evaluated), AI chatbots didn’t always get the best reviews. GPT-3 is “by turns super impressive and super disappointing,” said New York Times tech reporter Cade Metz in a video where he and food writer Priya Krishna asked GPT-3 to write recipes for a (rather disastrous) Thanksgiving dinner .
The first machine learning models to work with text were trained by humans to classify various inputs according to labels set by researchers. One example would be a model trained to label social media posts as either positive or negative. This type of training is known as supervised learning because a human is in charge of “teaching” the model what to do.
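The supervised setup described above can be illustrated with a deliberately tiny sketch: hand-labeled examples, word counts per label, and a scoring function. This toy (its training data and tokenizer are invented for illustration) is nothing like a production model, but it shows exactly where the human-provided labels enter.

```python
from collections import Counter

# Toy supervised text classifier: a human supplies the labels
# ("positive"/"negative"), and the model learns which words co-occur
# with each label.
training_data = [
    ("loved the update, great work", "positive"),
    ("what a fantastic, helpful feature", "positive"),
    ("this release is terrible", "negative"),
    ("awful bug, very disappointing", "negative"),
]

def tokenize(text):
    return text.lower().replace(",", " ").split()

# Count how often each word appears under each human-assigned label.
word_counts = {"positive": Counter(), "negative": Counter()}
for text, label in training_data:
    word_counts[label].update(tokenize(text))

def classify(text):
    # Score each label by summing the training counts of the words seen.
    scores = {
        label: sum(counts[w] for w in tokenize(text))
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("great feature, loved it"))   # -> positive
print(classify("terrible, disappointing"))   # -> negative
```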
The next generation of text-based machine learning models rely on what’s known as self-supervised learning. This type of training involves feeding a model a massive amount of text so it becomes able to generate predictions. For example, some models can predict, based on a few words, how a sentence will end. With the right amount of sample text—say, a broad swath of the internet—these text models become quite accurate. We’re seeing just how accurate with the success of tools like ChatGPT.
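The self-supervised idea can be sketched the same way: here the "labels" are simply the next words in the text itself, so no human annotation is needed. A toy bigram counter (the corpus below is invented for illustration) captures the core mechanic that large models scale up with neural networks.

```python
from collections import Counter, defaultdict

# Toy self-supervised "next word" model: the text itself provides the
# training signal -- no human labels needed.
corpus = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count which word follows each word in the corpus.
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # Return the most frequent continuation seen during training.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # -> 'cat', the most common word after "the"
print(predict_next("sat"))  # -> 'on'
```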
Building a generative AI model has for the most part been a major undertaking, to the extent that only a few well-resourced tech heavyweights have made an attempt . OpenAI, the company behind ChatGPT, former GPT models, and DALL-E, has billions in funding from bold-face-name donors. DeepMind is a subsidiary of Alphabet, the parent company of Google, and even Meta has dipped a toe into the generative AI model pool with its Make-A-Video product. These companies employ some of the world’s best computer scientists and engineers.
But it’s not just talent. When you’re asking a model to train using nearly the entire internet, it’s going to cost you. OpenAI hasn’t released exact costs, but estimates indicate that GPT-3 was trained on around 45 terabytes of text data—that’s about one million feet of bookshelf space, or a quarter of the entire Library of Congress—at an estimated cost of several million dollars. These aren’t resources your garden-variety start-up can access.
As you may have noticed above, outputs from generative AI models can be indistinguishable from human-generated content, or they can seem a little uncanny. The results depend on the quality of the model—as we’ve seen, ChatGPT’s outputs so far appear superior to those of its predecessors—and the match between the model and the use case, or input.
ChatGPT can produce what one commentator called a “solid A-” essay comparing theories of nationalism from Benedict Anderson and Ernest Gellner—in ten seconds. It also produced an already famous passage describing how to remove a peanut butter sandwich from a VCR in the style of the King James Bible. Image-generating AI models like DALL-E 2 can create strange, beautiful images on demand, like a Raphael painting of a Madonna and child, eating pizza. Other generative AI models can produce code, video, audio, or business simulations.
But the outputs aren’t always accurate—or appropriate. When Priya Krishna asked DALL-E 2 to come up with an image for Thanksgiving dinner, it produced a scene where the turkey was garnished with whole limes, set next to a bowl of what appeared to be guacamole. For its part, ChatGPT seems to have trouble counting, or solving basic algebra problems—or, indeed, overcoming the sexist and racist bias that lurks in the undercurrents of the internet and society more broadly.
Generative AI outputs are carefully calibrated combinations of the data used to train the algorithms. Because the amount of data used to train these algorithms is so incredibly massive—as noted, GPT-3 was trained on 45 terabytes of text data—the models can appear to be “creative” when producing outputs. What’s more, the models usually have random elements, which means they can produce a variety of outputs from one input request—making them seem even more lifelike.
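The "random elements" mentioned above usually amount to sampling the next token from a probability distribution rather than always taking the top choice, with a temperature parameter controlling how flat that distribution is. A minimal sketch, with made-up candidate words and scores:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores into probabilities. Higher temperature flattens
    # the distribution, increasing the variety of sampled outputs.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores the model assigns to four candidate next words.
candidates = ["cat", "dog", "mat", "fish"]
logits = [2.0, 1.5, 0.5, 0.1]

random.seed(0)
probs = softmax(logits, temperature=1.0)
# Sampling repeatedly from the same distribution can yield different
# words -- this is why one prompt can produce many different outputs.
picks = random.choices(candidates, weights=probs, k=5)
print(picks)
```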
The opportunity for businesses is clear. Generative AI tools can produce a wide variety of credible writing in seconds, then respond to criticism to make the writing more fit for purpose. This has implications for a wide variety of industries, from IT and software organizations that can benefit from the instantaneous, largely correct code generated by AI models to organizations in need of marketing copy. In short, any organization that needs to produce clear written materials potentially stands to benefit. Organizations can also use generative AI to create more technical materials, such as higher-resolution versions of medical images. And with the time and resources saved here, organizations can pursue new business opportunities and the chance to create more value.
We’ve seen that developing a generative AI model is so resource intensive that it is out of the question for all but the biggest and best-resourced companies. Companies looking to put generative AI to work have the option to either use generative AI models out of the box or fine-tune them to perform a specific task. If you need to prepare slides according to a specific style, for example, you could ask the model to “learn” how headlines are normally written based on the data in the slides, then feed it slide data and ask it to write appropriate headlines.
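The slide-headline idea can be sketched as few-shot prompting: rather than fine-tuning, you place a handful of example pairs in the prompt itself and append the new slide data. The example pairs and wording below are hypothetical.

```python
# Few-shot prompt construction: show the model example
# slide-data -> headline pairs, then ask it to continue the pattern.
# (The examples here are invented for illustration.)
examples = [
    ("Revenue up 12% YoY; churn down 2 pts",
     "Growth accelerates as retention improves"),
    ("Cloud costs cut 30% after migration",
     "Migration delivers major cloud savings"),
]

def build_headline_prompt(new_slide_data):
    lines = ["Write a slide headline in the style of these examples:", ""]
    for data, headline in examples:
        lines.append(f"Slide data: {data}")
        lines.append(f"Headline: {headline}")
        lines.append("")
    # End the prompt mid-pattern so the model completes the headline.
    lines.append(f"Slide data: {new_slide_data}")
    lines.append("Headline:")
    return "\n".join(lines)

prompt = build_headline_prompt("Headcount flat; output per engineer up 18%")
print(prompt)
```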
Because they are so new, we have yet to see the long tail effect of generative AI models. This means there are some inherent risks involved in using them—some known and some unknown.
The outputs generative AI models produce may often sound extremely convincing. This is by design. But sometimes the information they generate is just plain wrong. Worse, sometimes it’s biased (because it’s built on the gender, racial, and myriad other biases of the internet and society more generally) and can be manipulated to enable unethical or criminal activity. For example, ChatGPT won’t give you instructions on how to hotwire a car, but if you say you need to hotwire a car to save a baby, the algorithm is happy to comply. Organizations that rely on generative AI models should reckon with reputational and legal risks involved in unintentionally publishing biased, offensive, or copyrighted content.
These risks can be mitigated, however, in a few ways. For one, it’s crucial to carefully select the initial data used to train these models to avoid including toxic or biased content. Next, rather than employing an off-the-shelf generative AI model, organizations could consider using smaller, specialized models. Organizations with more resources could also customize a general model based on their own data to fit their needs and minimize biases. Organizations should also keep a human in the loop (that is, to make sure a real human checks the output of a generative AI model before it is published or used) and avoid using generative AI models for critical decisions, such as those involving significant resources or human welfare.
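The human-in-the-loop recommendation can be sketched as a simple publishing gate: model output never goes out directly, it waits in a queue for a reviewer's decision. This is a simplified stand-in for a real review workflow.

```python
# Human-in-the-loop gate: generated text is queued for review, and only
# reviewer-approved items are published.
review_queue = []
published = []

def submit_generated_text(text):
    # Never publish model output directly -- hold it for a human.
    review_queue.append(text)

def human_review(approve):
    # A reviewer approves or rejects the oldest pending item.
    text = review_queue.pop(0)
    if approve:
        published.append(text)
    return text

submit_generated_text("Draft marketing copy from the model")
submit_generated_text("Draft that a reviewer will reject")
human_review(approve=True)
human_review(approve=False)
print(published)  # only the approved draft is published
```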
It can’t be emphasized enough that this is a new field. The landscape of risks and opportunities is likely to change rapidly in coming weeks, months, and years. New use cases are being tested monthly, and new models are likely to be developed in the coming years. As generative AI becomes increasingly, and seamlessly, incorporated into business, society, and our personal lives, we can also expect a new regulatory climate to take shape. As organizations begin experimenting—and creating value—with these tools, leaders will do well to keep a finger on the pulse of regulation and risk.
This article was updated in April 2024; it was originally published in January 2023.