PSYCH 470 Sacred Values & Complexities of Decision Making Discussion

Description

 

 

Post a comment or question to the course’s Learn forum.

You can focus on one of the articles below for your discussion.

Greene et al 2001

Johnson & Ahn 2021

You can just read through p. 9 (up to the end of Study 4).

Tetlock 2003

Satel 2007, 2009

 

Unformatted Attachment Preview

An fMRI Investigation of Emotional Engagement in Moral Judgment

Joshua D. Greene,1,2* R. Brian Sommerville,1 Leigh E. Nystrom,1,3 John M. Darley,3 Jonathan D. Cohen1,3,4

The long-standing rationalist tradition in moral psychology emphasizes the role of reason in moral judgment. A more recent trend places increased emphasis on emotion. Although both reason and emotion are likely to play important roles in moral judgment, relatively little is known about their neural correlates, the nature of their interaction, and the factors that modulate their respective behavioral influences in the context of moral judgment.
In two functional magnetic resonance imaging (fMRI) studies using moral dilemmas as probes, we apply the methods of cognitive neuroscience to the study of moral judgment. We argue that moral dilemmas vary systematically in the extent to which they engage emotional processing and that these variations in emotional engagement influence moral judgment. These results may shed light on some puzzling patterns in moral judgment observed by contemporary philosophers.

The present study was inspired by a family of ethical dilemmas familiar to contemporary moral philosophers (1). One such dilemma is the trolley dilemma: A runaway trolley is headed for five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Ought you to turn the trolley in order to save five people at the expense of one? Most people say yes. Now consider a similar problem, the footbridge dilemma. As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge that spans the tracks, in between the oncoming trolley and the five people. In this scenario, the only way to save the five people is to push this stranger off the bridge, onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Ought you to save the five others by pushing this stranger to his death? Most people say no.

1Center for the Study of Brain, Mind, and Behavior, 2Department of Philosophy, 1879 Hall, 3Department of Psychology, Green Hall, Princeton University, Princeton, NJ 08544, USA. 4Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA 15260, USA.
*To whom correspondence should be addressed. Email: jdgreene@princeton.edu
Taken together, these two dilemmas create a puzzle for moral philosophers: What makes it morally acceptable to sacrifice one life to save five in the trolley dilemma but not in the footbridge dilemma? Many answers have been proposed. For example, one might suggest, in a Kantian vein, that the difference between these two cases lies in the fact that in the footbridge dilemma one literally uses a fellow human being as a means to some independent end, whereas in the trolley dilemma the unfortunate person just happens to

www.sciencemag.org SCIENCE VOL 293 14 SEPTEMBER 2001
of a color word can interfere with participants’ ability to name the color in which it is displayed; e.g., the ability to say “green” in response to the word “red” written in green ink) (6, 7). In light of our proposal that people tend to have a salient, automatic emotional response to the footbridge dilemma that leads them to judge the action it proposes to be inappropriate, we would expect those (relatively rare) individuals who nevertheless judge this action to be appropriate to do so against a countervailing emotional response and to exhibit longer reaction times as a result of this emotional interference.
More generally, we predicted longer reaction times for trials in which the participant’s response is incongruent with the emotional response (e.g., saying “appropriate” to a dilemma such as the footbridge dilemma). We predicted the absence of such effects for dilemmas such as the trolley dilemma which, according to our theory, are less likely to elicit a strong emotional response. In each of two studies, Experiments 1 and 2, we used a battery of 60 practical dilemmas (8). These dilemmas were divided into “moral” and “non-moral” categories on the basis of the responses of pilot participants (8). (Typical examples of non-moral dilemmas posed questions about whether to travel by bus or by train given certain time constraints and about which of two coupons to use at a store.) Two independent coders evaluated each moral dilemma using three criteria designed to capture the difference between the intuitively “up close and personal” (and putatively more emotional) sort of violation exhibited by the footbridge dilemma and the more intuitively impersonal (and putatively less emotional) violation exhibited by the trolley dilemma (8, 9). Moral dilemmas meeting these criteria were assigned to the “moral-personal” condition, the others to the “moral-impersonal” condition. Typical moral-personal dilemmas included a version of the footbridge dilemma, a case of stealing one person’s organs in order to distribute them to five others, and a case of throwing people off a sinking lifeboat. Typical moral-impersonal dilemmas included a version of the trolley dilemma, a case of keeping money found in a lost wallet, and a case of voting for a policy expected to cause more deaths than its alternatives. Participants responded to each dilemma by indicating whether they judged the action it proposes to be “appropriate” or “inappropriate.” In each experiment, nine participants (10) responded to each of 60 dilemmas (11) while undergoing brain scanning using fMRI (12).
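As an illustrative aid only (the authors published no code, and every name below is hypothetical), the three-criteria coding scheme described above, and detailed in note 9 of the paper, can be sketched as a simple classification rule:

```python
# Hypothetical sketch of the dilemma-coding scheme: a moral dilemma counts as
# "moral-personal" when the proposed action (a) could reasonably be expected
# to cause serious bodily harm, (b) to a particular person or members of a
# particular group, and (c) the harm is not merely the deflection of an
# existing threat onto a different party. Names are illustrative, not taken
# from the original study materials.

def classify_dilemma(is_moral: bool,
                     serious_bodily_harm: bool,
                     particular_victim: bool,
                     deflects_existing_threat: bool) -> str:
    """Assign a dilemma to one of the three experimental conditions."""
    if not is_moral:
        return "non-moral"
    if serious_bodily_harm and particular_victim and not deflects_existing_threat:
        return "moral-personal"
    return "moral-impersonal"

# Footbridge: pushing the stranger directly harms a particular person.
print(classify_dilemma(True, True, True, False))  # → moral-personal
# Trolley: hitting the switch deflects an existing threat.
print(classify_dilemma(True, True, True, True))   # → moral-impersonal
```

Note how the deflection criterion alone separates the trolley case from the footbridge case, which is the paper's operationalization of the intuitive "edited" versus "authored" harm distinction.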
Figures 1 and 2 describe brain areas identified in Experiment 1 by a thresholded omnibus analysis of variance (ANOVA) performed on the functional images (13). In each case, the

Fig. 1. Effect of condition on activity in brain areas identified in Experiment 1. R, right; L, left; B, bilateral. Results for the middle frontal gyrus were not replicated in Experiment 2. The moral-personal condition was significantly different from the other two conditions in all other areas in both Experiments 1 and 2. In Experiment 1 the medial frontal and posterior cingulate gyri showed significant differences between the moral-impersonal and non-moral conditions. In Experiment 2 only the posterior cingulate gyrus was significantly different in this comparison. Brodmann’s Areas and Talairach (28) coordinates (x, y, z) for each area are as follows (left to right in graph): 9/10 (1, 52, 17); 31 (−4, −54, 35); 46 (45, 36, 24); 7/40 (−48, −65, 26); 7/40 (50, −57, 20).

Fig. 2. Brain areas exhibiting differences in activity between conditions shown in three axial slices of a standard brain (28). Slice location is indicated by Talairach (28) z coordinate. Data are for the main effect of condition in Experiment 1. Colored areas reflect the thresholded F scores. Images are reversed left to right to follow radiologic convention.

be in the way. This answer, however, runs into trouble with a variant of the trolley dilemma in which the track leading to the one person loops around to connect with the track leading to the five people (1). Here we will suppose that without a body on the alternate track, the trolley would, if turned that way, make its way to the other track and kill the five people as well. In this variant, as in the footbridge dilemma, you would use someone’s body to stop the trolley from killing the five.
Most agree, nevertheless, that it is still appropriate to turn the trolley in this case in spite of the fact that here, too, we have a case of “using.” These are just one proposed solution and one counterexample, but together they illustrate the sort of dialectical difficulties that all proposed solutions to this problem have encountered. If a solution to this problem exists, it is not obvious. That is, there is no set of consistent, readily accessible moral principles that captures people’s intuitions concerning what behavior is or is not appropriate in these and similar cases. This leaves psychologists with a puzzle of their own: How is it that nearly everyone manages to conclude that it is acceptable to sacrifice one life for five in the trolley dilemma but not in the footbridge dilemma, in spite of the fact that a satisfying justification for distinguishing between these two cases is remarkably difficult to find (2)? We maintain that, from a psychological point of view, the crucial difference between the trolley dilemma and the footbridge dilemma lies in the latter’s tendency to engage people’s emotions in a way that the former does not. The thought of pushing someone to his death is, we propose, more emotionally salient than the thought of hitting a switch that will cause a trolley to produce similar consequences, and it is this emotional response that accounts for people’s tendency to treat these cases differently. This hypothesis concerning these two cases suggests a more general hypothesis concerning moral judgment: Some moral dilemmas (those relevantly similar to the footbridge dilemma) engage emotional processing to a greater extent than others (those relevantly similar to the trolley dilemma), and these differences in emotional engagement affect people’s judgments. The present investigation is an attempt to test this more general hypothesis. 
Drawing upon recent work concerning the neural correlates of emotion (3–5), we predicted that brain areas associated with emotion would be more active during contemplation of dilemmas such as the footbridge dilemma than during contemplation of dilemmas such as the trolley dilemma. In addition, we predicted a pattern of behavioral interference similar to that observed in cognitive tasks in which automatic processes can influence responses, such as the Stroop task (in which the identity

Fig. 3. Mean reaction time by condition and response type in Experiment 2. A mixed-effects ANOVA revealed a significant interaction between condition and response type [F(2, 8) = 12.449, P < 0.0005]. Reaction times differed significantly between responses of “appropriate” and “inappropriate” in the moral-personal condition [t(8) = 4.530, P < 0.0005] but not in the other conditions (P > 0.05). Error bars indicate two standard errors of the mean.

but which do not occur in the moral-impersonal and non-moral conditions. As predicted, responses of “appropriate” (emotionally incongruent) were significantly slower than responses of “inappropriate” (emotionally congruent) within the moral-personal condition, and there was no significant difference in reaction time between responses of “appropriate” and “inappropriate” in the other two conditions. In fact, the data exhibit a trend in the opposite direction for the other two conditions (24), with responses of “inappropriate” taking slightly longer than responses of “appropriate.” In each of the brain areas identified in both Experiments 1 and 2, the moral-personal condition had an effect significantly different from both the moral-impersonal and the non-moral conditions. All three areas showing increased relative activation in the moral-personal condition have been implicated in emotional processing.
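The reaction-time interference result reported above boils down to a paired comparison of each participant's mean reaction time for emotionally incongruent versus congruent responses. A minimal, standard-library sketch of that paired t-statistic follows; the numbers are entirely made up for illustration and are not the study's data:

```python
import math

def paired_t(xs, ys):
    """Paired t-statistic and degrees of freedom for equal-length samples."""
    n = len(xs)
    diffs = [x - y for x, y in zip(xs, ys)]
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical per-participant mean RTs in ms (n = 9, as in each experiment):
rt_incongruent = [6800, 7100, 6500, 7300, 6900, 7000, 6600, 7200, 6700]
rt_congruent = [5900, 6200, 5800, 6300, 6000, 6100, 5700, 6250, 5950]

t, df = paired_t(rt_incongruent, rt_congruent)
print(f"t({df}) = {t:.2f}")  # large positive t: incongruent responses are slower
```

In practice one would look up the resulting t against the t-distribution with n − 1 degrees of freedom (or use a statistics package) to obtain the P value reported in the figure caption.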
The behavioral data provide further evidence for the increased emotional engagement in the moral-personal condition by revealing a reaction time pattern that is unique to that condition and that was predicted by our hypothesis concerning emotional interference. Moreover, the presence of this interference effect in the behavioral data strongly suggests that the increased emotional responses generated by the moral-personal dilemmas have an influence on and are not merely incidental to moral judgment (25). These data also suggest that, in terms of the psychological processes associated with their production, judgments concerning “impersonal” moral dilemmas more closely resemble judgments concerning non-moral dilemmas than they do judgments concerning “personal” moral dilemmas. The trolley and footbridge dilemmas emerged as pieces of a puzzle for moral philosophers: Why is it acceptable to sacrifice one person to save five others in the trolley dilemma but not in the footbridge dilemma? Here we consider these dilemmas as pieces of a psychological puzzle: How do people manage to conclude that it is acceptable to sacrifice one for the sake of five in one case but not in the other? We maintain that emotional response is likely to be the crucial difference between these two cases. But this is an answer to the psychological puzzle, not the philosophical one. Our conclusion, therefore, is descriptive rather than prescriptive. We do not claim to have shown any actions or judgments to be morally right or wrong. Nor have we argued that emotional response is the sole determinant of judgments concerning moral dilemmas of the kind discussed in this study. On the contrary, the behavioral influence of these emotional responses is most strongly suggested in the performance of those participants who judge in spite of their emotions. What has been demonstrated is that there are systematic variations in the engagement of emotion in moral judgment.
The systematic nature of these variations is manifest in an observed correlation between (i) certain features that differ between the trolley dilemma and the footbridge dilemma and (ii) patterns of neural activity in emotion-related brain areas as well as patterns in reaction time. Methodological constraints led us to characterize these “certain features” by means of a highly regimented distinction between actions that are “personal” and “impersonal” (8). This personal-impersonal distinction has proven useful in generating the present results, but it is by no means definitive. We view this distinction as a useful “first cut,” an important but preliminary step toward identifying the psychologically essential features of circumstances that engage (or fail to engage) our emotions and that ultimately shape our moral judgments—judgments concerning hypothetical examples such as the trolley and footbridge dilemmas but also concerning the more complicated moral dilemmas we face in our public and private lives. A distinction such as this may allow us to steer a middle course between the traditional rationalism and more recent emotivism that have dominated moral psychology (26). The present results raise but do not answer a more general question concerning the relation between the aforementioned philosophical and psychological puzzles: How will a better understanding of the mechanisms that give rise to our moral judgments alter our attitudes toward the moral judgments we make? References and Notes 1. J. J. Thomson, Rights, Restitution and Risk (Harvard Univ. Press, Cambridge, 1986), pp. 94–116. 2. A loose but potentially illuminating analogy can be made between this and the Chomskyan question: How is it that most people can speak grammatically without being able to exhaustively cite the rules of grammar? 3. A. R. Damasio, Descartes’ Error (Putnam, New York, 1994). 4. R. J. Davidson, W. Irwin, Trends Cogn. Sci. 3, 11 (1999). 5. E. M. Reiman, J. Clin.
Psychiatry 58 (suppl 16), 4 (1997). 6. J. R. Stroop, J. Exp. Psychol. 72, 219 (1935). 7. C. M. MacLeod, Psychol. Bull. 109, 163 (1991). 8. Testing materials (dilemmas) are available from Science Online at www.sciencemag.org/cgi/content/full/293/5537/2105/DC1. 9. The three criteria are as follows: First, coders indicated for each dilemma whether the action in question could “reasonably be expected to lead to serious bodily harm.” Second, they were asked to indicate whether this harm would be “the result of deflecting an existing threat onto a different party.” Our use of this criterion, which parallels a distinction made by Thomson (1), is an attempt to operationalize an intuitive notion of “agency.” Intuitively, when a harm is produced by means of deflecting an existing threat, the agent has merely “edited” and not “authored” the resulting harm, and thus its contemplation is less emotionally engaging. Lastly, coders were asked to indicate whether the resulting harm would “befall a particular person or a member or members of a particular group of people.” Here the question, in intuitive terms, is whether the victim is “on stage” in the dilemma. The moral dilemmas of which the coders said that the action in question (a) could reasonably be expected to lead to serious bodily harm (b) to a particular person or a member or members of a particular group of people (c) where this harm is not the result of deflecting an existing threat onto a different party were assigned to the “moral-

ANOVA identified all brain areas differing in activity among the moral-personal, moral-impersonal, and non-moral conditions.
Planned comparisons on these areas revealed that medial portions of Brodmann’s Areas (BA) 9 and 10 (medial frontal gyrus), BA 31 (posterior cingulate gyrus), and BA 39 (angular gyrus, bilateral) were significantly more active in the moral-personal condition than in the moral-impersonal and the non-moral conditions. Recent functional imaging studies have associated each of these areas with emotion (5, 14–16). Areas associated with working memory have been found to become less active during emotional processing as compared to periods of cognitive processing (17). BA 46 (middle frontal gyrus, right) and BA 7/40 (parietal lobe, bilateral)—both associated with working memory (18, 19)—were significantly less active in the moral-personal condition than in the other two conditions. In BA 39 (bilateral), BA 46, and BA 7/40 (bilateral), there was no significant difference between the moral-impersonal and the non-moral condition (20, 21). Experiment 2 served to replicate the results of Experiment 1 (22) and to provide behavioral data concerning participants’ judgments and reaction times. Planned comparisons on the seven brain areas identified in Experiment 1 yielded results nearly identical to those of Experiment 1 with the following differences. In Experiment 2 there was no difference in BA 9/10 between the moral-impersonal and non-moral conditions, and no differences were found for BA 46 (23). Reaction time data from Experiment 2 are described by Fig. 3. Our theory concerning emotional interference predicted longer reaction times for emotionally incongruent responses, which occur when a participant responds “appropriate” in the moral-personal condition (e.g., judging it “appropriate” to push the man off the footbridge in the footbridge dilemma)

mas in order to avoid a confound present in the design of the behavioral aspect of Experiment 1 (24). 23.
The replicated results for BAs 9/10, 31, and bilateral 7/40 were achieved at a higher significance threshold in Experiment 2 (P < 0.01) than in Experiment 1. 24. A potential confound in the design of the behavioral aspect of the present study deserves attention. One might suppose that participants respond more slowly when giving an “unconventional” response, i.e., a response that differs from that of the majority. One might suppose further that the moral-personal condition makes greater use of dilemmas for which the emotionally incongruent response is also the unconventional response (as in judging that one may push the man off the footbridge in the footbridge dilemma), thus confounding emotional incongruity with unconventionality in participants’ responses. Therefore, an effect that we attribute to emotional engagement may simply be an effect of the conventionality of participants’ responses. To deconfound these factors, in Experiment 2 we included additional moral-personal dilemmas for which the conventional response was emotionally incongruent rather than congruent. For example, one dilemma asked whether it is appropriate to smother one’s crying baby to death in order to prevent its crying from summoning enemy soldiers who will kill oneself, the baby, and a number of others if summoned. Most participants judged this action to be appropriate in spite of their putative emotional tendencies to the contrary. As predicted by our hypothesis, reaction times in such cases were significantly longer [t(8) = 4.332, P < 0.0001] than the reaction times for conventional and emotionally congruent responses, as were typically made in response to the footbridge dilemma. Thus, after controlling for conventionality, reaction times in the moral-personal condition are longer for trials which, according to our theory, reflect a judgment that is emotionally incongruent rather than congruent. 25.
Although our conclusion concerning the behavioral influence of the observed emotional responses does not require that the emotion-related areas identified in Experiments 1 and 2 be different from areas that show increased activity in response to more basic kinds of emotional stimuli, one might wonder to what extent they do differ from such areas. We made a preliminary attempt to answer this question in the form of an addendum study to Experiment 1. Five participants responded to moral-personal and moral-impersonal dilemmas as in Experiments 1 and 2. Participants also performed a task in which they named the colors of visually presented emotional and neutral words, a task similar to the one used by Isenberg et al. (27). The emotional word stimuli were extracted from the text of the moral dilemmas by three independent coders. Neutral words and additional emotional words were drawn from materials used by Isenberg et al. (27). A comparison of the emotional and neutral word conditions (t test, P < 0.05, cluster size ≥8 voxels) revealed no significant activation in the emotion-related areas identified in Experiment 1 and only a marginal activation (9 out of 123 voxels) in one of the working memory areas (left BA 7/40). This comparison did, however, reveal activations in numerous other areas. A comparison of the moral-personal and moral-impersonal conditions from the same five sessions replicated the activations observed in Experiments 1 and 2 in BA 9/10 (55 of 64 voxels at P < 0.05) and left BA 7/40 (40 of 123 voxels at P < 0.05). These results demonstrate, at the very least, that the effects observed in Experiments 1 and 2 in the medial frontal gyrus (BA 9/10) cannot be attributed to the mere reading of emotional words. This area, more than any of the others we have identified, is likely to play a role in the integration of emotion and cognition in complex decision-making (3, 5). 26. J. D. Haidt, Psych. Rev., in press. 27. N. Isenberg et al., Proc. Natl. Acad. Sci.
U.S.A. 96, 10456 (1999). 28. J. Talairach, P. Tournoux, A Co-Planar Stereotaxic Atlas of the Human Brain (Thieme, New York, 1988). 29. We gratefully acknowledge M. Gilzenrat, N. Isenberg, P. Jablonka, J. Kroger, and T.-Q. Li for their contributions to this project. Supported in part by grants from the Pew Charitable Trusts (no. 97001533-000) and the National Science Foundation (no. 2556566). 25 May 2001; accepted 30 July 2001
personal” condition; the others were assigned to the “moral-impersonal” condition. 10. Participants were five male and four female undergraduates in Experiment 1, four male and five female in Experiment 2. All participants provided written informed consent. 11. Dilemmas were presented in random order in a series of six blocks of ten trials each in Experiment 1, twelve blocks of five trials each in Experiment 2. Participants’ responses to versions of the trolley and footbridge dilemmas were consistent with the intuitions described above (8). 12. Stimuli (dilemmas) were presented on a visual display projected into the scanner. Each dilemma was presented as text through a series of three screens, the first two describing a scenario and the last posing a question about the appropriateness of an action one might perform in that scenario (e.g., turning the trolley). Participants were allowed to read at their own pace, pressing a button to advance from the first to the second screen and from the second to the third screen. After reading the third screen participants responded by pressing one of two buttons (“appropriate” or “inappropriate”). Participants were given a maximum of 46 s to read all three screens and respond. The intertrial interval (ITI) lasted for a minimum of 14 s (seven images) in each trial, allowing the hemodynamic response to return to baseline after each trial. Baseline activity was defined as the mean signal across the last four images of the ITI. Task-related activity was measured using a “floating window” of eight images surrounding (four before, one during, and three after) the point of response. (This window includes three post-response images in order to allow for the 4- to 6-s delay in hemodynamic response to neural activation.)
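The baseline and "floating window" scheme described in note 12 is essentially index arithmetic over each trial's image series: baseline is the mean of the last four inter-trial-interval images, and task-related activity is measured over an eight-image window (four before, one during, three after the response). A hedged reconstruction under those stated assumptions, using toy data and invented variable names rather than anything from the authors' pipeline:

```python
def trial_windows(n_images: int, response_idx: int, iti_images: int = 7):
    """Return (baseline_indices, task_indices) for one trial's image series.

    Baseline: the last four images of the preceding inter-trial interval
    (14 s at TR = 2 s gives 7 ITI images). Task window: eight images around
    the response (4 before, 1 during, 3 after), allowing for the 4- to 6-s
    hemodynamic delay.
    """
    baseline = list(range(iti_images - 4, iti_images))        # last 4 ITI images
    window = list(range(response_idx - 4, response_idx + 4))  # 8-image window
    if window[0] < 0 or window[-1] >= n_images:
        raise ValueError("response too close to the edge of the series")
    return baseline, window

def mean(vals):
    return sum(vals) / len(vals)

# Toy signal: 30 images, response at image 20, with an elevated BOLD-like
# plateau exactly inside the task window.
signal = [100.0] * 30
for i in range(16, 24):
    signal[i] = 110.0

base_idx, win_idx = trial_windows(len(signal), response_idx=20)
baseline = mean([signal[i] for i in base_idx])
task = mean([signal[i] for i in win_idx])
print(task - baseline)  # → 10.0 (task-related activity relative to baseline)
```

In a real analysis this per-trial windowing would be applied voxelwise before the mixed-effects ANOVA described in note 13; here it only illustrates the indexing described in the text.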
This “floating window” technique combined the benefits of an event-related design with the flexibility required to image a complex and temporally extended psychological process that inevitably proceeds at its own pace. In Experiment 1, functional images were acquired in 20 axial slices parallel to the AC-PC (anterior commissure–posterior commissure) line [spiral pulse sequence; repetition time (TR), 2000 ms; echo time (TE), 45 ms; flip angle, 80°; field of view (FOV), 240 mm; 3.75-mm isotropic voxels] using a 1.5-T GE Signa whole-body scanner. In Experiment 2, functional images were acquired in 22 axial slices parallel to the AC-PC line (echo-planar pulse sequence; TR, 2000 ms; TE, 25 ms; flip angle, 90°; FOV, 192 mm; 3.0-mm isotropic voxels; 1-mm interslice spacing) using a 3.0-T Siemens Allegra head-dedicated scanner.
13. Before statistical analysis, images for all participants were coregistered using a 12-parameter automatic algorithm. Images were smoothed with an 8-mm full-width at half-maximum (FWHM) 3D Gaussian filter. In Experiment 1, the images contained in each response window were analyzed with the use of a voxelwise mixed-effects ANOVA with participant as a random effect, and dilemma-type, block, and response-relative image as fixed effects. Statistical maps of voxelwise F-ratios were thresholded for significance (P < 0.0005) and cluster size (≥8 voxels). In Experiments 1 and 2, planned comparisons for significant differences between conditions (P < 0.05, cluster size ≥8 voxels) were made for each area identified by the thresholded ANOVA in Experiment 1.
14. R. J. Maddock, Trends Neurosci. 22, 310 (1999).
15. S. M. Kosslyn et al., Neuroreport 7, 1569 (1996).
16. E. M. Reiman et al., Am. J. Psychiatry 154, 918 (1997).
17. W. C. Drevets, M. E. Raichle, Cognition Emotion 12, 353 (1998).
18. E. E. Smith, J. Jonides, Cognit. Psychol. 33, 5 (1997).
19. J. D. Cohen et al., Nature 386, 604 (1997).
20.
In BA 7/40 (right) a small minority of voxels (10 of 91) showed a significant difference between the moral-impersonal and non-moral conditions.
21. Due to magnetic susceptibility artifact, we were unable to image the orbitofrontal cortex, an area thought by some to play an important role in moral judgment (3).
22. Experiments 1 and 2 were not identical (8). Experiment 2 employed some modified versions of dilemmas from Experiment 1 as well as some new dilem-

Desperately Seeking a Kidney
By SALLY SATEL
Dec. 16, 2007
https://www.nytimes.com/2007/12/16/magazine/16kidney-t.html

In the fall of 2005, I started my first online relationship. He was a 62-year-old retiree from Canada; I was a 49-year-old psychiatrist living in Washington. Beginning in early October of that year, we talked or e-mailed several times a week. This arrangement was novel to both of us, so our conversations were tentative at first, but we soon grew more comfortable, and excitement over our shared vision blossomed. After a few weeks, we decided to meet for a uniquely intimate encounter. After New Year’s, the Canadian would fly to Washington to meet me — at a hospital, where he would give me one of his kidneys. Thank God.

My own kidneys were failing. On a steamy day in August 2004, I went to the doctor for a routine checkup. I was feeling fine, but a basic test revealed that my kidneys were shot, functioning at about 16 percent of normal capacity. One nephrologist I went to predicted that within roughly six months to a year I would need to begin dialysis. Three days a week, for four debilitating hours at a time, I would be tethered to a blood-cleansing machine. Even simple things like traveling to see friends or to give talks would be limited. This would very likely continue for at least five years until my name crawled to the top of the national list of people waiting for kidneys from the newly deceased.
On average, 12 names, the death toll from the ever-growing organ shortage, would be scratched off the list each day. A much better option would be to get a transplant from a living person. I had tried that and failed. Thus my plans for a rendezvous with a man I had never met. But shortly before Thanksgiving, he disappeared. I panicked. Everything turned to radio silence as my e-mail and phone messages went unanswered. Was I, a psychiatrist no less, crazy to have put my trust in a stranger who goes on the Internet to relinquish an organ? Friends wanted to know why my kidneys were giving out, but there was no good answer. I didn’t have diabetes or hypertension, the most common causes of end-stage renal disease. My doctor’s theory was that my kidney damage may have been caused by a medication I had taken during my 20s. The one thing we knew was that whatever was destroying my kidneys did so stealthily. Like most organs, kidneys have impressive reserves, and the slower they deteriorate, the longer they can keep up a good front, maintaining blood pressure, balancing the salt and electrolytes in the blood and, of course, producing about one to two liters of urine a day. I remembered a line from “The Sun Also Rises,” when a drunkard is asked how he went bankrupt. “Two ways,” he answers. “Gradually and then suddenly.” That was how my kidneys went out of business too. The obvious place to find a donor is your own family, but that was not really an option for me. My parents were not alive and would have been far too old to help me even if they were. I have no siblings and only three cousins; I hadn’t seen two of them since high school; and the third I see maybe once every two or three years. I couldn’t call out of the blue with this news. I could just imagine my relatives tsking into the phone, “You only call when you want something.” Indeed. Theoretically, kidneys should be in booming supply. 
Virtually everyone has two, and healthy individuals can give one away and still lead perfectly normal lives. Yet people aren’t exactly lining up to give. At the beginning of 2005, when I put my name on the list, there were about 60,000 people ahead of me; by the end of that year, only 1 in 9 had received one from a relative, spouse or friend. Today, just under 74,000 people are waiting for kidneys. I wanted my donor to be completely anonymous so I could avoid the treacherous intimacy of accepting an organ from someone I knew. I would have gladly paid someone to give me a kidney, but exchanging money for an organ is a felony in this country. Altruistic giving is the metaphorical bedrock of our transplant system. Organ donation, we are told, should be the ultimate gift: the “gift of life,” a sublime act of generosity. The giver — whether living or deceased — must not expect to be enriched in any way. In late 2004, not long after I learned my kidneys were failing and a little over a year before I met the Canadian online, I told one of my best friends about my diagnosis. She and I first met more than 20 years before at the medical school at Yale, when I was finishing my residency in psychiatry and she was an instructor in the same department. Dr. Yale, as I’ll refer to her to protect her privacy, is a feisty blend of bubbly energy (last summer she made me ride the Cyclone with her at Coney Island) and intellectual seriousness (she is training to be a psychoanalyst). She immediately offered to check her blood type. I needed someone with type A or O, and in uncomplicated cases like mine, blood-type matching is usually one of the biggest hurdles to compatibility. Dr. Yale was type O. Presto! She said she needed to talk it over with her husband but thought it would be fine. A week later, however, she said it wasn’t.
“Giving you a kidney seemed a perfectly natural thing to do,” she told me. “I had the time, and I wanted to do what I could and in a clear way, far clearer than the vague helpfulness of say, psychiatry. But then I mentioned my plan to donate to a fellow alto at chorus rehearsal one evening.” As it turned out, the alto in question was no typical acquaintance: she was a transplant surgeon. My friend continued: “She was very surprised that I was planning to donate to a friend and then pulled an article out of her bag about hemorrhaging after donating.” The exchange set off a spiral of anxiety in Dr. Yale’s mind — What if my brother or kids need my kidney? What if I had complications from surgery? I’m sorry, she said matter-of-factly, and that was that. I understood that my friend wanted to spend her kidney wisely. What mystifies me still is how she got so spooked. After all, Dr. Yale was a physician herself, capable of weighing the risks. The operation is done by laparoscope, leaving only a modest three-inch scar; she would have been out of the hospital after two or three nights. Most important, the chance of death is tiny — 2 in every 10,000 transplants — and the long-term health risks are generally negligible. More baffling to me, though, was the fact that she was talked out of donating by a person who removes and implants organs for a living. I was outraged. A transplant surgeon, of all people, knows how hard it is to find a donor, how grueling dialysis can be and how significant the health benefits of a “pre-emptive” transplant (that is, one received before the patient goes on dialysis) are. Not to mention the fact that hemorrhaging after donation is unusual. How dare she discourage someone who was ready to donate! Or had my friend been ready? It doesn’t matter now. But at the time, the surgeon was such a ready scapegoat that I could push the uneasy question about Dr. Yale aside. 
I fumed for a week and then got over it because I figured it was early and it wouldn’t be hard to find someone else. And sure enough, two more friends quickly stepped forward to have their blood typed. It turned out they were poor matches for me. A week after my 49th birthday in January 2005, half a year after being given a diagnosis of renal failure, a friend and I were drinking coffee at a Starbucks when I wondered aloud if I would find a donor before I reached 50. I wasn’t hinting. I knew she would never offer because she was so squeamish about blood and pain. My friend, whom I met a decade before when we were both new to Washington and worked together on an advocacy project, was a little older than I; she was charming, stylish, smart — and a hypochondriac. Nor, to be honest, did I want her kidney. Anyone as anxious about health as she was would surely view donation as a white-knuckle ordeal. And the bigger the sacrifice for her, the heavier the burden of reciprocity on me. The bigger the burden on me, the more I would resent her. Then I would feel guilty over resenting her and, in turn, resent the guilt. Who could survive inside this echo chamber of reverberating emotions? Thank goodness my friend would be holding on to her kidney.

But then to my amazement, within a minute or so of my speculating when or if a donor would ever appear, she offered to do it. Later that night we talked on the phone and she rhapsodized about what a “mitzvah” it would be. Yes, her sentiments were lovely, but I felt secretly annoyed because I knew it was her habit to embark upon grandiose plans; when they fizzled, she would just shrug. I told her that giving me a kidney was out of the question — “It would be too weird,” was what I kept saying — but she persisted.
I couldn’t quite believe it when she told her family of her decision (they were graciously in favor) and then had blood tests and consulted with my transplant team. Gradually, I began to believe that she meant it, and I decided to embrace her just as you might accept an in-law, as someone who could drive you a little mad but whom you loved because they were the source of something very precious to you — in my case, not a spouse but a kidney. But then after a few months she stopped talking about it. When I finally broke the silence, she said her doctor had advised against it. More likely, I thought, she was scared. I felt sorry to have put her in this position, but I was also bitter: just when would she have gotten around to telling me? Such near-transplant experiences are not uncommon. All of the transplant candidates I spoke to, as part of my own small nonscientific sample, mentioned at least one person who promised to donate, had some tests done and then developed cold feet. Transplant teams explicitly, and properly, offer face-saving “medical alibis” to potential donors who don’t really want to go through with it, which suggests that bailing out isn’t all that rare. They might tell the person needing the transplant and the rest of the family, for example, that additional tests on the prospective donor revealed a compatibility problem or some evidence that the donor might be putting her own health at risk. What’s next, I wondered? I couldn’t imagine asking friends or colleagues to donate; it was too momentous a request. Not because the risks are great, but because the idea scares the hell out of a lot of people. Also, the recent drama with my friend was a potent reminder of just how suffocating a lifelong obligation might be. Maybe when I began to feel really ill, I would force myself to ask. But not now. The “tyranny of the gift” is an artful term coined by the medical sociologists Renée C. Fox and Judith P. 
Swazey to capture the way immense gratitude at receiving a kidney can morph into a sense of constricting obligation. In their 1992 book, “Spare Parts: Organ Replacement in American Society,” the authors write, “The giver, the receiver and their families may find themselves locked in a creditor-debtor vise that binds them one to another in a mutually fettering way.” I had read of a brother who was so overwhelmed by feelings of obligation that he could “not even stand to look at” his donor sister. And I was also aware of the lengths people went to to avoid the vise: the son who refused a kidney from an overbearing mother, telling his surgeon, “She’s devoured enough of me already”; the young man who chose to remain on dialysis rather than accept a kidney from his long-term girlfriend lest he be forced to reciprocate by marrying her. Maimonides, the 12th-century Jewish physician and philosopher, believed that anonymous giving was nobler than charity performed face to face because it protected the beneficiary from shame or a sense of indebtedness. He was onto something. I ruminated constantly about what it would mean to be related to someone “by organ.” Would my future donor assume a proprietary interest in how I lived my life, since she had made it possible? Would she make sure I was taking proper care of “our” kidney or lord her sacrifice over me? Or would I hold it over my own head, constantly questioning whether I might have said or done anything that could offend or disappoint my donor, anything that might be taken as ingratitude? How could a relationship breathe under such stifling conditions? It was exhausting to think about; I wanted no part of a debtor-creditor relationship. I didn’t want a gift, I wanted a kidney.
Naturally, I was preoccupied with the ways in which the gift might tyrannize me, but for every patient who wonders, “Do I want to accept?” there are many more prospective donors who ask, “Do I want to give?” News that a patient needs a transplant quickly leads to anxious glances among relatives, wondering who the future donor will be. “I and others had seen refusal of donation lead to ostracism within a family or donation made as a reluctant sacrifice to someone for whom there was little or no affection,” wrote Thomas E. Starzl, the pre-eminent transplant surgeon, in his memoirs. “If a prospective donor was deficient in some way, usually intellectually, the family power structure tended to focus on his or her presumed expendability.” This so troubled Starzl that he stopped performing live kidney transplants in 1972. Donors can have their own agendas, too. The academic literature on donor psychology offers many examples, like a man who sought the adulation of his community by offering a kidney to his minister, a daughter who competed with her own mother to be the rescuer of another family member and a woman who told researchers that her motive for wanting to give a kidney to a stranger was to become “Daddy’s good girl.” Then there is the “black-sheep donor,” a wayward relative who shows up to offer an organ as an act of redemption, hoping to reposition himself in the family’s good graces. For others, donation is a sullen fulfillment of familial duty, a way to avoid the shame and guilt of allowing a relative to suffer needlessly and even die. By comparison, friends are better insulated from emotional pressures; their compassion is less likely to be tinged with obligation, let alone tainted by it.
And the rare Good Samaritan donor who cold-calls a transplant center to donate to the next suitable person in the queue, not even knowing who will get his kidney, surely embodies the purest form of altruism. In the end, though, people who don’t want to donate usually manage to extract themselves. They miss appointments for screening tests or just drop out of the process. People who actually do become donors, however, usually regard it as a supremely gratifying experience: they were given a blessed opportunity to save a life, a chance that relatives of a dying cancer patient can only dream of. I’ve read of siblings jousting to give an organ to a cherished parent and of adult children who were heartbroken when doctors ruled them out on medical grounds. According to a review of published surveys on donor attitudes by Mary Amanda Dew, a psychologist at the University of Pittsburgh Medical Center, about 95 percent of donors say they would do it again. Most experience a boost in self-worth and enjoy feelings of deep purpose, while only a small minority regret having donated or report that their relationships with recipients changed for the worse. From my medical training, I was familiar with some of the ins and outs of end-stage renal disease. I had an especially morbid dread of dialysis. The playwright Neil Simon received a kidney from his longtime publicist in 2004 — “The Odd Donor Couple,” as The New York Times put it — but before that he endured 18 wretched months on dialysis, suffering cramps and vomiting spells that kept him largely confined to his house. His memory deteriorated, and he hated the time away from his writing. Shortly before his donor came forward (unsolicited, it should be noted), Simon’s doctors said he might have to start spending more time on dialysis. If that were necessary, he said, he had decided, “I didn’t want to live my life anymore.” Neither, I thought, would I. It is possible that I was overestimating how miserable I would be on dialysis.
An avalanche of psychological data shows that people are far better at handling adversity when it actually befalls them than they expect they will be. Still, I was quite sure I would flout the longstanding evidence attesting to human adaptability. On dialysis I would be disconsolate and maybe even suicidal if the wait for an organ were to stretch for years. As dispiriting, I would lose all my friends. Not that I expected them to abandon me. I would abandon them out of anger for not rescuing me. By the end of the summer of 2005, a year after the diagnosis, there was no donor in sight. I was mentally preparing myself to undergo the standard predialysis operation to create “access” to the machine. A vein and artery in my arm would be joined to create a large superficial vessel for the insertion of needles and tubing that would carry my blood to and from the machine. I resisted, but I knew that soon I wouldn’t be able to put off dialysis any longer. The “tyranny of the gift” now took on a new meaning for me. It was no longer about moral debt; it was about the very fact that an organ had to be a gift, about the tyranny of the system. I heard of people trying to persuade strangers to give them organs. They put up bulletin boards or started Web sites (GordyNeedsAKidney.org, whose opening page carried the plaintive headline, “Please Help Our Dad”). I flirted with the idea of becoming a “transplant tourist” in Turkey or the Philippines, where I could buy a kidney. Or going to China, where I would have to face the frightful knowledge that my kidney would probably come from an executed prisoner. Grim choices, but I was afraid I could die on dialysis if I didn’t do something to save myself. In October 2005, I stumbled across a Web site called MatchingDonors.com that helps link potential donors and recipients. Once a match is made, the process follows the standard path, with physicians at a transplant center determining whether to proceed. 
I was given space to describe myself and to post photos. I read a few of the requests. There were parents wanting to see their young children grow up; a new husband hoping to have children with his wife before her kidneys failed; a 70-year-old grandmother yearning to see her only granddaughter get married.

[Photo: The author nearly two years after finally receiving a kidney. Credit: Ralph Gibson]

My God, how could I possibly compete with these people? I wouldn’t leave children motherless or miss the milestones of life; were I a prospective donor, even I wouldn’t have picked me. I took a minimalist approach to my statement, hoping it would attract a no-nonsense donor who appreciated reserve. I wanted to stand out as the “applicant” who wasn’t begging; no emotional blackmail here. Of course, I would have poured out every detail of my moribund state most operatically if I were living on dialysis or near death. I thought of boosting my stock by mentioning that I was a psychiatrist at a methadone clinic, but the prospect of heroin addicts bereft of their shrink might not conjure a poignant Hippocratic tableau. In the end, I simply wrote: “Type A blood. 49 yr old female physician . . . idiopathic kidney failure. Otherwise healthy. Aug 2004 discovered chronic renal failure during routine blood test. BUN 80, Cr 7. Not yet on dialysis. Doctor predicts organ would be needed by Jan. 06.” Three days later, the Canadian called. He told me he considered becoming a donor five years ago when he heard through his church about someone who was failing on dialysis. That was the most personal thing I ever learned about him. Well into November, we were in regular contact, yet our phone calls rarely lasted more than 10 minutes.
He asked about my health, and we would talk about logistics — whether my insurance would pay for his tests, whether he could take time away from a project he was working on and so on. I ended the calls blubbering with gratitude, and he would tell me to stop. Although the Canadian seemed kind and steady, he had enormous power over me. I deliberately kept our calls brief to minimize my chances of saying something that might antagonize him. I wondered why he chose me, but I dared not ask, lest his decision was based on a misconception of who I was. Would I then be morally bound to set him straight so that he wasn’t giving a body part under false pretenses? What if he loathed conservatives? After all, he was involved in politics, and I was associated with a right-of-center think tank. This is ridiculous, I told myself; a person whose inspiration to donate is forged in church is surely above partisanship in such matters. Still, until both of us were snug in our adjoining operating rooms, I could never relax — everything was tentative, conditional and prone to collapse. I prayed the Canadian wouldn’t talk about his decision to donate (and the identity of his recipient) with his family or friends. They could look me up online and not like what they saw or think I wasn’t sick enough to warrant heroics on his part or turn him against the idea of donating altogether. Yet from what I could tell, the Canadian’s only agenda was the act itself. Had I detected any hint of ambivalence, I would have cut him loose immediately, since each false hope ate up irreplaceable time. But, it turned out, I misread him. About a week before Thanksgiving, the Canadian went dark. By then I was fatigued most of the time and fluid was pooling in my ankles. I took four antihypertensive drugs a day and had injections of a hormone that stimulated my body to make more red blood cells. Dialysis was closing in. Around Christmas, the Canadian finally called. 
The conversation went as if nothing had happened. I didn’t dare ask about his silence; instead, I forced myself to sound upbeat and touched on the few things I knew about his life: that he was volunteering on a political campaign and that his father had been ill. He swore he was still “raring to go with the transplant,” which my transplant coordinator, a young woman named Julie, had tentatively scheduled for January. I wanted to press him for a firmer promise, but I worried that if I betrayed my irritation, he might be offended: “That’s it!” I imagined him saying, just before he slammed down the phone like a twisted character in a “Seinfeld” episode. “No kidney for you!” A few days later, Julie contacted him. Straight-talking and bright-eyed, Julie spoke to the Canadian in a way I could not. “We need to know how to proceed,” she told him firmly. “There is no time to spare. Can you be here in January for the surgery?” He conceded that the campaign he was working on was too unpredictable. Julie said he seemed to feel genuinely bad about reneging, but he did not tell her to convey that disappointment to me, and I never heard from him again. I was astonished at the Canadian’s . . . what? Negligence, cowardice, rudeness? It was a sickening roller-coaster ride: hope yielding to helpless frustration, gratitude giving way to fury. How dare he reduce me to groveling and dependence? Yet I assume he intended no such thing. I think the Canadian was actually quite devoted to the idea of giving a kidney — just not necessarily now or to me. Then again, it occurred to me that one of the most brilliantly cruel games a sadist could devise would be to promise an organ with the plan of later snatching it away. The Canadian knew that I was relying on him, that I suspended my donor search when we settled on a date for the transplant.
At the very least, he owed me an apology — not so much for backing out, although by now I was frantic over that, but because he led me on for weeks. And would have continued doing so had Julie not pushed him. The truth, naturally, was that I had no right to anything of his, let alone something so absolutely and intimately his as a kidney. Who could dare to presume — or even appear to presume — that his kidney was meant for me? I was thoroughly confused about how entitled I was to hate him. Meanwhile, my kidneys were deteriorating, and I didn’t have time for more cycles of commitment, silence and rejection. Salvation came out of nowhere. In early November 2005, a few weeks before the Canadian withdrew, I received an e-mail message from a friend — a fond acquaintance really — whom I knew from the think-tank circuit. “Serious offer” was the message in the subject line. It was from Virginia Postrel, a 45-year-old author and journalist. (She has written for the business pages of The Times and for this magazine.) Known for her original mind, she is especially popular within libertarian intellectual circles. Virginia ran into a mutual friend at a meeting, who told her about me, and she sent an e-mail message within days: “If I’m compatible, I’ll be a donor. Best, Virginia” Two weeks later, she sent this: “By the way, I absolutely promise you that I will not back out.” Intuitively, she had grasped the golden rule of responsible donorship. Mercifully, Virginia was the right blood type; even better, she was the right personality type. In early March, four days before our operations, she came to D.C. from her home in Dallas. On March 4, 2006, I became the proud owner of Virginia’s right kidney. She was out of the hospital after three nights; I was home after seven; our recoveries were uneventful. I require no drugs except medication that prevents my body from rejecting the new organ. 
Though Virginia never before gave blood or even signed an organ donor card, the decision to donate, she told me, was quick and sure: “I felt intense empathy and imagined how desperate you must feel,” she said. “I liked the idea of being able to help in a straightforward way — to be able to cure a sick friend rather than just bring food or send a card.” Virginia was the perfect donor for me. For one thing, she lives far away. She also has an ingrained respect for personal privacy. She never suggested that I might owe her a thing beyond the extraordinary gratitude that decency demands. And she is bracingly pragmatic. “I have a very instrumental view of my body,” she told me, “so when you needed a part, I was happy to give it. I knew you had no family. I wouldn’t have done this for a stranger, but I would do it, I did do it, for someone I cared about even though we weren’t close.” My story, it turns out, is a triumph of altruism. Looking back, I see that my anxiety over my future donor was a neurotic luxury. I worried about finding the ideal donor, but thousands of people have no donor at all — no relative who will do it out of love or obligation, no friend out of kindness, no stranger out of humane impulse. Alas, I have no kidney to give away. Instead, I am urging wherever I can — in articles, in lectures, from assorted rooftops — that society has a moral imperative to expand the idea of “the gift.” Altruism is a beautiful virtue, but it has fallen painfully short of its goal. We must be bold and experiment with offering prospective donors other incentives for giving, not necessarily payment but material reward of some kind — perhaps something as simple as offering donors lifelong Medicare coverage. Or maybe Congress should grant waivers so that states can implement their own creative ways of giving something to donors: tax credits, tuition vouchers or a contribution to a giver’s retirement account.
In short, we should reward individuals who relinquish an organ to save a life because doing so would encourage others to do the same. Yes, splendid people like Virginia will always be moved to rescue in the face of suffering, and I did get my kidney. But unless we stop thinking of transplantable kidneys solely as gifts, we will never have enough of them.

Sally Satel is a psychiatrist and lecturer at the Yale University School of Medicine and a resident scholar at the American Enterprise Institute.

Kidney for Sale
By Sally L. Satel
Mar 5, 2009, Project Syndicate

NEW HAVEN – World Kidney Day, to be held on March 12, is part of a global health campaign meant to alert us to the impact of kidney disease. Sadly, there is little to celebrate. According to the International Society of Nephrology, kidney disease affects more than 500 million people worldwide, or 10% of the adult population. With more people developing high blood pressure and diabetes (key risks for kidney disease), the picture will only worsen. There are 1.8 million new cases of the most serious form of kidney disease – renal failure – each year. Unless patients with renal failure receive a kidney transplant or undergo dialysis – an expensive life-long procedure that cleanses the blood of toxins – death is guaranteed within a few weeks. Last year, an Australian nephrologist, Gavin Carney, held a press conference in Canberra to urge that people be allowed to sell their kidneys. “The current system isn’t working,” the Sydney Morning Herald quoted him as saying.
“We’ve tried everything to drum up support” for organ donation, but “people just don’t seem willing to give their organs away for free.” Carney wants to keep patients from purchasing kidneys on the black market and in overseas organ bazaars.

As an American recipient of a kidney who was once desperate enough to consider doing that myself (fortunately, a friend ended up donating to me), I agree wholeheartedly that we should offer well-informed individuals a reward if they are willing to save a stranger’s life. If not, we will continue to face a dual tragedy: on one side, the thousands of patients who die each year for want of a kidney; on the other side, a human-rights disaster in which corrupt brokers deceive indigent donors about the nature of surgery, cheat them out of payment, and ignore their post-surgical needs.

The World Health Organization estimates that 5% to 10% of all transplants performed annually – perhaps 63,000 in all – take place in the clinical netherworlds of China, Pakistan, Egypt, Colombia, and Eastern Europe.

Unfortunately, much of the world transplant establishment – including the WHO, the international Transplantation Society, and the World Medical Association – advocates only a partial remedy. They focus on ending organ trafficking but ignore the time-tested truth that trying to stamp out illicit markets either drives them further underground or causes corruption to reappear elsewhere.

For example, after China, India, and Pakistan began cracking down on illicit organ markets, many patients turned to the Philippines. Last spring, after the Philippines banned the sale of kidneys to foreigners, a headline in the Jerusalem Post read, “Kidney Transplant Candidates in Limbo after Philippines Closes Gates.” (Israel has one of the lowest donation rates in the world, so the government pays for transplant surgery performed outside the country.)
Similarly, patients from Qatar who traveled to Manila are “looking for alternative solutions,” according to The Peninsula.

True, more countries must develop efficient systems for posthumous donation, a very important source of organs. But even in Spain, which is famously successful at retrieving organs from the newly deceased, people die while waiting for a kidney. The truth is that trafficking will stop only when the need for organs disappears.

Opponents allege that a legal system of exchange will inevitably replicate the sins of the black market. This is utterly backward. The remedy to this corrupt and unregulated system of exchange is a regulated and transparent regime devoted to donor protection.

My colleagues and I suggest a system in which compensation is provided by a third party (government, a charity, or insurance) with public oversight. Because bidding and private buying would not be permitted, available organs would be distributed to the next in line – not just to the wealthy. Donors would be carefully screened for physical and psychological problems, as is currently done for all volunteer living kidney donors. Moreover, they would be guaranteed follow-up care for any complications.

Many people are uneasy about offering lump-sum cash payments. A solution is to provide in-kind rewards – such as a down payment on a house, a contribution to a retirement fund, or lifetime health insurance – so that the program would not be attractive to people who might otherwise rush to donate on the promise of a large sum of instant cash.

The only way to stop illicit markets is to create legal ones. Indeed, there is no better justification for testing legal modes of exchange than the very depredations of the underground market. Momentum is growing.

In the British Medical Journal, a leading British transplant surgeon called for a controlled donor compensation program for unrelated live donors.
Within the last year, the Israeli, Saudi, and Indian governments have decided to offer incentives ranging from lifelong health insurance for the donor to a cash benefit. In the United States, the American Medical Association has endorsed a draft bill that would make it easier for states to offer various non-cash incentives for donation.

Until countries create legal means of rewarding donors, the fates of Third World donors and the patients who need their organs to survive will remain morbidly entwined. What better way to mark World Kidney Day than for global health leaders to take a bold step and urge countries to experiment with donor rewards?

Sally L. Satel, a medical doctor, is a resident scholar at the American Enterprise Institute in Washington, DC. (“Kidney for Sale,” Project Syndicate: https://www.project-syndicate.org/commentary/kidney-for-sale)

Cognition 206 (2021) 104467

Principles of moral accounting: How our intuitive moral sense balances rights and wrongs

Samuel G.B. Johnson (University of Warwick, Department of Psychology; University of Bath, School of Management; University College London, Centre for the Study of Decision-Making Uncertainty) and Jaye Ahn (University of Minnesota, Department of Psychology)

Keywords: Moral judgment; Reputation; Intuitive ethics; Social cognition; Person perception

ABSTRACT

We are all saints and sinners: Some of our actions benefit others, while other actions lead to harm. How do people balance moral rights against moral wrongs when evaluating others’ actions?
Across 9 studies, we contrast the predictions of three conceptions of intuitive morality—outcome-based (utilitarian), act-based (deontologist), and person-based (virtue ethics) approaches. These experiments establish four principles: Partial offsetting (good acts can partly offset bad acts), diminishing sensitivity (the extent of the good act has minimal impact on its offsetting power), temporal asymmetry (good acts are more praiseworthy when they come after harms), and act congruency (good acts are more praiseworthy to the extent they offset a similar harm). These principles are difficult to square with utilitarian or deontological approaches, but sit well within person-based approaches to moral psychology. Inferences about personal character mediated many of these effects (Studies 1–4), explained differences across items and across individuals (Studies 5–6), and could be manipulated to produce downstream consequences on blame (Studies 7–9); however, there was some evidence for more modest roles of utilitarian and deontological processing too. These findings contribute to conversations about moral psychology and person perception, and may have policy and marketing implications.

1. Introduction

If you were to fly round-trip from NYC to LA, you would be responsible for emitting 1.3 tons of CO2 into the atmosphere. This action imposes a cost on the environment and on society. But there is an easy way to neutralize these social costs—buying carbon offsets. In their most common form, the consumer contracts with a third-party to plant trees, which absorb CO2 from the atmosphere and can thus neutralize any given amount of carbon. It turns out that planting 7 trees neutralizes approximately 1.3 tons of CO2, and at current market prices this costs about $13.
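The offsetting arithmetic above is simple enough to sketch. The per-tree absorption and price figures below are just the article's illustrative numbers (7 trees per 1.3 tons, $13 per trip), not authoritative estimates; real offset prices vary widely:

```python
# Back-of-the-envelope carbon-offset arithmetic using the figures quoted
# above (7 trees ~ 1.3 tons of CO2 ~ $13). Illustrative only.
TONS_PER_TREE = 1.3 / 7    # tons of CO2 absorbed per planted tree
PRICE_PER_TON = 13 / 1.3   # dollars per ton of CO2 offset ($10/ton)

def offset_cost(tons_co2: float) -> tuple[int, float]:
    """Return (trees needed, dollar cost) to neutralize `tons_co2`."""
    trees = round(tons_co2 / TONS_PER_TREE)
    return trees, tons_co2 * PRICE_PER_TON

# One NYC-LA round trip (1.3 tons) needs 7 trees at about $13:
print(offset_cost(1.3))  # → (7, 13.0)
```

At $10 per ton, the utilitarian case the paper describes is clear: the flight's social cost can be neutralized cheaply, which is exactly what makes the public's moral unease about offsets psychologically interesting.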
Many social and environmental scientists consider this a win–win, since this allows you to achieve whatever personal or economic benefits that motivated you to fly, while imposing zero net-cost on society and the planet. But it is less clear how ordinary people, as opposed to policy wonks, think about carbon offsets. Anecdotally, many commentators seem to believe that they are ethically problematic. An op-ed in The Guardian characterized offsets as a way to “buy yourself a clean conscience by paying someone else to undo the harm you are causing” (Monbiot, 2006). Building on this argument, a parody website called cheatneutral.com even promised to offer “cheating offsets” to neutralize marital infidelity, boasting that their service “offsets your cheating by funding someone else to be faithful and NOT cheat” (quoted in May, 2007). Even though carbon offsets appear to be a good bargain for society from the utilitarian perspective of minimizing net harm, they may run up against deep psychological resistance.

(Corresponding author: Samuel G.B. Johnson, University of Warwick, Department of Psychology; sam.g.b.johnson@warwick.ac.uk. https://doi.org/10.1016/j.cognition.2020.104467. Received 18 June 2020; revised 8 September 2020; accepted 15 September 2020. © 2020 Published by Elsevier B.V.)

The debate about carbon offsets is one example of moral accounting—how our intuitive morality balances harmful acts against beneficial acts. We are all saints as well as sinners; therefore, moral accounting is relevant to much human behavior. For example, a person might shirk off from work and make up for the shirking by working harder later on, might litter and then volunteer to pick up trash, or might discriminate against a black loan applicant and then make up for the discrimination by helping another applicant. This paper maps the principles governing moral accounting and tests the psychological mechanisms underlying these principles.

Many studies have looked at the behavioral effects of gaining moral credits or moral credentials on subsequent behavior. People morally self-license, becoming likelier to perform an immoral act after they or an ingroup member perform an earlier positive act (Kouchaki, 2011; Merritt, Effron & Monin, 2010; Sachdeva et al., 2009). For example, after choosing a qualified female job candidate, people feel licensed to endorse gender stereotypes (Monin & Miller, 2001). Analogously, “virtuous” consumer behaviors (e.g., volunteering for community service) motivate “vice” behaviors (e.g., consuming luxury products) (Khan & Dhar, 2006). Although more research has looked at licensing behavior (performing good acts then bad acts), people are also known to engage in cleansing or redemption behavior (performing bad acts then good acts) (Tangney et al., 2007; Tetlock, 2003). For example, participants who relied on a “forbidden” (racially-tainted) base rate in setting insurance premiums later expressed greater interest in volunteering for race-related causes (Tetlock et al., 2000). Intriguingly, people sometimes act as though gaining moral credit and debits can have causal effects on future random outcomes (Callan et al., 2014), particularly when uncertainty is high and control is low (Converse et al., 2012). But it is also critical to understand how others judge combinations of morally right and wrong actions. The study of praiseworthy and blameworthy acts has proceeded largely independently.
Some research has compared moral judgments about blameworthy versus praiseworthy acts, documenting both symmetries (e.g., De Freitas & Johnson, 2018; Gray & Wegner, 2009; Siegel et al., 2017; Wiltermuth et al., 2010) and asymmetries (e.g., Bostyn & Roets, 2016; Guglielmo & Malle, 2019; Klein & Epley, 2014; Knobe, 2003; Pizarro et al., 2003), while other work has studied the ethicality of morally ambiguous acts that are not clearly blameworthy or praiseworthy (e.g., Everett et al., 2018; Levine et al., 2018; Levine & Schweitzer, 2014; Rottman et al., 2014). But the majority of this literature has theorized (separately) about the mechanisms underlying judgments about morally negative acts (e.g., Alicke, 1992; Baez et al., 2017; Cushman, 2008; Cushman et al., 2006; Graham et al., 2009; Guglielmo & Malle, 2017; Haidt et al., 1993; Inbar et al., 2012; Niemi & Young, 2016; Paxton et al., 2012; Schnall et al., 2008; Tannenbaum et al., 2011; Tetlock et al., 2000; Young & Saxe, 2011) or positive acts (e.g., Critcher & Dunning, 2011; Johnson, 2020; Johnson & Park, 2020; Lin-Healy & Small, 2013; Monin et al., 2008; Newman & Cain, 2014). Many of these articles propose detailed theories of how people assign praise or blame. But existing theory does not supply a ready account of how people evaluate combinations of praise and blame—a critical question if we are to understand how moral judgments of acts and persons unfold over time. We aim to fill this theoretical vacuum.

In addition to its theoretical value, it is practically useful to understand moral accounting. Moral decisions often depend on how we expect others to perceive our actions—people are aware that their (im)moral actions send signals to third-parties and therefore attend to those third parties’ perceptions. For example, people conspicuously conserve resources: They are likelier to purchase “green” products when shopping in public rather than in private (Griskevicius et al., 2010).
Moreover, moral signaling can sometimes lead to socially suboptimal behaviors: Since donations of time signal emotional investment more than donations of money, people with an affiliation goal express greater intention to donate time rather than money, even though people believe that such donations help fewer people (Johnson & Park, 2020). Since third-party moral judgments inform our predictions about how our actions will be perceived and therefore what actions we take, understanding these third-party judgments and their moderators can help to promote socially beneficial behaviors.

1.1. Moral accounting and theories of morality

In psychology and philosophy, the two dominant approaches are variants on utilitarianism (e.g., Bentham, 1907/1789; Mill, 1998/1861; Singer, 2011) and deontology (e.g., Aquinas, 2000/1274; Kant, 2002/1796; Nagel, 1979). Utilitarianism is outcome-centered, holding that our moral duty is to maximize positive consequences and minimize negative consequences. Deontology, in contrast, is act-centered, holding that our moral duty is to act according to moral laws. Although these approaches often agree, they sometimes diverge, as in moral dilemmas that involve instrumental harm—harming someone as a means to some greater end (Foot, 1967). Much of the theoretical and empirical discussion in moral psychology has concerned when, why, and how much these two factors—the outcome of an act versus the nature of the act itself—influence our moral judgments and decisions (e.g., Baron & Spranca, 1997; Bartels & Medin, 2007; Bartels & Pizarro, 2011; Conway & Gawronski, 2013; Côté et al., 2013; Greene et al., 2008; Kahane et al., 2015; Kahane et al., 2018; Paxton et al., 2012; Shenhav & Greene, 2010; Tetlock et al., 2000). At the risk of oversimplifying a complicated debate, it seems reasonably clear that both factors matter to most people, that their relative importance shifts across contexts, and that people do not adopt either a consistently utilitarian or deontological moral theory. Yet, these approaches make quite different predictions about how moral accounting might work.

According to utilitarianism, the net benefit should drive judgments of blameworthiness: One would be morally blameworthy to the extent that one has caused more harm than good on balance and praiseworthy to the extent that one has caused more good than harm. This view is quite friendly to offsetting: Other things being equal, actions causing equal harm and benefit have zero net-harm and are equivalent to doing nothing at all. Different philosophical refinements of utilitarianism may very well give different verdicts. Whereas direct (or act) utilitarianism focuses on the immediate costs and benefits of actions, indirect utilitarianism allows agents to consider more far-flung consequences of their actions. For example, motive utilitarianism and rule utilitarianism account for the broader consequences of acting for particular reasons or in accordance with particular rules, respectively (Adams, 1976; Rawls, 1955; Singer, 1977). Since utilitarianism as understood in moral psychology is typically operationalized as direct or act utilitarianism, we stick with that operationalization here, while acknowledging that more sophisticated versions of utilitarianism may be flexible enough to accommodate many possible patterns of judgments.

According to deontology, some acts are wrong regardless of their consequences. Thus, it is wrong to perform forbidden actions as a means to some other end, even if that end itself is good. This view is much less friendly toward offsetting, which allows morally negative actions as long as they are balanced out by contravening positive outcomes. For acts that are viewed as forbidden, blame should differ little based on whether those acts are offset.

As with utilitarianism, there are many philosophical refinements to deontology, with versions differing in where moral rules come from, the role of intention versus causation, whether actions are distinguished from omissions, the scope of actions that are supererogatory (permissible but not obligatory), and their relative emphasis on rights (Nozick, 1974; Quinn, 1989; Scheffler, 1982). Also as with utilitarianism, we operationalize deontology in a simple manner consistent with prior studies: That acts are blameworthy when they violate a moral norm and such acts are wrong regardless of their consequences (Baron & Spranca, 1997).

Both of these approaches, however, have been challenged by character-based approaches, which have a very old pedigree in philosophy (e.g., the virtue ethics of Aristotle, 1999/350 BCE and Hursthouse, 1999) but have only recently received attention in cognitive science (e.g., Goodwin et al., 2014; Uhlmann et al., 2015). On this view, morality is person-centered in the sense that it serves mainly to identify others who are likely to behave in cooperative and trustworthy ways in the future. Although utilitarianism and deontology benefit from their elegance and impressive philosophical pedigree, person-centered approaches benefit from their theoretical links with evolutionary biology, particularly the ideas of reciprocity, signaling, and reputation as key to the evolution of morality (e.g., Miller, 2007; Nowak & Sigmund, 2005; Silver & Shaw, 2018; Sperber & Baumard, 2012; Trivers, 1971). The core idea is that moral judgments such as blame serve to adaptively identify who one should interact with in the future (reputation-tracking), which in turn motivates others to avoid blameworthy acts (reputation-management). Thus, when acts signal that a person has poor moral character, this triggers assignment of blame.

A number of empirical findings support character-based approaches, including the assignment of blame for harmless acts that seem to imply “wicked” desires (Inbar et al., 2012), people’s computational facility at moral character evaluations relative to other equivalent information integration tasks (Johnson, Murphy, et al., 2019), outrage over inconsequential acts that are nonetheless diagnostic of character (Tannenbaum et al., 2011), and the outsized impact in praise judgments of the costs (Johnson, 2020) and emotional investment (Johnson & Park, 2020) signaled by charitable contributions rather than their effectiveness. Indeed, character inferences may be a key controlling factor that guides moral attention to both outcomes and actions; for example, character inferences moderate the relationship between consequences and blame (Siegel et al., 2017).

What would character-based approaches predict? Simply, combinations of positive and negative acts should be blameworthy to the extent that they provide negative evidence about a person’s moral character or reputation. This provides a link between moral judgment and diagnostic or explanatory reasoning—acts are blameworthy when their best explanation implies negative underlying propensities that best explain those acts (see Johnson et al., 2020 and Lombrozo, 2016 for reviews of explanatory reasoning; see Gerstenberg et al., 2018 and Johnson et al., 2016 on the link between explanation and social cognition).
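To make the contrast between the first two theories concrete, here is a toy calculation (not a model from the paper, and not its experimental operationalization; the numeric scales and the `norm_violated` flag are hypothetical illustrations) of how act-utilitarian and deontological verdicts diverge on an offsetting case like littering and then cleaning up:

```python
# Toy contrast between act-utilitarian and deontological blame for an
# offsetting scenario. Units are arbitrary: litter = 1 unit of harm,
# an equivalent cleanup = 1 unit of benefit.

def utilitarian_blame(harm: float, benefit: float) -> float:
    """Blame tracks net harm, so a full offset erases blame entirely."""
    return max(0.0, harm - benefit)

def deontological_blame(norm_violated: bool, benefit: float) -> float:
    """Blame attaches to the forbidden act itself; the `benefit`
    argument is deliberately ignored, since offsets change nothing."""
    return 1.0 if norm_violated else 0.0

# Anna litters (harm = 1) and then picks up an equal amount of trash:
print(utilitarian_blame(1, 1))       # → 0.0 (offsetting fully excuses)
print(deontological_blame(True, 1))  # → 1.0 (offsetting is irrelevant)
```

The partial-offsetting finding the paper reports sits between these two extremes, which is part of why the authors argue that character-based inference, rather than either pure theory, best captures the data.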
Unlike utilitarianism and deontology, which provide some notion of how moral accounting would work based on first principles (notwithstanding the various refinements described above), character-based accounts of blame are inherently less theoretically constrained: They depend on auxiliary assumptions about how people evaluate moral character. To put some reins on these theories, we rely on prior research on person perception—the study of how people infer personality and character traits based on observed actions. Some of this prior work has looked at how positive and negative information is integrated into summary judgments such as liking (Anderson, 1965; Asch, 1946; Jones, 1990; Reeder & Brewer, 1979), allowing us to derive predictions about how moral accounting of blame might work on a character-based account.

1.2. Principles of moral accounting

In this article, we test four potential principles of moral accounting that might underlie how we judge combinations of rights and wrongs. These principles are motivated from person perception research, but in some cases, this past research does not uniquely determine the direction of the prediction. For this reason, our studies test both person perception and moral judgments to verify that these auxiliary hypotheses about character judgment hold for our stimulus set.

1.2.1. Principle 1. Partial offsetting: bad acts can be offset by comparable good acts, but only partially

For example, consider Betty, who litters, versus Anna, who litters and then volunteers to pick up an equivalent amount of trash. On balance, Anna has done no harm, since the world has the same amount of litter before and after this combination of actions. The partial offsetting principle makes two predictions. First, Anna should be blamed less than Betty, since Anna (but not Betty) offset the amount of harm by doing an equivalent amount of good. But second, Anna should not be perceived neutrally, but instead seen as somewhat blameworthy.

Although this principle has not been tested directly, a negativity bias has long been documented in impression formation, such that negative traits weigh more heavily on liking than do positive traits (Skowronski & Carlston, 1989), and people are more sensitive to situational factors when evaluating people who did positive rather than negative actions (Reeder & Spores, 1983). This makes good sense: Norm-violating actions are by their nature rarer and likelier to be diagnostic of underlying character. Of course, this negativity bias is also consistent with many related findings (Baumeister et al., 2001; Kahneman & Tversky, 1979), including some recent evidence that blame judgments, taken in isolation, tend to be more extreme than praise (Guglielmo & Malle, 2019). If character judgments are tightly linked with blame, as we propose, then the negativity bias implies that blame offsetting, if it occurs at all, should be partial, as negative actions receive greater weight in character evaluations than equivalently positive actions.

1.2.2. Principle 2. Diminishing sensitivity: moral judgments about offsetting are insensitive to the magnitude of the good act

Let’s now compare Anna (remember, she littered and then picked up a similar amount of trash) versus Christine, who litters but then picks up twice as much trash as she littered. On balance, Christine has now done more good than bad and the world is a better place overall for her actions. The diminishing sensitivity principle says that Christine’s greater benefit should not make her much less blameworthy than Anna; specifically, the difference in moral judgments for Anna versus Christine (offsetting the harm versus twice the harm) should be much smaller than between Anna and Betty (offsetting the harm versus not offsetting at all).

Character-based theories would predict this effect if diminishing sensitivity is also found in person perception. Indeed, such effects have been documented. For example, a single untrustworthy action requires a consistent series of many trustworthy actions before trust is restored because each successive good deed bears diminishing returns; indeed, in some situations trust may never be restored, such as if the initial untrustworthy action is accompanied by deception (Schweitzer et al., 2006). More broadly, negative impressions formed by a target person’s extreme bad deeds require many good deeds before that person’s reputation is restored, if ever (Birnbaum, 1973; Riskey & Birnbaum, 1974). Thus, while diminishing sensitivity has not been established for blame ascriptions for acts, it is known to occur in impression formation.

Although diminishing sensitivity has been found in other domains, it is not obvious whether it would apply to blame judgments. On the one hand, diminishing sensitivity is seen in many domains, such as valuation of public goods (Frederick & Fischhoff, 1998), risky choice (Kahneman & Tversky, 1979), and charitable giving (Slovic, 2007). Particularly relevant to the current theorizing, people are more sensitive to magnitudes for selfish rather than prosocial actions (Klein & Epley, 2014). For example, a person is seen as much warmer if they make a suggested $10 donation rather than donate nothing, but donating $20 instead of $10 does not buy additional perceived warmth.

But there is also reason to think we may not see it in the case described above. This is because people’s sensitivity to magnitude along a dimension is tied to how evaluable that dimension is (Hsee, 1996). For example, when evaluating dictionaries one at a time, people pay little attention to the number of words (10,000 vs. 20,000) since this attribute is hard to understand out of context, but when evaluating these dictionaries side-by-side, people rely heavily on this attribute. In some cases, the amount of benefit may not be particularly evaluable (e.g., Johnson, 2020), but in this case it clearly is: Whether the actor offsets precisely the amount of harm is a natural reference-point, and the actor who offset twice their harm would be plainly producing twice as much benefit as an actor who offset their harm precisely. Thus, it is plausible we would only see diminishing sensitivity to benefits after the agent’s net harm is neutral and might even see increasing sensitivity up to the neutral point. Indeed, the Klein and Epley (2014) study mentioned above is consistent with this: If making the suggested donation is the reference point, then people are especially sensitive to donations that bring the person up to that reference point. Given these contrasting predictions, it is important to measure character and blame judgments in the same study.

1.2.3. Principle 3. Temporal asymmetry: offsetting (a bad act followed by a good act) is more permissible than licensing (a good act followed by a bad act)

Now compare Anna (again, she littered and then picked up trash) versus Diane (who picked up trash and then littered). That is, Anna’s actions look like moral cleansing, redemption, or offsetting (Tangney et al., 2007; Tetlock, 2003) whereas Diane’s actions look more like moral self-licensing (Merritt et al., 2010). The temporal asymmetry principle says that Diane’s licensing will be judged more harshly than Anna’s offsetting, even though Anna and Diane did precisely the same set of things in different orders.

Person perception research here can motivate either prediction, which is one reason we test both person perception and moral judgments in our studies. On the one hand, classic literature points to the power of first impressions, often finding primacy effects in social judgment tasks (e.g., Anderson & Hubert, 1963). On the other hand, more recent work on hypocrisy points in the opposite direction (see Effron et al., 2018 for a review). People who act in opposition to their stated moral views tend to be judged more harshly when their avowal of a norm (a positive action) precedes its violation (a negative action) rather than the converse order (Barden et al., 2005). This is thought to occur because people are likelier to believe that a person’s character has changed for the better when the norm avowal occurs after the violation, which explains why this asymmetry is larger for in-group rather than out-group members (Barden et al., 2014). Another example of a recency effect in person perception is the end-of-life bias, in which people’s actions near the end of their lives receive far greater weight than their actions earlier in their lives when third-parties form summary judgments of their moral character (Newman et al., 2010), perhaps because the later actions are thought to be more revealing of the “true self.” Given the mix of primacy and recency effects in the literature, we test both character and blame judgments to resolve this ambiguity.

1.2.4. Principle 4. Act congruency: moral judgments about offsetting depend on the match between the good and bad acts

Finally, consider again Anna (she littered and then picked up trash in the same area where she littered) versus Emma (who littered and picked up trash in a different area) versus Francine (who littered and mowed her neighbor’s lawn). Even if these three offsetting acts are seen as equally beneficial in isolation, the act congruency principle says that people would nonetheless think that Anna is less blameworthy than Emma, who in turn is less blameworthy than Francine.

This principle is the most unique to the moral accounting framework, since it concerns the qualitative relationship between the harm and benefit: To what extent does “like offset like,” or do our minds track a universal system of moral credits and debits? To our knowledge, the person perception literature contains no direct demonstrations of this, although there is related work on moral self-licensing. First, there is evidence of licensing both within-domain (e.g., hiring a minority applicant licenses expression of prejudiced attitudes; Monin & Miller, 2001) and cross-domain (e.g., eco-friendly behaviors license cheating in an unrelated task; Mazar & Zhong, 2010). Second, the mechanisms underlying these effects seem to differ (Effron & Monin, 2010). Within-domain licensing seems to occur because people accrue “moral credits” that they then feel licensed to “spend” on subsequent transgressions (Hollander, 1958; Nisan, 1991). In contrast, cross-domain licensing seems to occur mainly because people acquire “moral credentials” that they can integrate into their self-concept and which shapes the interpretation of, and can justify, subsequent behaviors (Monin & Miller, 2001). Third, when transgressions are blatant rather than ambiguous, within-domain is weaker than cross-domain licensing, and indeed may not occur at all, because within-domain transgressions trigger the perception of hypocrisy (Effron & Monin, 2010). All this suggests that less congruent acts would be more powerful offsets than more congruent acts—the opposite of the proposed principle.

Why might we nonetheless expect positive acts to better offset more congruent negative acts? One reason is that self-licensing and moral accounting take place at different time points. Whereas moral credentials and credits in self-licensing are evaluated after an initial positive act but before the negative act, moral accounts take account of both actions simultaneously. Thus, whereas highly congruent negative actions can feel hypocritical to an actor after having done a positive action, observers who get a broader sense of the overall picture may not interpret the sequence of actions in the same way, and indeed may view more similar acts as more redemptive as they can be more readily construed as expressions of remorse. This prediction has not been tested in person perception, so it is necessary to validate this assumption empirically in the current studies.

Another way to think about the act congruency principle is by analogy to mental accounting phenomena in consumer behavior (Thaler, 1985). The essence of mental accounting is that income, expenses, assets, and debts are segregated into different mental accounts, for instance based on income source, rather than mentally consolidating income streams as economists would recommend. These behaviors result from fundamental cognitive processes surrounding categorization (Henderson & Peterson, 1992) that apply equally to categorizing income streams and moral actions. Thus, analogous to traditional mental accounting, one might theorize that moral credits belong to different “moral accounts,” such that a credit for a beneficial act can only be applied against a debit for a harmful act from the same category. This predicts the act congruency principle. Even though Francine might be thought praiseworthy for mowing her lawn in isolation, this does not help to clear the negative moral account for her littering. Francine has one moral account in the black and another in the red.

Why might a person with two neutral moral accounts be thought higher in moral character than a person with one moral account in the red and an offsetting moral account in the black? This follows directly from the same negativity bias in person perception that motivates the partial offsetting principle (Skowronski & Carlston, 1989). Even if the size of the moral credits and debits are equivalent, the debit looms larger than the credit, leading to overall negative character perception. Given the hypothesized link between character and blame, Anna (with her accounts nearly in balance) would be deemed less blameworthy than Francine (with a large account in the red and another in the black).

1.2.5. Predictions

Table 1 sets out the predictions made by utilitarian, deontolog…