MSU QALMRIQ Writing Exercise

Unformatted Attachment Preview

QALMRIQ Written Assignments

Purpose of the QALMRIQ Writing

QALMRIQs will help you comprehend the empirical papers that you read on a weekly basis. Empirical papers contain a lot of information, and it takes practice to extract the central questions that are being asked and the main findings that are being reported. QALMRIQ is an acronym. It stands for the following components of empirical papers:

Question: What are the broad and specific questions?
Alternative hypotheses: What were the alternative hypotheses?
Logic: What was the logic of the design?
Method: What was the method?
Results: What were the results?
Inferences: What inferences about the specific and broad question can be made from the results?
Question: What is the next question?

QALMRIQ is a guide to focus your reading so that you can identify these aspects of the articles you read. If you can clearly identify all of these aspects of a journal article, then you are on your way to comprehending the research you read about. The best way to ensure your comprehension is complete is to write it out.

Assignment

Writing a QALMRIQ for any research paper is simply writing short answers to each of these questions using clear and concise language. It is a condensed, short-form version of the research, and you should think of it as an efficient way to summarize the paper. Someone who has not read the paper should be able to get its gist from your QALMRIQ. Do not copy and paste any sections of the article; these summaries must be entirely in your own words to reflect your own understanding of the content of the article. To be even more specific, your task is to answer these questions for the paper that you read:

Question: What was the broad question? What was the specific question?
Alternative hypotheses: What were the hypotheses?
Logic: If hypothesis #1 were true, what was the predicted outcome? What was the predicted outcome if hypothesis #2 were true?
Method: What was the experimental design?
Results: What was the pattern of data? What was the main finding?
Inferences: What can be concluded about the hypotheses based on the data? What can be concluded about the specific and broad questions?
Question: What new questions arise from these results? What is the next step in the research? What remains unknown with regard to the original broad/specific questions?

How long is a QALMRIQ?

Long enough to answer each question in clear, brief sentences. It should be about one page (single-spaced).

Risky business: the neuroeconomics of decision making under uncertainty
Michael L Platt & Scott A Huettel
Nature Neuroscience 11, 398–403 (2008). Published online 26 March 2008; doi:10.1038/nn2062. Center for Neuroeconomic Studies, Room B243F LSRC Building, Duke University, Durham, North Carolina 27708-0999, USA. Correspondence should be addressed to M.L.P. (platt@neuro.duke.edu).

Many decisions involve uncertainty, or imperfect knowledge about how choices lead to outcomes. Colloquial notions of uncertainty, particularly when describing a decision as 'risky', often carry connotations of potential danger as well. Gambling on a long shot, whether a horse at the racetrack or a foreign oil company in a hedge fund, can have negative consequences, but the impact of uncertainty on decision making extends beyond gambling. Indeed, uncertainty in some form pervades nearly all our choices in daily life. Stepping into traffic to hail a cab, braving an ice storm to be the first at work, or dating the boss's son or daughter also offer potentially great windfalls, at the expense of surety. We continually face trade-offs between options that promise safety and others that offer an uncertain potential for jackpot or bust.
When mechanisms for dealing with uncertain outcomes fail, as in mental disorders such as problem gambling or addiction, the results can be disastrous. Thus, understanding decision making—indeed, understanding behavior itself—requires knowing how the brain responds to and uses information about uncertainty.

The economics of uncertainty
'Uncertainty' has been defined in many ways for many audiences. Here we consider it the psychological state in which a decision maker lacks knowledge about what outcome will follow from what choice. The aspect of uncertainty most commonly considered by both economists and neuroscientists is risk, which refers to situations with a known distribution of possible outcomes. Early considerations of risk were tied to a problem of great interest to seventeenth-century intellectuals; namely, how to bet wisely in games of chance. Blaise Pascal recognized that by calculating the likelihood of the different outcomes in a gamble, an informed bettor could choose the option that provided the greatest combination of value (v) and probability (p). This quantity (v × p) is now known as 'expected value'.

Yet expected value is often a poor predictor of choice. Suppose that you are a contestant on the popular television game show Deal or No Deal. There are two possible prizes remaining: a very large prize of $500,000 and a very small prize of $1. One of those rewards—you do not know which!—is in a briefcase next to you. The host of the game show offers you $100,000 for that briefcase, giving you the enviable yet difficult choice between a sure $100,000 or a 50% chance of $500,000. Selecting the briefcase would be risky, as either a desirable or undesirable outcome might occur with equal likelihood. Which do you choose? Most individuals faced with real-world analogs of this scenario choose the safe option, even though it has a lower expected value. This phenomenon, in which choosers sacrifice expected value for surety, is known as risk aversion. However, the influence of risk and reward on decision making may depend on many factors: a sure $100,000 may mean more to a pauper than to a hedge fund manager. Based on observations such as these, Daniel Bernoulli1 suggested that choice depends on the subjective value, or utility, of goods (u), which leads to models of choice based on 'expected utility' (that is, u × p). When outcomes will occur with 100% probability ("Should I select the steak dinner or the salad plate?"), people's choices may be considered to reflect their relative preferences for the different outcomes2.
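To make the arithmetic concrete, the sketch below works through the expected-value and expected-utility comparison for the Deal or No Deal choice just described. The logarithmic utility function is an illustrative assumption chosen only to show how a concave utility can produce risk aversion; it is not a form estimated in the studies reviewed here.

    import math

    # The choice described above: a sure $100,000 versus a briefcase
    # holding $500,000 or $1 with equal probability.
    sure_amount = 100_000
    gamble = [(0.5, 500_000), (0.5, 1)]

    # Expected value (Pascal): probability times value, summed.
    ev_sure = 1.0 * sure_amount
    ev_gamble = sum(p * v for p, v in gamble)
    print(ev_sure, ev_gamble)            # 100000.0 versus 250000.5

    # Expected utility (Bernoulli): replace each value with its subjective
    # utility u(v). A concave u -- here log, purely for illustration --
    # can reverse the ranking and favor the sure option.
    u = math.log
    eu_sure = 1.0 * u(sure_amount)
    eu_gamble = sum(p * u(v) for p, v in gamble)
    print(eu_sure, eu_gamble)            # about 11.5 versus 6.6: risk aversion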
Although expected utility models provide a simple and powerful theoretical framework for choice under uncertainty, they often fail to describe real-world decision making. Across a wide range of situations—from investment choices to the allocation of effort—uncertainty leads to systematic violations of expected utility models3. In the decisions made by real Deal or No Deal contestants in several countries, contestants' attitudes toward uncertainty were influenced by the history of their previous decisions4, not just the prizes available for the current decision. Moreover, many real-world decisions have a more complex form of uncertainty because the distribution of outcomes is itself unknown. For example, no one can know all of the consequences that will follow from enrolling at one university or another. When the outcomes of a decision cannot be specified, even probabilistically, the decision is said to be made under ambiguity, following concepts introduced by Knight5 and the terminology of Ellsberg6. In most circumstances, people are even more averse to ambiguity than to risk alone.

Economists and psychologists have studied how these different aspects of uncertainty influence decision making. This research indicates that people are generally uncertainty averse when making decisions about monetary gains, but uncertainty-seeking when faced with potential losses. However, when either probabilities or values get very small, these tendencies reverse. Thus, the same individual may buy lottery tickets in the hope of a large unlikely gain, but purchase insurance to protect against an unlikely loss. To account for these uncertainty-induced deviations from expected utility models, Tversky and Kahneman proposed prospect theory7,8, which posits separate functions for how people judge probabilities and how they convert objective value to subjective utility. Other theorists have developed models that account for the effects of ambiguity on choice, by treating ambiguity as a distribution of probabilities (and thus converting it to risk)9 or by modeling the psychological biases that ambiguity induces (such as attention to extreme outcomes)10,11.

An important area of ongoing research is the study of individual differences in decision making under uncertainty. To address this issue, some researchers have evaluated whether risk attitudes constitute a personality trait12–14. In part, these efforts reflect the intuition of both the public and the scientific community that some individuals are inherently risk-seeking, while others are consistently risk averse. Despite its plausibility, the identification of a risk-seeking phenotype has foundered on a number of difficult problems. First, risk taking seems to be highly domain specific, such that one might find very different attitudes toward risk taking in financial versus health versus social situations. For example, among 126 respondents, no person was consistently risk averse in all five content domains, and only four individuals were consistently risk-seeking14. Moreover, the wording of questions leads to systematic differences in apparent willingness to take risks. People may differ more across domains in how they perceive risk than in their willingness to trade increased risk for increased benefits. Despite these inconsistencies, some data suggest stable gender and cultural differences in attitudes toward uncertainty15,16. Women, for instance, are more averse to uncertainty in all domains except social decision making14. That the same individual may express different attitudes toward uncertainty under different circumstances points to a potentially important role for neuroscience, in that an understanding of mechanism may provide a powerful framework for interpreting these diverse behavioral findings.
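To illustrate the separate probability-weighting and value functions that prospect theory posits, here is a minimal sketch. The functional forms and parameter values follow commonly cited formulations of the theory and should be read as illustrative rather than as estimates drawn from the studies reviewed here.

    # Illustrative prospect-theory components: a value function that is
    # concave for gains, steeper for losses, and a weighting function
    # that overweights small probabilities.
    ALPHA, LAMBDA, GAMMA = 0.88, 2.25, 0.61

    def value(x):
        # Subjective value of a gain or loss relative to a reference point.
        return x ** ALPHA if x >= 0 else -LAMBDA * ((-x) ** ALPHA)

    def weight(p):
        # Decision weight: overweights small p, underweights larger p.
        return p ** GAMMA / (p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA)

    # A lottery-ticket-like prospect: a tiny chance of a large gain.
    print(weight(0.001))                    # ~0.014, far larger than 0.001
    # A 50/50 mixed gamble: gain $150 or lose $100 (positive expected value).
    print(weight(0.5) * value(150) + weight(0.5) * value(-100))
    # The result is negative: loss aversion makes the gamble unattractive.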
Risk sensitivity in nonhuman animals
The canonical perspective on decision making in nonhuman animals is that, like people, they are generally risk averse. Indeed, risk aversion is reported for animals as diverse as fish, birds and bumblebees17,18. However, recent studies provide a more nuanced and context-dependent picture of decision making under risk. For example, risk preferences of dark-eyed juncos—a species of small songbird—depend on physiological state19. Birds were given a choice between two trays of millet seeds: one with a fixed number of seeds and a second in which the number of seeds varied probabilistically around the same mean. When the birds were warm, they preferred the fixed option, but when they were cold, they preferred the variable option. The switch from risk aversion to risk seeking as temperature dropped makes intuitive adaptive sense, given that these birds do not maintain energy stores in fat because of weight limitations for flight. At the higher temperature, the rate of gain from the fixed option was sufficient to maintain the bird on a positive energy budget, but at the lower temperature energy expenditures were elevated, and the fixed option was no longer adequate to meet the bird's energy needs. Thus, gambling on the risky option might provide the only chance of hitting the jackpot and acquiring enough resources to survive a long, cold night. In humans, wealth effects on risk taking might reflect the operation of a similar adaptive mechanism, promoting behaviors carrying an infinitesimal, but nonzero, probability of a jackpot. However, state-dependent variables such as energy budget or wealth seem likely to influence decision making in different ways in different species, or even among individuals within the same species, depending on other contextual factors20,21.
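The energy-budget logic behind this switch can be made concrete with a small sketch. The seed counts and overnight requirements below are hypothetical numbers chosen only to illustrate why a risky option becomes attractive once the fixed option can no longer meet requirements; they are not values from the junco study.

    # Hypothetical illustration of the energy-budget argument above.
    # Two food options with the same mean intake, as in the junco study:
    fixed_option = [(1.0, 6)]                 # always 6 seeds
    variable_option = [(0.5, 0), (0.5, 12)]   # 0 or 12 seeds, same mean

    def p_meets_requirement(option, requirement):
        # Probability that intake covers an overnight energy requirement.
        return sum(p for p, seeds in option if seeds >= requirement)

    # Warm night, low requirement: the fixed option guarantees survival.
    print(p_meets_requirement(fixed_option, 5),
          p_meets_requirement(variable_option, 5))   # 1.0 versus 0.5
    # Cold night, high requirement: only the gamble offers any chance.
    print(p_meets_requirement(fixed_option, 8),
          p_meets_requirement(variable_option, 8))   # 0.0 versus 0.5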
When reward sizes are held constant, but the delay until reward is unpredictable, animals generally prefer the risky option22. This behavior may reflect the well known effects of delay on the subjective valuation of rewards, a phenomenon known as temporal discounting23. A wealth of data from studies of interval timing behavior indicates that animals represent delays linearly, with variance proportional to the mean24. This internal scaling of temporal intervals effectively skews the distribution of the subjective value of delayed rewards, thereby promoting risk-seeking behavior17. Together, these observations suggest that uncertainty about when a reward might materialize and uncertainty about how much reward might be realized are naturally related. One common explanation for the generality of temporal discounting is that delayed rewards might be viewed as risky, thus leading to preference for the sooner option in intertemporal choice tasks25. The converse might also be true26,27; for example, when an animal makes repeated decisions about a risky gamble that is resolved immediately, that gamble could also be interpreted as offering virtually certain but unpredictably delayed rewards. In a test of this idea, rhesus macaques were tested in a gambling task with different delays between choices28. Monkeys preferred the risky option when the time between trials was between 0 and 3 s, but preference for the risky option declined systematically as the time between choices was increased beyond 45 s. One explanation is that the salience of the large reward, and the expected delay until that reward could be obtained, influenced the subjective utility of the risky option. According to this argument, monkeys prefer the risky option because they focus on the large reward and ignore bad outcomes—a possibility consistent with behavioral studies in humans and rats27. Alternatively, monkeys could have a concave utility function for reward when the time between trials is short, which becomes convex when the time between trials is long. In principle, these possibilities might be distinguished with neurophysiological data.

Neuroeconomics of decision making under uncertainty
Probability and value in the brain. The extensive economic research on decision making under uncertainty leaves unanswered the question of what brain mechanisms underlie these behavioral phenomena. For example, how does the brain deal with uncertainty? Are there distinct regions that process different forms of uncertainty? What are the contributions of brain systems for reward, executive control and other processes? The complexity of human decision making poses challenges for parsing its neural mechanisms. Even seemingly simple decisions may involve a host of neural processes (Fig. 1). A powerful approach has been to vary one component of uncertainty parametrically while tracking neural changes associated with that parameter, typically with neuroimaging techniques. Such research has identified potential neural substrates for probability and utility.

If a decision maker cannot accurately learn the probabilities of potential outcomes, then decisions may be based on incomplete or erroneous information. In functional magnetic resonance imaging (fMRI) experiments29,30, subjects made a series of decisions under different degrees of uncertainty (from 60% to 100% probability that a correct decision would be rewarded). Importantly, subjects were never given explicit information about these probabilities, but learned them over time through feedback from their choices. Activation of the dorsomedial prefrontal cortex (Brodmann area 8) was significantly and negatively correlated with reward probability, an effect distinct from the activation associated with learning about probabilities. The medial prefrontal cortex has been previously implicated in other protocols in which subjects learn about uncertainty by trial and error, such as hypothesis testing31 and sequence prediction32.

Different brain regions may contribute to the selection of behavior based on estimated probability under other circumstances. In a probabilistic classification task in which decisions were based on the relative accumulation of information toward one choice or another33, activation of insular, lateral prefrontal and parietal cortices increased with increasing uncertainty, becoming maximal when there was equal evidence for each of two choices. This set of regions overlaps with those implicated in behavioral control and executive processing34–37, suggesting that information about probability may be an important input to neural control systems. Posterior parietal cortex, in particular, may be critical for many sorts of judgments about probability, value, and derivatives such as expected value because of its contributions to calculation and estimation38.

Converging evidence from primate electrophysiology and human neuroimaging has identified brain regions associated with utility. The receipt of a rewarding stimulus ('outcome' or 'experienced' utility) evokes activation of neurons in the ventral tegmental area of the midbrain, as well as in the projection targets of those neurons in the nucleus accumbens within the ventral striatum and in the ventromedial prefrontal cortex39. Computational modeling of the response properties of dopamine neurons has led to the hypothesis that they track a reward prediction error reflecting deviations from expectation40,41. Specifically, firing rate transiently increases in response to unpredicted rewards as well as to cues that predict future rewards, remains constant for fully predicted rewards and decreases transiently when an expected reward fails to occur.
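A minimal sketch of the reward-prediction-error idea just described, using a simple error-driven update in the spirit of the temporal-difference models cited above; the learning rate and reward values are illustrative assumptions.

    # Illustrative reward-prediction-error learning for a single cue.
    # delta = received reward - predicted reward; the prediction moves
    # toward the reward by a fraction alpha of delta on every trial.
    alpha = 0.2          # learning rate (illustrative)
    prediction = 0.0     # reward initially expected after the cue

    def update(reward, prediction):
        delta = reward - prediction            # prediction error
        return delta, prediction + alpha * delta

    # Early on, the reward is unpredicted, so the error is large and positive.
    delta, prediction = update(1.0, prediction)
    print(round(delta, 3))                     # 1.0
    # After many rewarded trials the reward is fully predicted...
    for _ in range(50):
        delta, prediction = update(1.0, prediction)
    print(round(delta, 3), round(prediction, 3))   # ~0.0 and ~1.0
    # ...and omitting the expected reward produces a negative error,
    # mirroring the transient dip in firing described above.
    delta, prediction = update(0.0, prediction)
    print(round(delta, 3))                     # ~ -1.0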
Similar results have been observed in human fMRI studies. For example, in a reaction-time game42,43, activation of the ventral striatum depends on the magnitude of the expected reward but is independent of the probability with which that reward is received43. A wide variety of different reward types modulate ventral striatum activation, from primary rewards such as juice44,45 to more abstract rewards such as money46,47, humor48 and attractive images49. Indeed, even information carried by unobtained rewards (a fictive error signal) modulates the ventral striatum50.

Figure 1 Brain regions implicated in decision making under uncertainty. Shown are locations of activation from selected functional magnetic resonance imaging studies of decision making under uncertainty. (a) Aversive stimuli, whether decision options that involve increased risk or punishments themselves, have frequently been shown to activate insular cortex (INS)33,52,53,58 and ventrolateral prefrontal cortex (vlPFC)61. (b) Unexpected rewards modulate activation of the striatum (STR)43,46,53,59,76, particularly its ventral aspect, as well as the medial prefrontal cortex (mPFC)43,53,61,76. (c) Executive control processes required for evaluation of uncertain choice options are supported by dorsolateral prefrontal cortex (dlPFC)52,58 and posterior parietal cortex (PPC)33,34. Each circle indicates an activation focus from a single study. All locations are shown in the left hemisphere for ease of visualization.

An important topic for future research is whether representations of value and probability share, at least in part, a common mechanism. While subjects played a gambling task, activation of the ventral striatum showed both a rapid response associated with expected value (maximal when a cue indicated a 100% chance of winning) and a sustained response associated with uncertainty (maximal when a cue indicated a 50% chance of winning)51. These results demonstrate the potential complexity of uncertainty representations in the brain, such that a single psychological state may reflect multiple overlapping mechanisms.
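The distinction between a signal that is maximal at a 100% chance of winning and one that is maximal at a 50% chance falls naturally out of the arithmetic; the sketch below uses an arbitrary reward magnitude and treats outcome variance as one simple formalization of risk.

    # For a gamble that pays x with probability p (and nothing otherwise),
    # expected value grows with p, whereas outcome variance -- one simple
    # formalization of risk -- is largest at p = 0.5.
    x = 1.0  # arbitrary reward magnitude
    for p in (0.0, 0.25, 0.5, 0.75, 1.0):
        expected_value = p * x
        variance = p * (1 - p) * x ** 2
        print(f"p={p:.2f}  EV={expected_value:.2f}  variance={variance:.4f}")
    # EV rises monotonically and peaks at p = 1.0, while variance peaks at
    # p = 0.5, mirroring the two ventral striatal responses described above.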
Uncertainty influences on neural systems mediating choice. Recent work in neuroeconomics has examined the effects of uncertainty on the neural process of decision making. These studies involve trade-offs between economic parameters, such as a choice between one outcome with higher expected value and another with lower risk. Because of the complexity of these research tasks, such studies are typically done using fMRI in human subjects, with monetary rewards.

In studies of risky choice, an intriguing and common result is increased activation in insular cortex when individuals choose higher-risk outcomes over safer outcomes. In an important early study, subjects played a 'double-or-nothing' game52. If subjects chose the safe option, passing, they would keep their current winnings. If subjects instead chose to gamble, they had a chance of doubling their total, at the risk of losing it all. Activation in the right anterior insula increased when the subjects chose to gamble, and the magnitude of insular activation was greatest in those individuals who scored highest on psychometric measures of neuroticism and harm avoidance. Under some circumstances, avoidance of risk may be maladaptive. For example, in a financial decision-making task that involves choices between safe 'bonds' and risky 'stocks'53, when insular activation is relatively high before a decision, subjects tend to make risk-averse mistakes; that is, they choose the bonds even though the stocks are an objectively superior choice. Insular activation is also robustly observed when decision-making impairments lead to increased risk54,55. Insular activation may reflect that region's putative role in representing somatic states that can be used to simulate the potential negative consequences of actions56,57, as when people reject unfair offers in an economic game at substantial cost to themselves58.

Individuals tend to avoid risky options that could result in either a potential loss or a potential gain, even when the option has a positive expected value. Most people will reject such gambles until the size of the potential gain becomes approximately twice as large as the size of the potential loss; this phenomenon is known as loss aversion. Loss aversion may reflect competition between distinct systems for losses and gains or unequal responses within a single system supporting both types of outcomes. Both gains and losses evoke activation in similar regions, including the striatum, midbrain, ventral prefrontal cortex and anterior cingulate cortex, with activation increasing with potential gain but decreasing with potential loss59. Consistent with the assumptions of economic prospect theory7, activation in these regions is more sensitive to the magnitude of loss than that of gain. Whether losses and gains are encoded by the same system, as suggested by these results, or by more than one system60,61 remains an open and important question. A potential resolution is suggested by work demonstrating that prediction errors for losses and for gains are encoded in distinct regions of the ventral striatum62.
Fewer studies have investigated the neural mechanisms recruited by uncertainty associated with ambiguity, or the lack of knowledge about outcome probabilities. In a study notable for its use of parallel neuroimaging and lesion methods63, subjects chose between a sure reward ($3) and an outcome with unknown probability ($10 if a red card was drawn from a deck with unknown numbers of red and blue cards). On such ambiguity trials, most subjects preferred the sure outcomes. On the control trials, subjects were faced with similar trials that involved only risk (a sure $3 versus a 50% chance of winning $10). Thus, these two types of trials were matched on all factors except whether uncertainty was due to ambiguity or to risk. Ambiguity, relative to risk, increased fMRI activation in the lateral orbitofrontal cortex and the amygdala, whereas risk-related activation was stronger in the striatum and precuneus. In the same behavioral task, subjects with orbitofrontal damage were much less averse to ambiguity (and to risk) than were control subjects with temporal lobe deficits. One potential interpretation of these converging results is that aversive processes mediated by lateral orbitofrontal cortex, which is also implicated in processes associated with punishment61, exert a greater influence under conditions of ambiguity.

Similar comparisons of the neural correlates of risk and ambiguity have been done by other groups64,65. An fMRI study found that subjects' preferences for ambiguity correlate with activation in lateral prefrontal cortex, whereas preferences for risk correlate with activity in parietal cortex64. These results link individual differences in economic preferences to activation of specific brain regions. However, there were no activation differences between ambiguity and risk in the ventral frontal cortex or in the amygdala. The lack of such effects may reflect the extensive training of the fMRI subjects, who were highly practiced at the experimental task and close to ambiguity- and risk-neutral in their choices. Thus, lateral prefrontal and parietal cortices may support computational demands of evaluating uncertain gambles, whereas orbitofrontal cortex and related regions may support emotional and motivational contributions to choice. None of these brain regions specifically processes ambiguity or risk; instead, each may contribute different aspects of information processing that are recruited to support decision making under different circumstances.

When faced with uncertainty, decision makers often try to gather information to improve future choices. Yet collecting information often requires forgoing more immediate rewards. This tension between seeking new information and choosing the best option, given what is already known, is called the 'explore-exploit' dilemma. In a study investigating the potential neural basis for such trade-offs66, subjects chose between four virtual slot machines, each with a different, unknown, and changing payoff structure. When subjects received large rewards from their chosen machine, there was increased activation in the ventromedial prefrontal cortex, with deviations from an expected reward level represented in the amplitude of ventral striatal activation. Most intriguingly, those trials in which subjects showed the most exploratory behavior were associated with increased activation in frontopolar cortex and the intraparietal sulcus. An important topic for future research will be identifying how these latter regions (and, presumably, others) modulate reward processing and behavioral control.
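The explore-exploit trade-off described above can be made concrete with a small simulation. The softmax choice rule, learning rule and all parameter values here are a generic illustration of the dilemma, not the specific model fitted in that study.

    import math, random

    # Generic illustration of the explore-exploit dilemma: four slot
    # machines with unknown mean payoffs, chosen via a softmax over
    # learned value estimates. Higher temperature means more exploration.
    random.seed(1)
    true_payoffs = [0.3, 0.5, 0.7, 0.4]    # hypothetical mean payoffs
    estimates = [0.0] * 4
    alpha, temperature = 0.1, 0.25

    def choose(estimates):
        weights = [math.exp(q / temperature) for q in estimates]
        total = sum(weights)
        return random.choices(range(4), [w / total for w in weights])[0]

    for _ in range(1000):
        arm = choose(estimates)
        payoff = random.gauss(true_payoffs[arm], 0.1)
        estimates[arm] += alpha * (payoff - estimates[arm])   # prediction error

    print([round(q, 2) for q in estimates])   # estimates approach the true payoffs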
Neuronal correlates of outcome uncertainty and risky decision making. Neuroimaging studies have thus implicated several brain regions in uncertainty-sensitive decision making. Neurophysiological studies in animals confronted with reward uncertainty and risky decisions have only begun to explore the computations made by neurons in these areas and others. As described above, dopamine neurons fire a phasic burst of action potentials after the delivery of an unexpected reward as well as after the presentation of cues that predict rewards. This same system may contribute to the evaluation of reward uncertainty as well. When monkeys are presented with cues that probabilistically predict rewards, dopaminergic midbrain neurons respond with a tonic increase in activity after cue presentation that reflects reward uncertainty (Fig. 2a)67. Neuronal activity peaks for cues that predict rewards with 50% likelihood and declines as rewards become more or less likely. These results thus suggest that dopamine neurons may convey information about reward uncertainty. However, neuronal activity correlated with uncertain rewards may instead reflect, or even contribute to, the subjective utility of risky options, which was not measured in that study67, or the trial-wise back-propagation of reward prediction errors68 generated by dopamine neurons during learning (but see ref. 69). Nevertheless, the observation that lesions of the nucleus accumbens, a principal target of midbrain dopamine neurons, enhance risk aversion in rats70, as well as the increased incidence of compulsive gambling in patients with Parkinson's disease taking dopamine agonists71,72, provides some functional support for dopaminergic involvement in risky decision making.

Figure 2 Neuronal correlates of risky rewards. (a) Midbrain dopamine neurons in monkeys increase firing in anticipation of probabilistically delivered juice rewards (after C.D. Fiorillo et al., 2003)69. (b) Neurons in posterior cingulate cortex preferentially signal uncertain rewards in a visual gambling task. RF, receptive field target. (c) Changes in neuronal activity following a change in the identity of the uncertain target mirrored the development of preferences for that target (b,c after A.N. McCoy and M.L. Platt, 2005)75.

Regardless of which brain system provides initial signals about the uncertainty of impending rewards, an important question for neurobiologists is how such information about risk influences preferences and how these preferences are mapped onto the actions that express decisions. Neurons in posterior cingulate cortex (CGp) may be involved in risky decision making73. Based on anatomy, CGp is well situated to translate subjective valuation signals into choice because it makes connections with brain areas implicated in processing reward, attention and action74. Moreover, this area is activated during decision making when rewards are uncertain in either amount75 or time76, and the magnitude of activation depends on the subjective appeal of proffered rewards77. Finally, neurophysiology shows that CGp neurons respond to salient visual stimuli78, after visual orienting movements78,79 and after rewards80, and that all these responses scale with reward size and predictability80. Together, these data suggest that CGp has an evaluative role in guiding behavior79,80.

In a gambling task designed to assess whether reward-related modulation of neuronal activity in CGp reflects subjective utility or the objective properties of available rewards73, monkeys were given a choice between two options on a computer monitor that were matched for their expected value. Choosing the safe option always resulted in a medium-sized squirt of juice. Choosing the risky option resulted in a 50% chance of a large squirt of juice and a 50% chance of a small squirt of juice. In this task, monkeys favored the risky option, even in a second experiment when it paid less, on average, than the safe one. Neurons in CGp closely mirrored this behavioral bias.
The studied neurons responded more strongly after risky choices (Fig. 2b), and these responses correlated with monkeys' overall preference for the risky option rather than with the option's objective value (Fig. 2c). These data are consistent with the hypothesis that CGp contributes to decision making by evaluating external events and actions with respect to subjective psychological state. One concern might be whether modulation of neuronal activity in CGp associated with choosing risky options reflects arousal, which might be elevated when making a risky choice81. Heart rate, a somatic correlate of physiological arousal, did not vary between high-risk and low-risk blocks of trials, thus indicating that elevated arousal cannot completely account for these results. However, the responses of CGp neurons to uncertain gambles may reflect reward uncertainty per se rather than subjective preference for the risky option, as these variables were not dissociated in that study. Further studies that systematically dissociate risk and preference will be needed to address these questions.

CONCLUSIONS
Given the pervasiveness of uncertainty, it is hardly surprising that research from a wide range of disciplines—psychology, neuroscience, psychiatry and finance, among many—has already addressed some key questions: How should investors deal with financial risk, and how do they actually deal with it? What brain systems estimate outcome uncertainty? What is different about those systems in people who make bad choices? These are complex but tractable questions, and scientists have made significant progress toward their solution. We now understand that uncertainty strongly biases choice, that these biases vary across individuals and that specific brain systems contribute to biased decision making under uncertainty. These findings represent real advances, the importance of which should not be minimized. Yet deep questions remain unanswered and even unaddressed. We contend that fundamental advances in the study of decision making will only arise from consilient integration of expertise and techniques across traditionally independent fields. Neuroscience data can constrain behavioral studies; individual differences in genetic biomarkers can inform economic models; developmental changes in both behavior and brain can guide studies of decision making in adults. Only through explicit interdisciplinary, multimethodological and theoretically integrative research will the current plethora of perspectives coalesce into a single descriptive, predictive theory of risk-sensitive decision making under uncertainty.

ACKNOWLEDGMENTS
The authors wish to thank B. Hayden for comments on the manuscript and D. Smith for assistance with figure construction. The Center for Neuroeconomic Studies at Duke University is supported by the Office of the Provost and by the Duke Institute for Brain Sciences. The authors are also supported by MH-070685 (S.A.H.), EY-13496 (M.L.P.) and MH-71817 (M.L.P.).

1. Bernoulli, D. Specimen theoriae novae de mensura sortis. Commentarii Academiae Scientarum Imperialis Petropolitanae 5, 175–192 (1738). 2. Samuelson, P.A. Consumption theory in terms of revealed preference. Economica 15, 243–253 (1948). 3. Camerer, C.F. Prospect theory in the wild: evidence from the field. in Choices, Values, and Frames (eds. Kahneman, D. & Tversky, A.)
288–300 (Cambridge Univ. Press, Cambridge, UK, 1981). 4. Post, T., van den Assem, M., Baltussen, G. & Thaler, R.H. Deal or no deal? Decision making under risk in a large-payoff game show. Am. Econ. Rev. (in the press). 5. Knight, F.H. Risk, Uncertainty, and Profit (Houghton Mifflin, New York, 1921). 6. Ellsberg, D. Risk, ambiguity, and the Savage axioms. Q. J. Econ. 75, 643–669 (1961). 7. Kahneman, D. & Tversky, A. Prospect theory: an analysis of decision under risk. Econometrica 47, 263–291 (1979). 8. Tversky, A. & Kahneman, D. Advances in prospect theory: cumulative representation of uncertainty. J. Risk Uncertain. 5, 297–323 (1992). 9. Camerer, C. & Weber, M. Recent developments in modeling preferences: uncertainty and ambiguity. J. Risk Uncertain. 5, 325–370 (1992). 10. Ghirardato, P., Maccheroni, F. & Marinacci, M. Differentiating ambiguity and ambiguity attitude. J. Econ. Theory 118, 133–173 (2004). 11. Tversky, A. & Fox, C.R. Weighing risk and uncertainty. Psychol. Rev. 102, 269–283 (1995). 12. MacCrimmon, K.R. & Wehrung, D.A. Taking Risks: The Management of Uncertainty (Free Press, New York, 1986). 13. Slovic, P. Assessment of risk taking behavior. Psychol. Bull. 61, 220–233 (1964). 14. Weber, E.U., Blais, A.R. & Betz, E. A domain specific risk-attitude scale: measuring risk perceptions and risk behaviors. J. Behav. Decis. Making 15, 263–290 (2002). 15. Hsee, C.K. & Weber, E.U. Cross-national differences in risk preference and lay predictions. J. Behav. Decis. Making 12, 165–179 (1999). 16. Bontempo, R.N., Bottom, W.P. & Weber, E.U. Cross-cultural differences in risk perception: a model-based approach. Risk Anal. 17, 479–488 (1997). 17. Kacelnik, A. & Bateson, M. Risky theories—the effects of variance on foraging decisions. Am. Zool. 36, 402–434 (1996). 18. Stephens, D.W. & Krebs, J.R. Foraging Theory (Princeton Univ. Press, Princeton, New Jersey, USA, 1986). 19. Caraco, T. Energy budgets, risk and foraging preferences in dark-eyed juncos (Junco hyemalis). Behav. Ecol. Sociobiol. 8, 213–217 (1981). 20. Gilby, I.C. & Wrangham, R.W. Risk-prone hunting by chimpanzees (Pan troglodytes schweinfurthii) increases during periods of high diet quality. Behav. Ecol. Sociobiol. 61, 1771–1779 (2007). 21. Kacelnik, A. Normative and descriptive models of decision making: time discounting and risk sensitivity. Ciba Found. Symp. 208, 51–67 discussion 208, 67–70 (1997). 22. Bateson, M. & Kacelnik, A. Starlings’ preferences for predictable and unpredictable delays to food. Anim. Behav. 53, 1129–1142 (1997). 23. Mazur, J.E. An adjusting procedure for studying delayed reinforcement. in The Effect of Delay and of Intervening Events on Reinforcement Value (eds. Commons, M., Mazur, J., Nevin, J. & Rachlin, H.) 55–73 (Erlbaum, Hillsdale, New Jersey, USA, 1987). 24. Gibbon, J. Scalar expectancy theory and Weber’s law in animal timing. Psychol. Rev. 84, 279–335 (1977). 25. McNamara, J.M. & Houston, A.I. The common currency for behavioral decisions. Am. Nat. 127, 358–378 (1986). 26. Kalenscher, T. Decision making: don’t risk a delay. Curr. Biol. 17, R58–R61 (2007). 27. Rachlin, H. The Science of Self-Control (Harvard Univ. Press, Cambridge, Massachusetts, USA, 2000). 28. Hayden, B.Y. & Platt, M.L. Temporal discounting predicts risk sensitivity in rhesus macaques. Curr. Biol. 17, 49–53 (2007). 29. Volz, K.G., Schubotz, R.I. & von Cramon, D.Y. Predicting events of varying probability: uncertainty investigated by fMRI. Neuroimage 19, 271–280 (2003). 
30. Volz, K.G., Schubotz, R.I. & von Cramon, D.Y. Why am I unsure? Internal and external attributions of uncertainty dissociated by fMRI. Neuroimage 21, 848–857 (2004). 31. Elliott, R. & Dolan, R.J. Activation of different anterior cingulate foci in association with hypothesis testing and response selection. Neuroimage 8, 17–29 (1998). 32. Schubotz, R.I. & von Cramon, D.Y. A blueprint for target motion: fMRI reveals perceived sequential complexity to modulate premotor cortex. Neuroimage 16, 920–935 (2002). 33. Huettel, S.A., Song, A.W. & McCarthy, G. Decisions under uncertainty: probabilistic context influences activity of prefrontal and parietal cortices. J. Neurosci. 25, 3304–3311 (2005). 34. Paulus, M.P. et al. Prefrontal, parietal, and temporal cortex networks underlie decision-making in the presence of uncertainty. Neuroimage 13, 91–100 (2001). 35. Koechlin, E., Ody, C. & Kouneiher, F. The architecture of cognitive control in the human prefrontal cortex. Science 302, 1181–1185 (2003). 36. Miller, E.K. & Cohen, J.D. An integrative theory of prefrontal cortex function. Annu. Rev. Neurosci. 24, 167–202 (2001). 37. Bunge, S.A., Hazeltine, E., Scanlon, M.D., Rosen, A.C. & Gabrieli, J.D. Dissociable contributions of prefrontal and parietal cortices to response selection. Neuroimage 17, 1562–1571 (2002). 38. Dehaene, S., Piazza, M., Pinel, P. & Cohen, L. Three parietal circuits for number processing. Cogn. Neuropsychol. 20, 487–506 (2003). 39. Schultz, W. & Dickinson, A. Neuronal coding of prediction errors. Annu. Rev. Neurosci. 23, 473–500 (2000). 40. Montague, P.R., Dayan, P. & Sejnowski, T.J. A framework for mesencephalic dopamine systems based on predictive Hebbian learning. J. Neurosci. 16, 1936–1947 (1996). 41. Schultz, W., Dayan, P. & Montague, P.R. A neural substrate of prediction and reward. Science 275, 1593–1599 (1997). 42. Knutson, B., Fong, G.W., Adams, C.M., Varner, J.L. & Hommer, D. Dissociation of reward anticipation and outcome with event-related fMRI. Neuroreport 12, 3683–3687 (2001). 43. Knutson, B., Taylor, J., Kaufman, M., Peterson, R. & Glover, G. Distributed neural representation of expected value. J. Neurosci. 25, 4806–4812 (2005). 44. Berns, G.S., McClure, S.M., Pagnoni, G. & Montague, P.R. Predictability modulates human brain response to reward. J. Neurosci. 21, 2793–2798 (2001). 45. McClure, S.M., Berns, G.S. & Montague, P.R. Temporal prediction errors in a passive learning task activate human striatum. Neuron 38, 339–346 (2003). 46. Delgado, M.R., Nystrom, L.E., Fissell, C., Noll, D.C. & Fiez, J.A. Tracking the hemodynamic responses to reward and punishment in the striatum. J. Neurophysiol. 84, 3072–3077 (2000). 47. Breiter, H.C., Aharon, I., Kahneman, D., Dale, A. & Shizgal, P. Functional imaging of neural responses to expectancy and experience of monetary gains and losses. Neuron 30, 619–639 (2001). 48. Azim, E., Mobbs, D., Jo, B., Menon, V. & Reiss, A.L. Sex differences in brain activation elicited by humor. Proc. Natl. Acad. Sci. USA 102, 16496–16501 (2005). 49. Aharon, I. et al. Beautiful faces have variable reward value: fMRI and behavioral evidence. Neuron 32, 537–551 (2001). 50. Lohrenz, T., McCabe, K., Camerer, C.F. & Montague, P.R. Neural signature of fictive learning signals in a sequential investment task. Proc. Natl. Acad. Sci. USA 104, 9493–9498 (2007). 51. Preuschoff, K., Bossaerts, P. & Quartz, S.R.
Neural differentiation of expected reward and risk in human subcortical structures. Neuron 51, 381–390 (2006). 52. Paulus, M.P., Rogalsky, C., Simmons, A., Feinstein, J.S. & Stein, M.B. Increased activation in the right insula during risk-taking decision making is related to harm avoidance and neuroticism. Neuroimage 19, 1439–1448 (2003). 53. Kuhnen, C.M. & Knutson, B. The neural basis of financial risk taking. Neuron 47, 763–770 (2005). 54. Paulus, M.P., Lovero, K.L., Wittmann, M. & Leland, D.S. Reduced behavioral and neural activation in stimulant users to different error rates during decision making. Biol. Psychiatry, published online 23 October 2007 (doi:10.1016/j.biopsych.2007.09.007). 55. Venkatraman, V., Chuah, Y.M., Huettel, S.A. & Chee, M.W. Sleep deprivation elevates expectation of gains and attenuates response to losses following risky decisions. Sleep 30, 603–609 (2007). 56. Damasio, A.R. The somatic marker hypothesis and the possible functions of the prefrontal cortex. Phil. Trans. R. Soc. Lond. B 351, 1413–1420 (1996). 57. Craig, A.D. How do you feel? Interoception: the sense of the physiological condition of the body. Nat. Rev. Neurosci. 3, 655–666 (2002). 58. Sanfey, A.G., Rilling, J.K., Aronson, J.A., Nystrom, L.E. & Cohen, J.D. The neural basis of economic decision-making in the Ultimatum Game. Science 300, 1755–1758 (2003). 59. Tom, S.M., Fox, C.R., Trepel, C. & Poldrack, R.A. The neural basis of loss aversion in decision-making under risk. Science 315, 515–518 (2007). 60. Kringelbach, M.L. The human orbitofrontal cortex: linking reward to hedonic experience. Nat. Rev. Neurosci. 6, 691–702 (2005). 61. O'Doherty, J., Kringelbach, M.L., Rolls, E.T., Hornak, J. & Andrews, C. Abstract reward and punishment representations in the human orbitofrontal cortex. Nat. Neurosci. 4, 95–102 (2001). 62. Seymour, B., Daw, N., Dayan, P., Singer, T. & Dolan, R. Differential encoding of losses and gains in the human striatum. J. Neurosci. 27, 4826–4831 (2007). 63. Hsu, M., Bhatt, M., Adolphs, R., Tranel, D. & Camerer, C.F. Neural systems responding to degrees of uncertainty in human decision-making. Science 310, 1680–1683 (2005). 64. Huettel, S.A., Stowe, C.J., Gordon, E.M., Warner, B.T. & Platt, M.L. Neural signatures of economic preferences for risk and ambiguity. Neuron 49, 765–775 (2006). 65. Rustichini, A., Dickhaut, J., Ghirardato, P., Smith, K. & Pardo, J.V. A brain imaging study of the choice procedure. Games Econ. Behav. 52, 257–282 (2005). 66. Daw, N.D., O'Doherty, J.P., Dayan, P., Seymour, B. & Dolan, R.J. Cortical substrates for exploratory decisions in humans. Nature 441, 876–879 (2006). 67. Fiorillo, C.D., Tobler, P.N. & Schultz, W. Discrete coding of reward probability and uncertainty by dopamine neurons. Science 299, 1898–1902 (2003). 68. Niv, Y., Duff, M.O. & Dayan, P. Dopamine, uncertainty and TD learning. Behav. Brain Funct. 1, 6 (2005). 69. Fiorillo, C.D., Tobler, P.N. & Schultz, W. Evidence that the delay-period activity of dopamine neurons corresponds to reward uncertainty rather than backpropagating TD errors. Behav. Brain Funct. 1, 7 (2005). 70. Cardinal, R.N. & Howes, N.J. Effects of lesions of the nucleus accumbens core on choice between small certain rewards and large uncertain rewards in rats. BMC Neurosci. 6, 37 (2005). 71. Dodd, M.L. et al. Pathological gambling caused by drugs used to treat Parkinson disease. Arch. Neurol. 62, 1377–1381 (2005). 72. Driver-Dunckley, E., Samanta, J. & Stacy, M.
Pathological gambling associated with dopamine agonist therapy in Parkinson's disease. Neurology 61, 422–423 (2003). 73. McCoy, A.N. & Platt, M.L. Risk-sensitive neurons in macaque posterior cingulate cortex. Nat. Neurosci. 8, 1220–1227 (2005). 74. Vogt, B.A., Finch, D.M. & Olson, C.R. Functional heterogeneity in cingulate cortex: the anterior executive and posterior evaluative regions. Cereb. Cortex 2, 435–443 (1992). 75. Smith, K., Dickhaut, J., McCabe, K. & Pardo, J.V. Neuronal substrates for choice under ambiguity, risk, gains, and losses. Manage. Sci. 48, 711–718 (2002). 76. Kable, J.W. & Glimcher, P.W. The neural correlates of subjective value during intertemporal choice. Nat. Neurosci. 10, 1625–1633 (2007). 77. Small, D.M., Zatorre, R.J., Dagher, A., Evans, A.C. & Jones-Gotman, M. Changes in brain activity related to eating chocolate: from pleasure to aversion. Brain 124, 1720–1733 (2001). 78. Dean, H.L., Crowley, J.C. & Platt, M.L. Visual and saccade-related activity in macaque posterior cingulate cortex. J. Neurophysiol. 92, 3056–3068 (2004). 79. Olson, C.R., Musil, S.Y. & Goldberg, M.E. Single neurons in posterior cingulate cortex of behaving macaque: eye movement signals. J. Neurophysiol. 76, 3285–3300 (1996). 80. McCoy, A.N., Crowley, J.C., Haghighian, G., Dean, H.L. & Platt, M.L. Saccade reward signals in posterior cingulate cortex. Neuron 40, 1031–1040 (2003). 81. Meyer, G. et al. Casino gambling increases heart rate and salivary cortisol in regular gamblers. Biol. Psychiatry 48, 948–953 (2000).