Assistant Professor, Psychology
Faculty Affiliate, Institute for Human-Centered Artificial Intelligence (HAI)
Member, Wu Tsai Neurosciences Institute
Assistant Professor of Cognitive Psychology, Stanford University (2018 - Present)
Postdoctoral Associate, Massachusetts Institute of Technology (2014 - 2018)
Postdoctoral Fellow, Massachusetts Institute of Technology (2013 - 2014)
Symbolic Systems Program
PhD, University College London, Cognitive Science (2013)
MSc, University College London, Cognitive and Decision Sciences (2008)
Vordiplom, Humboldt University Berlin, Psychology (2007)
- Statistical Methods for Behavioral and Social Sciences
COMM 352, PSYCH 252 (Win)
Independent Studies (5)
- Graduate Research
PSYCH 275 (Aut, Win, Spr, Sum)
- Independent Study
SYMSYS 196 (Aut, Win, Spr)
- Practicum in Teaching
PSYCH 281 (Aut, Win, Spr)
- Reading and Special Work
PSYCH 194 (Aut, Win, Spr, Sum)
- Special Laboratory Projects
PSYCH 195 (Aut, Win, Spr, Sum)
Prior Year Courses
- Advanced Research
PSYCH 197 (Aut)
- Senior Honors Research
PSYCH 198 (Win, Spr)
- Statistical Methods for Behavioral and Social Sciences
PSYCH 252 (Win)
- Statistical Methods for Behavioral and Social Sciences
PSYCH 252 (Win)
- What makes a good explanation? Psychological and philosophical perspectives
PHIL 350, PSYCH 293 (Aut)
- Research Methods in Cognition & Development
PSYCH 187 (Spr)
- Statistical Methods for Behavioral and Social Sciences
PSYCH 252 (Win)
Doctoral Dissertation Reader (AC)
Elyse Chase, Effie Li, Marianna Zhang
Postdoctoral Faculty Sponsor
Philipp Fraenken, Lara Kirfel
Doctoral Dissertation Advisor (AC)
Ari Beller, David Rose, Sarah Wu
Postdoctoral Research Mentor
Inference From Explanation
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL
What do we communicate with causal explanations? Upon being told "E because C," a person might learn that C and E both occurred, and perhaps that there is a causal relationship between C and E. In fact, causal explanations systematically disclose much more than this basic information. Here, we offer a communication-theoretic account of explanation that makes specific predictions about the kinds of inferences people draw from others' explanations. We test these predictions in a case study involving the role of norms and causal structure. In Experiment 1, we demonstrate that people infer the normality of a cause from an explanation when they know the underlying causal structure. In Experiment 2, we show that people infer the causal structure from an explanation if they know the normality of the cited cause. We find these patterns in scenarios that manipulate both the statistical and the prescriptive normality of events. Finally, we consider how the communicative function of explanations, as highlighted in this series of experiments, may help to elucidate the distinctive roles that normality and causal structure play in causal judgment, paving the way toward a more comprehensive account of causal explanation.
View details for DOI 10.1037/xge0001151
View details for Web of Science ID 000733088000001
View details for PubMedID 34928680
Moral dynamics: Grounding moral judgment in intuitive physics and intuitive psychology.
COGNITION
2021; 217: 104890
When holding others morally responsible, we care about what they did, and what they thought. Traditionally, research in moral psychology has relied on vignette studies, in which a protagonist's actions and thoughts are explicitly communicated. While this research has revealed what variables are important for moral judgment, such as actions and intentions, it is limited in providing a more detailed understanding of exactly how these variables affect moral judgment. Using dynamic visual stimuli that allow for a more fine-grained experimental control, recent studies have proposed a direct mapping from visual features to moral judgments. We embrace the use of visual stimuli in moral psychology, but question the plausibility of a feature-based theory of moral judgment. We propose that the connection from visual features to moral judgments is mediated by an inference about what the observed action reveals about the agent's mental states, and what causal role the agent's action played in bringing about the outcome. We present a computational model that formalizes moral judgments of agents in visual scenes as computations over an intuitive theory of physics combined with an intuitive theory of mind. We test the model's quantitative predictions in three experiments across a wide variety of dynamic interactions.
View details for DOI 10.1016/j.cognition.2021.104890
View details for PubMedID 34487974
A counterfactual simulation model of causation by omission.
COGNITION
2021; 216: 104842
When do people say that an event that did not happen was a cause? We extend the counterfactual simulation model (CSM) of causal judgment (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2021) and test it in a series of three experiments that look at people's causal judgments about omissions in dynamic physical interactions. The problem of omissive causation highlights a series of questions that need to be answered in order to give an adequate causal explanation of why something happened: what are the relevant variables, what are their possible values, how are putative causal relationships evaluated, and how is the causal responsibility for an outcome attributed to multiple causes? The CSM predicts that people make causal judgments about omissions in physical interactions by using their intuitive understanding of physics to mentally simulate what would have happened in relevant counterfactual situations. Prior work has argued that normative expectations affect judgments of omissive causation. Here we suggest a concrete mechanism of how this happens: expectations affect what counterfactuals people consider, and the more certain people are that the counterfactual outcome would have been different from what actually happened, the more causal they judge the omission to be. Our experiments show that both the structure of the physical situation as well as expectations about what will happen affect people's judgments.
View details for DOI 10.1016/j.cognition.2021.104842
View details for PubMedID 34303272
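The central quantitative claim of this extension of the CSM, that an omission is judged more causal the more certain one is that the outcome would have differed had the agent acted, can be illustrated with a toy Monte Carlo sketch (the one-dimensional "physics," gate size, and noise levels below are invented for illustration; this is not the paper's implementation):

```python
import random

def p_counterfactual_difference(noise_sd, n_sims=10_000, seed=0):
    """Toy CSM-style estimate for an omission: ball B actually missed the
    gate because agent A did not deflect it. Mentally simulate the
    counterfactual in which A *had* acted, under noisy physics, and return
    the probability that the outcome would have differed (B goes through)."""
    rng = random.Random(seed)
    gate_half_width = 1.0  # gate spans [-1, 1]; values are made up
    hits = 0
    for _ in range(n_sims):
        # counterfactual endpoint: aimed at the gate's center, plus noise
        endpoint = rng.gauss(0.0, noise_sd)
        hits += abs(endpoint) < gate_half_width
    return hits / n_sims

# On this account, the causal judgment of the omission tracks this
# probability: the more certain the counterfactual simulation, the
# stronger the judgment.
print(p_counterfactual_difference(noise_sd=0.5))  # high certainty
print(p_counterfactual_difference(noise_sd=3.0))  # much lower
```

Expectations enter by shaping which counterfactuals get simulated in the first place, a step this sketch leaves out.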
Predicting responsibility judgments from dispositional inferences and causal attributions.
COGNITIVE PSYCHOLOGY
2021; 129: 101412
The question of how people hold others responsible has motivated decades of theorizing and empirical work. In this paper, we develop and test a computational model that bridges the gap between broad but qualitative framework theories, and quantitative but narrow models. In our model, responsibility judgments are the result of two cognitive processes: a dispositional inference about a person's character from their action, and a causal attribution about the person's role in bringing about the outcome. We test the model in a group setting in which political committee members vote on whether or not a policy should be passed. We assessed participants' dispositional inferences and causal attributions by asking how surprising and important a committee member's vote was. Participants' answers to these questions in Experiment 1 accurately predicted responsibility judgments in Experiment 2. In Experiments 3 and 4, we show that the model also predicts moral responsibility judgments, and that importance matters more for responsibility, while surprise matters more for judgments of wrongfulness.
View details for DOI 10.1016/j.cogpsych.2021.101412
View details for PubMedID 34303092
A counterfactual simulation model of causal judgments for physical events.
PSYCHOLOGICAL REVIEW
How do people make causal judgments about physical events? We introduce the counterfactual simulation model (CSM), which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations. The CSM postulates different aspects of causation that capture the extent to which a cause made a difference to whether and how the outcome occurred, and whether the cause was sufficient and robust. We test the CSM in several experiments in which participants make causal judgments about dynamic collision events. A preliminary study establishes a very close quantitative mapping between causal and counterfactual judgments. Experiment 1 demonstrates that counterfactuals are necessary for explaining causal judgments. Participants' judgments differed dramatically between pairs of situations in which what actually happened was identical, but where what would have happened differed. Experiment 2 features multiple candidate causes and shows that participants' judgments are sensitive to different aspects of causation. The CSM provides a better fit to participants' judgments than a heuristic model which uses features based on what actually happened. We discuss how the CSM can be used to model the semantics of different causal verbs, how it captures related concepts such as physical support, and how its predictions extend beyond the physical domain.
View details for DOI 10.1037/rev0000281
View details for PubMedID 34096754
The Trajectory of Counterfactual Simulation in Development
DEVELOPMENTAL PSYCHOLOGY
2021; 57 (2): 253–68
Young children often struggle to answer the question "what would have happened?" particularly in cases where the adult-like "correct" answer has the same outcome as the event that actually occurred. Previous work has assumed that children fail because they cannot engage in accurate counterfactual simulations. Children have trouble considering what to change and what to keep fixed when comparing counterfactual alternatives to reality. However, most developmental studies on counterfactual reasoning have relied on binary yes/no responses to counterfactual questions about complex narratives and so have only been able to document when these failures occur but not why and how. Here, we investigate counterfactual reasoning in a domain in which specific counterfactual possibilities are very concrete: simple collision interactions. In Experiment 1, we show that 5- to 10-year-old children (recruited from schools and museums in Connecticut) succeed in making predictions but struggle to answer binary counterfactual questions. In Experiment 2, we use a multiple-choice method to allow children to select a specific counterfactual possibility. We find evidence that 4- to 6-year-old children (recruited online from across the United States) do conduct counterfactual simulations, but the counterfactual possibilities younger children consider differ from adult-like reasoning in systematic ways. Experiment 3 provides further evidence that young children engage in simulation rather than using a simpler visual matching strategy. Together, these experiments show that the developmental changes in counterfactual reasoning are not simply a matter of whether children engage in counterfactual simulation but also how they do so.
View details for DOI 10.1037/dev0001140
View details for Web of Science ID 000618090100010
View details for PubMedID 33539131
A Causal Feeling: How Kinesthetic Haptics Affects Causal Perception
IEEE. 2021: 347
View details for DOI 10.1109/WHC49131.2021.9517133
View details for Web of Science ID 000707066600035
Expectations Affect Physical Causation Judgments
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL
2020; 149 (3): 599–607
When several causes contributed to an outcome, people often single out one as "the" cause. What explains this selection? Previous work has argued that people select abnormal events as causes, though recent work has shown that sometimes normal events are preferred over abnormal ones. Existing studies have relied on vignettes that commonly feature agents committing immoral acts. An important challenge to the thesis that norms permeate causal reasoning is that people's responses may merely reflect pragmatic or social reasoning rather than arising from causal cognition per se. We tested this hypothesis by asking whether the previously observed patterns of causal selection emerge in tasks that recruit participants' causal reasoning about physical systems. Strikingly, we found that the same patterns observed in vignette studies with intentional agents arise in visual animations of physical interactions. Our results demonstrate how deeply normative expectations affect causal cognition.
View details for DOI 10.1037/xge0000670
View details for Web of Science ID 000512302600015
View details for PubMedID 31512904
Moral Values Reveal the Causality Implicit in Verb Meaning.
COGNITIVE SCIENCE
2020; 44 (6): e12838
Prior work has found that moral values that build and bind groups (that is, the binding values of ingroup loyalty, respect for authority, and preservation of purity) are linked to blaming people who have been harmed. The present research investigated whether people's endorsement of binding values predicts their assignment of the causal locus of harmful events to the victims of those events. We used an implicit causality task from psycholinguistics in which participants read a sentence of the form "SUBJECT verbed OBJECT because…", where male and female proper names occupy the SUBJECT and OBJECT positions. The participants were asked to predict the pronoun that follows "because" (the referent to the subject or the object), which indicates their intuition about the likely cause of the event. We also collected explicit judgments of causal contributions and measured participants' moral values to investigate the relationship between moral values and the causal interpretation of events. Using two verb sets and two independent replications (N = 459, 249, 788), we found that greater endorsement of binding values was associated with a higher likelihood of selecting the object as the cause of harmful events in the implicit causality task, a result consistent with, and supportive of, previous moral psychological work on victim blaming. Endorsement of binding values also predicted explicit causal attributions to victims. Overall, these findings indicate that moral values that support the group rather than the individual reliably predict that people shift the causal locus of harmful events to those affected by the harms.
View details for DOI 10.1111/cogs.12838
View details for PubMedID 32445245
Causal Responsibility and Robust Causation.
FRONTIERS IN PSYCHOLOGY
2020; 11: 1069
How do people judge the degree of causal responsibility that an agent has for the outcomes of her actions? We show that a relatively unexplored factor - the robustness (or stability) of the causal chain linking the agent's action and the outcome - influences judgments of causal responsibility of the agent. In three experiments, we vary robustness by manipulating the number of background circumstances under which the action causes the effect, and find that causal responsibility judgments increase with robustness. In the first experiment, the robustness manipulation also raises the probability of the effect given the action. Experiments 2 and 3 control for probability-raising, and show that robustness still affects judgments of causal responsibility. In particular, Experiment 3 introduces an Ellsberg-type scenario to manipulate robustness, while keeping the conditional probability and the skill deployed in the action fixed. Experiment 4 replicates the results of Experiment 3 while contrasting judgments of causal strength with judgments of causal responsibility. The results show that in all cases, the perceived degree of responsibility (but not of causal strength) increases with the robustness of the action-outcome causal chain.
View details for DOI 10.3389/fpsyg.2020.01069
View details for PubMedID 32536893
Quantitative causal selection patterns in token causation.
PLOS ONE
2019; 14 (8): e0219704
When many events contributed to an outcome, people consistently judge some more causal than others, based in part on the prior probabilities of those events. For instance, when a tree bursts into flames, people judge the lightning strike more of a cause than the presence of oxygen in the air, in part because oxygen is so common and lightning strikes are so rare. These effects, which play a major role in several prominent theories of token causation, have largely been studied through qualitative manipulations of the prior probabilities. Yet, there is good reason to think that people's causal judgments are on a continuum, and relatively little is known about how these judgments vary quantitatively as the prior probabilities change. In this paper, we measure people's causal judgments across parametric manipulations of the prior probabilities of antecedent events. Our experiments replicate previous qualitative findings and also reveal several novel patterns that are not well described by existing theories.
View details for DOI 10.1371/journal.pone.0219704
View details for PubMedID 31369584
Time in Causal Structure Learning
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-LEARNING MEMORY AND COGNITION
2018; 44 (12): 1880–1910
A large body of research has explored how the time between two events affects judgments of causal strength between them. In this article, we extend this work in 4 experiments that explore the role of temporal information in causal structure induction with multiple variables. We distinguish two qualitatively different types of information: the order in which events occur, and the temporal intervals between those events. We focus on one-shot learning in Experiment 1. In Experiment 2, we explore how people integrate evidence from multiple observations of the same causal device. Participants' judgments are well predicted by a Bayesian model that rules out causal structures that are inconsistent with the observed temporal order, and favors structures that imply similar intervals between causally connected components. In Experiments 3 and 4, we look more closely at participants' sensitivity to exact event timings. Participants see three events that always occur in the same order, but the variability and correlation between the timings of the events are more consistent with either a chain or a fork structure. We show, for the first time, that even when order cues do not differentiate between structures, people can still make accurate causal structure judgments on the basis of interval variability alone.
View details for PubMedID 29745682
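The interval-variability cue from Experiments 3 and 4 can be made concrete with a small simulation (illustrative only; the Gaussian delays and their parameters are assumptions of mine, not the paper's stimuli). In a chain A→B→C the B-to-C interval is generated directly, so it stays stable; in a fork A→B, A→C the B-to-C gap inherits the noise of two independent delays:

```python
import random
import statistics

def bc_interval_sd(structure, n=5000, seed=1):
    """Simulate event times for three events A, B, C and return the
    standard deviation of the B-to-C interval. All delays are noisy
    (Gaussian), and event order is the same under both structures."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n):
        t_a = 0.0
        t_b = t_a + rng.gauss(1.0, 0.3)   # A -> B delay
        if structure == "chain":          # A -> B -> C
            t_c = t_b + rng.gauss(1.0, 0.3)
        elif structure == "fork":         # A -> B and A -> C
            t_c = t_a + rng.gauss(2.0, 0.3)
        gaps.append(t_c - t_b)
    return statistics.stdev(gaps)

# Same expected event order either way, but the chain implies a much more
# regular B-to-C interval -- the cue participants could exploit:
print(bc_interval_sd("chain"))  # ~0.30
print(bc_interval_sd("fork"))   # ~0.42, i.e. sqrt(0.3**2 + 0.3**2)
```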
What's fair? How children assign reward to members of teams with differing causal structures
COGNITION
2018; 177: 234–48
How do children reward individual members of a team that has just won or lost a game? We know that from pre-school age, children consider agents' performance when allocating reward. Here we assess whether children can go further and appreciate performance in context: The same pattern of performance can contribute to a team outcome in different ways, depending on the underlying rule framework. Two experiments, with three age groups (4/5-year-olds, 6/7-year-olds, and adults), varied performance of team members, with the same performance patterns considered under three different game rules for winning or losing. These three rules created distinct underlying causal structures (additive, conjunctive, disjunctive), for how individual performance affected the overall team outcome. Even the youngest children differentiated between different game rules in their reward allocations. Rather than only rewarding individual performance, or whether the team won/lost, children were sensitive to the team structure and how players' performance contributed to the win/loss under each of the three game rules. Not only do young children consider it fair to allocate resources based on merit, but they are also sensitive to the causal structure of the situation which dictates how individual contributions combine to determine the team outcome.
View details for PubMedID 29723779
Lucky or clever? From expectations to responsibility judgments
COGNITION
2018; 177: 122–41
How do people hold others responsible for the consequences of their actions? We propose a computational model that attributes responsibility as a function of what the observed action reveals about the person, and the causal role that the person's action played in bringing about the outcome. The model first infers what type of person someone is from having observed their action. It then compares a prior expectation of how a person would behave with a posterior expectation after having observed the person's action. The model predicts that a person is blamed for negative outcomes to the extent that the posterior expectation is lower than the prior, and credited for positive outcomes if the posterior is greater than the prior. We model the causal role of a person's action by using a counterfactual model that considers how close the action was to having been pivotal for the outcome. The model captures participants' responsibility judgments to a high degree of quantitative accuracy across three experiments that cover a range of different situations. It also solves an existing puzzle in the literature on the relationship between action expectations and responsibility judgments. Whether an unexpected action yields more or less credit depends on whether the action was diagnostic for good or bad future performance.
View details for PubMedID 29677593
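A cartoon of the model's two components, the dispositional update (prior versus posterior expectation about the person) and the counterfactual pivotality of the action, might look as follows (the linear combination and the equal weighting are invented here for illustration; the paper's actual model is more detailed and fit quantitatively to data):

```python
def blame(prior_expectation, posterior_expectation, pivotality, weight=0.5):
    """Toy sketch: blame for a negative outcome grows with how much the
    observed action lowered our expectation of the person (posterior below
    prior) and with how close the action was to being pivotal for the
    outcome. All inputs lie in [0, 1]; `weight` mixing the two components
    is an invented parameter."""
    expectation_drop = max(0.0, prior_expectation - posterior_expectation)
    return weight * expectation_drop + (1 - weight) * pivotality

# An unexpectedly bad action by a highly regarded person (large expectation
# drop) that was also pivotal draws the most blame; a mildly disappointing,
# barely pivotal action draws little.
print(blame(prior_expectation=0.9, posterior_expectation=0.2, pivotality=1.0))
print(blame(prior_expectation=0.3, posterior_expectation=0.2, pivotality=0.2))
```

The credit case is symmetric: a posterior expectation above the prior yields credit for positive outcomes.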
Eye-Tracking Causality
PSYCHOLOGICAL SCIENCE
2017; 28 (12): 1731–1744
How do people make causal judgments? What role, if any, does counterfactual simulation play? Counterfactual theories of causal judgments predict that people compare what actually happened with what would have happened if the candidate cause had been absent. Process theories predict that people focus only on what actually happened, to assess the mechanism linking candidate cause and outcome. We tracked participants' eye movements while they judged whether one billiard ball caused another one to go through a gate or prevented it from going through. Both participants' looking patterns and their judgments demonstrated that counterfactual simulation played a critical role. Participants simulated where the target ball would have gone if the candidate cause had been removed from the scene. The more certain participants were that the outcome would have been different, the stronger the causal judgments. These results provide the first direct evidence for spontaneous counterfactual simulation in an important domain of high-level cognition.
View details for DOI 10.1177/0956797617713053
View details for PubMedID 29039251
Plans, Habits, and Theory of Mind.
PLOS ONE
2016; 11 (9): e0162246
Human success and even survival depend on our ability to predict what others will do by guessing what they are thinking. If I accelerate, will he yield? If I propose, will she accept? If I confess, will they forgive? Psychologists call this capacity "theory of mind." According to current theories, we solve this problem by assuming that others are rational actors. That is, we assume that others design and execute efficient plans to achieve their goals, given their knowledge. But if this view is correct, then our theory of mind is startlingly incomplete. Human action is not always a product of rational planning, and we would be mistaken to always interpret others' behaviors as such. A wealth of evidence indicates that we often act habitually, a form of behavioral control that depends not on rational planning but on a history of reinforcement. We aim to test whether the human theory of mind includes a theory of habitual action and to assess when and how it is deployed. In a series of studies, we show that human theory of mind is sensitive to factors influencing the balance between habitual and planned behavior.
View details for DOI 10.1371/journal.pone.0162246
View details for PubMedID 27584041
View details for PubMedCentralID PMC5008760
Causal Conceptions in Social Explanation and Moral Evaluation: A Historical Tour
PERSPECTIVES ON PSYCHOLOGICAL SCIENCE
2015; 10 (6): 790–812
Understanding the causes of human behavior is essential for advancing one's interests and for coordinating social relations. The scientific study of how people arrive at such understandings or explanations has unfolded in four distinguishable epochs in psychology, each characterized by a different metaphor that researchers have used to represent how people think as they attribute causality and blame to other individuals. The first epoch was guided by an "intuitive scientist" metaphor, which emphasized whether observers perceived behavior to be caused by the unique tendencies of the actor or by common reactions to the requirements of the situation. This metaphor was displaced in the second epoch by an "intuitive lawyer" depiction that focused on the need to hold people responsible for their misdeeds. The third epoch was dominated by theories of counterfactual thinking, which conveyed a "person as reconstructor" approach that emphasized the antecedents and consequences of imagining alternatives to events, especially harmful ones. With the current upsurge in moral psychology, the fourth epoch emphasizes the moral-evaluative aspect of causal judgment, reflected in a "person as moralist" metaphor. By tracing the progression from the person-environment distinction in early attribution theories to present concerns with moral judgment, our goal is to clarify how causal constructs have been used, how they relate to one another, and what unique attributional problems each addresses.
View details for PubMedID 26581736
Causal superseding
COGNITION
2015; 137: 196–209
When agents violate norms, they are typically judged to be more of a cause of resulting outcomes. In this paper, we suggest that norm violations also affect the causality attributed to other agents, a phenomenon we refer to as "causal superseding." We propose and test a counterfactual reasoning model of this phenomenon in four experiments. Experiments 1 and 2 provide an initial demonstration of the causal superseding effect and distinguish it from previously studied effects. Experiment 3 shows that this causal superseding effect is dependent on a particular event structure, following a prediction of our counterfactual model. Experiment 4 demonstrates that causal superseding can occur with violations of non-moral norms. We propose a model of the superseding effect based on the idea of counterfactual sufficiency.
View details for PubMedID 25698516
Concepts in a Probabilistic Language of Thought
CONCEPTUAL MIND: NEW DIRECTIONS IN THE STUDY OF CONCEPTS
View details for Web of Science ID 000378339600023
Causal Responsibility and Counterfactuals
COGNITIVE SCIENCE
2013; 37 (6): 1036–73
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
View details for PubMedID 23855451
When contributions make a difference: Explaining order effects in responsibility attribution
PSYCHONOMIC BULLETIN & REVIEW
2012; 19 (4): 729–36
In two experiments, we established an order effect in responsibility attributions. In line with Spellman (Journal of Experimental Psychology: General 126: 323-348, 1997), who proposed that a person's perceived causal contribution varies with the degree to which it changes the probability of the eventual outcome, Experiment 1 showed that in a team challenge in which the players contribute sequentially, the last player's blame or credit is attenuated if the team's result has already been determined prior to her acting. Experiment 2 illustrated that this attenuation effect does not overgeneralize to situations in which the experienced order of events does not map onto the objective order of events; the level of the last person's performance is only discounted if that person knew that the result was already determined. Furthermore, Experiment 1 demonstrated that responsibility attributions remain sensitive to differences in performance, even if the outcome is already determined. We suggest a theoretical extension of Spellman's model, according to which participants' responsibility attributions are determined not only by whether a contribution made a difference in the actual situation, but also by whether it would have made a difference had things turned out somewhat differently.
View details for PubMedID 22585361
Spreading the blame: The allocation of responsibility amongst multiple agents
COGNITION
2010; 115 (1): 166–71
How do people assign responsibility to individuals in a group context? Participants played a repeated trial experimental game with three computer players, in which they counted triangles presented in complex diagrams. Three between-subject conditions differed in how the group outcome was computed from the individual players' answers. After each round, participants assigned responsibility for the outcome to each player. The results showed that participants' assignments varied between conditions, and were sensitive to the function that translated individual contributions into the group outcome. The predictions of different cognitive models of attribution were tested, and the Structural Model (Chockler & Halpern, 2004) predicted the data best.
View details for PubMedID 20070958
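The Structural Model credited here with the best fit assigns an agent a degree of responsibility of 1/(k+1), where k is the smallest number of other contributions that would have to change before the agent's own contribution becomes pivotal to the outcome. A brute-force sketch for simple majority votes (my own toy illustration; the experiment itself used triangle counts and several group outcome functions):

```python
from itertools import combinations

def passes(votes):
    # a motion passes on a strict majority of True votes
    return sum(votes) > len(votes) / 2

def responsibility(votes, i):
    """Chockler & Halpern (2004) degree of responsibility, brute force:
    1/(k+1), where k is the fewest OTHER votes that must flip before
    voter i's own vote makes the difference to whether the motion passes.
    Fine for small committees; exponential in general."""
    others = [j for j in range(len(votes)) if j != i]
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            v = [not x if j in subset else x for j, x in enumerate(votes)]
            w = list(v)
            w[i] = not w[i]
            if passes(v) != passes(w):
                return 1 / (k + 1)
    return 0.0

print(responsibility([True] * 6, 0))
# 1/3: two other votes must flip before voter 0 becomes pivotal
print(responsibility([True, True, True, False, False], 0))
# 1.0: voter 0 is pivotal outright in a 3-2 vote
```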