Academic Appointments
- Assistant Professor, Psychology
- Member, Bio-X
- Faculty Affiliate, Institute for Human-Centered Artificial Intelligence (HAI)
- Member, Wu Tsai Neurosciences Institute
Administrative Appointments
- Assistant Professor of Cognitive Psychology, Stanford University (2018 - Present)
- Postdoctoral Associate, Massachusetts Institute of Technology (2014 - 2018)
- Postdoctoral Fellow, Massachusetts Institute of Technology (2013 - 2014)
Program Affiliations
- Symbolic Systems Program
Professional Education
- PhD, University College London, Cognitive Science (2013)
- MSc, University College London, Cognitive and Decision Sciences (2008)
- Vordiplom, Humboldt University Berlin, Psychology (2007)
2024-25 Courses
- Senior Honors Research: PSYCH 198 (Aut, Win, Spr)
- Statistical Methods for Behavioral and Social Sciences: COMM 352, PSYCH 252 (Win)
Independent Studies (7)
- Graduate Research: PSYCH 275 (Aut, Win, Spr)
- Independent Study: SYMSYS 196 (Aut, Win, Spr)
- Master's Degree Project: SYMSYS 290 (Aut)
- Ph.D. Research Rotation: CME 391 (Aut)
- Practicum in Teaching: PSYCH 281 (Aut, Win, Spr)
- Reading and Special Work: PSYCH 194 (Aut, Win, Spr)
- Special Laboratory Projects: PSYCH 195 (Aut, Win, Spr)
Prior Year Courses
2023-24 Courses
- Senior Honors Research: PSYCH 198 (Win, Spr)
- Statistical Methods for Behavioral and Social Sciences: PSYCH 252 (Win)
2022-23 Courses
- Statistical Methods for Behavioral and Social Sciences: COMM 352, PSYCH 252 (Win)
2021-22 Courses
- Advanced Research: PSYCH 197 (Aut)
- Senior Honors Research: PSYCH 198 (Win, Spr)
- Statistical Methods for Behavioral and Social Sciences: PSYCH 252 (Win)
Stanford Advisees
- Doctoral Dissertation Reader (AC): Alex Durango, Lynde Folsom, Effie Li
- Postdoctoral Faculty Sponsor: Erik Brockbank, Philipp Fraenken, Xinyi Lu
- Doctoral Dissertation Advisor (AC): Ari Beller, David Rose, Sarah Wu
- Doctoral (Program): Justin Yang
All Publications
- Children use disagreement to infer what happened.
Cognition
2024; 250: 105836
Abstract
In a rapidly changing and diverse world, the ability to reason about conflicting perspectives is critical for effective communication, collaboration, and critical thinking. The current pre-registered experiments with children ages 7 to 11 years investigated the developmental foundations of this ability through a novel social reasoning paradigm and a computational approach. In the inference task, children were asked to figure out what happened based on whether two speakers agreed or disagreed in their interpretation. In the prediction task, children were provided information about what happened and asked to predict whether two speakers will agree or disagree. Together, these experiments assessed children's understanding that disagreement often results from ambiguity about what happened, and that ambiguity about what happened is often predictive of disagreement. Experiment 1 (N=52) showed that children are more likely to infer that an ambiguous utterance occurred after learning that people disagreed (versus agreed) about what happened and found that these inferences become stronger with age. Experiment 2 (N=110) similarly found age-related change in children's inferences and also showed that children could reason in the forward direction, predicting that an ambiguous utterance would lead to disagreement. A computational model indicated that although children's ability to predict when disagreements might arise may be critical for making the reverse inferences, it did not fully account for age-related change.
DOI: 10.1016/j.cognition.2024.105836
PubMedID: 38843594
- Counterfactual simulation in causal cognition.
Trends in cognitive sciences
2024
Abstract
How do people make causal judgments and assign responsibility? In this review article, I argue that counterfactual simulations are key. To simulate counterfactuals, we need three ingredients: a generative mental model of the world, the ability to perform interventions on that model, and the capacity to simulate the consequences of these interventions. The counterfactual simulation model (CSM) uses these ingredients to capture people's intuitive understanding of the physical and social world. In the physical domain, the CSM predicts people's causal judgments about dynamic collision events, complex situations that involve multiple causes, omissions as causes, and causes that sustain physical stability. In the social domain, the CSM predicts responsibility judgments in helping and hindering scenarios.
DOI: 10.1016/j.tics.2024.04.012
PubMedID: 38777661
- If not me, then who? Responsibility and replacement
COGNITION
2024; 242
DOI: 10.1016/j.cognition.2023.105646
Web of Science ID: 001102672800001
- Making a positive difference: Criticality in groups.
Cognition
2023; 238: 105499
Abstract
How critical are individual members perceived to be for their group's performance? In this paper, we show that judgments of criticality are intimately linked to considering responsibility. Prospective responsibility attributions in groups are relevant across many domains and situations, and have the potential to influence motivation, performance, and allocation of resources. We develop various models that differ in how the relationship between criticality and responsibility is conceptualized. To test our models, we experimentally vary the task structure (disjunctive, conjunctive, and mixed) and the abilities of the group members (which affects their probability of success). We show that both factors influence criticality judgments, and that a model which construes criticality as anticipated credit best explains participants' judgments. Unlike prior work that has defined criticality as anticipated responsibility for both success and failures, our results suggest that people only consider the possible outcomes in which an individual contributed to a group success, but disregard group failure.
DOI: 10.1016/j.cognition.2023.105499
PubMedID: 37327565
- Mental Jenga: A counterfactual simulation model of causal judgments about physical support.
Journal of experimental psychology. General
2023
Abstract
From building towers to picking an orange from a stack of fruit, assessing support is critical for successfully interacting with the physical world. But how do people determine whether one object supports another? In this paper, we develop a counterfactual simulation model (CSM) of causal judgments about physical support. The CSM predicts that people judge physical support by mentally simulating what would happen to a scene if the object of interest was removed. Three experiments test the model by asking one group of participants to judge what would happen to a tower if one of the blocks were removed, and another group of participants how responsible that block was for the tower's stability. The CSM accurately captures participants' predictions by running noisy simulations that incorporate different sources of uncertainty. Participants' responsibility judgments are closely related to counterfactual predictions: a block is more responsible when many other blocks would fall if it were removed. By construing physical support as preventing from falling, the CSM provides a unified account of how causal judgments in dynamic and static physical scenes arise from the process of counterfactual simulation. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
DOI: 10.1037/xge0001392
PubMedID: 37093666
- Realism of Visual, Auditory, and Haptic Cues in Phenomenal Causality
IEEE. 2023: 306-312
DOI: 10.1109/WHC56415.2023.10224443
Web of Science ID: 001082286400045
- Understanding Social Reasoning in Language Models with Language Models
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
Web of Science ID: 001230083402023
- MoCa: Measuring Human-Language Model Alignment on Causal and Moral Judgment Tasks
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2023
Web of Science ID: 001226352807022
- Active causal structure learning in continuous time.
Cognitive psychology
2022; 140: 101542
Abstract
Research on causal cognition has largely focused on learning and reasoning about contingency data aggregated across discrete observations or experiments. However, this setting represents only the tip of the causal cognition iceberg. A more general problem lurking beneath is that of learning the latent causal structure that connects events and actions as they unfold in continuous time. In this paper, we examine how people actively learn about causal structure in a continuous-time setting, focusing on when and where they intervene and how this shapes their learning. Across two experiments, we find that participants' accuracy depends on both the informativeness and evidential complexity of the data they generate. Moreover, participants' intervention choices strike a balance between maximizing expected information and minimizing inferential complexity. People time and target their interventions to create simple yet informative causal dynamics. We discuss how the continuous-time setting challenges existing computational accounts of active causal learning, and argue that metacognitive awareness of one's inferential limitations plays a critical role for successful learning in the wild.
DOI: 10.1016/j.cogpsych.2022.101542
PubMedID: 36586246
- What would have happened? Counterfactuals, hypotheticals and causal judgements.
Philosophical transactions of the Royal Society of London. Series B, Biological sciences
2022; 377 (1866): 20210339
Abstract
How do people make causal judgements? In this paper, I show that counterfactual simulations are necessary for explaining causal judgements about events, and that hypotheticals do not suffice. In two experiments, participants viewed video clips of dynamic interactions between billiard balls. In Experiment 1, participants either made hypothetical judgements about whether ball B would go through the gate if ball A were not present in the scene, or counterfactual judgements about whether ball B would have gone through the gate if ball A had not been present. Because the clips featured a block in front of the gate that sometimes moved and sometimes stayed put, hypothetical and counterfactual judgements came apart. A computational model that evaluates hypotheticals and counterfactuals by running noisy physical simulations accurately captured participants' judgements. In Experiment 2, participants judged whether ball A caused ball B to go through the gate. The results showed a tight fit between counterfactual and causal judgements, whereas hypotheticals did not predict causal judgements. I discuss the implications of this work for theories of causality, and for studying the development of counterfactual thinking in children. This article is part of the theme issue 'Thinking about possibilities: mechanisms, ontogeny, functions and phylogeny'.
DOI: 10.1098/rstb.2021.0339
PubMedID: 36314143
- Do Humans Trust Advice More if it Comes from AI? An Analysis of Human-AI Interactions
ASSOC COMPUTING MACHINERY. 2022: 763-777
DOI: 10.1145/3514094.3534150
Web of Science ID: 001118017500074
- Inference From Explanation
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL
2021
Abstract
What do we communicate with causal explanations? Upon being told, "E because C", a person might learn that C and E both occurred, and perhaps that there is a causal relationship between C and E. In fact, causal explanations systematically disclose much more than this basic information. Here, we offer a communication-theoretic account of explanation that makes specific predictions about the kinds of inferences people draw from others' explanations. We test these predictions in a case study involving the role of norms and causal structure. In Experiment 1, we demonstrate that people infer the normality of a cause from an explanation when they know the underlying causal structure. In Experiment 2, we show that people infer the causal structure from an explanation if they know the normality of the cited cause. We find these patterns both for scenarios that manipulate the statistical and prescriptive normality of events. Finally, we consider how the communicative function of explanations, as highlighted in this series of experiments, may help to elucidate the distinctive roles that normality and causal structure play in causal judgment, paving the way toward a more comprehensive account of causal explanation. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
DOI: 10.1037/xge0001151
Web of Science ID: 000733088000001
PubMedID: 34928680
- Moral dynamics: Grounding moral judgment in intuitive physics and intuitive psychology.
Cognition
2021; 217: 104890
Abstract
When holding others morally responsible, we care about what they did, and what they thought. Traditionally, research in moral psychology has relied on vignette studies, in which a protagonist's actions and thoughts are explicitly communicated. While this research has revealed what variables are important for moral judgment, such as actions and intentions, it is limited in providing a more detailed understanding of exactly how these variables affect moral judgment. Using dynamic visual stimuli that allow for a more fine-grained experimental control, recent studies have proposed a direct mapping from visual features to moral judgments. We embrace the use of visual stimuli in moral psychology, but question the plausibility of a feature-based theory of moral judgment. We propose that the connection from visual features to moral judgments is mediated by an inference about what the observed action reveals about the agent's mental states, and what causal role the agent's action played in bringing about the outcome. We present a computational model that formalizes moral judgments of agents in visual scenes as computations over an intuitive theory of physics combined with an intuitive theory of mind. We test the model's quantitative predictions in three experiments across a wide variety of dynamic interactions.
DOI: 10.1016/j.cognition.2021.104890
PubMedID: 34487974
- A counterfactual simulation model of causation by omission.
Cognition
2021; 216: 104842
Abstract
When do people say that an event that did not happen was a cause? We extend the counterfactual simulation model (CSM) of causal judgment (Gerstenberg, Goodman, Lagnado, & Tenenbaum, 2021) and test it in a series of three experiments that look at people's causal judgments about omissions in dynamic physical interactions. The problem of omissive causation highlights a series of questions that need to be answered in order to give an adequate causal explanation of why something happened: what are the relevant variables, what are their possible values, how are putative causal relationships evaluated, and how is the causal responsibility for an outcome attributed to multiple causes? The CSM predicts that people make causal judgments about omissions in physical interactions by using their intuitive understanding of physics to mentally simulate what would have happened in relevant counterfactual situations. Prior work has argued that normative expectations affect judgments of omissive causation. Here we suggest a concrete mechanism of how this happens: expectations affect what counterfactuals people consider, and the more certain people are that the counterfactual outcome would have been different from what actually happened, the more causal they judge the omission to be. Our experiments show that both the structure of the physical situation as well as expectations about what will happen affect people's judgments.
DOI: 10.1016/j.cognition.2021.104842
PubMedID: 34303272
- Predicting responsibility judgments from dispositional inferences and causal attributions.
Cognitive psychology
2021; 129: 101412
Abstract
The question of how people hold others responsible has motivated decades of theorizing and empirical work. In this paper, we develop and test a computational model that bridges the gap between broad but qualitative framework theories, and quantitative but narrow models. In our model, responsibility judgments are the result of two cognitive processes: a dispositional inference about a person's character from their action, and a causal attribution about the person's role in bringing about the outcome. We test the model in a group setting in which political committee members vote on whether or not a policy should be passed. We assessed participants' dispositional inferences and causal attributions by asking how surprising and important a committee member's vote was. Participants' answers to these questions in Experiment 1 accurately predicted responsibility judgments in Experiment 2. In Experiments 3 and 4, we show that the model also predicts moral responsibility judgments, and that importance matters more for responsibility, while surprise matters more for judgments of wrongfulness.
DOI: 10.1016/j.cogpsych.2021.101412
PubMedID: 34303092
- A counterfactual simulation model of causal judgments for physical events.
Psychological review
2021
Abstract
How do people make causal judgments about physical events? We introduce the counterfactual simulation model (CSM) which predicts causal judgments in physical settings by comparing what actually happened with what would have happened in relevant counterfactual situations. The CSM postulates different aspects of causation that capture the extent to which a cause made a difference to whether and how the outcome occurred, and whether the cause was sufficient and robust. We test the CSM in several experiments in which participants make causal judgments about dynamic collision events. A preliminary study establishes a very close quantitative mapping between causal and counterfactual judgments. Experiment 1 demonstrates that counterfactuals are necessary for explaining causal judgments. Participants' judgments differed dramatically between pairs of situations in which what actually happened was identical, but where what would have happened differed. Experiment 2 features multiple candidate causes and shows that participants' judgments are sensitive to different aspects of causation. The CSM provides a better fit to participants' judgments than a heuristic model which uses features based on what actually happened. We discuss how the CSM can be used to model the semantics of different causal verbs, how it captures related concepts such as physical support, and how its predictions extend beyond the physical domain. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
DOI: 10.1037/rev0000281
PubMedID: 34096754
- The Trajectory of Counterfactual Simulation in Development
DEVELOPMENTAL PSYCHOLOGY
2021; 57 (2): 253–68
Abstract
Young children often struggle to answer the question "what would have happened?" particularly in cases where the adult-like "correct" answer has the same outcome as the event that actually occurred. Previous work has assumed that children fail because they cannot engage in accurate counterfactual simulations. Children have trouble considering what to change and what to keep fixed when comparing counterfactual alternatives to reality. However, most developmental studies on counterfactual reasoning have relied on binary yes/no responses to counterfactual questions about complex narratives and so have only been able to document when these failures occur but not why and how. Here, we investigate counterfactual reasoning in a domain in which specific counterfactual possibilities are very concrete: simple collision interactions. In Experiment 1, we show that 5- to 10-year-old children (recruited from schools and museums in Connecticut) succeed in making predictions but struggle to answer binary counterfactual questions. In Experiment 2, we use a multiple-choice method to allow children to select a specific counterfactual possibility. We find evidence that 4- to 6-year-old children (recruited online from across the United States) do conduct counterfactual simulations, but the counterfactual possibilities younger children consider differ from adult-like reasoning in systematic ways. Experiment 3 provides further evidence that young children engage in simulation rather than using a simpler visual matching strategy. Together, these experiments show that the developmental changes in counterfactual reasoning are not simply a matter of whether children engage in counterfactual simulation but also how they do so. (PsycInfo Database Record (c) 2021 APA, all rights reserved).
DOI: 10.1037/dev0001140
Web of Science ID: 000618090100010
PubMedID: 33539131
- A Causal Feeling: How Kinesthetic Haptics Affects Causal Perception
IEEE. 2021: 347
DOI: 10.1109/WHC49131.2021.9517133
Web of Science ID: 000707066600035
- Expectations Affect Physical Causation Judgments
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-GENERAL
2020; 149 (3): 599–607
Abstract
When several causes contributed to an outcome, people often single out one as "the" cause. What explains this selection? Previous work has argued that people select abnormal events as causes, though recent work has shown that sometimes normal events are preferred over abnormal ones. Existing studies have relied on vignettes that commonly feature agents committing immoral acts. An important challenge to the thesis that norms permeate causal reasoning is that people's responses may merely reflect pragmatic or social reasoning rather than arising from causal cognition per se. We tested this hypothesis by asking whether the previously observed patterns of causal selection emerge in tasks that recruit participants' causal reasoning about physical systems. Strikingly, we found that the same patterns observed in vignette studies with intentional agents arise in visual animations of physical interactions. Our results demonstrate how deeply normative expectations affect causal cognition. (PsycINFO Database Record (c) 2020 APA, all rights reserved).
DOI: 10.1037/xge0000670
Web of Science ID: 000512302600015
PubMedID: 31512904
- Moral Values Reveal the Causality Implicit in Verb Meaning.
Cognitive science
2020; 44 (6): e12838
Abstract
Prior work has found that moral values that build and bind groups-that is, the binding values of ingroup loyalty, respect for authority, and preservation of purity-are linked to blaming people who have been harmed. The present research investigated whether people's endorsement of binding values predicts their assignment of the causal locus of harmful events to the victims of the events. We used an implicit causality task from psycholinguistics in which participants read a sentence in the form "SUBJECT verbed OBJECT because…" where male and female proper names occupy the SUBJECT and OBJECT position. The participants were asked to predict the pronoun that follows "because"-the referent to the subject or object-which indicates their intuition about the likely cause of the event. We also collected explicit judgments of causal contributions and measured participants' moral values to investigate the relationship between moral values and the causal interpretation of events. Using two verb sets and two independent replications (N = 459, 249, 788), we found that greater endorsement of binding values was associated with a higher likelihood of selecting the object as the cause for harmful events in the implicit causality task, a result consistent with, and supportive of, previous moral psychological work on victim blaming. Endorsement of binding values also predicted explicit causal attributions to victims. Overall, these findings indicate that moral values that support the group rather than the individual reliably predict that people shift the causal locus of harmful events to those affected by the harms.
DOI: 10.1111/cogs.12838
PubMedID: 32445245
- Causal Responsibility and Robust Causation.
Frontiers in psychology
2020; 11: 1069
Abstract
How do people judge the degree of causal responsibility that an agent has for the outcomes of her actions? We show that a relatively unexplored factor - the robustness (or stability) of the causal chain linking the agent's action and the outcome - influences judgments of causal responsibility of the agent. In three experiments, we vary robustness by manipulating the number of background circumstances under which the action causes the effect, and find that causal responsibility judgments increase with robustness. In the first experiment, the robustness manipulation also raises the probability of the effect given the action. Experiments 2 and 3 control for probability-raising, and show that robustness still affects judgments of causal responsibility. In particular, Experiment 3 introduces an Ellsberg type of scenario to manipulate robustness, while keeping the conditional probability and the skill deployed in the action fixed. Experiment 4 replicates the results of Experiment 3, while contrasting between judgments of causal strength and of causal responsibility. The results show that in all cases, the perceived degree of responsibility (but not of causal strength) increases with the robustness of the action-outcome causal chain.
DOI: 10.3389/fpsyg.2020.01069
PubMedID: 32536893
- Quantitative causal selection patterns in token causation.
PloS one
2019; 14 (8): e0219704
Abstract
When many events contributed to an outcome, people consistently judge some more causal than others, based in part on the prior probabilities of those events. For instance, when a tree bursts into flames, people judge the lightning strike more of a cause than the presence of oxygen in the air-in part because oxygen is so common, and lightning strikes are so rare. These effects, which play a major role in several prominent theories of token causation, have largely been studied through qualitative manipulations of the prior probabilities. Yet, there is good reason to think that people's causal judgments are on a continuum-and relatively little is known about how these judgments vary quantitatively as the prior probabilities change. In this paper, we measure people's causal judgment across parametric manipulations of the prior probabilities of antecedent events. Our experiments replicate previous qualitative findings, and also reveal several novel patterns that are not well-described by existing theories.
DOI: 10.1371/journal.pone.0219704
PubMedID: 31369584
- Time in Causal Structure Learning
JOURNAL OF EXPERIMENTAL PSYCHOLOGY-LEARNING MEMORY AND COGNITION
2018; 44 (12): 1880–1910
Abstract
A large body of research has explored how the time between two events affects judgments of causal strength between them. In this article, we extend this work in 4 experiments that explore the role of temporal information in causal structure induction with multiple variables. We distinguish two qualitatively different types of information: The order in which events occur, and the temporal intervals between those events. We focus on one-shot learning in Experiment 1. In Experiment 2, we explore how people integrate evidence from multiple observations of the same causal device. Participants' judgments are well predicted by a Bayesian model that rules out causal structures that are inconsistent with the observed temporal order, and favors structures that imply similar intervals between causally connected components. In Experiments 3 and 4, we look more closely at participants' sensitivity to exact event timings. Participants see three events that always occur in the same order, but the variability and correlation between the timings of the events is either more consistent with a chain or a fork structure. We show, for the first time, that even when order cues do not differentiate, people can still make accurate causal structure judgments on the basis of interval variability alone. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
PubMedID: 29745682
- What's fair? How children assign reward to members of teams with differing causal structures
COGNITION
2018; 177: 234–48
Abstract
How do children reward individual members of a team that has just won or lost a game? We know that from pre-school age, children consider agents' performance when allocating reward. Here we assess whether children can go further and appreciate performance in context: The same pattern of performance can contribute to a team outcome in different ways, depending on the underlying rule framework. Two experiments, with three age groups (4/5-year-olds, 6/7-year-olds, and adults), varied performance of team members, with the same performance patterns considered under three different game rules for winning or losing. These three rules created distinct underlying causal structures (additive, conjunctive, disjunctive), for how individual performance affected the overall team outcome. Even the youngest children differentiated between different game rules in their reward allocations. Rather than only rewarding individual performance, or whether the team won/lost, children were sensitive to the team structure and how players' performance contributed to the win/loss under each of the three game rules. Not only do young children consider it fair to allocate resources based on merit, but they are also sensitive to the causal structure of the situation which dictates how individual contributions combine to determine the team outcome.
PubMedID: 29723779
- Lucky or clever? From expectations to responsibility judgments
COGNITION
2018; 177: 122–41
Abstract
How do people hold others responsible for the consequences of their actions? We propose a computational model that attributes responsibility as a function of what the observed action reveals about the person, and the causal role that the person's action played in bringing about the outcome. The model first infers what type of person someone is from having observed their action. It then compares a prior expectation of how a person would behave with a posterior expectation after having observed the person's action. The model predicts that a person is blamed for negative outcomes to the extent that the posterior expectation is lower than the prior, and credited for positive outcomes if the posterior is greater than the prior. We model the causal role of a person's action by using a counterfactual model that considers how close the action was to having been pivotal for the outcome. The model captures participants' responsibility judgments to a high degree of quantitative accuracy across three experiments that cover a range of different situations. It also solves an existing puzzle in the literature on the relationship between action expectations and responsibility judgments. Whether an unexpected action yields more or less credit depends on whether the action was diagnostic for good or bad future performance.
PubMedID: 29677593
- Eye-Tracking Causality.
Psychological science
2017; 28 (12): 1731-1744
Abstract
How do people make causal judgments? What role, if any, does counterfactual simulation play? Counterfactual theories of causal judgments predict that people compare what actually happened with what would have happened if the candidate cause had been absent. Process theories predict that people focus only on what actually happened, to assess the mechanism linking candidate cause and outcome. We tracked participants' eye movements while they judged whether one billiard ball caused another one to go through a gate or prevented it from going through. Both participants' looking patterns and their judgments demonstrated that counterfactual simulation played a critical role. Participants simulated where the target ball would have gone if the candidate cause had been removed from the scene. The more certain participants were that the outcome would have been different, the stronger the causal judgments. These results provide the first direct evidence for spontaneous counterfactual simulation in an important domain of high-level cognition.
DOI: 10.1177/0956797617713053
PubMedID: 29039251
- Plans, Habits, and Theory of Mind.
PloS one
2016; 11 (9): e0162246
Abstract
Human success and even survival depends on our ability to predict what others will do by guessing what they are thinking. If I accelerate, will he yield? If I propose, will she accept? If I confess, will they forgive? Psychologists call this capacity "theory of mind." According to current theories, we solve this problem by assuming that others are rational actors. That is, we assume that others design and execute efficient plans to achieve their goals, given their knowledge. But if this view is correct, then our theory of mind is startlingly incomplete. Human action is not always a product of rational planning, and we would be mistaken to always interpret others' behaviors as such. A wealth of evidence indicates that we often act habitually-a form of behavioral control that depends not on rational planning, but rather on a history of reinforcement. We aim to test whether the human theory of mind includes a theory of habitual action and to assess when and how it is deployed. In a series of studies, we show that human theory of mind is sensitive to factors influencing the balance between habitual and planned behavior.
DOI: 10.1371/journal.pone.0162246
PubMedID: 27584041
PubMedCentralID: PMC5008760
- Causal Conceptions in Social Explanation and Moral Evaluation: A Historical Tour
PERSPECTIVES ON PSYCHOLOGICAL SCIENCE
2015; 10 (6): 790–812
Abstract
Understanding the causes of human behavior is essential for advancing one's interests and for coordinating social relations. The scientific study of how people arrive at such understandings or explanations has unfolded in four distinguishable epochs in psychology, each characterized by a different metaphor that researchers have used to represent how people think as they attribute causality and blame to other individuals. The first epoch was guided by an "intuitive scientist" metaphor, which emphasized whether observers perceived behavior to be caused by the unique tendencies of the actor or by common reactions to the requirements of the situation. This metaphor was displaced in the second epoch by an "intuitive lawyer" depiction that focused on the need to hold people responsible for their misdeeds. The third epoch was dominated by theories of counterfactual thinking, which conveyed a "person as reconstructor" approach that emphasized the antecedents and consequences of imagining alternatives to events, especially harmful ones. With the current upsurge in moral psychology, the fourth epoch emphasizes the moral-evaluative aspect of causal judgment, reflected in a "person as moralist" metaphor. By tracing the progression from the person-environment distinction in early attribution theories to present concerns with moral judgment, our goal is to clarify how causal constructs have been used, how they relate to one another, and what unique attributional problems each addresses.
PubMedID: 26581736
- Causal superseding
COGNITION
2015; 137: 196–209
Abstract
When agents violate norms, they are typically judged to be more of a cause of resulting outcomes. In this paper, we suggest that norm violations also affect the causality attributed to other agents, a phenomenon we refer to as "causal superseding." We propose and test a counterfactual reasoning model of this phenomenon in four experiments. Experiments 1 and 2 provide an initial demonstration of the causal superseding effect and distinguish it from previously studied effects. Experiment 3 shows that this causal superseding effect is dependent on a particular event structure, following a prediction of our counterfactual model. Experiment 4 demonstrates that causal superseding can occur with violations of non-moral norms. We propose a model of the superseding effect based on the idea of counterfactual sufficiency.
PubMedID: 25698516
- Concepts in a Probabilistic Language of Thought
CONCEPTUAL MIND: NEW DIRECTIONS IN THE STUDY OF CONCEPTS
2015: 623–53
Web of Science ID: 000378339600023
- Causal Responsibility and Counterfactuals
COGNITIVE SCIENCE
2013; 37 (6): 1036–73
Abstract
How do people attribute responsibility in situations where the contributions of multiple agents combine to produce a joint outcome? The prevalence of over-determination in such cases makes this a difficult problem for counterfactual theories of causal responsibility. In this article, we explore a general framework for assigning responsibility in multiple agent contexts. We draw on the structural model account of actual causation (e.g., Halpern & Pearl, 2005) and its extension to responsibility judgments (Chockler & Halpern, 2004). We review the main theoretical and empirical issues that arise from this literature and propose a novel model of intuitive judgments of responsibility. This model is a function of both pivotality (whether an agent made a difference to the outcome) and criticality (how important the agent is perceived to be for the outcome, before any actions are taken). The model explains empirical results from previous studies and is supported by a new experiment that manipulates both pivotality and criticality. We also discuss possible extensions of this model to deal with a broader range of causal situations. Overall, our approach emphasizes the close interrelations between causality, counterfactuals, and responsibility attributions.
PubMedID: 23855451
- When contributions make a difference: Explaining order effects in responsibility attribution
PSYCHONOMIC BULLETIN & REVIEW
2012; 19 (4): 729–36
Abstract
In two experiments, we established an order effect in responsibility attributions. In line with Spellman (Journal of Experimental Psychology: General 126: 323-348, 1997), who proposed that a person's perceived causal contribution varies with the degree to which it changes the probability of the eventual outcome, Experiment 1 showed that in a team challenge in which the players contribute sequentially, the last player's blame or credit is attenuated if the team's result has already been determined prior to her acting. Experiment 2 illustrated that this attenuation effect does not overgeneralize to situations in which the experienced order of events does not map onto the objective order of events; the level of the last person's performance is only discounted if that person knew that the result was already determined. Furthermore, Experiment 1 demonstrated that responsibility attributions remain sensitive to differences in performance, even if the outcome is already determined. We suggest a theoretical extension of Spellman's model, according to which participants' responsibility attributions are determined not only by whether a contribution made a difference in the actual situation, but also by whether it would have made a difference had things turned out somewhat differently.
PubMedID: 22585361
- Spreading the blame: The allocation of responsibility amongst multiple agents
COGNITION
2010; 115 (1): 166–71
Abstract
How do people assign responsibility to individuals in a group context? Participants played a repeated trial experimental game with three computer players, in which they counted triangles presented in complex diagrams. Three between-subject conditions differed in how the group outcome was computed from the individual players' answers. After each round, participants assigned responsibility for the outcome to each player. The results showed that participants' assignments varied between conditions, and were sensitive to the function that translated individual contributions into the group outcome. The predictions of different cognitive models of attribution were tested, and the Structural Model (Chockler & Halpern, 2004) predicted the data best.
PubMedID: 20070958