Academic Appointments

  • Assistant Professor of Cognitive Psychology, Stanford University (2010 - Present)
  • Assistant Professor of Linguistics and of Computer Science (by courtesy), Stanford University (2010 - Present)
  • Research Scientist, Massachusetts Institute of Technology (2008 - 2010)
  • Post-Doctoral Associate, Massachusetts Institute of Technology (2005 - 2008)

Honors & Awards

  • Paper prize for computational modeling of language, Cognitive Science Society (2014)
  • Roger N. Shepard Distinguished Visiting Scholar, University of Arizona (2013 - 2014)
  • John Philip Coghlan Fellow (2013 - 2014)
  • John Philip Coghlan Fellow (2014 - 2015)
  • Paper prize for computational modeling of language, Cognitive Science Society (2012)
  • Best poster prize, International Joint Conference on Artificial Intelligence (2011)
  • Paper prize for computational modeling of language, Cognitive Science Society (2011)
  • Paper prize for computational modeling of higher-level cognition, Cognitive Science Society (2007)
  • Paper prize for computational modeling of perception and action, Cognitive Science Society (2007)
  • VIGRE Fellowship, National Science Foundation (2001 - 2002)
  • Continuing Graduate Study Fellowship, University of Texas (2001 - 2002)
  • Bruton Graduate Fellowship, University of Texas (2000)
  • Scholarship, National Merit Scholarship Corporation (1994 - 1997)

Boards, Advisory Committees, Professional Organizations

  • Member, Cognitive Science Society
  • Member, Psychonomic Society

Program Affiliations

  • Symbolic Systems Program

Professional Education

  • B.A., University of Arizona, Mathematics (1997)
  • B.S., University of Arizona, Physics (1997)
  • Ph.D., University of Texas at Austin, Mathematics (2003)

All Publications

  • The language of generalization. Tessler, M. H., Goodman, N. D. Psychological Review. 2019


    Language provides simple ways of communicating generalizable knowledge to each other (e.g., "Birds fly," "John hikes," and "Fire makes smoke"). Though found in every language and emerging early in development, the language of generalization is philosophically puzzling and has resisted precise formalization. Here, we propose the first formal account of generalizations conveyed with language that makes quantitative predictions about human understanding. The basic idea is that the language of generalization expresses that an event or a property occurs relatively often, where what counts as relatively often depends upon one's prior expectations. We formalize this simple idea in a probabilistic model of language understanding, which we test in 3 diverse case studies: generalizations about categories (generic language), events (habitual language), and causes (causal language). We find that the model explains the gradience in human endorsements that has perplexed previous attempts to formalize this swath of linguistic expressions. This work opens the door to understanding precisely how abstract knowledge is learned from language.

    View details for DOI 10.1037/rev0000142

    View details for PubMedID 30762385
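
The paper's core idea, that a generalization is endorsed when a property holds "relatively often" relative to prior expectations, can be illustrated in a few lines. This is our own toy sketch, not the authors' model; the function and the sample priors are assumptions made for illustration.

```python
# Toy sketch (not the authors' implementation): a generic statement is
# endorsed to the degree that the observed prevalence of a property exceeds
# what one's prior over prevalences leads one to expect.

def endorsement(prevalence, prior_samples):
    """Fraction of prior prevalence expectations exceeded by the observation.

    prevalence    -- observed rate of the property, in [0, 1]
    prior_samples -- samples from a prior distribution over prevalences
    """
    exceeded = sum(1 for p in prior_samples if prevalence > p)
    return exceeded / len(prior_samples)

# "Birds lay eggs" (~50% prevalence) against a low prior expectation:
print(endorsement(0.5, [0.01, 0.02, 0.05, 0.1, 0.2]))   # 1.0 (strong generic)

# "Birds are female" (~50% prevalence) against a ~50% prior expectation:
print(endorsement(0.5, [0.45, 0.5, 0.5, 0.55, 0.6]))    # 0.2 (weak generic)
```

The same 50% prevalence yields very different endorsements depending on the prior, which mirrors the gradience the paper formalizes.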

  • A thousand studies for the price of one: Accelerating psychological science with Pushkin. Hartshorne, J. K., de Leeuw, J. R., Goodman, N. D., Jennings, M., O'Donnell, T. J. Behavior Research Methods. 2019


    Half of the world's population has internet access. In principle, researchers are no longer limited to subjects they can recruit into the laboratory. Any study that can be run on a computer or mobile device can be run with nearly any demographic anywhere in the world, and in large numbers. This has allowed scientists to effectively run hundreds of experiments at once. Despite their transformative power, such studies remain rare for practical reasons: the need for sophisticated software, the difficulty of recruiting so many subjects, and a lack of research paradigms that make effective use of their large amounts of data. We present Pushkin: an open-source platform for designing and conducting massive experiments over the internet. Pushkin allows for a wide range of behavioral paradigms, through integration with the intuitive and flexible jsPsych experiment engine. It also addresses the basic technical challenges associated with massive, worldwide studies, including auto-scaling, extensibility, machine-assisted experimental design, multisession studies, and data security.

    View details for DOI 10.3758/s13428-018-1155-z

    View details for PubMedID 30746644

  • Definitely, maybe: A new experimental paradigm for investigating the pragmatics of evidential devices across languages. Degen, J., Trotzke, A., Scontras, G., Wittenberg, E., Goodman, N. D. Journal of Pragmatics. 2019; 140: 33–48
  • The Emergence of Social Norms and Conventions. Hawkins, R. X., Goodman, N. D., Goldstone, R. L. Trends in Cognitive Sciences. 2018


    The utility of our actions frequently depends upon the beliefs and behavior of other agents. Thankfully, through experience, we learn norms and conventions that provide stable expectations for navigating our social world. Here, we review several distinct influences on their content and distribution. At the level of individuals locally interacting in dyads, success depends on rapidly adapting pre-existing norms to the local context. Hence, norms are shaped by complex cognitive processes involved in learning and social reasoning. At the population level, norms are influenced by intergenerational transmission and the structure of the social network. As human social connectivity continues to increase, understanding and predicting how these levels and time scales interact to produce new norms will be crucial for improving communities.

    View details for DOI 10.1016/j.tics.2018.11.003

    View details for PubMedID 30522867

  • Beyond Reward Prediction Errors: Human Striatum Updates Rule Values During Learning. Ballard, I., Miller, E. M., Piantadosi, S. T., Goodman, N. D., McClure, S. M. Cerebral Cortex. 2018; 28 (11): 3965–75


    Humans naturally group the world into coherent categories defined by membership rules. Rules can be learned implicitly by building stimulus-response associations using reinforcement learning (RL) or by using explicit reasoning. We tested if the striatum, in which activation reliably scales with reward prediction error, would track prediction errors in a task that required explicit rule generation. Using functional magnetic resonance imaging during a categorization task, we show that striatal responses to feedback scale with a "surprise" signal derived from a Bayesian rule-learning model and are inconsistent with RL prediction error. We also find that striatum and caudal inferior frontal sulcus (cIFS) are involved in updating the likelihood of discriminative rules. We conclude that the striatum, in cooperation with the cIFS, is involved in updating the values assigned to categorization rules when people learn using explicit reasoning.

    View details for DOI 10.1093/cercor/bhx259

    View details for Web of Science ID 000449432200016

    View details for PubMedID 29040494

  • Remembrance of inferences past: Amortization in human hypothesis generation. Dasgupta, I., Schulz, E., Goodman, N. D., Gershman, S. J. Cognition. 2018; 178: 67–81


    Bayesian models of cognition assume that people compute probability distributions over hypotheses. However, the required computations are frequently intractable or prohibitively expensive. Since people often encounter many closely related distributions, selective reuse of computations (amortized inference) is a computationally efficient use of the brain's limited resources. We present three experiments that provide evidence for amortization in human probabilistic reasoning. When sequentially answering two related queries about natural scenes, participants' responses to the second query systematically depend on the structure of the first query. This influence is sensitive to the content of the queries, only appearing when the queries are related. Using a cognitive load manipulation, we find evidence that people amortize summary statistics of previous inferences, rather than storing the entire distribution. These findings support the view that the brain trades off accuracy and computational cost, to make efficient use of its limited cognitive resources to approximate probabilistic inference.

    View details for DOI 10.1016/j.cognition.2018.04.017

    View details for Web of Science ID 000439402400007

    View details for PubMedID 29793110

  • Learning physical parameters from dynamic scenes. Ullman, T. D., Stuhlmuller, A., Goodman, N. D., Tenenbaum, J. B. Cognitive Psychology. 2018; 104: 57–82


    Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space.

    View details for DOI 10.1016/j.cogpsych.2017.05.006

    View details for Web of Science ID 000437073900003

    View details for PubMedID 29653395

  • Extremely costly intensifiers are stronger than quite costly ones. Bennett, E. D., Goodman, N. D. Cognition. 2018; 178: 147–61


    We show that the wide range in strengths of intensifying degree adverbs (e.g. very and extremely) can be partly explained by pragmatic inference based on differing cost, rather than differing semantics. The pragmatic theory predicts a linear relationship between the meaning of intensifiers and their length and log-frequency. We first test this prediction in three studies, using two different dependent measures, finding that higher utterance cost (i.e. higher word length or surprisal) does predict stronger meanings. In two additional studies we confirm that the relationship between length and meaning is present even for novel words. We discuss the implications for adverbial meaning and the more general question of how extensive non-arbitrary form-meaning association may be in language.

    View details for DOI 10.1016/j.cognition.2018.05.011

    View details for PubMedID 29857283

  • Empirical evidence for resource-rational anchoring and adjustment. Lieder, F., Griffiths, T. L., Huys, Q. M., Goodman, N. D. Psychonomic Bulletin & Review. 2018; 25 (2): 775–84


    People's estimates of numerical quantities are systematically biased towards their initial guess. This anchoring bias is usually interpreted as sign of human irrationality, but it has recently been suggested that the anchoring bias instead results from people's rational use of their finite time and limited cognitive resources. If this were true, then adjustment should decrease with the relative cost of time. To test this hypothesis, we designed a new numerical estimation paradigm that controls people's knowledge and varies the cost of time and error independently while allowing people to invest as much or as little time and effort into refining their estimate as they wish. Two experiments confirmed the prediction that adjustment decreases with time cost but increases with error cost regardless of whether the anchor was self-generated or provided. These results support the hypothesis that people rationally adapt their number of adjustments to achieve a near-optimal speed-accuracy tradeoff. This suggests that the anchoring bias might be a signature of the rational use of finite time and limited cognitive resources rather than a sign of human irrationality.

    View details for DOI 10.3758/s13423-017-1288-6

    View details for Web of Science ID 000430206900031

    View details for PubMedID 28484951

  • Happier Than Thou? A Self-Enhancement Bias in Emotion Attribution. Ong, D. C., Goodman, N. D., Zaki, J. Emotion. 2018; 18 (1): 116–26


    People tend to judge themselves as exhibiting above average levels of desirable traits, including competence, kindness, and life satisfaction, but does this self-enhancement extend to emotional responses? Here, we explore this question by having people attribute emotions to themselves and others following simple gambles. We demonstrate that people display an emotional self-enhancement bias that varies with the context of the emotion-eliciting situation. People judge themselves as experiencing more positive emotional reactions on average, and they also believe that others' emotions are more sensitive to gamble outcomes, such that people judge others to experience stronger negative affect in response to negative outcomes (Study 1). This self-enhancement bias further tracks social distance, such that people attribute less positive and more negative emotion to more dissimilar, as compared with more similar others (Study 2). People also predict less favorable emotional states for themselves and others experiencing events in the future, as compared with the present (Study 3), suggesting that this attribution bias extends across multiple dimensions of psychological distance. Broadly, these data suggest that people exhibit self-enhancement in emotion attribution, but do so in subtle ways that depend on situational and social factors.

    View details for DOI 10.1037/emo0000309

    View details for Web of Science ID 000425495200010

    View details for PubMedID 28406680

  • The anchoring bias reflects rational use of cognitive resources. Lieder, F., Griffiths, T. L., Huys, Q. M., Goodman, N. D. Psychonomic Bulletin & Review. 2018; 25 (1): 322–49


    Cognitive biases, such as the anchoring bias, pose a serious challenge to rational accounts of human cognition. We investigate whether rational theories can meet this challenge by taking into account the mind's bounded cognitive resources. We asked what reasoning under uncertainty would look like if people made rational use of their finite time and limited cognitive resources. To answer this question, we applied a mathematical theory of bounded rationality to the problem of numerical estimation. Our analysis led to a rational process model that can be interpreted in terms of anchoring-and-adjustment. This model provided a unifying explanation for ten anchoring phenomena including the differential effect of accuracy motivation on the bias towards provided versus self-generated anchors. Our results illustrate the potential of resource-rational analysis to provide formal theories that can unify a wide range of empirical results and reconcile the impressive capacities of the human mind with its apparently irrational cognitive biases.

    View details for DOI 10.3758/s13423-017-1286-8

    View details for Web of Science ID 000428081700019

    View details for PubMedID 28484952

  • Eye-Tracking Causality. Gerstenberg, T., Peterson, M. F., Goodman, N. D., Lagnado, D. A., Tenenbaum, J. B. Psychological Science. 2017; 28 (12): 1731–44
  • Resolving uncertainty in plural predication. Scontras, G., Goodman, N. D. Cognition. 2017; 168: 294–311


    Plural predications (e.g., "the boxes are heavy") are common sources of ambiguity in everyday language, allowing both distributive and collective interpretations (e.g., the boxes each are heavy vs. the boxes together are heavy). This paper investigates the role of context in the disambiguation of plural predication. We address the key phenomenon of "stubborn distributivity," whereby certain predicates (e.g., big, tall) are claimed to lack collective interpretations altogether. We first validate a new methodology for measuring the interpretation of plural predications. Using this method, we then analyze naturally-occurring plural predications from corpora. We find a role of context, but no evidence of a distinct class of predicates that resists collective interpretations. We further explore the role of context in our final experiments, showing that both the predictability of properties and the knowledgeability of the speaker affect disambiguation. This suggests a pragmatic account of how ambiguous plural predications are interpreted. In particular, stubbornly distributive predicates are so because the collective properties they name are unpredictable, or unstable, in most contexts; this unpredictability results in a noisy collective interpretation, something speakers and listeners recognize as ineffective for communicating efficiently about their world. We formalize the pragmatics of utterance disambiguation within the Bayesian Rational Speech Act framework.

    View details for DOI 10.1016/j.cognition.2017.07.002

    View details for Web of Science ID 000411545500027

    View details for PubMedID 28756352

  • Avoiding frostbite: It helps to learn from others. Tessler, M., Goodman, N. D., Frank, M. C. Behavioral and Brain Sciences. 2017; 40: e279


    Machines that learn and think like people must be able to learn from others. Social learning speeds up the learning process and - in combination with language - is a gateway to abstract and unobservable information. Social learning also facilitates the accumulation of knowledge across generations, helping people and artificial intelligences learn things that no individual could learn in a lifetime.

    View details for DOI 10.1017/S0140525X17000280

    View details for Web of Science ID 000423000000083

    View details for PubMedID 29342698

  • Learning Disentangled Representations with Semi-Supervised Deep Generative Models. Siddharth, N., Paige, B., van de Meent, J., Desmaison, A., Goodman, N. D., Kohli, P., Wood, F., Torr, P. S. In: Guyon, Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R. (eds.), Neural Information Processing Systems (NIPS). 2017
  • Pragmatic Language Interpretation as Probabilistic Inference. Goodman, N. D., Frank, M. C. Trends in Cognitive Sciences. 2016; 20 (11): 818–829


    Understanding language requires more than the use of fixed conventions and more than decoding combinatorial structure. Instead, comprehenders make exquisitely sensitive inferences about what utterances mean given their knowledge of the speaker, language, and context. Building on developments in game theory and probabilistic modeling, we describe the rational speech act (RSA) framework for pragmatic reasoning. RSA models provide a principled way to formalize inferences about meaning in context; they have been used to make successful quantitative predictions about human behavior in a variety of different tasks and situations, and they explain why complex phenomena, such as hyperbole and vagueness, occur. More generally, they provide a computational framework for integrating linguistic structure, world knowledge, and context in pragmatic language understanding.

    View details for DOI 10.1016/j.tics.2016.08.005

    View details for PubMedID 27692852
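
The RSA framework described above can be illustrated with a minimal reference game. The sketch below is our own toy example (the lexicon, objects, and parameter names are assumptions, not from the paper); it shows how a pragmatic listener strengthens literal meanings by reasoning about an informative speaker.

```python
# Minimal Rational Speech Act (RSA) sketch for a reference game.
# The lexicon and object set here are illustrative assumptions.

lexicon = {  # utterance -> objects it is literally true of
    "blue":   {"blue_square", "blue_circle"},
    "green":  {"green_square"},
    "square": {"blue_square", "green_square"},
    "circle": {"blue_circle"},
}
objects = ["blue_square", "blue_circle", "green_square"]

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()} if z else d

def literal_listener(utt):
    # L0: uniform over the objects the utterance is literally true of.
    return normalize({o: float(o in lexicon[utt]) for o in objects})

def speaker(obj, alpha=1.0):
    # S1: prefers utterances that lead L0 to the intended object.
    return normalize({u: literal_listener(u)[obj] ** alpha for u in lexicon})

def pragmatic_listener(utt):
    # L1: Bayesian inference over objects, given the speaker model.
    return normalize({o: speaker(o)[utt] for o in objects})

print(pragmatic_listener("square"))
# "square" favors the blue square (0.6 vs. 0.4): the green square could
# have been named "green", so the speaker's word choice is informative.
```

This is the basic implicature pattern the framework captures; the published models add utterance costs, rationality parameters, and richer world knowledge.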

  • The Logical Primitives of Thought: Empirical Foundations for Compositional Cognitive Models. Piantadosi, S. T., Tenenbaum, J. B., Goodman, N. D. Psychological Review. 2016; 123 (4): 392–424


    The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically.

    View details for DOI 10.1037/a0039980

    View details for Web of Science ID 000379503900003

    View details for PubMedID 27077241

  • A Computational Model of Linguistic Humor in Puns. Kao, J. T., Levy, R., Goodman, N. D. Cognitive Science. 2016; 40 (5): 1270–1285


    Humor plays an essential role in human interactions. Precisely what makes something funny, however, remains elusive. While research on natural language understanding has made significant advancements in recent years, there has been little direct integration of humor research with computational models of language understanding. In this paper, we propose two information-theoretic measures-ambiguity and distinctiveness-derived from a simple model of sentence processing. We test these measures on a set of puns and regular sentences and show that they correlate significantly with human judgments of funniness. Moreover, within a set of puns, the distinctiveness measure distinguishes exceptionally funny puns from mediocre ones. Our work is the first, to our knowledge, to integrate a computational model of general language understanding and humor theory to quantitatively predict humor at a fine-grained level. We present it as an example of a framework for applying models of language processing to understand higher level linguistic and cognitive phenomena.

    View details for DOI 10.1111/cogs.12269

    View details for Web of Science ID 000383383700008

    View details for PubMedID 26235596

    View details for PubMedCentralID PMC5042108
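
One of the paper's two measures, ambiguity, can be illustrated as the entropy of a distribution over candidate sentence meanings. The sketch below is our own simplification with invented probabilities, not the paper's model or data.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a distribution over candidate meanings."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# A pun keeps both meanings live -> high ambiguity:
print(entropy([0.5, 0.5]))    # 1.0 bit

# A regular sentence is dominated by one meaning -> low ambiguity:
print(entropy([0.95, 0.05]))  # ~0.29 bits
```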

  • Affective cognition: Exploring lay theories of emotion. Ong, D. C., Zaki, J., Goodman, N. D. Cognition. 2015; 143: 141–162


    Humans skillfully reason about others' emotions, a phenomenon we term affective cognition. Despite its importance, few formal, quantitative theories have described the mechanisms supporting this phenomenon. We propose that affective cognition involves applying domain-general reasoning processes to domain-specific content knowledge. Observers' knowledge about emotions is represented in rich and coherent lay theories, which comprise consistent relationships between situations, emotions, and behaviors. Observers utilize this knowledge in deciphering social agents' behavior and signals (e.g., facial expressions), in a manner similar to rational inference in other domains. We construct a computational model of a lay theory of emotion, drawing on tools from Bayesian statistics, and test this model across four experiments in which observers drew inferences about others' emotions in a simple gambling paradigm. This work makes two main contributions. First, the model accurately captures observers' flexible but consistent reasoning about the ways that events and others' emotional responses to those events relate to each other. Second, our work models the problem of emotional cue integration-reasoning about others' emotion from multiple emotional cues-as rational inference via Bayes' rule, and we show that this model tightly tracks human observers' empirical judgments. Our results reveal a deep structural relationship between affective cognition and other forms of inference, and suggest wide-ranging applications to basic psychological theory and psychiatry.

    View details for DOI 10.1016/j.cognition.2015.06.010

    View details for Web of Science ID 000359885600017

    View details for PubMedID 26160501
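
The cue-integration idea, inferring emotion from multiple cues via Bayes' rule, can be sketched as follows. All probabilities here are invented for illustration, and conditional independence of the cues given the emotion is assumed; the paper's model is richer.

```python
# Toy Bayes-rule cue integration (our illustration, not the paper's model):
# infer an agent's emotion from a gamble outcome and a facial expression,
# treating the two cues as conditionally independent given the emotion.

prior = {"happy": 0.5, "sad": 0.5}
p_win = {"happy": 0.8, "sad": 0.3}    # P(won the gamble | emotion)
p_smile = {"happy": 0.9, "sad": 0.2}  # P(smiling | emotion)

def posterior(won, smiling):
    scores = {}
    for e, p in prior.items():
        p *= p_win[e] if won else 1 - p_win[e]
        p *= p_smile[e] if smiling else 1 - p_smile[e]
        scores[e] = p
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

print(posterior(won=True, smiling=True)["happy"])   # ~0.92: cues agree
print(posterior(won=False, smiling=True)["happy"])  # ~0.56: cues conflict
```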

  • Controlling Procedural Modeling Programs with Stochastically-Ordered Sequential Monte Carlo. Ritchie, D., Mildenhall, B., Goodman, N. D., Hanrahan, P. ACM Transactions on Graphics. 2015; 34 (4)

    View details for DOI 10.1145/2766895

    View details for Web of Science ID 000358786600071

  • Generating Design Suggestions under Tight Constraints with Gradient-based Probabilistic Programming. Ritchie, D., Lin, S., Goodman, N. D., Hanrahan, P. Computer Graphics Forum. 2015; 34 (2): 515–526

    View details for DOI 10.1111/cgf.12580

    View details for Web of Science ID 000358326600049

  • Relevant and robust: a response to Marcus and Davis (2013). Goodman, N. D., Frank, M. C., Griffiths, T. L., Tenenbaum, J. B., Battaglia, P. W., Hamrick, J. B. Psychological Science. 2015; 26 (4): 539–541

    View details for DOI 10.1177/0956797614559544

    View details for PubMedID 25749699

  • Rational Use of Cognitive Resources: Levels of Analysis Between the Computational and the Algorithmic. Griffiths, T. L., Lieder, F., Goodman, N. D. Topics in Cognitive Science. 2015; 7 (2): 217–229


    Marr's levels of analysis-computational, algorithmic, and implementation-have served cognitive science well over the last 30 years. But the recent increase in the popularity of the computational level raises a new challenge: How do we begin to relate models at different levels of analysis? We propose that it is possible to define levels of analysis that lie between the computational and the algorithmic, providing a way to build a bridge between computational- and algorithmic-level models. The key idea is to push the notion of rationality, often used in defining computational-level models, deeper toward the algorithmic level. We offer a simple recipe for reverse-engineering the mind's cognitive strategies by deriving optimal algorithms for a series of increasingly more realistic abstract computational architectures, which we call "resource-rational analysis."

    View details for DOI 10.1111/tops.12142

    View details for Web of Science ID 000353954500005

    View details for PubMedID 25898807

  • The Strategic Use of Noise in Pragmatic Reasoning. Bergen, L., Goodman, N. D. Topics in Cognitive Science. 2015; 7 (2): 336–350


    We combine two recent probabilistic approaches to natural language understanding, exploring the formal pragmatics of communication on a noisy channel. We first extend a model of rational communication between a speaker and listener, to allow for the possibility that messages are corrupted by noise. In this model, common knowledge of a noisy channel leads to the use and correct understanding of sentence fragments. A further extension of the model, which allows the speaker to intentionally reduce the noise rate on a word, is used to model prosodic emphasis. We show that the model derives several well-known changes in meaning associated with prosodic emphasis. Our results show that nominal amounts of actual noise can be leveraged for communicative purposes.

    View details for DOI 10.1111/tops.12144

    View details for Web of Science ID 000353954500014

    View details for PubMedID 25898999

  • How many kinds of reasoning? Inference, probability, and natural language semantics. Lassiter, D., Goodman, N. D. Cognition. 2015; 136: 123–134


    The "new paradigm" unifying deductive and inductive reasoning in a Bayesian framework (Oaksford & Chater, 2007; Over, 2009) has been claimed to be falsified by results which show sharp differences between reasoning about necessity vs. plausibility (Heit & Rotello, 2010; Rips, 2001; Rotello & Heit, 2009). We provide a probabilistic model of reasoning with modal expressions such as "necessary" and "plausible" informed by recent work in formal semantics of natural language, and show that it predicts the possibility of non-linear response patterns which have been claimed to be problematic. Our model also makes a strong monotonicity prediction, while two-dimensional theories predict the possibility of reversals in argument strength depending on the modal word chosen. Predictions were tested using a novel experimental paradigm that replicates the previously-reported response patterns with a minimal manipulation, changing only one word of the stimulus between conditions. We found a spectrum of reasoning "modes" corresponding to different modal words, and strong support for our model's monotonicity prediction. This indicates that probabilistic approaches to reasoning can account in a clear and parsimonious way for data previously argued to falsify them, as well as new, more fine-grained, data. It also illustrates the importance of careful attention to the semantics of language employed in reasoning experiments.

    View details for DOI 10.1016/j.cognition.2014.10.016

    View details for PubMedID 25497521

  • Inferring word meanings by assuming that speakers are informative. Frank, M. C., Goodman, N. D. Cognitive Psychology. 2014; 75: 80–96


    Language comprehension is more than a process of decoding the literal meaning of a speaker's utterance. Instead, by making the assumption that speakers choose their words to be informative in context, listeners routinely make pragmatic inferences that go beyond the linguistic data. If language learners make these same assumptions, they should be able to infer word meanings in otherwise ambiguous situations. We use probabilistic tools to formalize these kinds of informativeness inferences-extending a model of pragmatic language comprehension to the acquisition setting-and present four experiments whose data suggest that preschool children can use informativeness to infer word meanings and that adult judgments track quantitatively with informativeness.

    View details for DOI 10.1016/j.cogpsych.2014.08.002

    View details for PubMedID 25238461

  • Nonliteral understanding of number words. Kao, J. T., Wu, J. Y., Bergen, L., Goodman, N. D. Proceedings of the National Academy of Sciences. 2014; 111 (33): 12002–12007


    One of the most puzzling and important facts about communication is that people do not always mean what they say; speakers often use imprecise, exaggerated, or otherwise literally false descriptions to communicate experiences and attitudes. Here, we focus on the nonliteral interpretation of number words, in particular hyperbole (interpreting unlikely numbers as exaggerated and conveying affect) and pragmatic halo (interpreting round numbers imprecisely). We provide a computational model of number interpretation as social inference regarding the communicative goal, meaning, and affective subtext of an utterance. We show that our model predicts humans' interpretation of number words with high accuracy. Our model is the first to our knowledge to incorporate principles of communication and empirically measured background knowledge to quantitatively predict hyperbolic and pragmatic halo effects in number interpretation. This modeling framework provides a unified approach to nonliteral language understanding more generally.

    View details for DOI 10.1073/pnas.1407479111

    View details for Web of Science ID 000340438800037

    View details for PubMedID 25092304

  • A rational account of pedagogical reasoning: Teaching by, and learning from, examples COGNITIVE PSYCHOLOGY Shafto, P., Goodman, N. D., Griffiths, T. L. 2014; 71: 55-89


    Much of learning and reasoning occurs in pedagogical situations: situations in which a person who knows a concept chooses examples for the purpose of helping a learner acquire the concept. We introduce a model of teaching and learning in pedagogical settings that predicts which examples teachers should choose and what learners should infer given a teacher's examples. We present three experiments testing the model predictions for rule-based, prototype, and causally structured concepts. The model shows good quantitative and qualitative fits to the data across all three experiments, predicting novel qualitative phenomena in each case. We conclude by discussing implications for understanding concept learning and implications for theoretical claims about the role of pedagogy in human learning.

    View details for DOI 10.1016/j.cogpsych.2013.12.004

    View details for Web of Science ID 000336108500003

    View details for PubMedID 24607849

  • One and Done? Optimal Decisions From Very Few Samples COGNITIVE SCIENCE Vul, E., Goodman, N., Griffiths, T. L., Tenenbaum, J. B. 2014; 38 (4): 599-637

    View details for DOI 10.1111/cogs.12101

    View details for Web of Science ID 000337529500001

  • Some arguments are probably valid: Syllogistic reasoning as communication Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Tessler, M. H., Goodman, N. D. 2014
  • Uncertainty and denial: a resource-rational model of the value of information. PloS one Pierson, E., Goodman, N. 2014; 9 (11)


    Classical decision theory predicts that people should be indifferent to information that is not useful for making decisions, but this model often fails to describe human behavior. Here we investigate one such scenario, where people desire information about whether an event (the gain/loss of money) will occur even though there is no obvious decision to be made on the basis of this information. We find a curious dual trend: if information is costless, as the probability of the event increases people want the information more; if information is not costless, people's desire for the information peaks at an intermediate probability. People also want information more as the importance of the event increases, and less as the cost of the information increases. We propose a model that explains these results, based on the assumption that people have limited cognitive resources and obtain information about which events will occur so they can determine whether to expend effort planning for them.

    View details for DOI 10.1371/journal.pone.0113342

    View details for PubMedID 25426631
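
    The classical benchmark referenced in the abstract's first sentence, the expected value of information, is simple to compute: information has value only insofar as it can change which action is chosen. A toy calculation (the payoff numbers are hypothetical, not from the paper):

```python
def expected_value_of_information(p_event, utilities):
    """Classical VOI: expected utility of choosing after learning whether
    the event occurs, minus that of the best uninformed choice.
    `utilities` maps each action to (payoff if event, payoff if no event)."""
    best_uninformed = max(p_event * u1 + (1 - p_event) * u0
                          for u1, u0 in utilities.values())
    best_informed = (p_event * max(u1 for u1, _ in utilities.values())
                     + (1 - p_event) * max(u0 for _, u0 in utilities.values()))
    return best_informed - best_uninformed

# Preparing pays off only if the event happens: information can flip the choice.
voi_useful = expected_value_of_information(0.3, {"prepare": (10, -2), "ignore": (0, 0)})
# Preparing dominates either way: classical theory predicts indifference (VOI = 0).
voi_useless = expected_value_of_information(0.3, {"prepare": (10, 5), "ignore": (0, 0)})
print(voi_useful, voi_useless)
```

    The paper's finding is precisely that people's desire for information does not track this quantity when information is costly, which the resource-rational planning account is built to explain.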

  • The strategic use of noise in pragmatic reasoning Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Bergen, L., Goodman, N. D. 2014
  • Lost your marbles? The puzzle of dependent measures in experimental pragmatics Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Degen, J., Goodman, N. D. 2014
  • Solve For Standing Ovation: Should AI Researchers Bother Building A TED-Bot? Popular Science Goodman, N. D. 2014
  • Forget the Turing Test: Here’s How We Could Actually Measure AI WIRED Goodman, N. D. 2014
  • From counterfactual simulation to causal judgment Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Gerstenberg, T., Goodman, N. D., Lagnado, D. A., Tenenbaum, J. B. 2014
  • Formalizing the pragmatics of metaphor understanding Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Kao, J., Bergen, L., Goodman, N. D. 2014
  • Generating efficient MCMC kernels from probabilistic programs AISTATS 2014 Yang, L., Hanrahan, P., Goodman, N. D. 2014
  • Amortized inference in probabilistic reasoning Proceedings of the Thirty-Sixth Annual Conference of the Cognitive Science Society Gershman, S., Goodman, N. D. 2014
  • The mentalistic basis of core social cognition: experiments in preverbal infants and a computational model DEVELOPMENTAL SCIENCE Hamlin, J. K., Ullman, T., Tenenbaum, J., Goodman, N., Baker, C. 2013; 16 (2): 209-226

    View details for DOI 10.1111/desc.12017

    View details for Web of Science ID 000315384700006

  • Did She Jump Because She Was the Big Sister or Because the Trampoline Was Safe? Causal Inference and the Development of Social Attribution CHILD DEVELOPMENT Seiver, E., Gopnik, A., Goodman, N. D. 2013; 84 (2): 443-454


    Children rely on both evidence and prior knowledge to make physical causal inferences; this study explores whether they make attributions about others' behavior in the same manner. A total of one hundred and fifty-nine 4- and 6-year-olds saw 2 dolls interacting with 2 activities, and explained the dolls' actions. In the person condition, each doll acted consistently across activities, but differently from each other. In the situation condition, the two dolls acted differently for each activity, but both performed the same actions. Both age groups provided more "person" explanations (citing features of the doll) in the person condition than in the situation condition. In addition, 6-year-olds showed an overall bias toward "person" explanations. As in physical causal inference, social causal inference combines covariational evidence and prior knowledge.

    View details for DOI 10.1111/j.1467-8624.2012.01865.x

    View details for Web of Science ID 000316805900005

    View details for PubMedID 23002946

  • The Principles and Practice of Probabilistic Programming ACM SIGPLAN NOTICES Goodman, N. D. 2013; 48 (1): 399-401
  • The Funny Thing About Incongruity: A Computational Model of Humor in Puns Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society Kao, J. T., Levy, R., Goodman, N. D. 2013
  • Learned helplessness and generalization Proceedings of the Thirty-Fifth Annual Conference of the Cognitive Science Society Lieder, F., Goodman, N. D., Huys, Q. M. 2013
  • Reasoning about Reasoning by Nested Conditioning: Modeling Theory of Mind with Probabilistic Programs J. Cognitive Systems Research Stuhlmüller, A., Goodman, N. D. 2013
  • Context, scale structure, and statistics in the interpretation of positive-form adjectives SALT23 Lassiter, D., Goodman, N. D. 2013
  • Learning and using language via recursive pragmatic reasoning about other agents Advances in Neural Information Processing Systems, 25 Smith, N. J., Goodman, N. D., Frank, M. C. 2013
  • Learning Stochastic Inverses Advances in Neural Information Processing Systems, 25 Stuhlmüller, A., Taylor, J., Goodman, N. D. 2013
  • Knowledge and Implicature: Modeling Language Understanding as Social Cognition TOPICS IN COGNITIVE SCIENCE Goodman, N. D., Stuhlmüller, A. 2013; 5 (1): 173-184


    Is language understanding a special case of social cognition? To help evaluate this view, we can formalize it as the rational speech-act theory: Listeners assume that speakers choose their utterances approximately optimally, and listeners interpret an utterance by using Bayesian inference to "invert" this model of the speaker. We apply this framework to model scalar implicature ("some" implies "not all," and "N" implies "not more than N"). This model predicts an interaction between the speaker's knowledge state and the listener's interpretation. We test these predictions in two experiments and find good fit between model predictions and human judgments.

    View details for DOI 10.1111/tops.12007

    View details for Web of Science ID 000313754300009

    View details for PubMedID 23335578

  • Theory learning as stochastic search in the language of thought COGNITIVE DEVELOPMENT Ullman, T. D., Goodman, N. D., Tenenbaum, J. B. 2012; 27 (4): 455-480
  • Synthesizing Open Worlds with Constraints using Locally Annealed Reversible Jump MCMC ACM TRANSACTIONS ON GRAPHICS Yeh, Y., Yang, L., Watson, M., Goodman, N. D., Hanrahan, P. 2012; 31 (4)
  • Learning From Others: The Consequences of Psychological Reasoning for Human Learning PERSPECTIVES ON PSYCHOLOGICAL SCIENCE Shafto, P., Goodman, N. D., Frank, M. C. 2012; 7 (4): 341-351


    From early childhood, human beings learn not only from collections of facts about the world but also from social contexts through observations of other people, communication, and explicit teaching. In these contexts, the data are the result of human actions: actions that come about because of people's goals and intentions. To interpret the implications of others' actions correctly, learners must understand the people generating the data. Most models of learning, however, assume that data are randomly collected facts about the world and cannot explain how social contexts influence learning. We provide a Bayesian analysis of learning from knowledgeable others, which formalizes how learners may use a person's actions and goals to make inferences about the actor's knowledge about the world. We illustrate this framework using two examples from causal learning and conclude by discussing the implications for cognition, social reasoning, and cognitive development.

    View details for DOI 10.1177/1745691612448481

    View details for Web of Science ID 000305837300003

  • Predicting Pragmatic Reasoning in Language Games SCIENCE Frank, M. C., Goodman, N. D. 2012; 336 (6084): 998-998


    One of the most astonishing features of human language is its capacity to convey information efficiently in context. Many theories provide informal accounts of communicative inference, yet there have been few successes in making precise, quantitative predictions about pragmatic reasoning. We examined judgments about simple referential communication games, modeling behavior in these games by assuming that speakers attempt to be informative and that listeners use Bayesian inference to recover speakers' intended referents. Our model provides a close, parameter-free fit to human judgments, suggesting that the use of information-theoretic tools to predict pragmatic reasoning may lead to more effective formal models of communication.

    View details for DOI 10.1126/science.1218633

    View details for Web of Science ID 000304406800035

    View details for PubMedID 22628647
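
    The speaker–listener recursion behind this model is compact enough to sketch directly. The snippet below is a minimal illustration, assuming a hypothetical three-object reference game (a blue square, a blue circle, a green square) with single-word utterances, not the paper's actual stimuli:

```python
import numpy as np

# Hypothetical reference game: three objects, four one-word utterances.
worlds = ["blue_square", "blue_circle", "green_square"]
utterances = ["blue", "green", "square", "circle"]
# meaning[u][w] = 1 if utterance u is literally true of world w
meaning = np.array([
    [1, 1, 0],  # "blue"
    [0, 0, 1],  # "green"
    [1, 0, 1],  # "square"
    [0, 1, 0],  # "circle"
], dtype=float)

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener L0: a uniform prior conditioned on the literal meaning.
L0 = normalize(meaning, axis=1)
# Speaker S1: chooses utterances in proportion to how informative they are.
S1 = normalize(L0, axis=0)
# Pragmatic listener L1: Bayesian inversion of the speaker model.
L1 = normalize(S1, axis=1)

# Hearing "blue", the pragmatic listener favors the blue square (0.6 vs 0.4):
# for the blue circle, the speaker had the fully informative word "circle".
print(dict(zip(worlds, L1[utterances.index("blue")])))
```

    The model in the paper additionally conditions on empirically measured object salience; this sketch uses a uniform prior over referents for brevity.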

  • Bootstrapping in a language of thought: A formal model of numerical concept learning COGNITION Piantadosi, S. T., Tenenbaum, J. B., Goodman, N. D. 2012; 123 (2): 199-217


    In acquiring number words, children exhibit a qualitative leap in which they transition from understanding a few number words, to possessing a rich system of interrelated numerical concepts. We present a computational framework for understanding this inductive leap as the consequence of statistical inference over a sufficiently powerful representational system. We provide an implemented model that is powerful enough to learn number word meanings and other related conceptual systems from naturalistic data. The model shows that bootstrapping can be made computationally and philosophically well-founded as a theory of number learning. Our approach demonstrates how learners may combine core cognitive operations to build sophisticated representations during the course of development, and how this process explains observed developmental patterns in number word learning.

    View details for DOI 10.1016/j.cognition.2011.11.005

    View details for Web of Science ID 000303178000001

    View details for PubMedID 22284806

  • Comparing pluralities COGNITION Scontras, G., Graff, P., Goodman, N. D. 2012; 123 (1): 190-197


    What does it mean to compare sets of objects along a scale, for example by saying "the men are taller than the women"? We explore comparison of pluralities in two experiments, eliciting comparison judgments while varying the properties of the members of each set. We find that a plurality is judged as "bigger" when the mean size of its members is larger than the mean size of the competing plurality. These results are incompatible with previous accounts, in which plural comparison is inferred from many instances of singular comparison between the members of the sets (Matushansky & Ruys, 2006). Our results suggest the need for a type of predication that ascribes properties to plural entities, not just individuals, based on aggregate statistics of their members. More generally, these results support the idea that sets and their properties are actively represented as single units.

    View details for DOI 10.1016/j.cognition.2011.12.012

    View details for Web of Science ID 000301474000015

    View details for PubMedID 22245032

  • Learning Design Patterns with Bayesian Grammar Induction 25th Annual ACM Symposium on User Interface Software and Technology (UIST) Talton, J. O., Yang, L., Kumar, R., Lim, M., Goodman, N., Mech, R. ASSOC COMPUTING MACHINERY. 2012: 63–73
  • Context is key to making computers better conversationalists Goodman, N. D. 2012
  • Artificial Intelligence Could Be on Brink of Passing Turing Test WIRED Goodman, N. D. 2012
  • How many kinds of reasoning? Inference, probability, and natural language semantics Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society Lassiter, D., Goodman, N. D. 2012
  • A dynamic programming algorithm for inference in recursive probabilistic programs Second Statistical Relational AI workshop at UAI 2012 Stuhlmüller, A., Goodman, N. D. 2012
  • Noisy Newtons: Unifying process and dependency accounts of causal attribution Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society Gerstenberg, T., Goodman, N. D., Lagnado, D. A., Tenenbaum, J. B. 2012
  • Ping Pong in Church: Productive use of concepts in human probabilistic inference Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society Gerstenberg, T., Goodman, N. D. 2012
  • That’s what she (could have) said: How alternative utterances affect language use Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society Bergen, L., Goodman, N. D., Levy, R. 2012
  • Knowledge and implicature: Modeling language understanding as social cognition Proceedings of the Thirty-Fourth Annual Conference of the Cognitive Science Society Goodman, N. D., Stuhlmüller, A. 2012
  • Burn-in, bias, and the rationality of anchoring Advances in Neural Information Processing Systems, 24 Lieder, F., Griffiths, T. L., Goodman, N. D. 2012
  • Where science starts: Spontaneous experiments in preschoolers' exploratory play COGNITION Cook, C., Goodman, N. D., Schulz, L. E. 2011; 120 (3): 341-349


    Probabilistic models of expected information gain require integrating prior knowledge about causal hypotheses with knowledge about possible actions that might generate data relevant to those hypotheses. Here we looked at whether preschoolers (mean: 54 months) recognize "action possibilities" (affordances) in the environment that allow them to isolate variables when there is information to be gained. By manipulating the physical properties of the stimuli, we were able to affect the degree to which candidate variables could be isolated; by manipulating the base rate of candidate causes, we were able to affect the potential for information gain. Children's exploratory play was sensitive to both manipulations: given unambiguous evidence children played indiscriminately and rarely tried to isolate candidate causes; given ambiguous evidence, children both selected (Experiment 1) and designed (Experiment 2) informative interventions.

    View details for DOI 10.1016/j.cognition.2011.03.003

    View details for Web of Science ID 000293312400005

    View details for PubMedID 21561605

  • The double-edged sword of pedagogy: Instruction limits spontaneous exploration and discovery COGNITION Bonawitz, E., Shafto, P., Gweon, H., Goodman, N. D., Spelke, E., Schulz, L. 2011; 120 (3): 322-330


    Motivated by computational analyses, we look at how teaching affects exploration and discovery. In Experiment 1, we investigated children's exploratory play after an adult pedagogically demonstrated a function of a toy, after an interrupted pedagogical demonstration, after a naïve adult demonstrated the function, and at baseline. Preschoolers in the pedagogical condition focused almost exclusively on the target function; by contrast, children in the other conditions explored broadly. In Experiment 2, we show that children restrict their exploration both after direct instruction to themselves and after overhearing direct instruction given to another child; they do not show this constraint after observing direct instruction given to an adult or after observing a non-pedagogical intentional action. We discuss these findings as the result of rational inductive biases. In pedagogical contexts, a teacher's failure to provide evidence for additional functions provides evidence for their absence; such contexts generalize from child to child (because children are likely to have comparable states of knowledge) but not from adult to child. Thus, pedagogy promotes efficient learning but at a cost: children are less likely to perform potentially irrelevant actions but also less likely to discover novel information.

    View details for DOI 10.1016/j.cognition.2010.10.001

    View details for Web of Science ID 000293312400003

    View details for PubMedID 21216395

  • The imaginary fundamentalists: The unshocking truth about Bayesian cognitive science BEHAVIORAL AND BRAIN SCIENCES Chater, N., Goodman, N., Griffiths, T. L., Kemp, C., Oaksford, M., Tenenbaum, J. B. 2011; 34 (4): 194-?
  • How to Grow a Mind: Statistics, Structure, and Abstraction SCIENCE Tenenbaum, J. B., Kemp, C., Griffiths, T. L., Goodman, N. D. 2011; 331 (6022): 1279-1285


    In coming to understand the world (in learning concepts, acquiring language, and grasping causal relations), our minds make inferences that appear to go far beyond the data available. How do we do it? This review describes recent approaches to reverse-engineering human learning and cognitive development and, in parallel, engineering more humanlike machine learning systems. Computational models that perform probabilistic inference over hierarchies of flexibly structured representations can address some of the deepest questions about the nature and origins of human thought: How does abstract knowledge guide learning and reasoning from sparse data? What forms does our knowledge take, across different domains and tasks? And how is that abstract knowledge itself acquired?

    View details for DOI 10.1126/science.1192788

    View details for Web of Science ID 000288215200035

    View details for PubMedID 21393536

  • Learning and the Language of Thought IEEE International Conference on Computer Vision (ICCV) Goodman, N. D. IEEE. 2011
  • More Than Child’s Play: Ability to Think Scientifically Declines as Kids Grow Up Scientific American Goodman, N. D. 2011
  • I, algorithm New Scientist Goodman, N. D. 2011
  • Productivity and reuse in language Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society O’Donnell, T. J., Snedeker, J., Tenenbaum, J. B., Goodman, N. D. 2011
  • Nonstandard Interpretations of Probabilistic Programs for Efficient Inference Advances in Neural Information Processing Systems, 23 Wingate, D., Goodman, N. D., Stuhlmüller, A., Siskind, J. 2011
  • Bayesian Policy Search with Policy Priors IJCAI 2011 Wingate, D., Kaelbling, L., Roy, D., Goodman, N. D., Tenenbaum, J. B. 2011
  • Ad-hoc scalar implicature in adults and children Proceedings of the Thirty-Third Annual Conference of the Cognitive Science Society Stiller, A., Goodman, N. D., Frank, M. C. 2011
  • Lightweight Implementations of Probabilistic Programming Languages Via Transformational Compilation Artificial Intelligence and Statistics 2011 Wingate, D., Stuhlmüller, A., Goodman, N. D. 2011
  • Learning a Theory of Causality PSYCHOLOGICAL REVIEW Goodman, N. D., Ullman, T. D., Tenenbaum, J. B. 2011; 118 (1): 110-119


    The very early appearance of abstract knowledge is often taken as evidence for innateness. We explore the relative learning speeds of abstract and specific knowledge within a Bayesian framework and the role for innate structure. We focus on knowledge about causality, seen as a domain-general intuitive theory, and ask whether this knowledge can be learned from co-occurrence of events. We begin by phrasing the causal Bayes nets theory of causality and a range of alternatives in a logical language for relational theories. This allows us to explore simultaneous inductive learning of an abstract theory of causality and a causal model for each of several causal systems. We find that the correct theory of causality can be learned relatively quickly, often becoming available before specific causal theories have been learned, an effect we term the blessing of abstraction. We then explore the effect of providing a variety of auxiliary evidence and find that a collection of simple perceptual input analyzers can help to bootstrap abstract knowledge. Together, these results suggest that the most efficient route to causal knowledge may be to build in not an abstract notion of causality but a powerful inductive learning mechanism and a variety of perceptual supports. While these results are purely computational, they have implications for cognitive development, which we explore in the conclusion.

    View details for DOI 10.1037/a0021336

    View details for Web of Science ID 000286560500007

    View details for PubMedID 21244189
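
    The blessing of abstraction lends itself to a small worked example. The two-level coin model below is an invented stand-in for the paper's logical theories of causality, meant only to show the effect: a few observations spread across several systems can decide between abstract theories while every specific system remains poorly estimated.

```python
import math

def log_beta(a, b):
    """log of the beta function B(a, b)."""
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(k, n, a, b):
    """log P(k heads in n flips) with the coin's rate integrated out
    under a Beta(a, b) prior (beta-binomial marginal likelihood)."""
    return math.log(math.comb(n, k)) + log_beta(k + a, n - k + b) - log_beta(a, b)

# Two abstract theories about an entire domain of coins (hyperparameters assumed):
theories = {"heads_biased": (10.0, 1.0), "unconstrained": (1.0, 1.0)}
# Sparse, hypothetical data: five coins, only two flips each, all heads.
data = [(2, 2)] * 5

# Uniform prior over theories; each coin contributes its marginal likelihood.
log_post = {name: sum(log_marginal(k, n, a, b) for k, n in data)
            for name, (a, b) in theories.items()}
total = math.log(sum(math.exp(v) for v in log_post.values()))
posterior = {name: math.exp(v - total) for name, v in log_post.items()}

# The abstract theory is already near-certain even though each individual
# coin's rate is constrained by just two flips.
print(posterior)
```

    Adding more systems drives the abstract posterior toward certainty faster than any single system's parameters are pinned down, which is the qualitative pattern the paper calls the blessing of abstraction.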

  • Optimal habits can develop spontaneously through sensitivity to local cost PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA Desrochers, T. M., Jin, D. Z., Goodman, N. D., Graybiel, A. M. 2010; 107 (47): 20512-20517


    Habits and rituals are expressed universally across animal species. These behaviors are advantageous in allowing sequential behaviors to be performed without cognitive overload, and appear to rely on neural circuits that are relatively benign but vulnerable to takeover by extreme contexts, neuropsychiatric sequelae, and processes leading to addiction. Reinforcement learning (RL) is thought to underlie the formation of optimal habits. However, this theoretic formulation has principally been tested experimentally in simple stimulus-response tasks with relatively few available responses. We asked whether RL could also account for the emergence of habitual action sequences in realistically complex situations in which no repetitive stimulus-response links were present and in which many response options were present. We exposed naïve macaque monkeys to such experimental conditions by introducing a unique free saccade scan task. Despite the highly uncertain conditions and no instruction, the monkeys developed a succession of stereotypical, self-chosen saccade sequence patterns. Remarkably, these continued to morph for months, long after session-averaged reward and cost (eye movement distance) reached asymptote. Prima facie, these continued behavioral changes appeared to challenge RL. However, trial-by-trial analysis showed that pattern changes on adjacent trials were predicted by lowered cost, and RL simulations that reduced the cost reproduced the monkeys' behavior. Ultimately, the patterns settled into stereotypical saccade sequences that minimized the cost of obtaining the reward on average. These findings suggest that brain mechanisms underlying the emergence of habits, and perhaps unwanted repetitive behaviors in clinical disorders, could follow RL algorithms capturing extremely local explore/exploit tradeoffs.

    View details for DOI 10.1073/pnas.1013470107

    View details for Web of Science ID 000284529000067

    View details for PubMedID 20974967

  • Learning to Learn Causal Models COGNITIVE SCIENCE Kemp, C., Goodman, N. D., Tenenbaum, J. B. 2010; 34 (7): 1185-1243


    Learning to understand a single causal system can be an achievement, but humans must learn about multiple causal systems over the course of a lifetime. We present a hierarchical Bayesian framework that helps to explain how learning about several causal systems can accelerate learning about systems that are subsequently encountered. Given experience with a set of objects, our framework learns a causal model for each object and a causal schema that captures commonalities among these causal models. The schema organizes the objects into categories and specifies the causal powers and characteristic features of these categories and the characteristic causal interactions between categories. A schema of this kind allows causal models for subsequent objects to be rapidly learned, and we explore this accelerated learning in four experiments. Our results confirm that humans learn rapidly about the causal powers of novel objects, and we show that our framework accounts better for our data than alternative models of causal learning.

    View details for DOI 10.1111/j.1551-6709.2010.01128.x

    View details for Web of Science ID 000281554700004

    View details for PubMedID 21564248

  • The Structure and Dynamics of Scientific Theories: A Hierarchical Bayesian Perspective PHILOSOPHY OF SCIENCE Henderson, L., Goodman, N. D., Tenenbaum, J. B., Woodward, J. F. 2010; 77 (2): 172-200
  • Beyond Boolean logic: exploring representation languages for learning complex concepts Proceedings of the Thirty-Second Annual Conference of the Cognitive Science Society Piantadosi, S. T., Tenenbaum, J. B., Goodman, N. D. 2010
  • Learning Structured Generative Concepts Proceedings of the Thirty-Second Annual Conference of the Cognitive Science Society Stuhlmüller, A., Tenenbaum, J. B., Goodman, N. D. 2010
  • Help or hinder: Bayesian models of social goal inference Advances in Neural Information Processing Systems Ullman, T., Baker, C. L., Macindoe, O., Evans, O., Goodman, N. D., Tenenbaum, J. B. 2010
  • Theory learning as stochastic search Proceedings of the Thirty-Second Annual Conference of the Cognitive Science Society Ullman, T. D., Goodman, N. D., Tenenbaum, J. B. 2010
  • Prior expectations in pedagogical situations Proceedings of the Thirty-Second Annual Conference of the Cognitive Science Society Shafto, P., Goodman, N. D., Gerstle, B., Ladusaw, F. 2010
  • Using Speakers' Referential Intentions to Model Early Cross-Situational Word Learning PSYCHOLOGICAL SCIENCE Frank, M. C., Goodman, N. D., Tenenbaum, J. B. 2009; 20 (5): 578-585


    Word learning is a "chicken and egg" problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference.

    View details for DOI 10.1111/j.1467-9280.2009.02335.x

    View details for Web of Science ID 000265774700011

    View details for PubMedID 19389131

  • Informative communication in word production and word learning Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Frank, M. C., Goodman, N. D., Lai, P., Tenenbaum, J. B. 2009
  • Continuity of discourse provides information for word learning Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Frank, M. C., Goodman, N. D., Tenenbaum, J. B., Fernald, A. 2009
  • One and done: Globally optimal behavior from locally suboptimal decisions Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Vul, E., Goodman, N. D., Griffiths, T. L., Tenenbaum, J. B. 2009
  • How tall Is tall? Compositionality, statistics, and gradable adjectives Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Schmidt, L., Goodman, N. D., Barner, D., Tenenbaum, J. B. 2009
  • The infinite latent events model Uncertainty in Artificial Intelligence 2009 Wingate, D., Goodman, N. D., Roy, D. M., Tenenbaum, J. B. 2009
  • Cause and intent: Social reasoning in causal learning Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Goodman, N. D., Baker, C. L., Tenenbaum, J. B. 2009
  • Learning a theory of causality Proceedings of the Thirty-First Annual Conference of the Cognitive Science Society Goodman, N. D., Ullman, T., Tenenbaum, J. B. 2009
  • Going beyond the evidence: Abstract laws and preschoolers' responses to anomalous data COGNITION Schulz, L. E., Goodman, N. D., Tenenbaum, J. B., Jenkins, A. C. 2008; 109 (2): 211-223


    Given minimal evidence about novel objects, children might learn only relationships among the specific entities, or they might make a more abstract inference, positing classes of entities and the relations that hold among those classes. Here we show that preschoolers (mean: 57 months) can use sparse data about perceptually unique objects to infer abstract physical causal laws. These newly inferred abstract laws were robust to potentially anomalous evidence; in the face of apparent counter-evidence, children (correctly) posited the existence of an unobserved object rather than revise the abstract laws. This suggests that children's ability to learn robust, abstract principles does not depend on extensive prior experience but can occur rapidly, on-line, and in tandem with inferences about specific relations.

    View details for DOI 10.1016/j.cognition.2008.07.017

    View details for Web of Science ID 000261756000003

    View details for PubMedID 18930186

  • Compositionality in rational analysis: Grammar-based induction for concept learning The probabilistic mind: Prospects for Bayesian cognitive science Goodman, N. D., Tenenbaum, J. B., Griffiths, T. L., Feldman, J. edited by Oaksford, M., Chater, N. 2008
  • Teaching games: statistical sampling assumptions for learning in pedagogical situations Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Shafto, P., Goodman, N. D. 2008
  • Theory-based social goal induction Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Baker, C. L., Goodman, N. D., Tenenbaum, J. B. 2008
  • Learning relational theories Advances in Neural Information Processing Systems Kemp, C., Goodman, N. D., Tenenbaum, J. B. 2008
  • A Bayesian framework for cross-situational word-learning Advances in Neural Information Processing Systems, 20 Frank, M. C., Goodman, N. D., Tenenbaum, J. B. 2008
  • Theory acquisition and the language of thought Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Kemp, C., Goodman, N. D., Tenenbaum, J. B. 2008
  • Church: a language for generative models Uncertainty in Artificial Intelligence 2008 Goodman, N. D., Mansinghka, V. K., Roy, D., Bonawitz, K., Tenenbaum, J. B. 2008
  • Modeling semantic cognition as logical dimensionality reduction Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Katz, Y., Goodman, N. D., Kersting, K., Kemp, C., Tenenbaum, J. B. 2008
  • Bayesian model of compositional semantics acquisition Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Piantadosi, S. T., Goodman, N. D., Ellis, B. A., Tenenbaum, J. B. 2008
  • Structured correlation from the causal background Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society Mayrhofer, R., Goodman, N. D., Waldmann, M., Tenenbaum, J. B. 2008
  • A rational analysis of rule-based concept learning 29th Annual Conference of the Cognitive Science Society Goodman, N. D., Tenenbaum, J. B., Feldman, J., Griffiths, T. L. PSYCHOLOGY PRESS. 2008: 108–154


    This article proposes a new model of human concept learning that provides a rational analysis of learning feature-based concepts. This model is built upon Bayesian inference for a grammatically structured hypothesis space: a concept language of logical rules. This article compares the model predictions to human generalization judgments in several well-known category learning experiments, and finds good agreement for both average and individual participant generalizations. This article further investigates judgments for a broad set of 7-feature concepts, a more natural setting in several ways, and again finds that the model explains human performance.

    View details for DOI 10.1080/03640210701802071

    View details for Web of Science ID 000254296700005

    View details for PubMedID 21635333
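    The model described in this abstract scores rule hypotheses by a prior favoring grammatically simpler rules and a noisy likelihood over labeled examples. The following is a minimal sketch of that idea, not the authors' implementation: the hypothesis space, complexity values, and noise parameter are all assumed for illustration.

```python
import math

# Tiny hypothesis space of logical rules over binary feature vectors.
# Each rule is (name, predicate, complexity); complexity stands in for
# the length of the rule's derivation in a concept grammar.
RULES = [
    ("f0", lambda x: x[0] == 1, 1),
    ("not f0", lambda x: x[0] == 0, 2),
    ("f0 and f1", lambda x: x[0] == 1 and x[1] == 1, 3),
    ("f0 or f1", lambda x: x[0] == 1 or x[1] == 1, 3),
]

def posterior(data, eta=0.1):
    """Normalized posterior over RULES given (features, label) pairs.

    Prior ~ exp(-complexity); the likelihood tolerates label noise eta."""
    scores = []
    for name, pred, comp in RULES:
        prior = math.exp(-comp)
        lik = 1.0
        for x, y in data:
            lik *= (1 - eta) if pred(x) == y else eta
        scores.append((name, prior * lik))
    z = sum(s for _, s in scores)
    return {name: s / z for name, s in scores}

data = [((1, 1), True), ((1, 0), True), ((0, 1), False)]
post = posterior(data)
best = max(post, key=post.get)
```

    With this toy data, the simple rule "f0" fits every example and wins on both prior and likelihood; graded posteriors over such rule spaces are what the full model compares against human generalization judgments.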

  • Learning causal schemata Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society Kemp, C., Goodman, N. D., Tenenbaum, J. B. 2007
  • Frameworks in science: a Bayesian approach LSE-Pitt Conference: Confirmation, Induction and Science Henderson, L., Goodman, N. D., Tenenbaum, J. B., Woodward, J. 2007
  • A rational analysis of rule-based concept learning Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society Goodman, N. D., Griffiths, T. L., Feldman, J., Tenenbaum, J. B. 2007
  • Learning grounded causal models Proceedings of the Twenty-Ninth Annual Conference of the Cognitive Science Society Goodman, N. D., Mansinghka, V. K., Tenenbaum, J. B. 2007
  • Intuitive theories of mind: A rational approach to false belief Proceedings of the Twenty-Eighth Annual Conference of the Cognitive Science Society Goodman, N. D., Baker, C. L., Bonawitz, E. B., Mansinghka, V. K., Gopnik, A., Wellman, H., Schulz, L., Tenenbaum, J. B. 2006
  • On the stable equivalence of open books in three-manifolds GEOMETRY & TOPOLOGY Giroux, E., Goodman, N. 2006; 10: 97-114
  • Overtwisted open books from sobering arcs ALGEBRAIC AND GEOMETRIC TOPOLOGY Goodman, N. 2005; 5: 1173-1195