
Elisa Kreiss
Ph.D. Student in Linguistics, admitted Autumn 2018
All Publications
-
Causal Distillation for Language Models
Association for Computational Linguistics (ACL). 2022: 4288-4295
Web of Science ID: 000859869504033
-
Inducing Causal Structure for Interpretable Neural Networks
Journal of Machine Learning Research (JMLR). 2022
Web of Science ID: 000922378802017
-
When redundancy is useful: A Bayesian approach to "overinformative" referring expressions.
Psychological Review. 2020
Abstract
Referring is one of the most basic and prevalent uses of language. How do speakers choose from the wealth of referring expressions at their disposal? Rational theories of language use have come under attack for decades for not being able to account for the seemingly irrational overinformativeness ubiquitous in referring expressions. Here we present a novel production model of referring expressions within the Rational Speech Act framework that treats speakers as agents that rationally trade off cost and informativeness of utterances. Crucially, we relax the assumption that informativeness is computed with respect to a deterministic Boolean semantics, in favor of a nondeterministic continuous semantics. This innovation allows us to capture a large number of seemingly disparate phenomena within one unified framework: the basic asymmetry in speakers' propensity to overmodify with color rather than size; the increase in overmodification in complex scenes; the increase in overmodification with atypical features; and the increase in specificity in nominal reference as a function of typicality. These findings cast a new light on the production of referring expressions: rather than being wastefully overinformative, reference is usefully redundant.
DOI: 10.1037/rev0000186
PubMed ID: 32237876
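The abstract's core idea — a Rational Speech Act speaker that trades off informativeness against cost, with continuous rather than Boolean semantics — can be sketched in a few lines. The scene, the semantic fidelity values (color words more reliable than size words), and the `alpha` and `cost_per_word` settings below are illustrative assumptions, not the paper's fitted parameters; they only show how redundant modification can fall out of the model.

```python
# Minimal RSA speaker sketch with continuous (non-Boolean) semantics.
# Scene and all numeric values are illustrative assumptions.
import math

objects = ["big_blue", "small_blue", "big_red"]

def meaning(word, obj):
    """Continuous semantic fidelity of a word for an object.

    1.0 would recover Boolean semantics; values below 1 model noisy
    word meanings. Assumption: color terms apply more reliably (0.99)
    than size terms (0.8), per the color/size asymmetry in the paper.
    """
    size, color = obj.split("_")
    if word in ("big", "small"):
        return 0.8 if word == size else 0.01
    if word in ("blue", "red"):
        return 0.99 if word == color else 0.01
    return 1.0

def utterance_meaning(utt, obj):
    # Words combine multiplicatively (independence assumption).
    p = 1.0
    for w in utt.split():
        p *= meaning(w, obj)
    return p

def L0(utt):
    """Literal listener: P(object | utterance), normalized semantics."""
    scores = [utterance_meaning(utt, o) for o in objects]
    z = sum(scores)
    return [s / z for s in scores]

def S1(target, utterances, alpha=5.0, cost_per_word=0.1):
    """Speaker: soft-max of informativeness minus utterance cost."""
    ti = objects.index(target)
    utils = [alpha * (math.log(L0(u)[ti]) - cost_per_word * len(u.split()))
             for u in utterances]
    z = sum(math.exp(u) for u in utils)
    return {u: math.exp(ut) / z for u, ut in zip(utterances, utils)}

probs = S1("big_blue", ["big", "blue", "big blue"])
for u, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{u:10s} {p:.3f}")
```

With these (assumed) settings, the "redundant" two-word utterance "big blue" gets the highest speaker probability even though either word alone has some discriminating power, illustrating the abstract's claim that such redundancy is useful rather than wasteful: under noisy semantics, the extra modifier sharply raises the literal listener's chance of picking the right referent, outweighing its small cost.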