Academic Appointments


  • Assistant Professor, Organizational Behavior

Program Affiliations


  • Symbolic Systems Program

All Publications


  • Age and gender distortion in online media and large language models. Nature Guilbeault, D., Delecourt, S., Desikan, B. S. 2025

    Abstract

    Are widespread stereotypes accurate[1-3] or socially distorted[4-6]? This continuing debate is limited by the lack of large-scale multimodal data on stereotypical associations and the inability to compare these to ground truth indicators. Here we overcame these challenges in the analysis of age-related gender bias[7-9], for which age provides an objective anchor for evaluating stereotype accuracy. Despite there being no systematic age differences between women and men in the workforce according to the US Census, we found that women are represented as younger than men across occupations and social roles in nearly 1.4 million images and videos from Google, Wikipedia, IMDb, Flickr and YouTube, as well as in nine language models trained on billions of words from the internet. This age gap is the starkest for content depicting occupations with higher status and earnings. We demonstrate how mainstream algorithms amplify this bias. A nationally representative pre-registered experiment (n = 459) found that Googling images of occupations amplifies age-related gender bias in participants' beliefs and hiring preferences. Furthermore, when generating and evaluating resumes, ChatGPT assumes that women are younger and less experienced, rating older male applicants as of higher quality. Our study shows how gender and age are jointly distorted throughout the internet and its mediating algorithms, thereby revealing critical challenges and opportunities in the fight against inequality.

    View details for DOI 10.1038/s41586-025-09581-z

    View details for PubMedID 41062689

    View details for PubMedCentralID 10901730

  • Statistical or Embodied? Comparing Colorseeing, Colorblind, Painters, and Large Language Models in Their Processing of Color Metaphors. Cognitive Science Nadler, E. O., Guilbeault, D., Ringold, S. M., Williamson, T. R., Bellemare-Pepin, A., Comșa, I. M., Jerbi, K., Narayanan, S., Aziz-Zadeh, L. 2025; 49 (7): e70083

    Abstract

    Can metaphorical reasoning involving embodied experience, such as color perception, be learned from the statistics of language alone? Recent work finds that colorblind individuals robustly understand and reason abstractly about color, implying that color associations in everyday language might contribute to the metaphorical understanding of color. However, it is unclear how much colorblind individuals' understanding of color is driven by language versus their limited (but no less embodied) visual experience. A more direct test of whether language supports the acquisition of humans' understanding of color is whether large language models (LLMs), those trained purely on text with no visual experience, can nevertheless learn to generate consistent and coherent metaphorical responses about color. Here, we conduct preregistered surveys that compare colorseeing adults, colorblind adults, and LLMs in how they (1) associate colors to words that lack established color associations and (2) interpret conventional and novel color metaphors. Colorblind and colorseeing adults exhibited highly similar and replicable color associations with novel words and abstract concepts. Yet, while GPT (a popular LLM) also generated replicable color associations with impressive consistency, its associations departed considerably from colorseeing and colorblind participants. Moreover, GPT frequently failed to generate coherent responses about its own metaphorical color associations when asked to invert its color associations or explain novel color metaphors in context. Consistent with this view, painters who regularly work with color pigments were more likely than all other groups to understand novel color metaphors using embodied reasoning. Thus, embodied experience may play an important role in metaphorical reasoning about color and the generation of conceptual connections between embodied associations.

    View details for DOI 10.1111/cogs.70083

    View details for PubMedID 40621800

  • Information architectures: a framework for understanding socio-technical systems. npj Complexity Smaldino, P. E., Russell, A., Zefferman, M. R., Donath, J., Foster, J. G., Guilbeault, D., Hilbert, M., Hobson, E. A., Lerman, K., Miton, H., Moser, C., Lasser, J., Schmer-Galunder, S., Shapiro, J. N., Zhong, Q., Patt, D. 2025; 2 (1): 13

    Abstract

    A sequence of technological inventions over several centuries has dramatically lowered the cost of producing and distributing information. Because societies ride on a substrate of information, these changes have profoundly impacted how we live, work, and interact. This paper explores the nature of information architectures (IAs): the features that govern how information flows within human populations. IAs include physical and digital infrastructures, norms and institutions, and algorithmic technologies for filtering, producing, and disseminating information. IAs can reinforce societal biases and lead to prosocial outcomes as well as social ills. IAs have culturally evolved rapidly with human usage, creating new affordances and new problems for the dynamics of social interaction. We explore societal outcomes instigated by shifts in IAs and call for an enhanced understanding of the social implications of increasing IA complexity, the nature of competition among IAs, and the creation of mechanisms for the beneficial use of IAs.

    View details for DOI 10.1038/s44260-025-00037-z

    View details for PubMedID 40255931

    View details for PubMedCentralID PMC12006018

  • Online images amplify gender bias. Nature Guilbeault, D., Delecourt, S., Hull, T., Desikan, B., Chu, M., Nadler, E. 2024: 1049-1055

    Abstract

    Each year, people spend less time reading and more time viewing images[1], which are proliferating online[2-4]. Images from platforms such as Google and Wikipedia are downloaded by millions every day[2,5,6], and millions more are interacting through social media, such as Instagram and TikTok, that primarily consist of exchanging visual content. In parallel, news agencies and digital advertisers are increasingly capturing attention online through the use of images[7,8], which people process more quickly, implicitly and memorably than text[9-12]. Here we show that the rise of images online significantly exacerbates gender bias, both in its statistical prevalence and its psychological impact. We examine the gender associations of 3,495 social categories (such as 'nurse' or 'banker') in more than one million images from Google, Wikipedia and Internet Movie Database (IMDb), and in billions of words from these platforms. We find that gender bias is consistently more prevalent in images than text for both female- and male-typed categories. We also show that the documented underrepresentation of women online[13-18] is substantially worse in images than in text, public opinion and US census data. Finally, we conducted a nationally representative, preregistered experiment that shows that googling for images rather than textual descriptions of occupations amplifies gender bias in participants' beliefs. Addressing the societal effect of this large-scale shift towards visual communication will be essential for developing a fair and inclusive future for the internet.

    View details for DOI 10.1038/s41586-024-07068-x

    View details for Web of Science ID 001171755900002

    View details for PubMedID 38355800

    View details for PubMedCentralID PMC10901730

  • Exposure to the Views of Opposing Others with Latent Cognitive Differences Results in Social Influence-But Only When Those Differences Remain Obscured. Management Science Guilbeault, D., van Loon, A., Lix, K., Goldberg, A., Srivastava, S. B. 2023
  • Divergences in color perception between deep neural networks and humans. Cognition Nadler, E. O., Darragh-Ford, E., Desikan, B. S., Conaway, C., Chu, M., Hull, T., Guilbeault, D. 2023; 241: 105621

    Abstract

    Deep neural networks (DNNs) are increasingly proposed as models of human vision, bolstered by their impressive performance on image classification and object recognition tasks. Yet, the extent to which DNNs capture fundamental aspects of human vision such as color perception remains unclear. Here, we develop novel experiments for evaluating the perceptual coherence of color embeddings in DNNs, and we assess how well these algorithms predict human color similarity judgments collected via an online survey. We find that state-of-the-art DNN architectures - including convolutional neural networks and vision transformers - provide color similarity judgments that strikingly diverge from human color judgments of (i) images with controlled color properties, (ii) images generated from online searches, and (iii) real-world images from the canonical CIFAR-10 dataset. We compare DNN performance against an interpretable and cognitively plausible model of color perception based on wavelet decomposition, inspired by foundational theories in computational neuroscience. While one deep learning model - a convolutional DNN trained on a style transfer task - captures some aspects of human color perception, our wavelet algorithm provides more coherent color embeddings that better predict human color judgments compared to all DNNs we examine. These results hold when altering the high-level visual task used to train similar DNN architectures (e.g., image classification versus image segmentation), as well as when examining the color embeddings of different layers in a given DNN architecture. These findings break new ground in the effort to analyze the perceptual representations of machine learning algorithms and to improve their ability to serve as cognitively plausible models of human vision. Implications for machine learning, human perception, and embodied cognition are discussed.

    View details for DOI 10.1016/j.cognition.2023.105621

    View details for PubMedID 37716312

  • Signal in Noise: Exploring Meaning Encoded in Random Character Sequences with Character-Aware Language Models. Association for Computational Linguistics (ACL) Chu, M., Desikan, B., Nadler, E. O., Sardo, R. L., Darragh-Ford, E., Guilbeault, D. 2022: 7120-7134
  • Color associations in abstract semantic domains. Cognition Guilbeault, D., Nadler, E. O., Chu, M., Lo Sardo, D. R., Kar, A. A., Desikan, B. S. 2020; 201: 104306

    Abstract

    The embodied cognition paradigm has stimulated ongoing debate about whether sensory data - including color - contributes to the semantic structure of abstract concepts. Recent uses of linguistic data in the study of embodied cognition have been focused on textual corpora, which largely precludes the direct analysis of sensory information. Here, we develop an automated approach to multimodal content analysis that detects associations between words based on the color distributions of their Google Image search results. Crucially, we measure color using a transformation of colorspace that closely resembles human color perception. We find that words in the abstract domains of academic disciplines, emotions, and music genres, cluster in a statistically significant fashion according to their color distributions. Furthermore, we use the lexical ontology WordNet and crowdsourced human judgments to show that this clustering reflects non-arbitrary semantic structure, consistent with metaphor-based accounts of embodied cognition. In particular, we find that images corresponding to more abstract words exhibit higher variability in colorspace, and semantically similar words have more similar color distributions. Strikingly, we show that color associations often reflect shared affective dimensions between abstract domains, thus revealing patterns of aesthetic coherence in everyday language. We argue that these findings provide a novel way to synthesize metaphor-based and affect-based accounts of embodied semantics.

    View details for DOI 10.1016/j.cognition.2020.104306

    View details for PubMedID 32504912