Professional Education
- PhD, University of Wisconsin-Madison, Psychology (2025)
- MS, University of Wisconsin-Madison, Psychology (2021)
- AB, Vassar College, Cognitive Science & Japanese (2019)
All Publications
- Drawings of THINGS: A large-scale drawing dataset of 1854 object concepts.
Behavior Research Methods
2026; 58 (2): 57
Abstract
The development of large datasets of natural images has galvanized progress in psychology, neuroscience, and computer science. Notably, the THINGS database constitutes a collective effort toward understanding human visual knowledge by accumulating rich data on a shared set of visual object concepts across several studies. In this paper, we introduce Drawings of THINGS (DoT), a novel dataset of 28,627 human drawings of 1,854 diverse object concepts, sampled systematically from concrete, picturable, and nameable nouns in American English and mirroring the structure of the THINGS image database. In addition to each drawing's stroke history, we collected fine-grained recognition data for every drawing, along with metadata on participant demographics, drawing ability, and mental imagery. We characterize people's ability to communicate and recognize semantic information encoded in drawings, and compare it with their ability to recognize real-world images of the same visual objects. We also explore the relationship between drawing understanding and the memorability and typicality of the objects in THINGS. In sum, we envision DoT as a powerful tool that builds on the THINGS database to advance understanding of how humans express knowledge about visual concepts.
DOI: 10.3758/s13428-025-02887-w
PMID: 41618073
PMCID: PMC12858628
- EncQA: Benchmarking Vision-Language Models on Visual Encodings for Charts
IEEE Transactions on Visualization and Computer Graphics
2026; 32 (1): 648-658
Abstract
Multimodal vision-language models (VLMs) continue to achieve ever-improving scores on chart understanding benchmarks. Yet, we find that this progress does not fully capture the breadth of visual reasoning capabilities essential for interpreting charts. We introduce EncQA, a novel benchmark informed by the visualization literature, designed to provide systematic coverage of visual encodings and analytic tasks that are crucial for chart understanding. EncQA provides 2,076 synthetic question-answer pairs, enabling balanced coverage of six visual encoding channels (position, length, area, color quantitative, color nominal, and shape) and eight tasks (find extrema, retrieve value, find anomaly, filter values, compute derived value exact, compute derived value relative, correlate values, and correlate values relative). Our evaluation of 9 state-of-the-art VLMs reveals that performance varies significantly across encodings within the same task, as well as across tasks. Contrary to expectations, we observe that performance does not improve with model size for many task-encoding pairs. Our results suggest that advancing chart understanding requires targeted strategies addressing specific visual reasoning gaps, rather than solely scaling up model or dataset size.
DOI: 10.1109/TVCG.2025.3634249
Web of Science ID: 001682680900050
PMID: 41264454
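To make the benchmark's composition concrete, here is a minimal sketch of how an EncQA-style item might be represented in Python. The encoding and task names come from the abstract above; the record layout and identifiers (EncQAItem, chart_path) are my own assumptions, not the released format.

    from dataclasses import dataclass

    # Encoding channels and tasks named in the abstract; 6 x 8 = 48 cells,
    # each populated with question-answer pairs for balanced coverage.
    ENCODINGS = ["position", "length", "area",
                 "color_quantitative", "color_nominal", "shape"]
    TASKS = ["find_extrema", "retrieve_value", "find_anomaly", "filter_values",
             "compute_derived_value_exact", "compute_derived_value_relative",
             "correlate_values", "correlate_values_relative"]

    @dataclass
    class EncQAItem:           # hypothetical record layout
        encoding: str          # one of ENCODINGS
        task: str              # one of TASKS
        question: str          # natural-language question about the chart
        answer: str            # ground-truth answer
        chart_path: str        # path to the synthetic chart image

    print(len(ENCODINGS) * len(TASKS), "encoding-task cells")  # -> 48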
- AI-Enhanced Semantic Feature Norms for 786 Concepts.
Topics in Cognitive Science
2025
Abstract
Semantic feature norms have been foundational in the study of human conceptual knowledge, yet traditional methods face trade-offs between concept and feature coverage on the one hand and verifiable quality on the other, owing to the labor-intensive nature of norming studies. Here, we introduce a novel approach that augments a dataset of human-generated feature norms with responses from large language models (LLMs) while verifying the quality of the norms against reliable human judgments. We find that our AI-enhanced feature-norm dataset, NOVA (Norms Optimized Via AI), shows much higher feature density and overlap among concepts while outperforming a comparable human-only norm dataset and word-embedding models in predicting people's semantic similarity judgments. Taken together, we demonstrate that human conceptual knowledge is richer than captured in previous norm datasets and show that, with proper validation, LLMs can serve as powerful tools for cognitive science research.
DOI: 10.1111/tops.70037
PMID: 41467250
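As a rough illustration of the evaluation described above, the sketch below predicts pairwise semantic similarity from a concept-by-feature norm matrix via cosine similarity and correlates the predictions with human judgments. The toy matrix and the choice of cosine similarity and Spearman correlation are assumptions for demonstration; the paper's actual pipeline may differ.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    # rows = concepts, columns = features (toy production frequencies)
    norms = np.array([[3, 0, 2, 1],    # e.g., "dog"
                      [2, 1, 2, 0],    # e.g., "cat"
                      [0, 4, 0, 3]])   # e.g., "hammer"

    predicted = 1.0 - pdist(norms, metric="cosine")  # pairwise similarity
    human = np.array([0.90, 0.10, 0.15])             # hypothetical judgments
    rho, _ = spearmanr(predicted, human)
    print(f"Spearman rho = {rho:.2f}")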
- SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction
edited by Oh, A., Neumann, T., Globerson, A., Saenko, K., Hardt, M., Levine, S.
Neural Information Processing Systems (NeurIPS), 2023
Web of Science ID: 001228825108009
- Context Matters: A Theory of Semantic Discriminability for Perceptual Encoding Systems.
IEEE Transactions on Visualization and Computer Graphics
2022; 28 (1): 697-706
Abstract
People's associations between colors and concepts influence their ability to interpret the meanings of colors in information visualizations. Previous work has suggested such effects are limited to concepts that have strong, specific associations with colors. However, although a concept may not be strongly associated with any colors, its mapping can be disambiguated in the context of other concepts in an encoding system. We articulate this view in semantic discriminability theory, a general framework for understanding conditions determining when people can infer meaning from perceptual features. Semantic discriminability is the degree to which observers can infer a unique mapping between visual features and concepts. Semantic discriminability theory posits that the capacity for semantic discriminability for a set of concepts is constrained by the difference between the feature-concept association distributions across the concepts in the set. We define formal properties of this theory and test its implications in two experiments. The results show that the capacity to produce semantically discriminable colors for sets of concepts was indeed constrained by the statistical distance between color-concept association distributions (Experiment 1). Moreover, people could interpret meanings of colors in bar graphs insofar as the colors were semantically discriminable, even for concepts previously considered "non-colorable" (Experiment 2). The results suggest that colors are more robust for visual communication than previously thought.
DOI: 10.1109/TVCG.2021.3114780
PMID: 34587028
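As a rough illustration of the statistical-distance idea in the abstract above, the sketch below computes the total variation distance between two hypothetical color-concept association distributions over a shared color library. Total variation is used here only as an example metric; the theory's formal definition of semantic discriminability may use a different distance.

    import numpy as np

    def tv_distance(p, q):
        """Total variation distance between two discrete distributions."""
        return 0.5 * np.abs(p - q).sum()

    # hypothetical association distributions over the same five colors
    colors = ["red", "orange", "yellow", "green", "blue"]
    concept_a = np.array([0.40, 0.30, 0.15, 0.10, 0.05])
    concept_b = np.array([0.05, 0.10, 0.15, 0.30, 0.40])

    # Larger distance -> more capacity to assign each concept a distinct,
    # interpretable color within the encoding system.
    print(f"TV distance = {tv_distance(concept_a, concept_b):.2f}")  # 0.55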
- Affective Color Scales for Colormap Data Visualizations
IEEE Transactions on Visualization and Computer Graphics
2026; 32 (1): 692-702
Abstract
Research on affective visualization design has shown that color is an especially powerful feature for influencing the emotional connotation of visualizations. Associations between colors and emotions are largely driven by lightness (e.g., lighter colors are associated with positive emotions, whereas darker colors are associated with negative emotions). Designing visualizations with all light or all dark colors to convey particular emotions may work well when colors represent categories and spatial channels encode data values. However, this approach poses a problem for visualizations that use color to represent spatial patterns in data (e.g., colormap data visualizations), because lightness contrast is needed to reveal fine details in spatial structure. In this study, we found that it is possible to design colormaps that have strong lightness contrast to support spatial vision while communicating clear affective connotation. We also found that affective connotation depended not only on the color scales used to construct the colormaps, but also on the frequency with which colors appeared in the map, as determined by the underlying dataset (the data-dependence hypothesis). These results emphasize the importance of data-aware design, which accounts not only for the design features that encode data (e.g., colors, shapes, textures), but also for how those design features are instantiated in a visualization, given the properties of the data.
DOI: 10.1109/TVCG.2025.3634775
Web of Science ID: 001682700300031
PMID: 41329592
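To illustrate the lightness-contrast constraint described above, the sketch below estimates the CIE L* span of a matplotlib colormap from its sRGB values. The conversion chain (sRGB linearization, relative luminance, L*) and the example colormaps are my own choices for demonstration; the paper's color computations may be more sophisticated.

    import numpy as np
    from matplotlib import colormaps

    def lightness_span(name, n=256):
        rgb = colormaps[name](np.linspace(0, 1, n))[:, :3]
        # linearize sRGB, take relative luminance Y, convert to CIE L*
        lin = np.where(rgb <= 0.04045, rgb / 12.92,
                       ((rgb + 0.055) / 1.055) ** 2.4)
        y = lin @ np.array([0.2126, 0.7152, 0.0722])
        lstar = np.where(y > 0.008856, 116 * np.cbrt(y) - 16, 903.3 * y)
        return lstar.max() - lstar.min()

    # A wide L* span supports seeing fine spatial detail in colormap images.
    for name in ["viridis", "hsv"]:
        print(f"{name}: L* span = {lightness_span(name):.0f}")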
- Using drawings and deep neural networks to characterize the building blocks of human visual similarity.
Memory & Cognition
2025; 53 (1): 219-241
Abstract
Early in life and without special training, human beings discern resemblance between abstract visual stimuli, such as drawings, and the real-world objects they represent. We used this capacity for visual abstraction as a tool for evaluating deep neural networks (DNNs) as models of human visual perception. Contrasting five contemporary DNNs, we evaluated how well each explains human similarity judgments among line drawings of recognizable and novel objects. For object sketches, human judgments were dominated by semantic category information, and DNN representations contributed little additional information. In contrast, DNN representations explained significant unique variance in the perceived similarity of abstract drawings. In both cases, a vision transformer trained to blend representations of images and their natural-language descriptions showed the greatest ability to explain human perceptual similarity, an observation consistent with contemporary views of semantic representation and processing in the human mind and brain. Together, the results suggest that the building blocks of visual similarity may arise within systems that learn to use visual information not for specific classification, but in service of generating semantic representations of objects.
DOI: 10.3758/s13421-024-01580-1
PMID: 38814385
PMCID: 6306249
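A generic sketch of the model-human comparison described above: correlate similarities derived from DNN features with human similarity judgments, in the spirit of representational-similarity analysis. The random embeddings and noisy "human" data are placeholders; the paper's analysis across five DNNs, including variance partitioning, is more involved.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(0)
    features = rng.normal(size=(20, 128))  # stand-in DNN embeddings of drawings

    model_sim = 1.0 - pdist(features, metric="cosine")  # model similarities
    human_sim = model_sim + rng.normal(scale=0.5, size=model_sim.shape)  # toy
    rho, _ = spearmanr(model_sim, human_sim)
    print(f"model-human alignment (Spearman rho) = {rho:.2f}")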
- Shaping vision through drawing
Nature Reviews Psychology
2024; 3 (7): 446
DOI: 10.1038/s44159-024-00321-0
Web of Science ID: 001223787900001
- Conceptual structure coheres in human cognition but not in large language models
edited by Bouamor, H., Pino, J., Bali, K.
Association for Computational Linguistics (ACL), 2023: 722-738
Web of Science ID: 001275019900047
ORCID: https://orcid.org/0000-0001-5013-6983