Hariharan Subramonyam
Assistant Professor (Research) of Education and, by courtesy, of Computer Science
Graduate School of Education
Bio
Hari Subramonyam is an Assistant Professor (Research) at the Graduate School of Education and a Faculty Fellow at Stanford's Institute for Human-Centered AI. He is also a member of the HCI Group at Stanford. His research focuses on augmenting critical human tasks (such as learning, creativity, and sensemaking) with AI by incorporating principles from cognitive psychology. He also investigates support tools for multidisciplinary teams to co-design AI experiences. His work has received multiple best paper awards at top human-computer interaction conferences, including CHI and IUI.
Academic Appointments
- Assistant Professor (Research), Graduate School of Education
- Assistant Professor (Research) (By courtesy), Computer Science
Honors & Awards
- Student Design Competition 3rd Place, CHI (05/2015)
- Best Paper Award, CHI (05/2019)
- Best Paper Award, CHI (04/2020)
Professional Education
- Ph.D. Information, University of Michigan, Dissertation: Role of End-User Data in Co-Designing AI-Powered Applications (2021)
- B.E. Telecommunication, CMR Institute of Technology (2008)
- M.S. Information, University of Michigan, Human Computer Interaction (2015)
Research Interests
- Brain and Learning Sciences
- Collaborative Learning
- Data Sciences
- Science Education
- Special Education
- Technology and Education
2024-25 Courses
- Designing Explorable Explanations for Learning: EDUC 432 (Win)
Independent Studies (9)
- Advanced Reading and Research: CS 499 (Aut, Sum)
- Advanced Reading and Research: CS 499P (Aut, Sum)
- Directed Reading: EDUC 480 (Aut, Win, Spr, Sum)
- Directed Reading in Education: EDUC 180 (Aut, Win, Spr, Sum)
- Directed Research: EDUC 490 (Aut, Win, Spr, Sum)
- Directed Research in Education: EDUC 190 (Aut, Win, Spr, Sum)
- Independent Project: CS 399 (Aut, Win, Spr, Sum)
- Independent Work: CS 199 (Aut, Win, Spr, Sum)
- Senior Project: CS 191 (Aut, Win, Spr, Sum)
Prior Year Courses
2023-24 Courses
- Designing Explorable Explanations for Learning: EDUC 432 (Win)
2022-23 Courses
- Data Visualization: CS 448B, EDUC 458 (Win)
- Designing Explorable Explanations for Learning: EDUC 432 (Win)
2021-22 Courses
- Designing Explorable Explanations for Learning: EDUC 432 (Spr)
Stanford Advisees
- Doctoral Dissertation Reader (AC): Alberto Tono
- Master's Program Advisor: Madhumitha Cherukuri, Vryan Feliciano, Matías Hoyl, Maho Kohga
- Doctoral (Program): Neha Rajagopalan
All Publications
- Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research.
IEEE Transactions on Visualization and Computer Graphics
2024; 30 (1): 672-682
Abstract
Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers motivate that VIS4ML systems are applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research.
View details for DOI 10.1109/TVCG.2023.3326591
View details for PubMedID 37871059
- Bridging the Gulf of Envisioning: Cognitive Challenges in Prompt Based Interactions with LLMs
ASSOC COMPUTING MACHINERY. 2024
View details for DOI 10.1145/3613904.3642754
View details for Web of Science ID 001266059701009
- Human-Computer Interaction and AI: What Practitioners Need to Know to Design and Build Effective AI Systems from a Human Perspective
ASSOC COMPUTING MACHINERY. 2024
View details for DOI 10.1145/3613905.3636270
View details for Web of Science ID 001227587700005
- More than Model Documentation: Uncovering Teachers' Bespoke Information Needs for Informed Classroom Integration of ChatGPT
ASSOC COMPUTING MACHINERY. 2024
View details for DOI 10.1145/3613904.3642592
View details for Web of Science ID 001259864903035
- AI-Driven Support for People with Speech & Language Difficulties
ASSOC COMPUTING MACHINERY. 2024
View details for DOI 10.1145/3613905.3643984
View details for Web of Science ID 001227587701002
- Leveraging Large Language Models to Enhance Domain Expert Inclusion in Data Science Workflows
ASSOC COMPUTING MACHINERY. 2024
View details for DOI 10.1145/3613905.3651115
View details for Web of Science ID 001227587704077
- Why and When LLM-Based Assistants Can Go Wrong: Investigating the Effectiveness of Prompt-Based Interactions for Software Help-Seeking
ASSOC COMPUTING MACHINERY. 2024: 288-303
View details for DOI 10.1145/3640543.3645200
View details for Web of Science ID 001209687500019
- Evaluating longitudinal relationships between parental monitoring and substance use in a multi-year, intensive longitudinal study of 670 adolescent twins.
Frontiers in Psychiatry
2023; 14: 1149079
Abstract
Parental monitoring is a key intervention target for adolescent substance use; however, this practice is largely supported by causally uninformative cross-sectional or sparse-longitudinal observational research designs. We therefore evaluated relationships between adolescent substance use (assessed weekly) and parental monitoring (assessed every two months) in 670 adolescent twins for two years. This allowed us to assess how individual-level parental monitoring and substance use trajectories were related and, via the twin design, to quantify genetic and environmental contributions to these relationships. Furthermore, we attempted to devise additional measures of parental monitoring by collecting quasi-continuous GPS locations and calculating a) time spent at home between midnight and 5am and b) time spent at school between 8am-3pm. ACE-decomposed latent growth models found alcohol and cannabis use increased with age while parental monitoring, time at home, and time at school decreased. Baseline alcohol and cannabis use were correlated (r = .65) and associated with baseline parental monitoring (r = -.24 to -.29) but not with baseline GPS measures (r = -.06 to -.16). Longitudinally, changes in substance use and parental monitoring were not significantly correlated. Geospatial measures were largely unrelated to parental monitoring, though changes in cannabis use and time at home were highly correlated (r = -.53 to -.90), with genetic correlations suggesting their relationship was substantially genetically mediated. Due to power constraints, ACE estimates and biometric correlations were imprecisely estimated. Most of the substance use and parental monitoring phenotypes were substantially heritable, but genetic correlations between them were not significantly different from 0. Overall, we found developmental changes in each phenotype, baseline correlations between substance use and parental monitoring, co-occurring changes and mutual genetic influences for time at home and cannabis use, and substantial genetic influences on many substance use and parental monitoring phenotypes. However, our geospatial variables were mostly unrelated to parental monitoring, suggesting they poorly measured this construct. Furthermore, though we did not detect evidence of genetic confounding, changes in parental monitoring and substance use were not significantly correlated, suggesting that, at least in community samples of mid-to-late adolescents, the two may not be causally related.
View details for DOI 10.3389/fpsyt.2023.1149079
View details for PubMedID 37252134
View details for PubMedCentralID PMC10213319
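The geospatial measures in this study (overnight time at home, daytime time at school) come from summarizing quasi-continuous GPS traces against a fixed location. Below is a minimal Python sketch of that kind of geofence summary; the home coordinate, 100 m radius, and 5-minute sampling interval are illustrative assumptions, not parameters from the paper.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

# Illustrative geofence summary: estimate hours spent within a "home" radius
# between midnight and 5 am from timestamped GPS pings. Coordinates, radius,
# and sampling interval are assumptions for demonstration only.
HOME = (42.2808, -83.7430)   # assumed home coordinate (lat, lon)
RADIUS_M = 100.0             # assumed geofence radius in meters
SAMPLE_MINUTES = 5.0         # assumed spacing of quasi-continuous pings

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in meters."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

def overnight_hours_at_home(pings, home=HOME, radius_m=RADIUS_M):
    """pings: iterable of (ISO-8601 timestamp, lat, lon).
    Counts pings between 00:00 and 05:00 inside the home geofence and
    converts the count to hours via the assumed sampling interval."""
    at_home = 0
    for ts, lat, lon in pings:
        t = datetime.fromisoformat(ts)
        if 0 <= t.hour < 5 and haversine_m(lat, lon, *home) <= radius_m:
            at_home += 1
    return at_home * SAMPLE_MINUTES / 60.0

if __name__ == "__main__":
    demo = [("2023-03-01T01:10:00", 42.2808, -83.7430),
            ("2023-03-01T02:40:00", 42.2809, -83.7431),
            ("2023-03-01T09:15:00", 42.2930, -83.7160)]  # daytime ping, ignored
    print(f"Estimated overnight hours at home: {overnight_hours_at_home(demo):.2f}")
```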
- fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks
ASSOC COMPUTING MACHINERY. 2023
View details for DOI 10.1145/3544548.3581242
View details for Web of Science ID 001048393802008
- Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience
ASSOC COMPUTING MACHINERY. 2023
View details for DOI 10.1145/3544548.3580652
View details for Web of Science ID 001037809500016
- How Do Viewers Synthesize Conflicting Information from Data Visualizations?
IEEE Transactions on Visualization and Computer Graphics
2022; PP
Abstract
Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments (N = 1166) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context where participants held no strong belief. To test for the generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in the opposite direction (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing underlying cognitive mechanisms in information synthesis, and describe design implications for data storytelling.
View details for DOI 10.1109/TVCG.2022.3209467
View details for PubMedID 36166526
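The stimuli in these experiments were pairs of line charts whose slopes varied in relative direction and magnitude. A small matplotlib sketch of that kind of stimulus pair follows; the slope values and axis labels are assumptions for illustration, not the study's materials.

```python
import matplotlib.pyplot as plt
import numpy as np

# Illustrative stimulus pair: two line charts with slopes that differ in
# direction and magnitude. Values and labels are assumed for demonstration.
x = np.linspace(0, 10, 20)
slopes = {"Chart A (positive slope)": 0.8, "Chart B (negative slope)": -0.4}

fig, axes = plt.subplots(1, 2, figsize=(8, 3), sharey=True)
for ax, (title, m) in zip(axes, slopes.items()):
    ax.plot(x, 5 + m * x)          # simple linear relationship y = 5 + m*x
    ax.set_title(title)
    ax.set_xlabel("Exposure")      # hypothetical axis labels
    ax.set_ylabel("Outcome")
plt.tight_layout()
plt.show()
```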
- Solving Separation-of-Concerns Problems in Collaborative Design of Human-AI Systems through Leaky Abstractions
ASSOC COMPUTING MACHINERY. 2022
View details for DOI 10.1145/3491102.3517537
View details for Web of Science ID 000922929502004
- VideoSticker: A Tool for Active Viewing and Visual Note-taking from Videos
ASSOC COMPUTING MACHINERY. 2022: 672-690
View details for DOI 10.1145/3490099.3511132
View details for Web of Science ID 000889340800047
- Composites: A Tangible Interaction Paradigm for Visual Data Analysis in Design Practice
ASSOC COMPUTING MACHINERY. 2022
View details for DOI 10.1145/3531073.3531091
View details for Web of Science ID 001051742000007
- Towards A Process Model for Co-Creating AI Experiences
ASSOC COMPUTING MACHINERY. 2021: 1529-1543
View details for DOI 10.1145/3461778.3462012
View details for Web of Science ID 000747486000114
- ProtoAI: Model-Informed Prototyping for AI-Powered Interfaces
ASSOC COMPUTING MACHINERY. 2021: 48-58
View details for DOI 10.1145/3397481.3450640
View details for Web of Science ID 000747690200010
- texSketch: Active Diagramming through Pen-and-Ink Annotations
ASSOC COMPUTING MACHINERY. 2020
View details for DOI 10.1145/3313831.3376155
View details for Web of Science ID 000695432500028
- Explore, Create, Annotate: Designing Digital Drawing Tools with Visually Impaired People
ASSOC COMPUTING MACHINERY. 2020
View details for DOI 10.1145/3313831.3376349
View details for Web of Science ID 000695438100022
- Affinity Lens: Data-Assisted Affinity Diagramming with Augmented Reality
ASSOC COMPUTING MACHINERY. 2019
View details for DOI 10.1145/3290605.3300628
View details for Web of Science ID 000474467905012
- SmartCues: A Multitouch Query Approach for Details-on-Demand through Dynamically Computed Overlays
IEEE Transactions on Visualization and Computer Graphics
2019; 25 (1): 597-607
Abstract
Details-on-demand is a crucial feature in the visual information-seeking process but is often only implemented in highly constrained settings. The most common solution, hover queries (i.e., tooltips), is fast and expressive but is usually limited to a single mark (e.g., a bar in a bar chart). 'Queries' to retrieve details for more complex sets of objects (e.g., comparisons between pairs of elements, averages across multiple items, trend lines, etc.) are difficult for end-users to invoke explicitly. Further, the output of these queries requires complex annotations and overlays which need to be displayed and dismissed on demand to avoid clutter. In this work, we introduce SmartCues, a library to support details-on-demand through dynamically computed overlays. For end-users, SmartCues provides multitouch interactions to construct complex queries for a variety of details. For designers, SmartCues offers an interaction library that can be used out-of-the-box, and can be extended for new charts and detail types. We demonstrate how SmartCues can be implemented across a wide array of visualization types and, through a lab study, show that end users can effectively use SmartCues.
View details for DOI 10.1109/TVCG.2018.2865231
View details for Web of Science ID 000452640000057
View details for PubMedID 30136998
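The abstract above describes SmartCues as extensible: designers can add new detail types (comparisons, averages, trends) that compute overlays for a selected set of marks. The sketch below illustrates that registry idea in Python; the names and structure are hypothetical and are not the actual SmartCues API.

```python
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

# Hypothetical detail-type registry: each detail type maps to a function that
# computes an overlay annotation for the values behind the selected marks.
# Names and structure are illustrative assumptions, not the SmartCues API.

@dataclass
class Overlay:
    label: str            # text shown in the on-demand annotation
    values: List[float]   # values the overlay is drawn from

DetailFn = Callable[[List[float]], Overlay]
DETAIL_REGISTRY: Dict[str, DetailFn] = {}

def register_detail(name: str):
    """Decorator so new detail types can be added without touching core code."""
    def wrap(fn: DetailFn) -> DetailFn:
        DETAIL_REGISTRY[name] = fn
        return fn
    return wrap

@register_detail("average")
def average_detail(selected: List[float]) -> Overlay:
    return Overlay(label=f"mean = {mean(selected):.2f}", values=[mean(selected)])

@register_detail("comparison")
def comparison_detail(selected: List[float]) -> Overlay:
    lo, hi = min(selected), max(selected)
    return Overlay(label=f"difference = {hi - lo:.2f}", values=[lo, hi])

def query_details(detail_type: str, selected_marks: List[float]) -> Overlay:
    """Dispatch a details-on-demand query to the registered handler."""
    return DETAIL_REGISTRY[detail_type](selected_marks)

if __name__ == "__main__":
    bars = [3.0, 7.5, 5.2]   # values behind three selected marks
    print(query_details("average", bars).label)
    print(query_details("comparison", bars).label)
```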
- Designing Interactive Intelligent Systems for Human Learning, Creativity, and Sensemaking
ASSOC COMPUTING MACHINERY. 2019: 158-161
View details for DOI 10.1145/3332167.3356878
View details for Web of Science ID 000518192300053
- TakeToons: Script-driven Performance Animation
ASSOC COMPUTING MACHINERY. 2018: 663-674
View details for DOI 10.1145/3242587.3242618
View details for Web of Science ID 000494260500056
- The application of ecological momentary assessment and geolocation to a longitudinal twin study of substance use
SPRINGER. 2017: 676-677
View details for Web of Science ID 000415813600114
- Agency in Assistive Technology Adoption: Visual Impairment and Smartphone Use in Bangalore
ASSOC COMPUTING MACHINERY. 2017: 5929-5940
View details for DOI 10.1145/3025453.3025895
View details for Web of Science ID 000426970505065