Bio


Hari Subramonyam is an Assistant Professor (Research) at the Graduate School of Education and a Faculty Fellow at Stanford's Institute for Human-Centered AI. He is also a member of the HCI Group at Stanford. His research focuses on augmenting critical human tasks (such as learning, creativity, and sensemaking) with AI by incorporating principles from cognitive psychology. He also investigates support tools for multidisciplinary teams to co-design AI experiences. His work has received multiple best paper awards at top human-computer interaction conferences, including CHI and IUI.

Honors & Awards


  • Student Design Competition 3rd Place, CHI (05/2015)
  • Best Paper Award, CHI (05/2019)
  • Best Paper Award, CHI (04/2020)

Professional Education


  • Ph.D. Information, University of Michigan, Dissertation: Role of End-User Data in Co-Designing AI-Powered Applications (2021)
  • M.S. Information, University of Michigan, Human-Computer Interaction (2015)
  • B.E. Telecommunication, CMR Institute of Technology (2008)

Research Interests


  • Brain and Learning Sciences
  • Collaborative Learning
  • Data Sciences
  • Science Education
  • Special Education
  • Technology and Education

All Publications


  • Are We Closing the Loop Yet? Gaps in the Generalizability of VIS4ML Research. IEEE Transactions on Visualization and Computer Graphics. Subramonyam, H., Hullman, J. 2024; 30 (1): 672-682

    Abstract

    Visualization for machine learning (VIS4ML) research aims to help experts apply their prior knowledge to develop, understand, and improve the performance of machine learning models. In conceiving VIS4ML systems, researchers characterize the nature of human knowledge to support human-in-the-loop tasks, design interactive visualizations to make ML components interpretable and elicit knowledge, and evaluate the effectiveness of human-model interchange. We survey recent VIS4ML papers to assess the generalizability of research contributions and claims in enabling human-in-the-loop ML. Our results show potential gaps between the current scope of VIS4ML research and aspirations for its use in practice. We find that while papers present VIS4ML systems as applicable beyond the specific conditions studied, conclusions are often overfitted to non-representative scenarios, are based on interactions with a small set of ML experts and well-understood datasets, fail to acknowledge crucial dependencies, and hinge on decisions that lack justification. We discuss approaches to close the gap between aspirations and research claims and suggest documentation practices to report generality constraints that better acknowledge the exploratory nature of VIS4ML research.

    View details for DOI 10.1109/TVCG.2023.3326591

    View details for PubMedID 37871059

  • Human-Computer Interaction and AI: What Practitioners Need to Know to Design and Build Effective AI Systems from a Human Perspective. Russell, D. M., Kulkarni, C., Glassman, E. L., Subramonyam, H., Martelaro, N. ACM. 2024
  • AI-Driven Support for People with Speech & Language Difficulties. Dangol, A., Huang, Y., Setlur, S., Smolansky, A., Subramonyam, H., Suh, H., Xiong, J., Kientz, J. A. ACM. 2024
  • Why and When LLM-Based Assistants Can Go Wrong: Investigating the Effectiveness of Prompt-Based Interactions for Software Help-Seeking. Khurana, A., Subramonyam, H., Chilana, P. K. ACM. 2024: 288-303
  • Evaluating longitudinal relationships between parental monitoring and substance use in a multi-year, intensive longitudinal study of 670 adolescent twins. Frontiers in Psychiatry. Alexander, J. D., Freis, S. M., Zellers, S. M., Corley, R., Ledbetter, A., Schneider, R. K., Phelan, C., Subramonyam, H., Frieser, M., Rea-Sandin, G., Stocker, M. E., Vernier, H., Jiang, M., Luo, Y., Zhao, Q., Rhea, S. A., Hewitt, J., Luciana, M., McGue, M., Wilson, S., Resnick, P., Friedman, N. P., Vrieze, S. I. 2023; 14: 1149079

    Abstract

    Parental monitoring is a key intervention target for adolescent substance use; however, this practice is largely supported by causally uninformative cross-sectional or sparse-longitudinal observational research designs. We therefore evaluated relationships between adolescent substance use (assessed weekly) and parental monitoring (assessed every two months) in 670 adolescent twins for two years. This allowed us to assess how individual-level parental monitoring and substance use trajectories were related and, via the twin design, to quantify genetic and environmental contributions to these relationships. Furthermore, we attempted to devise additional measures of parental monitoring by collecting quasi-continuous GPS locations and calculating a) time spent at home between midnight and 5am and b) time spent at school between 8am and 3pm. ACE-decomposed latent growth models found alcohol and cannabis use increased with age while parental monitoring, time at home, and time at school decreased. Baseline alcohol and cannabis use were correlated (r = .65) and associated with baseline parental monitoring (r = -.24 to -.29) but not with baseline GPS measures (r = -.06 to -.16). Longitudinally, changes in substance use and parental monitoring were not significantly correlated. Geospatial measures were largely unrelated to parental monitoring, though changes in cannabis use and time at home were highly correlated (r = -.53 to -.90), with genetic correlations suggesting their relationship was substantially genetically mediated. Due to power constraints, ACE estimates and biometric correlations were imprecisely estimated. Most of the substance use and parental monitoring phenotypes were substantially heritable, but genetic correlations between them were not significantly different from 0. Overall, we found developmental changes in each phenotype, baseline correlations between substance use and parental monitoring, co-occurring changes and mutual genetic influences for time at home and cannabis use, and substantial genetic influences on many substance use and parental monitoring phenotypes. However, our geospatial variables were mostly unrelated to parental monitoring, suggesting they poorly measured this construct. Furthermore, though we did not detect evidence of genetic confounding, changes in parental monitoring and substance use were not significantly correlated, suggesting that, at least in community samples of mid-to-late adolescents, the two may not be causally related.

    View details for DOI 10.3389/fpsyt.2023.1149079

    View details for PubMedID 37252134

    View details for PubMedCentralID PMC10213319

  • fAIlureNotes: Supporting Designers in Understanding the Limits of AI Models for Computer Vision Tasks. Moore, S., Liao, Q., Subramonyam, H. ACM. 2023
  • Designerly Understanding: Information Needs for Model Transparency to Support Design Ideation for AI-Powered User Experience. Liao, Q., Subramonyam, H., Wang, J., Vaughan, J. ACM. 2023
  • How Do Viewers Synthesize Conflicting Information from Data Visualizations? IEEE Transactions on Visualization and Computer Graphics. Mantri, P., Subramonyam, H., Michal, A. L., Xiong, C. 2022; PP

    Abstract

    Scientific knowledge develops through cumulative discoveries that build on, contradict, contextualize, or correct prior findings. Scientists and journalists often communicate these incremental findings to lay people through visualizations and text (e.g., the positive and negative effects of caffeine intake). Consequently, readers need to integrate diverse and contrasting evidence from multiple sources to form opinions or make decisions. However, the underlying mechanism for synthesizing information from multiple visualizations remains under-explored. To address this knowledge gap, we conducted a series of four experiments (N = 1166) in which participants synthesized empirical evidence from a pair of line charts presented sequentially. In Experiment 1, we administered a baseline condition with charts depicting no specific context where participants held no strong belief. To test for generalizability, we introduced real-world scenarios to our visualizations in Experiment 2 and added accompanying text descriptions similar to online news articles or blog posts in Experiment 3. In all three experiments, we varied the relative direction and magnitude of line slopes within the chart pairs. We found that participants tended to weigh the positive slope more when the two charts depicted relationships in the opposite direction (e.g., one positive slope and one negative slope). Participants tended to weigh the less steep slope more when the two charts depicted relationships in the same direction (e.g., both positive). Through these experiments, we characterize participants' synthesis behaviors depending on the relationship between the information they viewed, contribute to theories describing underlying cognitive mechanisms in information synthesis, and describe design implications for data storytelling.

    View details for DOI 10.1109/TVCG.2022.3209467

    View details for PubMedID 36166526

  • Composites: A Tangible Interaction Paradigm for Visual Data Analysis in Design Practice. Subramonyam, H., Adar, E., Drucker, S. M., Bottoni, P., Panizzi, E. ACM. 2022
  • Explore, Create, Annotate: Designing Digital Drawing Tools with Visually Impaired People. Pandey, M., Subramonyam, H., Sasia, B., Oney, S., O'Modhrain, S. ACM. 2020
  • Affinity Lens: Data-Assisted Affinity Diagramming with Augmented Reality. Subramonyam, H., Drucker, S. M., Adar, E. ACM. 2019
  • SmartCues: A Multitouch Query Approach for Details-on-Demand through Dynamically Computed Overlays. IEEE Transactions on Visualization and Computer Graphics. Subramonyam, H., Adar, E. 2019; 25 (1): 597-607

    Abstract

    Details-on-demand is a crucial feature in the visual information-seeking process but is often only implemented in highly constrained settings. The most common solution, hover queries (i.e., tooltips), are fast and expressive but are usually limited to a single mark (e.g., a bar in a bar chart). 'Queries' to retrieve details for more complex sets of objects (e.g., comparisons between pairs of elements, averages across multiple items, trend lines, etc.) are difficult for end-users to invoke explicitly. Further, the output of these queries requires complex annotations and overlays which need to be displayed and dismissed on demand to avoid clutter. In this work we introduce SmartCues, a library to support details-on-demand through dynamically computed overlays. For end-users, SmartCues provides multitouch interactions to construct complex queries for a variety of details. For designers, SmartCues offers an interaction library that can be used out-of-the-box, and can be extended for new charts and detail types. We demonstrate how SmartCues can be implemented across a wide array of visualization types and, through a lab study, show that end users can effectively use SmartCues.

    View details for DOI 10.1109/TVCG.2018.2865231

    View details for Web of Science ID 000452640000057

    View details for PubMedID 30136998

  • Designing Interactive Intelligent Systems for Human Learning, Creativity, and Sensemaking. Subramonyam, H. ACM. 2019: 158-161
  • TakeToons: Script-driven Performance Animation. Subramonyam, H., Li, W., Adar, E., Dontcheva, M. ACM. 2018: 663-674
  • The application of ecological momentary assessment and geolocation to a longitudinal twin study of substance use. Brazel, D., Corley, R., Phelan, C., Frieser, M., Subramonyam, H., Rhea, S., Vernier, H., Hewitt, J., Resnick, P., Vrieze, S. Springer. 2017: 676-677
  • Agency in Assistive Technology Adoption: Visual Impairment and Smartphone Use in Bangalore. Pal, J., Viswanathan, A., Chandra, P., Nazareth, A., Kameshwaran, V., Subramonyam, H., Johri, A., Ackerman, M. S., O'Modhrain, S. ACM. 2017: 5929-5940