Akshay Paruchuri
Postdoctoral Scholar, Psychiatry
Bio
I'm currently a postdoctoral scholar in the Stanford Translational AI (STAI) lab, led by Professor Ehsan Adeli. I earned my PhD in computer science at UNC Chapel Hill, advised by Professor Henry Fuchs. My research sits at the intersection of health AI, computer vision, and machine learning. I'm working toward a future in which next-generation healthcare systems improve the entire patient journey, from advanced diagnostic imaging and surgical support to all-day health monitoring and management, to achieve better therapeutic outcomes for cancer and aging-related diseases. I'm broadly interested in opportunities to deepen my research expertise while leading projects with meaningful, positive real-world impact, particularly in areas such as healthcare and environmental sustainability.
Previously, I was a visiting researcher at IDSIA USI-SUPSI working with Professor Piotr Didyk on the interpretability of multimodal language models (MLMs) with respect to capabilities such as visual perception. I've published in leading venues on topics such as remote health sensing (WACV, NeurIPS), 3D reconstruction (ECCV, MICCAI), LLM-based conversational agents for personal health (EMNLP, Nature Communications), and energy-efficient operation of smart glasses (ISMAR). I've done internships at Google AR/VR, Google Consumer Health Research, and Kitware.
Professional Education
Ph.D., University of North Carolina at Chapel Hill, Computer Science (2025)
All Publications
Transforming wearable data into personal health insights using large language model agents
NATURE COMMUNICATIONS
2026; 17 (1): 1143
Abstract
Deriving personalized insights from popular wearable trackers requires complex numerical reasoning that challenges standard LLMs, necessitating tool-based approaches like code generation. Large language model (LLM) agents present a promising yet largely untapped solution for this analysis at scale. We introduce the Personal Health Insights Agent (PHIA), a system leveraging multistep reasoning with code generation and information retrieval to analyze and interpret behavioral health data. To test its capabilities, we create and share two benchmark datasets with over 4000 health insights questions. A 650-hour human expert evaluation shows that PHIA significantly outperforms a strong code generation baseline, achieving 84% accuracy on objective, numerical questions and, for open-ended ones, earning 83% favorable ratings while being twice as likely to achieve the highest quality rating. This work can advance behavioral health by empowering individuals to understand their data, enabling a new era of accessible, personalized, and data-driven wellness for the wider population.
DOI: 10.1038/s41467-025-67922-y
Web of Science ID: 001674380300001
PubMedID: 41526380
PubMedCentralID: PMC12855967
https://orcid.org/0000-0003-4664-3186