Bio


Dr. Serena Yeung-Levy is an Assistant Professor of Biomedical Data Science and, by courtesy, of Computer Science and of Electrical Engineering at Stanford University. Her research focuses on developing artificial intelligence and machine learning algorithms to enable new capabilities in biomedicine and healthcare. She has extensive expertise in deep learning and computer vision, and has developed computer vision algorithms for analyzing diverse types of visual data, ranging from video capture of human behavior to medical images and cell microscopy images.

Dr. Yeung-Levy leads the Medical AI and Computer Vision Lab at Stanford. She is affiliated with the Stanford Artificial Intelligence Laboratory, the Clinical Excellence Research Center, and the Center for Artificial Intelligence in Medicine & Imaging. She is also a Chan Zuckerberg Biohub Investigator and has served on the NIH Advisory Committee to the Director Working Group on Artificial Intelligence.

Honors & Awards


  • Harvard Technology for Equitable and Accessible Medicine Fellowship, Harvard University (2018 - 2019)

Professional Education


  • Postdoctoral Fellow, Harvard University (2019)
  • Ph.D., Stanford University (2018)

All Publications


  • Hyperbolic Deep Learning in Computer Vision: A Survey. INTERNATIONAL JOURNAL OF COMPUTER VISION Mettes, P., Atigh, M., Keller-Ressel, M., Gu, J., Yeung, S. 2024
  • Self-supervised learning for medical image classification: a systematic review and implementation guidelines. NPJ digital medicine Huang, S., Pareek, A., Jensen, M., Lungren, M. P., Yeung, S., Chaudhari, A. S. 2023; 6 (1): 74

    Abstract

    Advancements in deep learning and computer vision provide promising solutions for medical image analysis, potentially improving healthcare and patient outcomes. However, the prevailing paradigm of training deep learning models requires large quantities of labeled training data, which is both time-consuming and cost-prohibitive to curate for medical images. Self-supervised learning has the potential to make significant contributions to the development of robust medical imaging models through its ability to learn useful insights from copious medical datasets without labels. In this review, we provide consistent descriptions of different self-supervised learning strategies and compose a systematic review of papers published between 2012 and 2022 on PubMed, Scopus, and ArXiv that applied self-supervised learning to medical imaging classification. We screened a total of 412 relevant studies and included 79 papers for data extraction and analysis. With this comprehensive effort, we synthesize the collective knowledge of prior work and provide implementation guidelines for future researchers interested in applying self-supervised learning to their development of medical imaging classification models.

    View details for DOI 10.1038/s41746-023-00811-0

    View details for PubMedID 37100953

  • Author Correction: Prostate cancer therapy personalization via multi-modal deep learning on randomized phase III clinical trials. NPJ digital medicine Esteva, A., Feng, J., van der Wal, D., Huang, S., Simko, J. P., DeVries, S., Chen, E., Schaeffer, E. M., Morgan, T. M., Sun, Y., Ghorbani, A., Naik, N., Nathawani, D., Socher, R., Michalski, J. M., Roach, M., 3rd, Pisansky, T. M., Monson, J. M., Naz, F., Wallace, J., Ferguson, M. J., Bahary, J., Zou, J., Lungren, M., Yeung, S., Ross, A. E., NRG Prostate Cancer AI Consortium, Sandler, H. M., Tran, P. T., Spratt, D. E., Pugh, S., Feng, F. Y., Mohamad, O., Kucharczyk, M., Souhami, L., Ballas, L., Peters, C. A., Liu, S., Balogh, A. G., Randolph-Jackson, P. D., Schwartz, D. L., Girvigian, M. R., Saito, N. G., Raben, A., Rabinovitch, R. A., Katato, K. 2023; 6 (1): 27

    View details for DOI 10.1038/s41746-023-00769-z

    View details for PubMedID 36813827

  • CryoET reveals organelle phenotypes in Huntington disease patient iPSC-derived and mouse primary neurons. Nature communications Wu, G. H., Smith-Geater, C., Galaz-Montoya, J. G., Gu, Y., Gupte, S. R., Aviner, R., Mitchell, P. G., Hsu, J., Miramontes, R., Wang, K. Q., Geller, N. R., Hou, C., Danita, C., Joubert, L. M., Schmid, M. F., Yeung, S., Frydman, J., Mobley, W., Wu, C., Thompson, L. M., Chiu, W. 2023; 14 (1): 692

    Abstract

    Huntington's disease (HD) is caused by an expanded CAG repeat in the huntingtin gene, yielding a Huntingtin protein with an expanded polyglutamine tract. While experiments with patient-derived induced pluripotent stem cells (iPSCs) can help understand disease, defining pathological biomarkers remains challenging. Here, we used cryogenic electron tomography to visualize neurites in HD patient iPSC-derived neurons with varying CAG repeats, and primary cortical neurons from BACHD, ΔN17-BACHD, and wild-type mice. In HD models, we discovered sheet aggregates in double membrane-bound organelles, and mitochondria with distorted cristae and enlarged granules, likely mitochondrial RNA granules. We used artificial intelligence to quantify mitochondrial granules, and proteomics experiments reveal differential protein content in isolated HD mitochondria. Knockdown of Protein Inhibitor of Activated STAT1 ameliorated aberrant phenotypes in iPSC- and BACHD neurons. We show that integrated ultrastructural and proteomic approaches may uncover early HD phenotypes to accelerate diagnostics and the development of targeted therapeutics for HD.

    View details for DOI 10.1038/s41467-023-36096-w

    View details for PubMedID 36754966

  • Comparing spatial patterns of marine vessels between vessel-tracking data and satellite imagery. FRONTIERS IN MARINE SCIENCE Nakayama, S., Dong, W., Correro, R. G., Selig, E. R., Wabnitz, C. C., Hastie, T. J., Leape, J., Yeung, S., Micheli, F. 2023; 9
  • NeMo: 3D Neural Motion Fields from Multiple Video Instances of the Same Action Wang, K., Weng, Z., Xenochristou, M., Araujo, J., Gu, J., Liu, C., Yeung, S. IEEE COMPUTER SOC. 2023: 22129-22138
  • PROB: Probabilistic Objectness for Open World Object Detection Zohar, O., Wang, K., Yeung, S. IEEE COMPUTER SOC. 2023: 11444-11453
  • Generalizable Neural Fields as Partially Observed Neural Processes Gu, J., Wang, K., Yeung, S. IEEE COMPUTER SOC. 2023: 5307-5316
  • Using AI and computer vision to analyze technical proficiency in robotic surgery. Surgical endoscopy Yang, J. H., Goodman, E. D., Dawes, A. J., Gahagan, J. V., Esquivel, M. M., Liebert, C. A., Kin, C., Yeung, S., Gurland, B. H. 2022

    Abstract

    BACKGROUND: Intraoperative skills assessment is time-consuming and subjective; an efficient and objective computer vision-based approach for feedback is desired. In this work, we aim to design and validate an interpretable automated method to evaluate technical proficiency using colorectal robotic surgery videos with artificial intelligence. METHODS: 92 curated clips of peritoneal closure were characterized by both board-certified surgeons and a computer vision AI algorithm to compare the measures of surgical skill. For human ratings, six surgeons graded clips according to the GEARS assessment tool; for AI assessment, deep learning computer vision algorithms for surgical tool detection and tracking were developed and implemented. RESULTS: For the GEARS category of efficiency, we observe a positive correlation between human expert ratings of technical efficiency and AI-determined total tool movement (r=-0.72). Additionally, we show that more proficient surgeons perform closure with significantly less tool movement compared to less proficient surgeons (p<0.001). For the GEARS category of bimanual dexterity, a positive correlation between expert ratings of bimanual dexterity and the AI model's calculated measure of bimanual movement based on simultaneous tool movement (r=0.48) was also observed. On average, we also find that higher skill clips have significantly more simultaneous movement in both hands compared to lower skill clips (p<0.001). CONCLUSIONS: In this study, measurements of technical proficiency extracted from AI algorithms are shown to correlate with those given by expert surgeons. Although we target measurements of efficiency and bimanual dexterity, this work suggests that artificial intelligence through computer vision holds promise for efficiently standardizing grading of surgical technique, which may help in surgical skills training.

    View details for DOI 10.1007/s00464-022-09781-y

    View details for PubMedID 36536082

  • Developing medical imaging AI for emerging infectious diseases. Nature communications Huang, S., Chaudhari, A. S., Langlotz, C. P., Shah, N., Yeung, S., Lungren, M. P. 2022; 13 (1): 7060

    View details for DOI 10.1038/s41467-022-34234-4

    View details for PubMedID 36400764

  • Prostate cancer therapy personalization via multi-modal deep learning on randomized phase III clinical trials. NPJ digital medicine Esteva, A., Feng, J., van der Wal, D., Huang, S., Simko, J. P., DeVries, S., Chen, E., Schaeffer, E. M., Morgan, T. M., Sun, Y., Ghorbani, A., Naik, N., Nathawani, D., Socher, R., Michalski, J. M., Roach, M., 3rd, Pisansky, T. M., Monson, J. M., Naz, F., Wallace, J., Ferguson, M. J., Bahary, J., Zou, J., Lungren, M., Yeung, S., Ross, A. E., NRG Prostate Cancer AI Consortium, Sandler, H. M., Tran, P. T., Spratt, D. E., Pugh, S., Feng, F. Y., Mohamad, O., Kucharczyk, M., Souhami, L., Ballas, L., Peters, C. A., Liu, S., Balogh, A. G., Randolph-Jackson, P. D., Schwartz, D. L., Girvigian, M. R., Saito, N. G., Raben, A., Rabinovitch, R. A., Katato, K. 2022; 5 (1): 71

    Abstract

    Prostate cancer is the most frequent cancer in men and a leading cause of cancer death. Determining a patient's optimal therapy is a challenge, where oncologists must select a therapy with the highest likelihood of success and the lowest likelihood of toxicity. International standards for prognostication rely on non-specific and semi-quantitative tools, commonly leading to over- and under-treatment. Tissue-based molecular biomarkers have attempted to address this, but most have limited validation in prospective randomized trials and expensive processing costs, posing substantial barriers to widespread adoption. There remains a significant need for accurate and scalable tools to support therapy personalization. Here we demonstrate prostate cancer therapy personalization by predicting long-term, clinically relevant outcomes using a multimodal deep learning architecture and train models using clinical data and digital histopathology from prostate biopsies. We train and validate models using five phase III randomized trials conducted across hundreds of clinical centers. Histopathological data was available for 5654 of 7764 randomized patients (71%) with a median follow-up of 11.4 years. Compared to the most common risk-stratification tool, risk groups developed by the National Comprehensive Cancer Network (NCCN), our models have superior discriminatory performance across all endpoints, ranging from 9.2% to 14.6% relative improvement in a held-out validation set. This artificial intelligence-based tool improves prognostication over standard tools and allows oncologists to computationally predict the likeliest outcomes of specific patients to determine optimal treatment. Outfitted with digital scanners and internet access, any clinic could offer such capabilities, enabling global access to therapy personalization.

    View details for DOI 10.1038/s41746-022-00613-w

    View details for PubMedID 35676445

  • Automated Detection of Isolated REM Sleep Behavior Disorder (iRBD) During Single Night In-Lab Video-Polysomnography (PSG) Using Computer Vision Adaimi, G., Gupta, N., Mottaghi, A., Yeung, S., Mignot, E., Alahi, A., During, E. OXFORD UNIV PRESS INC. 2022: A282
  • Adaptation of Surgical Activity Recognition Models Across Operating Rooms Mottaghi, A., Sharghi, A., Yeung, S., Mohareri, O., Wang, L., Dou, Q., Fletcher, P. T., Speidel, S., Li, S. SPRINGER INTERNATIONAL PUBLISHING AG. 2022: 530-540
  • Domain Adaptive 3D Pose Augmentation for In-the-wild Human Mesh Recovery Weng, Z., Wang, K., Kanazawa, A., Yeung, S. IEEE. 2022: 261-270
  • Using deep learning to identify the recurrent laryngeal nerve during thyroidectomy. Scientific reports Gong, J., Holsinger, F. C., Noel, J. E., Mitani, S., Jopling, J., Bedi, N., Koh, Y. W., Orloff, L. A., Cernea, C. R., Yeung, S. 2021; 11 (1): 14306

    Abstract

    Surgeons must visually distinguish soft-tissues, such as nerves, from surrounding anatomy to prevent complications and optimize patient outcomes. An accurate nerve segmentation and analysis tool could provide useful insight for surgical decision-making. Here, we present an end-to-end, automatic deep learning computer vision algorithm to segment and measure nerves. Unlike traditional medical imaging, our unconstrained setup with accessible handheld digital cameras, along with the unstructured open surgery scene, makes this task uniquely challenging. We investigate one common procedure, thyroidectomy, during which surgeons must avoid damaging the recurrent laryngeal nerve (RLN), which is responsible for human speech. We evaluate our segmentation algorithm on a diverse dataset across varied and challenging settings of operating room image capture, and show strong segmentation performance in the optimal image capture condition. This work lays the foundation for future research in real-time tissue discrimination and integration of accessible, intelligent tools into open surgery to provide actionable insights.

    View details for DOI 10.1038/s41598-021-93202-y

    View details for PubMedID 34253767

  • Deep Convolutional Neural Networks as a Diagnostic Aid: A Step Toward Minimizing Undetected Scaphoid Fractures on Initial Hand Radiographs. JAMA network open Jopling, J. K., Pridgen, B. C., Yeung, S. 2021; 4 (5): e216393

    View details for DOI 10.1001/jamanetworkopen.2021.6393

    View details for PubMedID 33956135

  • Setting Assessment Standards for Artificial Intelligence Computer Vision Wound Annotations. JAMA network open Jopling, J. K., Pridgen, B. C., Yeung, S. 2021; 4 (5): e217851

    View details for DOI 10.1001/jamanetworkopen.2021.7851

    View details for PubMedID 34009356

  • Parents' Perspectives on Using Artificial Intelligence to Reduce Technology Interference During Early Childhood: Cross-sectional Online Survey. Journal of medical Internet research Glassman, J., Humphreys, K., Yeung, S., Smith, M., Jauregui, A., Milstein, A., Sanders, L. 2021; 23 (3): e19461

    Abstract

    BACKGROUND: Parents' use of mobile technologies may interfere with important parent-child interactions that are critical to healthy child development. This phenomenon is known as technoference. However, little is known about the population-wide awareness of this problem and the acceptability of artificial intelligence (AI)-based tools that help with mitigating technoference. OBJECTIVE: This study aims to assess parents' awareness of technoference and its harms, the acceptability of AI tools for mitigating technoference, and how each of these constructs vary across sociodemographic factors. METHODS: We administered a web-based survey to a nationally representative sample of parents of children aged ≤5 years. Parents' perceptions that their own technology use had risen to potentially problematic levels in general, their perceptions of their own parenting technoference, and the degree to which they found AI tools for mitigating technoference acceptable were assessed by using adaptations of previously validated scales. Multiple regression and mediation analyses were used to assess the relationships between these scales and each of the 6 sociodemographic factors (parent age, sex, language, ethnicity, educational attainment, and family income). RESULTS: Of the 305 respondents, 280 provided data that met the established standards for analysis. Parents reported that a mean of 3.03 devices (SD 2.07) interfered daily in their interactions with their child. Almost two-thirds of the parents agreed with the statements "I am worried about the impact of my mobile electronic device use on my child" and "Using a computer-assisted coach while caring for my child would help me notice more quickly when my device use is interfering with my caregiving" (187/281, 66.5% and 184/282, 65.1%, respectively). Younger age, Hispanic ethnicity, and Spanish language spoken at home were associated with increased technoference awareness. Compared to parents' perceived technoference and sociodemographic factors, parents' perceptions of their own problematic technology use was the factor that was most associated with the acceptance of AI tools. CONCLUSIONS: Parents reported high levels of mobile device use and technoference around their youngest children. Most parents across a wide sociodemographic spectrum, especially younger parents, found the use of AI tools to help mitigate technoference during parent-child daily interaction acceptable and useful.

    View details for DOI 10.2196/19461

    View details for PubMedID 33720026

  • Deep learning-enabled medical computer vision. NPJ digital medicine Esteva, A., Chou, K., Yeung, S., Naik, N., Madani, A., Mottaghi, A., Liu, Y., Topol, E., Dean, J., Socher, R. 2021; 4 (1): 5

    Abstract

    A decade of unprecedented progress in artificial intelligence (AI) has demonstrated the potential for many fields-including medicine-to benefit from the insights that AI techniques can extract from data. Here we survey recent progress in the development of modern computer vision techniques-powered by deep learning-for medical applications, focusing on medical imaging, medical video, and clinical deployment. We start by briefly summarizing a decade of progress in convolutional neural networks, including the vision tasks they enable, in the context of healthcare. Next, we discuss several example medical imaging applications that stand to benefit-including cardiology, pathology, dermatology, ophthalmology-and propose new avenues for continued work. We then expand into general medical video, highlighting ways in which clinical workflows can integrate computer vision to enhance care. Finally, we discuss the challenges and hurdles required for real-world clinical deployment of these technologies.

    View details for DOI 10.1038/s41746-020-00376-2

    View details for PubMedID 33420381

  • Achieving Trustworthy Biomedical Data Solutions. Pacific Symposium on Biocomputing Washington, P., Yeung, S., Percha, B., Tatonetti, N., Liphardt, J., Wall, D. P. 2021; 26: 1–13

    Abstract

    Privacy and trust in biomedical solutions that capture and share data is an issue rising to the center of public attention and discourse. While large-scale academic, medical, and industrial research initiatives must collect increasing amounts of personal biomedical data from patient stakeholders (a prerequisite for making precision health a reality), methods for providing sufficient privacy in biomedical databases and conveying a sense of trust to the user are equally crucial for the field of biocomputing to advance with the grace of those stakeholders. If the intended audience does not trust new precision health innovations, funding and support for these efforts will inevitably be limited. It is therefore crucial for the field to address these issues in a timely manner. Here we describe current research directions towards achieving trustworthy biomedical informatics solutions.

    View details for PubMedID 33690999

  • GLoRIA: A Multimodal Global-Local Representation Learning Framework for Label-efficient Medical Image Recognition Huang, S., Shen, L., Lungren, M. P., Yeung, S. IEEE. 2021: 3922-3931
  • Holistic 3D Human and Scene Mesh Estimation from Single View Images Weng, Z., Yeung, S. IEEE COMPUTER SOC. 2021: 334-343
  • DARCNN: Domain Adaptive Region-based Convolutional Neural Network for Unsupervised Instance Segmentation in Biomedical Images Hsu, J., Chiu, W., Yeung, S. IEEE COMPUTER SOC. 2021: 1003-1012
  • Unsupervised Discovery of the Long-Tail in Instance Segmentation Using Hierarchical Self-Supervision Weng, Z., Ogut, M., Limonchik, S., Yeung, S. IEEE COMPUTER SOC. 2021: 2603-2612
  • Ethical and Legal Aspects of Ambient Intelligence in Hospitals. JAMA Gerke, S., Yeung, S., Cohen, I. G. 2020

    View details for DOI 10.1001/jama.2019.21699

    View details for PubMedID 31977033

  • Using Computer Vision to Automate Hand Detection and Tracking of Surgeon Movements in Videos of Open Surgery. AMIA ... Annual Symposium proceedings. AMIA Symposium Zhang, M., Cheng, X., Copeland, D., Desai, A., Guan, M. Y., Brat, G. A., Yeung, S. 2020; 2020: 1373-1382

    Abstract

    Open, or non-laparoscopic, surgery represents the vast majority of all operating room procedures, but few tools exist to objectively evaluate these techniques at scale. Current efforts involve human expert-based visual assessment. We leverage advances in computer vision to introduce an automated approach to video analysis of surgical execution. A state-of-the-art convolutional neural network architecture for object detection was used to detect operating hands in open surgery videos. Automated assessment was expanded by combining model predictions with a fast object tracker to enable surgeon-specific hand tracking. To train our model, we used publicly available videos of open surgery from YouTube and annotated these with spatial bounding boxes of operating hands. Our model's spatial detections of operating hands significantly outperform the detections achieved using pre-existing hand-detection datasets, and allow for insights into intra-operative movement patterns and economy of motion.

    View details for PubMedID 34025905

  • Automatic detection of hand hygiene using computer vision technology. Journal of the American Medical Informatics Association : JAMIA Singh, A., Haque, A., Alahi, A., Yeung, S., Guo, M., Glassman, J. R., Beninati, W., Platchek, T., Fei-Fei, L., Milstein, A. 2020

    Abstract

    Hand hygiene is essential for preventing hospital-acquired infections but is difficult to accurately track. The gold-standard (human auditors) is insufficient for assessing true overall compliance. Computer vision technology has the ability to perform more accurate appraisals. Our primary objective was to evaluate if a computer vision algorithm could accurately observe hand hygiene dispenser use in images captured by depth sensors. Sixteen depth sensors were installed on one hospital unit. Images were collected continuously from March to August 2017. Utilizing a convolutional neural network, a machine learning algorithm was trained to detect hand hygiene dispenser use in the images. The algorithm's accuracy was then compared with simultaneous in-person observations of hand hygiene dispenser usage. Concordance rate between human observation and algorithm's assessment was calculated. Ground truth was established by blinded annotation of the entire image set. Sensitivity and specificity were calculated for both human and machine-level observation. A concordance rate of 96.8% was observed between human and algorithm (kappa = 0.85). Concordance among the 3 independent auditors to establish ground truth was 95.4% (Fleiss's kappa = 0.87). Sensitivity and specificity of the machine learning algorithm were 92.1% and 98.3%, respectively. Human observations showed sensitivity and specificity of 85.2% and 99.4%, respectively. A computer vision algorithm was equivalent to human observation in detecting hand hygiene dispenser use. Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold-standard given its ability for continuous coverage of a unit in space and time.

    View details for DOI 10.1093/jamia/ocaa115

    View details for PubMedID 32712656

  • A computer vision system for deep learning-based detection of patient mobilization activities in the ICU. NPJ digital medicine Yeung, S., Rinaldo, F., Jopling, J., Liu, B., Mehra, R., Downing, N. L., Guo, M., Bianconi, G. M., Alahi, A., Lee, J., Campbell, B., Deru, K., Beninati, W., Fei-Fei, L., Milstein, A. 2019; 2: 11

    Abstract

    Early and frequent patient mobilization substantially mitigates risk for post-intensive care syndrome and long-term functional impairment. We developed and tested computer vision algorithms to detect patient mobilization activities occurring in an adult ICU. Mobility activities were defined as moving the patient into and out of bed, and moving the patient into and out of a chair. A data set of privacy-safe depth-video images was collected in the Intermountain LDS Hospital ICU, comprising 563 instances of mobility activities and 98,801 total frames of video data from seven wall-mounted depth sensors. In all, 67% of the mobility activity instances were used to train algorithms to detect mobility activity occurrence and duration, and the number of healthcare personnel involved in each activity. The remaining 33% of the mobility instances were used for algorithm evaluation. The algorithm for detecting mobility activities attained a mean specificity of 89.2% and sensitivity of 87.2% over the four activities; the algorithm for quantifying the number of personnel involved attained a mean accuracy of 68.8%.

    View details for DOI 10.1038/s41746-019-0087-z

    View details for PubMedID 31304360

    View details for PubMedCentralID PMC6550251

  • Every Moment Counts: Dense Detailed Labeling of Actions in Complex Videos. INTERNATIONAL JOURNAL OF COMPUTER VISION Yeung, S., Russakovsky, O., Jin, N., Andriluka, M., Mori, G., Li Fei-Fei 2018; 126 (2-4): 375–89
  • Scaling Human-Object Interaction Recognition through Zero-Shot Learning Shen, L., Yeung, S., Hoffman, J., Mori, G., Li Fei-Fei. IEEE. 2018: 1568–76
  • Neural Graph Matching Networks for Fewshot 3D Action Recognition Guo, M., Chou, E., Huang, D., Song, S., Yeung, S., Li Fei-Fei, Ferrari, Hebert, M., Sminchisescu, C., Weiss, Y. SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 673-689
  • Dynamic Task Prioritization for Multitask Learning Guo, M., Haque, A., Huang, D., Yeung, S., Li Fei-Fei, Ferrari, Hebert, M., Sminchisescu, C., Weiss, Y. SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 282-299
  • Temporal Modular Networks for Retrieving Complex Compositional Activities in Videos European Conference on Computer Vision Liu, B., Yeung, S., Chou, E., Huang, D., Fei-Fei, L., Niebles, J. 2018: 569–86
  • 3D Point Cloud-Based Visual Prediction of ICU Mobility Care Activities Machine Learning in Healthcare Liu, B., Guo, M., Chou, E., Mehra, R., Yeung, S., Downing, N. L., Salipur, F., Jopling, J., Campbell, B., Deru, K., Beninati, W., Milstein, A., Fei-Fei, L. 2018
  • Computer Vision-based Descriptive Analytics of Seniors’ Daily Activities for Long-term Health Monitoring Machine Learning in Healthcare Hsieh, J., Luo, Z., Balachandar, N., Yeung, S., Pusiol, G., Luxenberg, J., Li, G., Li, L., Downing, N. L., Milstein, A., Fei-Fei, L. 2018
  • Bedside Computer Vision - Moving Artificial Intelligence from Driver Assistance to Patient Safety. The New England journal of medicine Yeung, S., Downing, N. L., Fei-Fei, L., Milstein, A. 2018; 378 (14): 1271–73

    View details for PubMedID 29617592

  • Tool Detection and Operative Skill Assessment in Surgical Videos Using Region-Based Convolutional Neural Networks Jin, A., Yeung, S., Jopling, J., Krause, J., Azagury, D., Milstein, A., Li Fei-Fei. IEEE. 2018: 691–99
  • Learning to Learn from Noisy Web Videos Yeung, S., Ramanathan, V., Russakovsky, O., Shen, L., Mori, G., Li Fei-Fei. IEEE. 2017: 7455–63
  • Towards Vision-Based Smart Hospitals: A System for Tracking and Monitoring Hand Hygiene Compliance Machine Learning in Healthcare Haque, A., Guo, M., Alahi, A., Yeung, S., Luo, Z., Rege, A., Jopling, J., Downing, N. L., Beninati, W., Singh, A., Platchek, T., Milstein, A., Fei-Fei, L. 2017
  • Jointly Learning Energy Expenditures and Activities using Egocentric Multimodal Signals Nakamura, K., Yeung, S., Alahi, A., Li Fei-Fei. IEEE. 2017: 6817–26
  • End-to-end Learning of Action Detection from Frame Glimpses in Video Computer Vision and Pattern Recognition Yeung, S., Russakovsky, O., Mori, G., Fei-Fei, L. 2016: 2678–87

    View details for DOI 10.1109/cvpr.2016.293

  • Towards Viewpoint Invariant 3D Human Pose Estimation Haque, A., Peng, B., Luo, Z., Alahi, A., Yeung, S., Li Fei-Fei, Leibe, B., Matas, J., Sebe, N., Welling, M. SPRINGER INTERNATIONAL PUBLISHING AG. 2016: 160-177
  • Learning hierarchical invariant spatio-temporal features for action recognition with independent subspace analysis IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Le, Q. V., Zou, W. Y., Yeung, S. Y., Ng, A. Y. IEEE. 2011