All Publications


  • Automatic detection of hand hygiene using computer vision technology. Journal of the American Medical Informatics Association: JAMIA Singh, A., Haque, A., Alahi, A., Yeung, S., Guo, M., Glassman, J. R., Beninati, W., Platchek, T., Fei-Fei, L., Milstein, A. 2020

    Abstract

    Hand hygiene is essential for preventing hospital-acquired infections but is difficult to accurately track. The gold-standard (human auditors) is insufficient for assessing true overall compliance. Computer vision technology has the ability to perform more accurate appraisals. Our primary objective was to evaluate if a computer vision algorithm could accurately observe hand hygiene dispenser use in images captured by depth sensors. Sixteen depth sensors were installed on one hospital unit. Images were collected continuously from March to August 2017. Utilizing a convolutional neural network, a machine learning algorithm was trained to detect hand hygiene dispenser use in the images. The algorithm's accuracy was then compared with simultaneous in-person observations of hand hygiene dispenser usage. Concordance rate between human observation and algorithm's assessment was calculated. Ground truth was established by blinded annotation of the entire image set. Sensitivity and specificity were calculated for both human and machine-level observation. A concordance rate of 96.8% was observed between human and algorithm (kappa = 0.85). Concordance among the 3 independent auditors to establish ground truth was 95.4% (Fleiss's kappa = 0.87). Sensitivity and specificity of the machine learning algorithm were 92.1% and 98.3%, respectively. Human observations showed sensitivity and specificity of 85.2% and 99.4%, respectively. A computer vision algorithm was equivalent to human observation in detecting hand hygiene dispenser use. Computer vision monitoring has the potential to provide a more complete appraisal of hand hygiene activity in hospitals than the current gold-standard given its ability for continuous coverage of a unit in space and time.
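    The agreement statistics this abstract reports (sensitivity, specificity, Cohen's kappa) all derive from a 2x2 confusion matrix comparing a detector's labels against ground truth. A minimal sketch of those calculations, using hypothetical counts (not the study's actual data):

    ```python
    def confusion_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, and Cohen's kappa from 2x2 confusion counts.

        tp/fp/fn/tn: true/false positives and negatives versus ground truth.
        """
        n = tp + fp + fn + tn
        sensitivity = tp / (tp + fn)          # recall on true dispenser-use events
        specificity = tn / (tn + fp)          # recall on true non-events
        p_o = (tp + tn) / n                   # observed agreement
        # Chance agreement: product of marginal "yes" rates plus product of "no" rates
        p_e = ((tp + fp) / n) * ((tp + fn) / n) + ((fn + tn) / n) * ((fp + tn) / n)
        kappa = (p_o - p_e) / (1 - p_e)       # agreement corrected for chance
        return sensitivity, specificity, kappa

    # Illustrative counts only (chosen for round numbers, not taken from the paper)
    sens, spec, kappa = confusion_metrics(tp=90, fp=5, fn=10, tn=95)
    ```

    With these made-up counts, sensitivity is 0.90, specificity is 0.95, and kappa is 0.85; values above roughly 0.8 are conventionally read as near-perfect agreement.
    
    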

    View details for DOI 10.1093/jamia/ocaa115

    View details for PubMedID 32712656

  • A computer vision system for deep learning-based detection of patient mobilization activities in the ICU. NPJ DIGITAL MEDICINE Yeung, S., Rinaldo, F., Jopling, J., Liu, B., Mehra, R., Downing, N. L., Guo, M., Bianconi, G. M., Alahi, A., Lee, J., Campbell, B., Deru, K., Beninati, W., Fei-Fei, L., Milstein, A. 2019; 2: 11

    Abstract

    Early and frequent patient mobilization substantially mitigates risk for post-intensive care syndrome and long-term functional impairment. We developed and tested computer vision algorithms to detect patient mobilization activities occurring in an adult ICU. Mobility activities were defined as moving the patient into and out of bed, and moving the patient into and out of a chair. A data set of privacy-safe depth video images was collected in the Intermountain LDS Hospital ICU, comprising 563 instances of mobility activities and 98,801 total frames of video data from seven wall-mounted depth sensors. In all, 67% of the mobility activity instances were used to train algorithms to detect mobility activity occurrence and duration, and the number of healthcare personnel involved in each activity. The remaining 33% of the mobility instances were used for algorithm evaluation. The algorithm for detecting mobility activities attained a mean specificity of 89.2% and sensitivity of 87.2% over the four activities; the algorithm for quantifying the number of personnel involved attained a mean accuracy of 68.8%.

    View details for DOI 10.1038/s41746-019-0087-z

    View details for PubMedID 31304360

    View details for PubMedCentralID PMC6550251

  • Audio-linguistic embeddings for spoken sentences Haque, A., Guo, M., Verma, P., Fei-Fei, L. IEEE. 2019: 7355–59
  • Conditional End-to-End Audio Transforms Haque, A., Guo, M., Verma, P. ISCA (International Speech Communication Association). 2018: 2295–99
  • Knowledge distillation for small-footprint highway networks Lu, L., Guo, M., Renals, S. IEEE. 2017: 4820–24
  • Lithium-rich giants in globular clusters ASTROPHYSICAL JOURNAL Kirby, E. N., Guhathakurta, P., Zhang, A. J., Hong, J., Guo, M., Guo, R., Cohen, J. G., Cunha, K. 2016; 819 (2)
  • Carbon in red giants in globular clusters and dwarf spheroidal galaxies ASTROPHYSICAL JOURNAL Kirby, E. N., Guo, M., Zhang, A. J., Deng, M., Cohen, J. G., Guhathakurta, P., Shetrone, M. D., Lee, Y. S., Rizzi, L. 2015; 801 (2)