Bio


Derek received his MD degree and completed his Internal Medicine training at Western University. He is interested in point-of-care ultrasound for the management and resuscitation of critically ill patients. Derek has investigated deep learning applications for the automated interpretation of lung ultrasound, and he is also interested in medical device innovation.

Clinical Focus


  • Fellow

Boards, Advisory Committees, Professional Organizations


  • Member, Canadian Society for Clinical Investigation (2022 - Present)
  • Testamur, CCEeXAM, National Board of Echocardiography (2023 - Present)

Professional Education


  • Residency, Western University, Internal Medicine (2024)
  • MD, Western University, Medicine (2021)
  • BMSc, Western University, Medical Sciences (2017)

Current Clinical Interests


  • Critical Care
  • Imaging, Ultrasound
  • Artificial Intelligence
  • Deep Learning

Graduate and Fellowship Programs


  • Critical Care Medicine (Fellowship Program)

All Publications


  • Medico-legal risks of point-of-care ultrasound: a closed-case analysis of Canadian Medical Protective Association medico-legal cases. The ultrasound journal Prager, R., Wu, D., Garber, G., Finestone, P. J., Zang, C., Aslanova, R., Arntfield, R. 2024; 16 (1): 16

    Abstract

    Point-of-care ultrasound (POCUS) has become a core diagnostic tool for many physicians due to its portability, excellent safety profile, and diagnostic utility. Despite its growing use, the potential risks of POCUS should be considered by providers. We analyzed the Canadian Medical Protective Association (CMPA) repository to identify medico-legal cases arising from the use of POCUS.

    We retrospectively searched the CMPA closed-case repository for cases involving diagnostic POCUS between January 1st, 2012 and December 31st, 2021. Cases included civil-legal actions, medical regulatory authority (College) cases, and hospital complaints. Patient and physician demographics, outcomes, reasons for complaint, and expert-identified contributing factors were analyzed.

    From 2012 to 2021, there were 58,626 closed medico-legal cases in the CMPA repository, with POCUS determined to be a contributing factor for medico-legal action in 15 cases; in all cases the medico-legal outcome was decided against the physicians. The most common reasons for patient complaints were diagnostic error, deficient assessment, and failure to perform a test or intervention. Expert analysis of these cases determined that the most common contributing factor for medico-legal action was failure to perform POCUS when indicated (7 cases, 47%); however, medico-legal action also resulted from diagnostic error, incorrect sonographic approach, deficient assessment, inadequate skill, inadequate documentation, or inadequate reporting.

    Although the most common reason for medico-legal action in these cases was failure to perform POCUS when indicated, inappropriate use of POCUS may also lead to medico-legal action. Due to limitations in the granularity of the data, the exact number of civil-legal actions, College cases, and hospital complaints for each contributing factor is unavailable. To enhance patient care and mitigate risk for providers, POCUS should be carefully integrated with other clinical information, performed by providers with adequate skill, and carefully documented.

    View details for DOI 10.1186/s13089-024-00364-7

    View details for PubMedID 38396310

    View details for PubMedCentralID PMC10891006

  • Automated Real-Time Detection of Lung Sliding Using Artificial Intelligence: A Prospective Diagnostic Accuracy Study. Chest Fiedler, H. C., Prager, R., Smith, D., Wu, D., Dave, C., Tschirhart, J., Wu, B., Van Berlo, B., Malthaner, R., Arntfield, R. 2024

    Abstract

    Rapid evaluation for pneumothorax is a common clinical priority. Although lung ultrasound (LUS) often is used to assess for pneumothorax, its diagnostic accuracy varies based on patient and provider factors. To enhance the performance of LUS for pulmonary pathologic features, artificial intelligence (AI)-assisted imaging has been adopted; however, the diagnostic accuracy of AI-assisted LUS (AI-LUS) deployed in real time to diagnose pneumothorax remains unknown.

    In patients with suspected pneumothorax, what is the real-time diagnostic accuracy of AI-LUS to recognize the absence of lung sliding?

    We performed a prospective AI-assisted diagnostic accuracy study of AI-LUS to recognize the absence of lung sliding in a convenience sample of patients with suspected pneumothorax. After calibrating the model parameters and imaging settings for bedside deployment, we prospectively evaluated its diagnostic accuracy for lung sliding compared with a reference standard of expert consensus.

    Two hundred forty-one lung sliding evaluations were derived from 62 patients. AI-LUS showed a sensitivity of 0.921 (95% CI, 0.792-0.973), specificity of 0.802 (95% CI, 0.735-0.856), area under the receiver operating characteristic curve of 0.885 (95% CI, 0.828-0.956), and accuracy of 0.824 (95% CI, 0.766-0.870) for the diagnosis of absent lung sliding.

    Real-time AI-LUS shows high sensitivity and moderate specificity to identify the absence of lung sliding. Further research to improve model performance and optimize the integration of AI-LUS into existing diagnostic pathways is warranted.
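    Diagnostic accuracy metrics like those above are derived from a 2x2 confusion matrix. The sketch below is illustrative only: the counts are hypothetical (not taken from the study), and it uses the Wilson score interval, one common choice for the 95% CIs on sensitivity and specificity.

```python
# Illustrative sketch: diagnostic accuracy metrics from a 2x2 confusion
# matrix, with Wilson score 95% confidence intervals for proportions.
# The counts below are hypothetical and NOT the study's actual data.
from math import sqrt


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a proportion successes/n."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half


def diagnostic_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Sensitivity, specificity, and accuracy with CIs from raw counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "sensitivity_ci": wilson_ci(tp, tp + fn),
        "specificity": tn / (tn + fp),
        "specificity_ci": wilson_ci(tn, tn + fp),
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }


# Hypothetical counts chosen only to total 241 evaluations
print(diagnostic_metrics(tp=35, fn=3, tn=162, fp=41))
```

    The Wilson interval is preferred over the simpler normal approximation when the proportion is near 0 or 1, as diagnostic sensitivities often are.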

    View details for DOI 10.1016/j.chest.2024.02.011

    View details for PubMedID 38365174

  • Prospective Real-Time Validation of a Lung Ultrasound Deep Learning Model in the ICU. Critical care medicine Dave, C., Wu, D., Tschirhart, J., Smith, D., VanBerlo, B., Deglint, J., Ali, F., Chaudhary, R., VanBerlo, B., Ford, A., Rahman, M. A., McCauley, J., Wu, B., Ho, J., Li, B., Arntfield, R. 2023; 51 (2): 301-309

    Abstract

    Objectives: To evaluate the accuracy of a bedside, real-time deployment of a deep learning (DL) model capable of distinguishing between normal (A line pattern) and abnormal (B line pattern) lung parenchyma on lung ultrasound (LUS) in critically ill patients.

    Design: Prospective, observational study evaluating the performance of a previously trained LUS DL model. Enrolled patients received a LUS examination with simultaneous DL model predictions using a portable device. Clip-level model predictions were analyzed and compared with blinded expert review for A versus B line pattern. Four prediction thresholding approaches were applied to maximize model sensitivity and specificity at bedside.

    Setting: Academic ICU.

    Patients: One hundred critically ill patients admitted to the ICU, receiving oxygen therapy, and eligible for respiratory imaging were included. Patients who were unstable or could not undergo an LUS examination were excluded.

    Interventions: None.

    Measurements and Main Results: A total of 100 unique ICU patients (400 clips) were enrolled from two tertiary-care sites. Fifty-six patients were mechanically ventilated. When compared with gold standard expert annotation, the real-time inference yielded an accuracy of 95%, sensitivity of 93%, and specificity of 96% for identification of the B line pattern. Varying prediction thresholds showed that real-time modification of sensitivity and specificity according to clinical priorities is possible.

    Conclusions: A previously validated DL classification model performs equally well in real time at the bedside when platformed on a portable device. As the first study to test the feasibility and performance of a DL classification model for LUS in a dedicated ICU environment, our results justify further inquiry into the impact of employing real-time automation of medical imaging in the care of the critically ill.
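    The threshold-varying idea in the abstract can be sketched as follows. This is not the study's code: the probabilities, labels, and thresholds below are toy values chosen only to show how raising the decision threshold trades sensitivity for specificity.

```python
# Illustrative sketch (toy data, not the study's code): sweeping a
# prediction threshold over clip-level probabilities to trade
# sensitivity against specificity.
def confusion_at_threshold(probs, labels, threshold):
    """Count TP/FP/TN/FN when prob >= threshold is called 'B line' (1)."""
    tp = sum(p >= threshold and y == 1 for p, y in zip(probs, labels))
    fp = sum(p >= threshold and y == 0 for p, y in zip(probs, labels))
    fn = sum(p < threshold and y == 1 for p, y in zip(probs, labels))
    tn = sum(p < threshold and y == 0 for p, y in zip(probs, labels))
    return tp, fp, tn, fn


def sens_spec(probs, labels, threshold):
    """Sensitivity and specificity at a given decision threshold."""
    tp, fp, tn, fn = confusion_at_threshold(probs, labels, threshold)
    return tp / (tp + fn), tn / (tn + fp)


# Toy model probabilities and expert labels (1 = B line pattern)
probs = [0.95, 0.80, 0.60, 0.40, 0.30, 0.10]
labels = [1, 1, 0, 1, 0, 0]
for t in (0.25, 0.50, 0.75):
    sens, spec = sens_spec(probs, labels, t)
    print(f"threshold={t:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```

    A low threshold favours sensitivity (fewer missed B line clips), a high threshold favours specificity, which is the clinical trade-off the study exploited at the bedside.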

    View details for DOI 10.1097/CCM.0000000000005759

    View details for PubMedID 36661454

  • Acquisition and retention of lung ultrasound skills by respiratory therapists: A curriculum for respiratory therapists. Canadian journal of respiratory therapy (CJRT) Young, A., Wu, D., Myslik, F., Burke, D., Stephens, M., Arntfield, R. 2023; 59: 26-32

    Abstract

    Lung point-of-care ultrasound (POCUS) is a versatile bedside tool. The utility of POCUS has grown during the coronavirus disease 2019 pandemic, as it allows clinicians to obtain real-time images without requiring transport of the patient outside the intensive care unit. As respiratory therapists (RTs) are involved in caring for those with respiratory failure, there is a good rationale for their adoption of lung ultrasound. However, no training standards have been defined. Our objective was to develop and implement a training programme for RTs to achieve and sustain competence in lung ultrasound.

    This was a single-centre, prospective, single-cohort observational study. A total of 10 RTs completed our curriculum and were tasked with independently completing and interpreting 10 initial lung ultrasound exams and 3 subsequent exams after a 6-week interim period. All exams were blindly overread by a local expert in lung ultrasound.

    After completing the curriculum, RTs were able to acquire and accurately interpret their images over 85% of the time. They were more successful in the upper lung zone image acquisition and interpretation compared with the lower lung zones. After 6 weeks, the RTs' lung POCUS skills remained stable, and their lower lung zone image interpretation improved. The RTs reported that their confidence improved throughout the study.

    The RTs in our study have demonstrated competence in acquisition and interpretation of upper lung zone images. They have also reported confidence in acquiring and interpreting upper lung zone images. More experience appears to be required to gain competence and confidence in lower lung zone ultrasound. Next steps would be to repeat the present study with a higher number of RTs completing at least 20 lung POCUS studies.

    View details for DOI 10.29390/cjrt-2021-077

    View details for PubMedID 36741306

    View details for PubMedCentralID PMC9854384

  • Enhancing Annotation Efficiency with Machine Learning: Automated Partitioning of a Lung Ultrasound Dataset by View. Diagnostics (Basel, Switzerland) VanBerlo, B., Smith, D., Tschirhart, J., VanBerlo, B., Wu, D., Ford, A., McCauley, J., Wu, B., Chaudhary, R., Dave, C., Ho, J., Deglint, J., Li, B., Arntfield, R. 2022; 12 (10)

    Abstract

    Annotating large medical imaging datasets is an arduous and expensive task, especially when the datasets in question are not organized according to deep learning goals. Here, we propose a method that exploits the hierarchical organization of annotating tasks to optimize efficiency.

    We trained a machine learning model to accurately distinguish between one of two classes of lung ultrasound (LUS) views using 2908 clips from a larger dataset. Partitioning the remaining dataset by view would reduce downstream labelling efforts by enabling annotators to focus on annotating pathological features specific to each view.

    In a sample view-specific annotation task, we found that automatically partitioning a 780-clip dataset by view saved 42 min of manual annotation time and resulted in 55±6 additional relevant labels per hour.

    Automatic partitioning of a LUS dataset by view significantly increases annotator efficiency, resulting in higher throughput relevant to the annotating task at hand. The strategy described in this work can be applied to other hierarchical annotation schemes.

    View details for DOI 10.3390/diagnostics12102351

    View details for PubMedID 36292042

    View details for PubMedCentralID PMC9601089

  • Accurate assessment of the lung sliding artefact on lung ultrasonography using a deep learning approach. Computers in biology and medicine VanBerlo, B., Wu, D., Li, B., Rahman, M. A., Hogg, G., VanBerlo, B., Tschirhart, J., Ford, A., Ho, J., McCauley, J., Wu, B., Deglint, J., Hargun, J., Chaudhary, R., Dave, C., Arntfield, R. 2022; 148: 105953

    Abstract

    Pneumothorax is a potentially life-threatening condition that can be rapidly and accurately assessed via the lung sliding artefact generated using lung ultrasound (LUS). Access to LUS is challenged by user dependence and shortage of training. Image classification using deep learning methods can automate interpretation in LUS and has not been thoroughly studied for lung sliding. Using a labelled LUS dataset from 2 academic hospitals, clinical B-mode (also known as brightness or two-dimensional mode) videos featuring both presence and absence of lung sliding were transformed into motion (M) mode images. These images were subsequently used to train a deep neural network binary classifier that was evaluated using a holdout set comprising 15% of the total data. Grad-CAM explanations were examined. Our binary classifier using the EfficientNetB0 architecture was trained using 2535 LUS clips from 614 patients. When evaluated on a test set of data uninvolved in training (540 clips from 124 patients), the model performed with a sensitivity of 93.5%, specificity of 87.3% and an area under the receiver operating characteristic curve (AUC) of 0.973. Grad-CAM explanations confirmed the model's focus on relevant regions on M-mode images. Our solution accurately distinguishes between the presence and absence of lung sliding artefacts on LUS.
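    The B-mode-to-M-mode transformation described above can be sketched as follows: a fixed vertical scan line is taken from every frame of the clip and the columns are stacked over time, so pleural motion (or its absence) becomes a visible pattern along the time axis. This is a simplification, assuming a clip stored as a (frames, height, width) array; the paper's actual pipeline (column selection, resizing) is more involved.

```python
# Sketch of a B-mode -> M-mode transformation (simplified, not the
# authors' exact pipeline): take one vertical scan line per frame and
# stack the lines over time into a single 2-D image.
import numpy as np


def bmode_to_mmode(clip: np.ndarray, column: int) -> np.ndarray:
    """clip: (T, H, W) grayscale video; returns an (H, T) M-mode image."""
    if not 0 <= column < clip.shape[2]:
        raise ValueError("column outside image width")
    # Same vertical line from every frame, time on the horizontal axis:
    # result[h, t] = clip[t, h, column]
    return clip[:, :, column].T


# Toy example: a 4-frame, 8x8 clip, sampled at the middle column
clip = np.random.default_rng(0).integers(0, 256, size=(4, 8, 8), dtype=np.uint8)
mmode = bmode_to_mmode(clip, column=4)
print(mmode.shape)  # (8, 4): image height x number of frames
```

    In a real clip, absent lung sliding yields continuous horizontal lines below the pleura (the "barcode" sign), which is the pattern the classifier learns to recognize.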

    View details for DOI 10.1016/j.compbiomed.2022.105953

    View details for PubMedID 35985186

  • Automation of Lung Ultrasound Interpretation via Deep Learning for the Classification of Normal versus Abnormal Lung Parenchyma: A Multicenter Study. Diagnostics (Basel, Switzerland) Arntfield, R., Wu, D., Tschirhart, J., VanBerlo, B., Ford, A., Ho, J., McCauley, J., Wu, B., Deglint, J., Chaudhary, R., Dave, C., VanBerlo, B., Basmaji, J., Millington, S. 2021; 11 (11)

    Abstract

    Lung ultrasound (LUS) is an accurate thoracic imaging technique distinguished by its handheld size, low-cost, and lack of radiation. User dependence and poor access to training have limited the impact and dissemination of LUS outside of acute care hospital environments. Automated interpretation of LUS using deep learning can overcome these barriers by increasing accuracy while allowing point-of-care use by non-experts. In this multicenter study, we seek to automate the clinically vital distinction between A line (normal parenchyma) and B line (abnormal parenchyma) on LUS by training a customized neural network using 272,891 labelled LUS images. After external validation on 23,393 frames, pragmatic clinical application at the clip level was performed on 1162 videos. The trained classifier demonstrated an area under the receiver operating curve (AUC) of 0.96 (±0.02) through 10-fold cross-validation on local frames and an AUC of 0.93 on the external validation dataset. Clip-level inference yielded sensitivities and specificities of 90% and 92% (local) and 83% and 82% (external), respectively, for detecting the B line pattern. This study demonstrates accurate deep-learning-enabled LUS interpretation between normal and abnormal lung parenchyma on ultrasound frames while rendering diagnostically important sensitivity and specificity at the video clip level.
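    The clip-level inference mentioned above aggregates frame-level predictions into one call per video. The abstract does not specify the exact rule, so the sketch below is an assumption for illustration: a frame threshold and a minimum fraction of positive frames, both with made-up values.

```python
# Hedged sketch (the rule and both thresholds are illustrative
# assumptions, not the study's actual method): aggregating per-frame
# B line probabilities into a single clip-level prediction.
def clip_prediction(frame_probs, frame_threshold=0.5, min_fraction=0.3):
    """Classify a clip from its per-frame probabilities.

    A frame is 'positive' when its probability >= frame_threshold;
    the clip is positive when at least min_fraction of frames are.
    """
    positive = sum(p >= frame_threshold for p in frame_probs)
    return positive / len(frame_probs) >= min_fraction


print(clip_prediction([0.9, 0.8, 0.2, 0.1, 0.7]))  # 3/5 positive -> True
print(clip_prediction([0.1, 0.2, 0.9, 0.1, 0.1]))  # 1/5 positive -> False
```

    Tuning the two thresholds moves the clip-level operating point, which is how frame-level accuracy can be converted into the clip-level sensitivity and specificity a clinician actually acts on.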

    View details for DOI 10.3390/diagnostics11112049

    View details for PubMedID 34829396

    View details for PubMedCentralID PMC8621216

  • Development of a convolutional neural network to differentiate among the etiology of similar appearing pathological B lines on lung ultrasound: a deep learning study. BMJ open Arntfield, R., VanBerlo, B., Alaifan, T., Phelps, N., White, M., Chaudhary, R., Ho, J., Wu, D. 2021; 11 (3): e045120

    Abstract

    Objectives: Lung ultrasound (LUS) is a portable, low-cost respiratory imaging tool but is challenged by user dependence and lack of diagnostic specificity. It is unknown whether the advantages of LUS implementation could be paired with deep learning (DL) techniques to match or exceed human-level diagnostic specificity among similar appearing, pathological LUS images.

    Design: A convolutional neural network (CNN) was trained on LUS images with B lines of different aetiologies. CNN diagnostic performance, as validated using a 10% data holdback set, was compared with surveyed LUS-competent physicians.

    Setting: Two tertiary Canadian hospitals.

    Participants: 612 LUS videos (121,381 frames) of B lines from 243 distinct patients with either (1) COVID-19 (COVID), (2) non-COVID acute respiratory distress syndrome (NCOVID) or (3) hydrostatic pulmonary edema (HPE).

    Results: The trained CNN performance on the independent dataset showed an ability to discriminate between COVID (area under the receiver operating characteristic curve (AUC) 1.0), NCOVID (AUC 0.934) and HPE (AUC 1.0) pathologies. This was significantly better than physician ability (AUCs of 0.697, 0.704 and 0.967 for the COVID, NCOVID and HPE classes, respectively), p<0.01.

    Conclusions: A DL model can distinguish similar appearing LUS pathology, including COVID-19, that cannot be distinguished by humans. The performance gap between humans and the model suggests that subvisible biomarkers within ultrasound images could exist, and multicentre research is merited.

    View details for DOI 10.1136/bmjopen-2020-045120

    View details for PubMedID 33674378

    View details for PubMedCentralID PMC7939003