Dr. Lida Safarnejad is a postdoctoral research fellow at the Stanford University School of Medicine. She currently works in the Ross Lab, directed by Dr. Elsie Gyang Ross. Her research focuses on applying computer vision and NLP techniques to build models that detect peripheral artery disease (PAD). Dr. Safarnejad obtained her Ph.D. in Software and Information Systems from the University of North Carolina at Charlotte. Her doctoral dissertation focused on devising novel computational methods to effectively employ social media in public health, especially for healthcare crisis management.
Honors & Awards
Graduate School Summer Fellowship (GSSF), University of North Carolina at Charlotte (2019)
Teaching and Learning Certificate, Center for Teaching and Learning, University of North Carolina at Charlotte (2018)
Provost's Doctoral Teaching Fellowship, University of North Carolina at Charlotte (2016-2017)
Graduate Assistant Support Plan Award (GASP), University of North Carolina at Charlotte (2014-2019)
Elsie Ross, Postdoctoral Faculty Sponsor
- A Multiple Feature Category Data Mining and Machine Learning Approach to Characterize and Detect Health Misinformation on Social Media. IEEE Internet Computing 2021; 25 (5): 43-51
Accurately Differentiating Between Patients With COVID-19, Patients With Other Viral Infections, and Healthy Individuals: Multimodal Late Fusion Learning Approach.
Journal of Medical Internet Research
2021; 23 (1): e25535
BACKGROUND: Effectively identifying patients with COVID-19 using non-polymerase chain reaction biomedical data is critical for achieving optimal clinical outcomes. Currently, there is a lack of comprehensive understanding of the various biomedical features and appropriate analytical approaches for enabling the early detection and effective diagnosis of patients with COVID-19.

OBJECTIVE: We aimed to combine low-dimensional clinical and lab testing data, as well as high-dimensional computed tomography (CT) imaging data, to accurately differentiate between healthy individuals, patients with COVID-19, and patients with non-COVID viral pneumonia, especially at the early stage of infection.

METHODS: In this study, we recruited 214 patients with nonsevere COVID-19, 148 patients with severe COVID-19, 198 noninfected healthy participants, and 129 patients with non-COVID viral pneumonia. The participants' clinical information (ie, 23 features), lab testing results (ie, 10 features), and CT scans upon admission were acquired and used as 3 input feature modalities. To enable the late fusion of multimodal features, we constructed a deep learning model to extract a 10-feature high-level representation of CT scans. We then developed 3 machine learning models (ie, k-nearest neighbor, random forest, and support vector machine models) based on the combined 43 features from all 3 modalities to differentiate between the following 4 classes: nonsevere, severe, healthy, and viral pneumonia.

RESULTS: Multimodal features provided a substantial performance gain over any single feature modality. All 3 machine learning models had high overall prediction accuracy (95.4%-97.7%) and high class-specific prediction accuracy (90.6%-99.9%).

CONCLUSIONS: Compared to the existing binary classification benchmarks, which often focus on a single feature modality, this study's hybrid deep learning-machine learning framework provided a novel and effective breakthrough for clinical applications. Our findings, which come from a relatively large sample size, and our analytical workflow will supplement and assist with clinical decision support for current COVID-19 diagnostic methods and other clinical applications with high-dimensional multimodal biomedical features.
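The late-fusion step described in the abstract can be sketched in a few lines: concatenate the 23 clinical features, 10 lab features, and the 10-feature CT representation into a single 43-feature vector per patient, then train the three classical classifiers on it. This is a minimal illustration with synthetic random data standing in for the study's cohort and a random array standing in for the deep model's CT representation; it is not the authors' implementation.

```python
# Minimal late-fusion sketch (synthetic data, not the study's dataset).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 689  # 214 nonsevere + 148 severe + 198 healthy + 129 viral pneumonia

clinical = rng.normal(size=(n, 23))  # 23 clinical features
lab = rng.normal(size=(n, 10))       # 10 lab-testing features
ct_repr = rng.normal(size=(n, 10))   # stand-in for the deep model's
                                     # 10-feature CT representation

# Late fusion: combine all three modalities into 43 features per patient.
X = np.hstack([clinical, lab, ct_repr])
y = rng.integers(0, 4, size=n)  # 4 classes: nonsevere/severe/healthy/pneumonia

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "knn": KNeighborsClassifier(),
    "random_forest": RandomForestClassifier(random_state=0),
    "svm": SVC(),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")
```

With random labels the accuracies are near chance; the point is only the shape of the workflow, in which fusion happens at the feature level after each modality has been reduced to a fixed-length representation.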
DOI: 10.2196/25535
PubMedID: 33404516