Bernard Mawuli Cobbinah
Postdoctoral Scholar, Anesthesiology, Perioperative and Pain Medicine
Bio
Bernard Mawuli Cobbinah is a Postdoctoral Scholar at Stanford University in the Department of Anesthesiology, Perioperative and Pain Medicine, School of Medicine. He is passionate about the intersection of AI and medicine, focusing on developing robust and effective approaches to preventive and predictive healthcare. His research aims to deepen the understanding of high-dimensional, multi-omics medical data using advanced machine learning techniques. By exploring innovative ways to analyze these data, his work contributes to improved treatments and enhanced patient care. Through the analysis of large patient datasets, his goal is to build tools that empower clinicians to make more informed decisions, ultimately improving healthcare outcomes for all.
Prior to joining Stanford, he pioneered robust federated learning techniques for evolving data streams and developed methods to reduce multi-center MRI variability in diagnosing brain disorders.
Professional Education
-
Doctor of Science, University of Electronic Science and Technology of China (2023)
-
Master of Engineering, University of Electronic Science and Technology of China (2019)
-
Bachelor of Science, Kwame Nkrumah University of Science and Technology (2015)
All Publications
-
Federated Fusion of Magnified Histopathological Images for Breast Tumor Classification in the Internet of Medical Things.
IEEE Journal of Biomedical and Health Informatics
2024; 28 (6): 3389-3400
Abstract
Breast tumor detection and classification on the Internet of Medical Things (IoMT) can be automated with the potential of Artificial Intelligence (AI). Deep learning models rely on large datasets; however, challenges arise when dealing with sensitive medical data. Restrictions on sharing these medical data result in limited publicly available datasets, thereby impacting the performance of deep learning models. To address this issue, we propose an approach that combines different magnification factors of histopathological images using a residual network and information fusion in Federated Learning (FL). FL is employed to preserve the privacy of patient data while enabling the creation of a global model. Using the BreakHis dataset, we compare the performance of FL with centralized learning (CL). We also performed visualizations for explainable AI. The final models become available for deployment on internal IoMT systems in healthcare institutions for timely diagnosis and treatment. Our results demonstrate that the proposed approach outperforms existing works in the literature on multiple metrics.
View details for DOI 10.1109/JBHI.2023.3256974
View details for PubMedID 37028353
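The privacy-preserving aggregation step at the heart of this line of work can be illustrated with a minimal federated averaging (FedAvg-style) sketch: each institution trains locally and only model weights, never patient data, are combined into a global model. This is a generic illustration, not the paper's actual implementation; the function name `fedavg` and the hospital weight values are hypothetical.

```python
# Hypothetical FedAvg-style aggregation: combine client model weights,
# weighted by each client's local dataset size. Weights are plain lists
# of floats for illustration.

def fedavg(client_weights, client_sizes):
    """Return the size-weighted average of the clients' model weights."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    global_w = [0.0] * n_params
    for w, n in zip(client_weights, client_sizes):
        for i in range(n_params):
            global_w[i] += (n / total) * w[i]
    return global_w

# Example: two hospitals with different amounts of local data.
hospital_a = [0.2, 0.4]   # local weights after one round of training
hospital_b = [0.6, 0.8]
global_model = fedavg([hospital_a, hospital_b], client_sizes=[100, 300])
# hospital_b contributes 3x the data, so the average leans toward it.
```

In a real round, the server would broadcast `global_model` back to the clients for the next round of local training, so no raw images ever leave an institution.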
-
FedSULP: A communication-efficient federated learning framework with selective updating and loss penalization
Information Sciences
2023; 651
View details for DOI 10.1016/j.ins.2023.119725
View details for Web of Science ID 001088395500001
-
FedStream: Prototype-Based Federated Learning on Distributed Concept-Drifting Data Streams
IEEE Transactions on Systems, Man, and Cybernetics: Systems
2023; 53 (11): 7112-7124
View details for DOI 10.1109/TSMC.2023.3293462
View details for Web of Science ID 001043258800001
-
Semi-supervised federated learning on evolving data streams
Information Sciences
2023; 643
View details for DOI 10.1016/j.ins.2023.119235
View details for Web of Science ID 001012335200001
-
Reducing variations in multi-center Alzheimer's disease classification with convolutional adversarial autoencoder.
Medical Image Analysis
2022; 82: 102585
Abstract
Based on brain magnetic resonance imaging (MRI), multiple variations ranging from MRI scanners to center-specific parameter settings, imaging protocols, and brain region-of-interest (ROI) definitions pose a major challenge for multi-center Alzheimer's disease characterization and classification. Existing approaches to reduce such variations require intricate multi-step, often manual preprocessing pipelines, including skull stripping, segmentation, registration, cortical reconstruction, and ROI outlining. Such procedures are time-consuming and, more importantly, tend to be user biased. In contrast to these costly and biased preprocessing pipelines, the question arises whether a deep learning model can be designed to automatically reduce these variations across multiple centers for Alzheimer's disease classification. In this study, we used T1- and T2-weighted structural MRI from the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, based on three groups of 375 subjects each: patients with Alzheimer's disease (AD) dementia, patients with mild cognitive impairment (MCI), and healthy controls (HC). To test our approach, we defined AD classification as classifying an individual's structural image into one of the three group labels. We first introduced a convolutional adversarial autoencoder (CAAE) to reduce the variations existing in multi-center raw MRI scans by automatically registering them into a common aligned space. Afterward, a convolutional residual soft attention network (CRAT) was proposed for AD classification. Canonical classification procedures demonstrated that our model achieved accuracies of 91.8%, 90.05%, and 88.10% for the 2-way classification tasks on the raw aligned MRI scans: AD vs. HC, AD vs. MCI, and MCI vs. HC, respectively. Thus, our automated approach achieves comparable or even better classification performance than many baselines with dedicated conventional preprocessing pipelines. Furthermore, the uncovered brain hotspots, i.e., the hippocampus, amygdala, and temporal pole, are consistent with previous studies.
View details for DOI 10.1016/j.media.2022.102585
View details for PubMedID 36057187
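The center-variation reduction idea can be summarized with a generic adversarial autoencoder objective (a textbook-style formulation, not necessarily the paper's exact losses): an encoder $E$ and decoder $G$ minimize reconstruction error, while a discriminator $C$ tries to predict the acquisition center $c$ from the latent code and $E$ is trained to make that prediction fail:

```latex
\mathcal{L}_{\mathrm{rec}} = \mathbb{E}_{x}\,\big\lVert x - G(E(x)) \big\rVert_2^2,
\qquad
\mathcal{L}_{\mathrm{adv}} = \mathbb{E}_{(x,c)}\big[\log C_c(E(x))\big],
\qquad
\min_{E,G}\; \max_{C}\;\; \mathcal{L}_{\mathrm{rec}} + \lambda\,\mathcal{L}_{\mathrm{adv}}
```

Here $C_c(\cdot)$ is the discriminator's predicted probability for center $c$, and $\lambda$ balances reconstruction fidelity against center invariance of the latent space.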
-
Learning High-Dimensional Evolving Data Streams With Limited Labels.
IEEE Transactions on Cybernetics
2022; 52 (11): 11373-11384
Abstract
In the context of streaming data, learning algorithms often need to confront several unique challenges, such as concept drift, label scarcity, and high dimensionality. Several concept drift-aware data stream learning algorithms have been proposed to tackle these issues over the past decades. However, most existing algorithms use a supervised learning framework and require all true class labels to update their models. Unfortunately, in the streaming environment, requiring all labels is infeasible and unrealistic in many real-world applications; learning data streams with minimal labels is therefore a more practical scenario. Considering the curse of dimensionality and label scarcity, in this article we present a new semi-supervised learning technique for streaming data. To address the curse of dimensionality, we employ a denoising autoencoder to transform the high-dimensional feature space into a reduced, compact, and more informative feature representation. Furthermore, we use a cluster-and-label technique to reduce the dependency on true class labels. We employ a synchronization-based dynamic clustering technique to summarize the streaming data into a set of dynamic microclusters that are further used for classification. In addition, we employ a disagreement-based learning method to cope with concept drift. Extensive experiments on many real-world datasets demonstrate the superior performance of the proposed method compared with several state-of-the-art methods.
View details for DOI 10.1109/TCYB.2021.3070420
View details for PubMedID 34033560
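The cluster-and-label idea in the abstract above can be sketched in a few lines: microclusters summarize the stream as incrementally updated centroids, a cluster inherits a label from the few labeled points it absorbs, and new points are classified by the nearest labeled centroid. This is a simplified illustration under assumed names (`MicroCluster`, `update`, `classify`) and a fixed absorption radius, not the paper's synchronization-based algorithm.

```python
# Hypothetical cluster-and-label sketch for a semi-supervised stream.
import math

class MicroCluster:
    def __init__(self, point, label=None):
        self.centroid = list(point)
        self.count = 1
        self.label = label

    def absorb(self, point, label=None):
        """Fold a new point into the running centroid; keep any label seen."""
        self.count += 1
        for i, x in enumerate(point):
            self.centroid[i] += (x - self.centroid[i]) / self.count
        if label is not None:
            self.label = label

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def update(clusters, point, label=None, radius=1.0):
    """Absorb the point into the nearest cluster, or start a new one."""
    if clusters:
        nearest = min(clusters, key=lambda c: dist(c.centroid, point))
        if dist(nearest.centroid, point) <= radius:
            nearest.absorb(point, label)
            return
    clusters.append(MicroCluster(point, label))

def classify(clusters, point):
    """Predict using the nearest microcluster that has acquired a label."""
    labeled = [c for c in clusters if c.label is not None]
    return min(labeled, key=lambda c: dist(c.centroid, point)).label

# A stream where only two of four points arrive labeled.
clusters = []
update(clusters, [0.0, 0.0], label="A")
update(clusters, [0.2, 0.1])            # unlabeled; joins the "A" cluster
update(clusters, [5.0, 5.0], label="B")
update(clusters, [5.1, 4.9])
prediction = classify(clusters, [0.1, 0.2])
```

A full stream learner would additionally decay or replace stale microclusters so the summary tracks concept drift, which this sketch omits.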