Muhammad Usman
Postdoctoral Scholar, Anesthesiology, Perioperative and Pain Medicine
All Publications
-
LDMRes-Net: A Lightweight Neural Network for Efficient Medical Image Segmentation on IoT and Edge Devices
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
2024; 28 (7): 3860-3871
Abstract
Conventional U-Net-based models struggle to meet the speed and efficiency demands of real-time clinical applications such as disease monitoring, radiation therapy, and image-guided surgery. In this study, we propose the Lightweight Dual Multiscale Residual Block-based Convolutional Neural Network (LDMRes-Net), tailored for medical image segmentation on IoT and edge platforms. LDMRes-Net overcomes these limitations with a remarkably low number of learnable parameters (0.072 M), making it highly suitable for resource-constrained devices. The model's key innovation lies in its dual multiscale residual block architecture, which enables the extraction of refined features at multiple scales, enhancing overall segmentation performance. To further optimize efficiency, the number of filters is carefully selected to prevent overlap, reduce training time, and improve computational efficiency. The study includes comprehensive evaluations focusing on the segmentation of retinal vessels and hard exudates, both crucial for diagnosis and treatment in ophthalmology. The results demonstrate the robustness, generalizability, and high segmentation accuracy of LDMRes-Net, positioning it as an efficient tool for accurate and rapid medical image segmentation in diverse clinical applications, particularly on IoT and edge platforms. Such advances hold significant promise for improving healthcare outcomes and enabling real-time medical image analysis in resource-limited settings.
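The 0.072 M learnable-parameter figure above is governed largely by the filter count chosen for each convolutional layer. As a rough back-of-the-envelope sketch (the layer widths below are hypothetical, not taken from LDMRes-Net):

```python
def conv2d_params(in_ch, out_ch, kernel=3, bias=True):
    """Learnable parameters of a standard 2D convolution:
    one kernel x kernel x in_ch weight tensor per output filter,
    plus an optional bias term per filter."""
    return kernel * kernel * in_ch * out_ch + (out_ch if bias else 0)

# Hypothetical three-layer stacks: doubling the filter widths roughly
# quadruples the parameter count, which is why filter selection
# dominates an edge-deployment parameter budget.
slim = sum(conv2d_params(i, o) for i, o in [(3, 16), (16, 16), (16, 32)])
wide = sum(conv2d_params(i, o) for i, o in [(3, 32), (32, 32), (32, 64)])
print(slim, wide)  # 7408 28640
```

The slim stack stays under 10 k parameters while the wide one is nearly four times larger, illustrating how aggressively filter counts must be trimmed to reach a 0.072 M total.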
DOI: 10.1109/JBHI.2023.3331278
Web of Science ID: 001263692800037
PubMedID: 37938951
-
Intelligent healthcare system for IoMT-integrated sonography: Leveraging multi-scale self-guided attention networks and dynamic self-distillation
INTERNET OF THINGS
2024; 25
DOI: 10.1016/j.iot.2024.101065
Web of Science ID: 001167576000001
-
MEDS-Net: Multi-encoder based self-distilled network with bidirectional maximum fusion for nodule detection
ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE
2024; 129
DOI: 10.1016/j.engappai.2023.107597
Web of Science ID: 001132366100001
-
SSMD-UNet: semi-supervised multi-task decoders network for diabetic retinopathy segmentation.
Scientific reports
2023; 13 (1): 9087
Abstract
Diabetic retinopathy (DR) is a diabetes complication that can cause vision loss due to damage to the blood vessels in the retina. Early retinal screening can avoid the severe consequences of DR and enable timely treatment. Researchers are therefore developing automated deep learning-based DR segmentation tools that use retinal fundus images to help ophthalmologists with DR screening and early diagnosis. However, recent studies have been unable to design accurate models due to the unavailability of large training datasets with consistent and fine-grained annotations. To address this problem, we propose a semi-supervised multitask learning approach that exploits widely available unlabelled data (i.e., Kaggle-EyePACS) to improve DR segmentation performance. The proposed model consists of a novel multi-decoder architecture and involves both unsupervised and supervised learning phases. The model is trained on an unsupervised auxiliary task to learn effectively from the additional unlabelled data and improve performance on the primary task of DR segmentation. The proposed technique is rigorously evaluated on two publicly available datasets (i.e., FGADR and IDRiD), and the results show that it not only outperforms existing state-of-the-art techniques but also exhibits improved generalisation and robustness in cross-dataset evaluation.
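The two training phases above ultimately combine a supervised segmentation loss with an unsupervised auxiliary loss on unlabelled data. As a generic illustration of that combination (the weighting scheme and values here are assumptions, not from the paper):

```python
def multitask_loss(seg_loss, aux_loss, weight=0.5):
    """Weighted sum of the primary (supervised segmentation) loss and
    the auxiliary (unsupervised) loss; the auxiliary term lets
    unlabelled images shape the shared encoder features."""
    return seg_loss + weight * aux_loss

# Hypothetical per-batch loss values for illustration only.
print(multitask_loss(1.0, 0.5))  # 1.0 + 0.5 * 0.5 = 1.25
```

Setting `weight` to zero recovers plain supervised training, which makes the contribution of the unlabelled data easy to ablate.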
DOI: 10.1038/s41598-023-36311-0
PubMedID: 37277554
PubMedCentralID: PMC10240139
-
MTSS-AAE: Multi-task semi-supervised adversarial autoencoding for COVID-19 detection based on chest X-ray images.
Expert systems with applications
2023; 216: 119475
Abstract
Efficient diagnosis of COVID-19 plays an important role in preventing the spread of the disease. There are three major modalities for diagnosing COVID-19: polymerase chain reaction tests, computed tomography scans, and chest X-rays (CXRs). Among these, diagnosis using CXRs is the most economical; however, it requires extensive human expertise, which can undermine its cost-effectiveness. Computer-aided diagnosis with deep learning has the potential to detect COVID-19 in CXRs accurately without human intervention while preserving cost-effectiveness. Many efforts have been made to develop a highly accurate and robust solution; however, due to the limited amount of labeled data, existing solutions have been evaluated on small test sets. In this work, we propose a solution to this problem: a multi-task semi-supervised learning (MTSSL) framework that utilizes auxiliary tasks for which adequate data are publicly available. Specifically, we use Pneumonia, Lung Opacity, and Pleural Effusion as additional tasks from the CheXpert dataset. We show that the primary task of COVID-19 detection, for which only limited labeled data are available, can be improved by using this additional data. We further employ an adversarial autoencoder (AAE), which has a strong capability to learn powerful and discriminative features, within our MTSSL framework to maximize the benefit of multi-task learning. The supervised classification networks combined with the unsupervised AAE enable semi-supervised learning, adding a discriminative part to the unsupervised AAE training pipeline. This semi-supervised learning improves the generalization of our framework and thus enhances COVID-19 detection performance. The proposed model is rigorously evaluated on the largest publicly available COVID-19 dataset, and experimental results show that it attains state-of-the-art performance.
DOI: 10.1016/j.eswa.2022.119475
PubMedID: 36619348
PubMedCentralID: PMC9810379
-
Selective Deeply Supervised Multi-Scale Attention Network for Brain Tumor Segmentation.
Sensors (Basel, Switzerland)
2023; 23 (4)
Abstract
Brain tumors are among the deadliest forms of cancer, characterized by abnormal proliferation of brain cells. While early identification of brain tumors can greatly aid in their therapy, the process of manual segmentation performed by expert doctors, which is often time-consuming, tedious, and prone to human error, can act as a bottleneck in the diagnostic process. This motivates the development of automated algorithms for brain tumor segmentation. However, accurately segmenting the enhanced and core tumor regions is complicated due to high levels of inter- and intra-tumor heterogeneity in terms of texture, morphology, and shape. This study proposes a fully automatic method called the selective deeply supervised multi-scale attention network (SDS-MSA-Net) for segmenting brain tumor regions using a multi-scale attention network with novel selective deep supervision (SDS) mechanisms for training. The method utilizes a 3D input composed of five consecutive slices, in addition to a 2D slice, to maintain sequential information. The proposed multi-scale architecture includes two encoding units to extract meaningful global and local features from the 3D and 2D inputs, respectively. These coarse features are then passed through attention units to filter out redundant information by assigning lower weights. The refined features are fed into a decoder block, which upscales the features at various levels while learning patterns relevant to all tumor regions. The SDS block is introduced to immediately upscale features from intermediate layers of the decoder, with the aim of producing segmentations of the whole, enhanced, and core tumor regions. The proposed framework was evaluated on the BraTS2020 dataset and showed improved performance in brain tumor region segmentation, particularly in the segmentation of the core and enhancing tumor regions, demonstrating the effectiveness of the proposed approach. Our code is publicly available.
DOI: 10.3390/s23042346
PubMedID: 36850942
PubMedCentralID: PMC9964702
-
DEHA-Net: A Dual-Encoder-Based Hard Attention Network with an Adaptive ROI Mechanism for Lung Nodule Segmentation.
Sensors (Basel, Switzerland)
2023; 23 (4)
Abstract
Measuring pulmonary nodules accurately can aid the early diagnosis of lung cancer, which can increase the survival rate among patients. Numerous techniques for lung nodule segmentation have been developed; however, most of them either rely on a 3D volumetric region of interest (VOI) provided by radiologists or use a fixed 2D region of interest (ROI) for all slices of the computed tomography (CT) scan. These methods only consider the presence of nodules within the given VOI, which limits the network's ability to detect nodules outside the VOI and can also include unnecessary structures in the VOI, leading to potentially inaccurate segmentation. In this work, we propose a novel approach for 3D lung nodule segmentation that uses a 2D ROI provided by a radiologist or a computer-aided detection (CADe) system. Concretely, we developed a two-stage lung nodule segmentation technique. First, we designed a dual-encoder-based hard attention network (DEHA-Net), in which a full axial slice of the CT scan, along with an ROI mask, is taken as input to segment the lung nodule in the given slice. The output of DEHA-Net, the segmentation mask of the lung nodule, is passed to an adaptive region of interest (A-ROI) algorithm that automatically generates ROI masks for the surrounding slices, eliminating the need for any further input from radiologists. After extracting the segmentation along the axial axis, in the second stage we further investigate the lung nodule along the sagittal and coronal views by employing DEHA-Net. All the estimated masks are fed into a consensus module to obtain the final volumetric segmentation of the nodule. The proposed scheme was rigorously evaluated on the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset, and an extensive analysis of the results was performed. The quantitative analysis showed that the proposed method not only improves on existing state-of-the-art methods in terms of Dice score but is also significantly robust against different types, shapes, and dimensions of lung nodules. The proposed framework achieved an average Dice score, sensitivity, and positive predictive value of 87.91%, 90.84%, and 89.56%, respectively.
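The Dice score, sensitivity, and positive predictive value reported above are standard overlap metrics between a predicted and a reference binary mask. A minimal pure-Python sketch (not the paper's implementation), with masks represented as flat 0/1 lists:

```python
def overlap_metrics(pred, truth):
    """Dice, sensitivity (recall), and positive predictive value
    (precision) for two equal-length binary masks."""
    tp = sum(1 for p, t in zip(pred, truth) if p and t)
    fp = sum(1 for p, t in zip(pred, truth) if p and not t)
    fn = sum(1 for p, t in zip(pred, truth) if t and not p)
    dice = 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0
    sens = tp / (tp + fn) if tp + fn else 1.0
    ppv = tp / (tp + fp) if tp + fp else 1.0
    return dice, sens, ppv

# Toy masks: 2 true positives, 1 false positive, 1 false negative.
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 1, 0, 0]
print(overlap_metrics(pred, truth))  # (2/3, 2/3, 2/3)
```

In the volumetric setting the same counts are simply accumulated over all voxels of the 3D masks before computing the ratios.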
DOI: 10.3390/s23041989
PubMedID: 36850583
PubMedCentralID: PMC9960760
-
Densely attention mechanism based network for COVID-19 detection in chest X-rays.
Scientific reports
2023; 13 (1): 261
Abstract
Automatic COVID-19 detection using chest X-rays (CXRs) can play a vital part in large-scale screening and epidemic control. However, the radiographic features of CXRs have varied composite appearances, for instance, diffuse reticular-nodular opacities and widespread ground-glass opacities, which makes automatic recognition of COVID-19 from CXR imaging a challenging task. To overcome this, we propose a densely attention mechanism-based network (DAM-Net) for COVID-19 detection in CXRs. DAM-Net adaptively extracts spatial features of COVID-19 from the infected regions with various appearances and scales. DAM-Net is composed of dense layers, channel attention layers, an adaptive downsampling layer, and a label smoothing regularization loss function. The dense layers extract spatial features, and the channel attention approach adaptively weights the major feature channels while suppressing redundant feature representations. We use a cross-entropy loss with label smoothing to limit the effect of inter-class similarity on the feature representations. The network is trained and tested on the largest publicly available dataset, i.e., COVIDx, consisting of 17,342 CXRs. Experimental results demonstrate that the proposed approach obtains state-of-the-art results for COVID-19 classification, with an accuracy of 97.22%, a sensitivity of 96.87%, a specificity of 99.12%, and a precision of 95.54%.
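Label smoothing, as used in the loss above, replaces the one-hot target with a slightly softened distribution so that inter-class similarity does not push the network toward over-confident features. A generic sketch of the standard formulation (the smoothing factor here is illustrative, not taken from the paper):

```python
import math

def smooth_labels(one_hot, eps=0.1):
    """Soften a one-hot target: the true class keeps 1 - eps of the
    mass, and eps is spread uniformly over all classes."""
    k = len(one_hot)
    return [y * (1 - eps) + eps / k for y in one_hot]

def cross_entropy(target, probs):
    """Cross-entropy between a (smoothed) target and predicted probs."""
    return -sum(t * math.log(p) for t, p in zip(target, probs) if t > 0)

target = smooth_labels([0.0, 1.0, 0.0], eps=0.1)
print(target)  # [0.0333..., 0.9333..., 0.0333...]
print(cross_entropy(target, [0.05, 0.90, 0.05]))
```

Because every class now carries nonzero target mass, the loss penalizes predictions that collapse all probability onto a single class, which is the regularizing effect the abstract describes.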
DOI: 10.1038/s41598-022-27266-9
PubMedID: 36609667
PubMedCentralID: PMC9816547
-
Dual-Stage Deeply Supervised Attention-Based Convolutional Neural Networks for Mandibular Canal Segmentation in CBCT Scans.
Sensors (Basel, Switzerland)
2022; 22 (24)
Abstract
Accurate segmentation of the mandibular canals in the lower jaw is important in dental implantology. Medical experts manually determine the implant position and dimensions from 3D CT images to avoid damaging the mandibular nerve inside the canal. In this paper, we propose a novel dual-stage deep learning-based scheme for automatic segmentation of the mandibular canal. We first enhance the CBCT scans with a novel histogram-based dynamic windowing scheme, which improves the visibility of the mandibular canals. After enhancement, we designed a 3D deeply supervised attention UNet architecture for localizing the volumes of interest (VOIs) that contain the mandibular canals (i.e., the left and right canals). Finally, we employed a Multi-Scale input Residual UNet (MSiR-UNet) architecture to accurately segment the mandibular canals within the VOIs. The proposed method was rigorously evaluated on 500 CBCT scans from our dataset and 15 scans from a public dataset. The results demonstrate that our technique improves the existing performance of mandibular canal segmentation to a clinically acceptable range. Moreover, it is robust to the type of CBCT scan in terms of field of view.
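The histogram-based dynamic windowing step above adapts the intensity window to each scan's own histogram before segmentation. One common percentile-based realization of that idea looks like the following (the cutoffs and sample intensities are assumptions; the paper's exact scheme may differ):

```python
def dynamic_window(voxels, lo_pct=5, hi_pct=95):
    """Clip intensities to this scan's [lo_pct, hi_pct] percentile
    range, then rescale to [0, 1], so the window adapts to each
    scan's histogram instead of using fixed bounds."""
    s = sorted(voxels)
    lo = s[int(len(s) * lo_pct / 100)]
    hi = s[min(int(len(s) * hi_pct / 100), len(s) - 1)]
    span = (hi - lo) or 1  # avoid division by zero on flat scans
    return [(min(max(v, lo), hi) - lo) / span for v in voxels]

# Hypothetical CT-like intensities spanning air to metal artifact.
scan = [-1000, -200, 0, 150, 300, 500, 800, 3000]
print(dynamic_window(scan))
```

Because the bounds come from the scan's own histogram, low-contrast canal regions are stretched across the output range rather than being compressed by outlier intensities.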
DOI: 10.3390/s22249877
PubMedID: 36560251
PubMedCentralID: PMC9785834
-
Cascade multiscale residual attention CNNs with adaptive ROI for automatic brain tumor segmentation
INFORMATION SCIENCES
2022; 608: 1541-1556
DOI: 10.1016/j.ins.2022.07.044
Web of Science ID: 000834603000003
-
Evaluation of the feasibility of explainable computer-aided detection of cardiomegaly on chest radiographs using deep learning.
Scientific reports
2021; 11 (1): 16885
Abstract
We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior-anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
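The cardiothoracic ratio (CTR) above is conventionally the maximal transverse heart width divided by the maximal internal thoracic width, and both widths can be read directly off the heart and lung segmentation masks. A simplified sketch on toy binary masks (hypothetical data, not the paper's code):

```python
def max_width(mask):
    """Widest horizontal extent (in pixels) of a binary 2D mask,
    taken over all rows."""
    best = 0
    for row in mask:
        cols = [i for i, v in enumerate(row) if v]
        if cols:
            best = max(best, cols[-1] - cols[0] + 1)
    return best

def cardiothoracic_ratio(heart_mask, thorax_mask):
    return max_width(heart_mask) / max_width(thorax_mask)

# Single-row toy masks: heart spans 3 px, thorax spans 7 px.
heart  = [[0, 0, 1, 1, 1, 0, 0, 0]]
thorax = [[1, 1, 1, 1, 1, 1, 1, 0]]
ctr = cardiothoracic_ratio(heart, thorax)
print(ctr)  # 3/7; CTR > 0.5 is the usual cardiomegaly threshold
```

This is why the segmentation quality at the heart borders matters so much: any lesion obscuring those borders directly corrupts the measured widths, matching the subgroup findings above.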
DOI: 10.1038/s41598-021-96433-1
PubMedID: 34413405
PubMedCentralID: PMC8376868
-
Leveraging Data Science to Combat COVID-19: A Comprehensive Review.
IEEE transactions on artificial intelligence
2020; 1 (1): 85-103
Abstract
COVID-19, an infectious disease caused by the SARS-CoV-2 virus, was declared a pandemic by the World Health Organisation (WHO) in March 2020. By mid-August 2020, more than 21 million people had tested positive worldwide. Infections have been growing rapidly, and tremendous efforts are being made to fight the disease. In this paper, we attempt to systematise the various COVID-19 research activities leveraging data science, where we define data science broadly to encompass the various methods and tools, including those from artificial intelligence (AI), machine learning (ML), statistics, modeling, simulation, and data visualization, that can be used to store, process, and extract insights from data. In addition to reviewing the rapidly growing body of recent research, we survey public datasets and repositories that can be used for further work to track COVID-19 spread and mitigation strategies. As part of this, we present a bibliometric analysis of the papers produced in this short span of time. Finally, building on these insights, we highlight common challenges and pitfalls observed across the surveyed works. We also created a live resource repository at https://github.com/Data-Science-and-COVID-19/Leveraging-Data-Science-To-Combat-COVID-19-A-Comprehensive-Review that we intend to keep updated with the latest resources, including new papers and datasets.
DOI: 10.1109/TAI.2020.3020521
PubMedID: 37982070
PubMedCentralID: PMC8545032
-
Volumetric lung nodule segmentation using adaptive ROI with multi-view residual learning.
Scientific reports
2020; 10 (1): 12839
Abstract
Accurate quantification of pulmonary nodules can greatly assist the early diagnosis of lung cancer, enhancing patients' survival chances. A number of nodule segmentation techniques have been proposed that either rely on a radiologist-provided 3D volume of interest (VOI) or use a constant region of interest (ROI) for all slices; however, these techniques can only investigate nodule voxels within the given VOI. Such approaches prevent the model from investigating nodule presence outside the given VOI and also include redundant (non-nodule) structures in the VOI, which limits segmentation accuracy. In this work, a novel semi-automated approach for 3D segmentation of lung nodules in computed tomography scans is proposed. The technique proceeds in two stages. In the first stage, a 2D ROI containing the nodule is provided as input for a patch-wise exploration along the axial axis using a novel adaptive ROI algorithm. This strategy enables dynamic selection of the ROI in the surrounding slices to investigate the presence of the nodule using a deep Residual U-Net architecture, and it provides the initial estimate of the nodule used to extract the VOI. In the second stage, the extracted VOI is further explored along the coronal and sagittal axes, in a patch-wise fashion, with Residual U-Nets. All the estimated masks are then fed into a consensus module to produce the final volumetric segmentation of the nodule. The algorithm is rigorously evaluated on the LIDC-IDRI dataset, the largest publicly available dataset. The proposed approach achieved an average Dice score of 87.5%, significantly higher than existing state-of-the-art techniques.
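The adaptive ROI idea above can be pictured as propagating a bounding box: the nodule mask predicted on one slice defines, after a safety margin, the ROI used on the neighboring slice, and propagation stops once a slice contains no nodule. A minimal sketch of that mechanism (the margin and toy mask are illustrative, not from the paper):

```python
def adaptive_roi(mask, margin=2):
    """Bounding box (row_min, row_max, col_min, col_max) of a binary
    2D mask, expanded by a margin and clipped to the image, to serve
    as the ROI for the adjacent slice."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not rows:
        return None  # no nodule found on this slice; stop propagating
    h, w = len(mask), len(mask[0])
    return (max(rows[0] - margin, 0), min(rows[-1] + margin, h - 1),
            max(min(cols) - margin, 0), min(max(cols) + margin, w - 1))

# Toy 6x6 slice with a small nodule around rows 2-3, cols 2-3.
mask = [[0] * 6 for _ in range(6)]
mask[2][2] = mask[2][3] = mask[3][2] = 1
print(adaptive_roi(mask, margin=1))  # (1, 4, 1, 4)
```

Letting the ROI follow the segmentation in this way is what frees the method from a fixed, radiologist-supplied VOI while still keeping non-nodule structures out of the crop.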
DOI: 10.1038/s41598-020-69817-y
PubMedID: 32732963
PubMedCentralID: PMC7393083
-
Retrospective Motion Correction in Multishot MRI using Generative Adversarial Network.
Scientific reports
2020; 10 (1): 4786
Abstract
Multishot Magnetic Resonance Imaging (MRI) is a promising data acquisition technique that can produce a high-resolution image with relatively less acquisition time than standard spin echo. The downside of multishot MRI is that it is very sensitive to subject motion: even small movements during the scan can produce artifacts in the final magnetic resonance (MR) image, which may result in a misdiagnosis. Numerous efforts have focused on addressing this issue; however, all of these proposals are limited in how much motion they can correct and require excessive computational time. In this paper, we propose a novel generative adversarial network (GAN)-based conjugate gradient SENSE (CG-SENSE) reconstruction framework for motion correction in multishot MRI. First, CG-SENSE reconstruction is employed to reconstruct an image from the motion-corrupted k-space data; the proposed GAN-based framework is then applied to correct the motion artifacts. The proposed method has been rigorously evaluated on synthetically corrupted data with varying degrees of motion, numbers of shots, and encoding trajectories. Our analyses (both quantitative and qualitative/visual) establish that the proposed method is robust and reduces the computational time several-fold compared with the current state-of-the-art technique.
DOI: 10.1038/s41598-020-61705-9
PubMedID: 32179823
PubMedCentralID: PMC7075875
-
Phonocardiographic Sensing Using Deep Learning for Abnormal Heartbeat Detection
IEEE SENSORS JOURNAL
2018; 18 (22): 9393-9400
DOI: 10.1109/JSEN.2018.2870759
Web of Science ID: 000448514000042
-
Cross Lingual Speech Emotion Recognition: Urdu vs. Western Languages
IEEE. 2018: 88-93
DOI: 10.1109/FIT.2018.00023
Web of Science ID: 000458430500016