Postdoctoral Scholar at Stanford University | AI in Medical Imaging | GoHawks
Master of Science, Suzhou University (2018)
Bachelor of Science, Suzhou University (2015)
Doctor of Philosophy (Ph.D.), Electrical and Computer Engineering, University of Iowa (2023)
Mirabela Rusu, Postdoctoral Faculty Sponsor
Assisted annotation in Deep LOGISMOS: Simultaneous multi-compartment 3D MRI segmentation of calf muscles
Automated segmentation of individual calf muscle compartments in 3D MR images is gaining importance in diagnosing muscle disease, monitoring its progression, and predicting the disease course. Although deep convolutional neural networks have ushered in a revolution in medical image segmentation, achieving clinically acceptable results remains challenging, and the limited availability of sufficiently large annotated datasets still restricts their applicability. In this paper, we present a novel approach combining deep learning and graph optimization in the paradigm of assisted annotation for solving general segmentation problems in 3D, 4D, and generally n-D with limited annotation cost. Deep LOGISMOS combines deep-learning-based pre-segmentation of objects of interest, provided by our convolutional neural network FilterNet+, with our 3D multi-object LOGISMOS framework (layered optimal graph image segmentation of multiple objects and surfaces), which uses newly designed trainable machine-learned cost functions. In the paradigm of assisted annotation, multi-object JEI for efficient editing of automated Deep LOGISMOS segmentations was employed to form a new, larger training set with a significant decrease in manual tracing effort. We evaluated our method on 350 lower-leg (left/right) T1-weighted MR images from 93 subjects (47 healthy, 46 patients with muscular morbidity) by fourfold cross-validation. Compared with the fully manual annotation approach, assisted annotation reduced the annotation cost by 95%, from 8 h to 25 min in this study. The experimental results showed an average Dice similarity coefficient (DSC) of 96.56 ± 0.26% and an average absolute surface positioning error of 0.63 pixels (0.44 mm) for the five 3D muscle compartments of each leg.
These results significantly improve on our previously reported method and outperform the state-of-the-art nnUNet method. Our proposed approach not only dramatically reduces the expert's annotation effort but also significantly improves segmentation performance compared to nnUNet. The notable performance improvements suggest the clinical-use potential of our new, fully automated, simultaneous segmentation of calf muscle compartments.
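The headline accuracy metric above, the Dice similarity coefficient, measures overlap between a predicted and a reference segmentation. A minimal sketch of the standard DSC computation on binary masks (an illustration, not the paper's implementation):

```python
def dice_coefficient(pred, truth):
    # Dice similarity coefficient (DSC) between two binary masks,
    # given as flat sequences of 0/1 voxel labels:
    # DSC = 2 * |pred ∩ truth| / (|pred| + |truth|)
    inter = sum(1 for p, t in zip(pred, truth) if p and t)
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / total if total else 1.0
```

In a multi-compartment setting like the one above, the DSC would be computed per compartment (one binary mask per muscle group) and then averaged.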
View details for DOI 10.1002/mp.16284
View details for Web of Science ID 000937329900001
View details for PubMedID 36750977
KCB-Net: A 3D knee cartilage and bone segmentation network via sparse annotation
MEDICAL IMAGE ANALYSIS
2022; 82: 102574
Knee cartilage and bone segmentation is critical for physicians to analyze and diagnose articular damage and knee osteoarthritis (OA). Deep learning (DL) methods for medical image segmentation have largely outperformed traditional methods, but they often need large amounts of annotated data for model training, which is very costly and time-consuming for medical experts, especially on 3D images. In this paper, we report a new knee cartilage and bone segmentation framework, KCB-Net, for 3D MR images based on sparse annotation. KCB-Net selects a small subset of slices from 3D images for annotation, and seeks to bridge the performance gap between sparse annotation and full annotation. Specifically, it first identifies a subset of the most effective and representative slices with an unsupervised scheme; it then trains an ensemble model using the annotated slices; next, it self-trains the model using 3D images containing pseudo-labels generated by the ensemble method and improved by a bi-directional hierarchical earth mover's distance (bi-HEMD) algorithm; finally, it fine-tunes the segmentation results using the primal-dual interior point method (IPM). Experiments on four 3D MR knee joint datasets (the SKI10 dataset, OAI ZIB dataset, Iowa dataset, and iMorphics dataset) show that our new framework outperforms state-of-the-art methods on full annotation, and yields high quality results for small annotation ratios even as low as 10%.
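The self-training step above relies on pseudo-labels produced by the ensemble. A common way to build such pseudo-labels is to average the ensemble members' per-voxel probabilities and keep only confident voxels; the sketch below illustrates that generic idea (the thresholding scheme and 0.9 cut-off are assumptions for illustration, not KCB-Net's actual procedure, which further refines labels with bi-HEMD):

```python
def ensemble_pseudo_labels(prob_maps, threshold=0.9):
    # prob_maps: one flat list of foreground probabilities per ensemble member.
    # Average the members' per-voxel probabilities; keep only confident voxels
    # as pseudo-labels (1 = foreground, 0 = background, None = ignored voxel
    # that would be excluded from the self-training loss).
    n = len(prob_maps)
    labels = []
    for voxel_probs in zip(*prob_maps):
        p = sum(voxel_probs) / n
        if p >= threshold:
            labels.append(1)
        elif p <= 1.0 - threshold:
            labels.append(0)
        else:
            labels.append(None)
    return labels
```

The segmentation model is then retrained on the pseudo-labeled volumes, typically for several self-training rounds.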
View details for DOI 10.1016/j.media.2022.102574
View details for Web of Science ID 000861119300002
View details for PubMedID 36126403
CMC-Net: 3D calf muscle compartment segmentation with sparse annotation
MEDICAL IMAGE ANALYSIS
2022; 79: 102460
Accurate 3D segmentation of calf muscle compartments in volumetric MR images is essential to diagnose as well as assess progression of muscular diseases. Recently, good segmentation performance was achieved using state-of-the-art deep learning approaches, which, however, require large amounts of annotated data for training. Considering that obtaining sufficiently large medical image annotation datasets is often difficult, time-consuming, and requires expert knowledge, minimizing the necessary sizes of expert-annotated training datasets is of great importance. This paper reports CMC-Net, a new deep learning framework for calf muscle compartment segmentation in 3D MR images that selects an effective small subset of 2D slices from the 3D images to be labelled, while also utilizing unannotated slices to facilitate proper generalization of the subsequent training steps. Our model consists of three parts: (1) an unsupervised method to select the most representative 2D slices on which expert annotation is performed; (2) ensemble model training employing these annotated as well as additional unannotated 2D slices; (3) a model-tuning method using pseudo-labels generated by the ensemble model that results in a trained deep network capable of accurate 3D segmentations. Experiments on segmentation of calf muscle compartments in 3D MR images show that our new approach achieves good performance with very small annotation ratios, and when utilizing full annotation, it outperforms state-of-the-art full annotation segmentation methods. Additional experiments on a 3D MR thigh dataset further verify the ability of our method in segmenting leg muscle groups with sparse annotation.
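Step (1) above, unsupervised selection of representative slices for annotation, can be approximated in many ways; one simple stand-in is greedy farthest-point selection over per-slice feature vectors, which picks a diverse subset without labels. A hedged sketch (the feature extraction and selection rule here are illustrative assumptions, not CMC-Net's actual scheme):

```python
def select_representative_slices(features, k):
    # features: one feature vector per 2D slice (e.g., intensity statistics
    # or learned embeddings - the choice of features is an assumption here).
    # Greedily pick k mutually distant slices as annotation candidates.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    chosen = [0]  # seed with the first slice
    while len(chosen) < k:
        # Add the slice farthest from everything already chosen.
        best = max((i for i in range(len(features)) if i not in chosen),
                   key=lambda i: min(dist(features[i], features[j]) for j in chosen))
        chosen.append(best)
    return sorted(chosen)
```

An expert would then annotate only the returned slices, with the remaining unannotated slices still contributing to training as described above.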
View details for DOI 10.1016/j.media.2022.102460
View details for Web of Science ID 000804537800003
View details for PubMedID 35598519
OIPAV: an Integrated Software System for Ophthalmic Image Processing, Analysis, and Visualization
JOURNAL OF DIGITAL IMAGING
2019; 32 (1): 183-197
Ophthalmic medical images, such as optical coherence tomography (OCT) images and color fundus photographs, provide valuable information for the clinical diagnosis and treatment of ophthalmic diseases. In this paper, we introduce a software system specifically oriented to ophthalmic image processing, analysis, and visualization (OIPAV) to assist users. OIPAV is a cross-platform system built on a set of powerful and widely used toolkit libraries. Based on a plugin mechanism, the system has an extensible framework. It provides rich functionality including data I/O, image processing, interaction, ophthalmic disease detection, data analysis, and visualization. By using OIPAV, users can easily access ophthalmic image data produced by different imaging devices, streamline workflows for processing ophthalmic images, and improve quantitative evaluations. With good scalability and extensibility, the software is applicable to both ophthalmic researchers and clinicians.
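The plugin mechanism mentioned above is what makes such a system extensible: new processing modules register themselves and are dispatched by name at run time. A minimal, generic sketch of that pattern (the class and method names are hypothetical illustrations, not OIPAV's actual API):

```python
class PluginRegistry:
    # Minimal plugin registry: processing modules register under a name
    # and are looked up and invoked at run time.
    def __init__(self):
        self._plugins = {}

    def register(self, name):
        # Decorator that records a callable under the given plugin name.
        def wrap(fn):
            self._plugins[name] = fn
            return fn
        return wrap

    def run(self, name, *args, **kwargs):
        # Dispatch to a registered plugin by name.
        return self._plugins[name](*args, **kwargs)


registry = PluginRegistry()

@registry.register("invert")
def invert(image, max_val=255):
    # Toy image-processing plugin: invert pixel intensities.
    return [max_val - v for v in image]
```

New functionality (a new filter, a disease-detection module) would then be added by registering another plugin, without modifying the host application.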
View details for DOI 10.1007/s10278-017-0047-6
View details for Web of Science ID 000459945300019
View details for PubMedID 30187316
View details for PubMedCentralID PMC6382642
Fast segmentation of kidney components using random forests and ferns
MEDICAL PHYSICS
2017; 44 (12): 6353-6363
This paper studies the feasibility of developing a fast and accurate automatic kidney component segmentation method that partitions the kidney into four components: renal cortex, renal column, renal medulla, and renal pelvis. We propose a highly efficient approach that strategically combines random forests and random ferns to this end, designed following a coarse-to-fine strategy. The initial segmentation applies random forests and random ferns with a variety of features and combines their results to obtain a coarse renal cortex region. The fine segmentation of the four kidney components is then achieved using a weighted forests-ferns approach with well-designed potential energy features calculated from the initial segmentation result. The proposed method was validated on a dataset of 37 contrast-enhanced CT images. Evaluation indices including the Dice similarity coefficient (DSC), true positive volume fraction (TPVF), and false positive volume fraction (FPVF) were used to assess segmentation accuracy. The method was implemented and tested on a 64-bit computer (Intel Core i7-3770 CPU, 3.4 GHz, 8 GB RAM). The experimental results demonstrated high accuracy and efficiency in segmenting the kidney components: the mean Dice similarity coefficients were 89.85%, 80.60%, 86.63%, and 77.75% for the renal cortex, column, medulla, and pelvis, respectively, for right and left kidneys. The computational time for segmenting the whole kidney into four components was about 3 s. These results show the feasibility and efficacy of the proposed automatic kidney component segmentation method, which applies an efficient weighted strategy to combine random forests and ferns, making full use of the advantages of both methods.
The novel potential energy features help the random forests effectively separate the kidney components from the background. The high accuracy and efficiency of our method make it practical for clinical applications.
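The weighted forests-ferns combination described above amounts to fusing the two classifiers' per-voxel class probabilities with fixed weights and taking the arg-max class. A minimal sketch of that fusion step (the 0.6/0.4 weights are illustrative assumptions, not values from the paper):

```python
def weighted_fusion(forest_probs, fern_probs, w_forest=0.6, w_fern=0.4):
    # forest_probs / fern_probs: per-voxel class-probability vectors from the
    # random forest and random fern classifiers, respectively.
    # Fuse by weighted averaging, then assign each voxel its arg-max class.
    labels = []
    for pf, pr in zip(forest_probs, fern_probs):
        fused = [w_forest * a + w_fern * b for a, b in zip(pf, pr)]
        labels.append(max(range(len(fused)), key=fused.__getitem__))
    return labels
```

In the kidney pipeline the classes would be the four renal components plus background, with the fused label map post-processed by the coarse-to-fine steps described above.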
View details for DOI 10.1002/mp.12594
View details for Web of Science ID 000425379200024
View details for PubMedID 28940607