
Xi Wang
Postdoctoral Scholar, Radiation Physics
Bio
My research interests cover medical image analysis and deep learning, with special emphasis on weakly supervised learning. Specifically, I am dedicated to designing deep weakly supervised learning algorithms that leverage coarsely labeled and unlabeled medical data, including gigapixel whole slide image classification, volumetric optical coherence tomography image analysis, and semi-supervised medical image classification. Currently, I mainly focus on improving the prediction of immunotherapy response using very limited pre-treatment multi-modality data.
Honors & Awards
- China National Scholarship, Ministry of Education of the People's Republic of China (2010)
- China National Encouragement Scholarship, Ministry of Education of the People's Republic of China (2011)
- China National Scholarship, Ministry of Education of the People's Republic of China (2012)
- Graduate Studentship, Sichuan University (2013-2016)
- Champion, Abnormalities Detection, Endoscopic Vision Challenge, MICCAI’15 (2015)
- Champion, Intervertebral Disc Localization Challenge, MICCAI’15 (2015)
- Best Paper Award, International Conference on Medical Imaging and Augmented Reality (MIAR) (2016)
- Ph.D. Studentship, The Chinese University of Hong Kong (2016-2020)
- Excellent Teaching Assistant (3 times), Department of Computer Science and Engineering, The Chinese University of Hong Kong (2018-2019)
- Student Travel Award, The first MIDL Conference (2019)
- 2nd Prize of Best Free Paper, The first APOIS Meeting (2020)
- Student Travel Award, The first APOIS Meeting (2020)
Boards, Advisory Committees, Professional Organizations
- Reviewer, Medical Image Computing and Computer Assisted Intervention (2020 - Present)
- Reviewer, IEEE Transactions on Cybernetics (2020 - Present)
- Reviewer, IEEE Transactions on Medical Imaging (2020 - Present)
- Reviewer, Medical Image Analysis (2020 - Present)
- Reviewer, IEEE Journal of Biomedical and Health Informatics (2020 - Present)
- Reviewer, Scientific Reports (2020 - Present)
- Reviewer, World Journal of Radiology (2021 - Present)
- Reviewer, International Journal of Intelligent Systems (2021 - Present)
- Reviewer, Artificial Intelligence in Medicine (2021 - Present)
- Reviewer, IEEE Access (2021 - Present)
- Reviewer, IEEE Transactions on Multimedia (2021 - Present)
- Meta-reviewer, 26th UK Conference on Medical Image Understanding and Analysis (2022)
- Review Editor, Frontiers in Artificial Intelligence (2022 - Present)
- Reviewer, Computers & Graphics (2022 - Present)
- Reviewer, Applied Sciences (2022 - Present)
- Reviewer, BMC Medical Imaging (2022 - Present)
- Reviewer, Frontiers in Radiology (2022 - Present)
- Reviewer, Sensors (2022 - Present)
- Reviewer, Frontiers in Oncology (2022 - Present)
- Reviewer, Frontiers in Physics (2022 - Present)
- Reviewer, Electronics (2022 - Present)
Professional Education
- Ph.D., Computer Science and Engineering, The Chinese University of Hong Kong (2020)
- Master of Science, Computer Science and Technology, Sichuan University (2016)
- Bachelor of Engineering, Software Engineering, Southwest University (2013)
Patents
- Yang Liu, Xiaoqi Chen, Zhen Wang, Xi Wang, Xuelong Li. "A Percolation-based Evolutionary Method for Diffusion Source Localization in Large Networks", United States Patent 202210321271.1, Mar 30, 2022
- Yang Liu, Guangbo Liang, Zhen Wang, Xi Wang, Chao Gao, Xuelong Li. "A Method Based on Graph Partition to Suppress Diffusions on Complex Networks", United States Patent 202210024022.6, Jan 11, 2022
- Yang Liu, Xiaoqi Chen, Zhen Wang, Xi Wang, Xuelong Li. "A Bounded-Percolation Greedy Method for Epidemic Containment on Complex Networks", China P.Rep. Patent 202111518210.6, Dec 14, 2021
Research Interests
- Professional Development
- Teachers and Teaching
Current Research and Scholarly Interests
Multi-modal deep learning for precision oncology
Projects
- Immunotherapy treatment response prediction for gastric cancer using semi-supervised multi-modal deep learning, Stanford University (1/3/2022 - 12/31/2022)
Location
Stanford
All Publications
- Deep semi-supervised multiple instance learning with self-correction for DME classification from OCT images.
Medical image analysis
2022; 83: 102673
Abstract
Supervised deep learning has achieved prominent success in various diabetic macular edema (DME) recognition tasks from optical coherence tomography (OCT) volumetric images. A common problem in this field is the shortage of labeled data caused by expensive fine-grained annotations, which adds substantial difficulty to accurate analysis by supervised learning. The morphological changes in the retina caused by DME might be distributed sparsely in B-scan images of the OCT volume, and OCT data is often coarsely labeled at the volume level. Hence, the DME identification task can be formulated as a multiple instance classification problem that could be addressed by multiple instance learning (MIL) techniques. Nevertheless, none of the previous studies simultaneously utilizes unlabeled data to promote classification accuracy, which is particularly significant for high-quality analysis at the minimum annotation cost. To this end, we present a novel deep semi-supervised multiple instance learning framework to explore the feasibility of leveraging a small amount of coarsely labeled data and a large amount of unlabeled data to tackle this problem. Specifically, we propose several modules that further improve the performance according to the availability and granularity of labels. To warm up the training, we propagate the bag labels to the corresponding instances as the supervision of training, and propose a self-correction strategy to handle the label noise in the positive bags. This strategy is based on confidence-based pseudo-labeling with consistency regularization. The model uses its prediction to generate the pseudo-label for each weakly augmented input only if it is highly confident about the prediction, which is subsequently used to supervise the same input in a strongly augmented version. This learning scheme is also applicable to unlabeled data. To enhance the discrimination capability of the model, we introduce the Student-Teacher architecture and impose consistency constraints between the two models. For demonstration, the proposed approach was evaluated on two large-scale DME OCT image datasets. Extensive results indicate that the proposed method improves DME classification with the incorporation of unlabeled data and significantly outperforms competing MIL methods, confirming the feasibility of deep semi-supervised multiple instance learning at a low annotation cost.
View details for DOI 10.1016/j.media.2022.102673
View details for PubMedID 36403310
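To make the self-correction strategy concrete, here is a minimal PyTorch sketch of confidence-based pseudo-labeling with consistency regularization as described in the abstract above; the model, the augmentation callables, and the 0.95 confidence threshold are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def self_correction_loss(model, x, weak_aug, strong_aug, threshold=0.95):
    """Confidence-based pseudo-labeling with consistency regularization.

    The model predicts on a weakly augmented view of each input; only
    predictions whose top-class probability exceeds `threshold` become
    pseudo-labels, which then supervise a strongly augmented view of the
    same input. The same scheme applies to noisy positive-bag instances
    and to unlabeled data.
    """
    with torch.no_grad():
        probs = F.softmax(model(weak_aug(x)), dim=1)
        confidence, pseudo_labels = probs.max(dim=1)
        mask = (confidence >= threshold).float()  # keep confident predictions only

    logits_strong = model(strong_aug(x))
    per_sample = F.cross_entropy(logits_strong, pseudo_labels, reduction="none")
    return (per_sample * mask).mean()
```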
- Three-Dimensional Multi-Task Deep Learning Model to Detect Glaucomatous Optic Neuropathy and Myopic Features From Optical Coherence Tomography Scans: A Retrospective Multi-Centre Study.
Frontiers in medicine
2022; 9: 860574
Abstract
We aim to develop a multi-task three-dimensional (3D) deep learning (DL) model to detect glaucomatous optic neuropathy (GON) and myopic features (MF) simultaneously from spectral-domain optical coherence tomography (SDOCT) volumetric scans. Each volumetric scan was labelled as GON according to the criteria of retinal nerve fibre layer (RNFL) thinning, with a structural defect that correlated in position with the visual field defect (i.e., reference standard). MF were graded by the SDOCT en face images, defined as presence of peripapillary atrophy (PPA), optic disc tilting, or fundus tessellation. The multi-task DL model was developed by ResNet with output of Yes/No GON and Yes/No MF. SDOCT scans were collected in a tertiary eye hospital (Hong Kong SAR, China) for training (80%), tuning (10%), and internal validation (10%). External testing was performed on five independent datasets from eye centres in Hong Kong, the United States, and Singapore. For GON detection, we compared the model to the average RNFL thickness measurement generated from the SDOCT device. To investigate whether MF can affect the model's performance on GON detection, we conducted subgroup analyses in groups stratified by Yes/No MF. The area under the receiver operating characteristic curve (AUROC), sensitivity, specificity, and accuracy were reported. A total of 8,151 SDOCT volumetric scans from 3,609 eyes were collected. For detecting GON, in the internal validation, the proposed 3D model had significantly higher AUROC (0.949 vs. 0.913, p < 0.001) than average RNFL thickness in discriminating GON from normal. In the external testing, the two approaches had comparable performance. In the subgroup analysis, the multi-task DL model performed significantly better in the group of "no MF" (0.883 vs. 0.965, p < 0.001) in one external testing dataset, but showed no significant difference in internal validation and the other external testing datasets. The multi-task DL model's performance in detecting MF was also generalizable across all datasets, with AUROC values ranging from 0.855 to 0.896. The proposed multi-task 3D DL model demonstrated high generalizability in all the datasets, and the presence of MF did not generally affect the accuracy of GON detection.
View details for DOI 10.3389/fmed.2022.860574
View details for PubMedID 35783623
View details for PubMedCentralID PMC9240220
- Federated Deep Learning for Classifying Glaucomatous Optic Neuropathy from Optical Coherence Tomography Volumetric Scans: A Privacy-preserving Multi-national Study
ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2022
View details for Web of Science ID 000844401302179
- Using Deep Learning for Assessing Image-Quality of 3D Macular Scans from Spectral-Domain Optical Coherence Tomography
ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2022
View details for Web of Science ID 000844401300206
- A Deep Learning System to Predict Response to Anti-Vascular Endothelial Growth Factor (VEGF) Therapy in Eyes with Diabetic Macular Edema for Optical Coherence Tomography Images
ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2022
View details for Web of Science ID 000844401306217
- Detailed annotation improves deep learning generalization for interpretable chest radiograph diagnosis: A retrospective study
Radiology
2022
- A Multitask Deep-Learning System to Classify Diabetic Macular Edema for Different Optical Coherence Tomography Devices: A Multicenter Analysis.
Diabetes care
2021
Abstract
OBJECTIVE: Diabetic macular edema (DME) is the primary cause of vision loss among individuals with diabetes mellitus (DM). We developed, validated, and tested a deep learning (DL) system for classifying DME using images from three common commercially available optical coherence tomography (OCT) devices. RESEARCH DESIGN AND METHODS: We trained and validated two versions of a multitask convolution neural network (CNN) to classify DME (center-involved DME [CI-DME], non-CI-DME, or absence of DME) using three-dimensional (3D) volume scans and 2D B-scans, respectively. For both 3D and 2D CNNs, we used the residual network (ResNet) as the backbone. For the 3D CNN, we used a 3D version of ResNet-34 with the last fully connected layer removed as the feature extraction module. A total of 73,746 OCT images were used for training and primary validation. External testing was performed using 26,981 images across seven independent data sets from Singapore, Hong Kong, the U.S., China, and Australia. RESULTS: In classifying the presence or absence of DME, the DL system achieved area under the receiver operating characteristic curves (AUROCs) of 0.937 (95% CI 0.920-0.954), 0.958 (0.930-0.977), and 0.965 (0.948-0.977) for the primary data set obtained from CIRRUS, SPECTRALIS, and Triton OCTs, respectively, in addition to AUROCs >0.906 for the external data sets. For further classification of the CI-DME and non-CI-DME subgroups, the AUROCs were 0.968 (0.940-0.995), 0.951 (0.898-0.982), and 0.975 (0.947-0.991) for the primary data set and >0.894 for the external data sets. CONCLUSIONS: We demonstrated excellent performance with a DL system for the automated classification of DME, highlighting its potential as a promising second-line screening tool for patients with DM, which may potentially create a more effective triaging mechanism to eye clinics.
View details for DOI 10.2337/dc20-3064
View details for PubMedID 34315698
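As a rough illustration of the backbone described above, the following PyTorch sketch builds a multitask 3D CNN with the final fully connected layer removed and one linear head per task; torchvision's r3d_18 stands in for the paper's 3D ResNet-34, and the head definitions are assumptions for illustration.

```python
import torch.nn as nn
from torchvision.models.video import r3d_18

class MultiTaskOCTNet(nn.Module):
    """Multitask 3D CNN: a shared 3D ResNet feature extractor (classifier
    layer replaced by Identity) feeding one linear head per task."""

    def __init__(self):
        super().__init__()
        backbone = r3d_18(weights=None)
        feat_dim = backbone.fc.in_features      # 512 for r3d_18
        backbone.fc = nn.Identity()             # drop classifier, keep features
        self.backbone = backbone
        self.head_dme = nn.Linear(feat_dim, 2)      # DME presence vs. absence
        self.head_subtype = nn.Linear(feat_dim, 2)  # CI-DME vs. non-CI-DME

    def forward(self, volume):
        # volume: (N, 3, D, H, W) OCT cube rendered as a 3-channel clip
        features = self.backbone(volume)
        return self.head_dme(features), self.head_subtype(features)
```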
- A Multi-Task Deep-Learning System to Classify Diabetic Macular Edema for Different Optical Coherence Tomography Devices: A Multi-Center Analysis
ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2021
View details for Web of Science ID 000690761100112
- Deep virtual adversarial self-training with consistency regularization for semi-supervised medical image classification
MEDICAL IMAGE ANALYSIS
2021; 70: 102010
Abstract
Convolutional neural networks have achieved prominent success on a variety of medical imaging tasks when a large amount of labeled training data is available. However, the acquisition of expert annotations for medical data is usually expensive and time-consuming, which poses a great challenge for supervised learning approaches. In this work, we propose a novel semi-supervised deep learning method, i.e., deep virtual adversarial self-training with consistency regularization, for large-scale medical image classification. To effectively exploit useful information from unlabeled data, we leverage self-training and consistency regularization to harness the underlying knowledge, which helps improve the discrimination capability of training models. More concretely, the model first uses its prediction for pseudo-labeling on the weakly-augmented input image. A pseudo-label is kept only if the corresponding class probability is of high confidence. Then the model prediction is encouraged to be consistent with the strongly-augmented version of the same input image. To improve the robustness of the network against virtual adversarial perturbed input, we incorporate virtual adversarial training (VAT) on both labeled and unlabeled data into the course of training. Hence, the network is trained by minimizing a combination of three types of losses: a standard supervised loss on labeled data, a consistency regularization loss on unlabeled data, and a VAT loss on both labeled and unlabeled data. We extensively evaluate the proposed semi-supervised deep learning method on two challenging medical image classification tasks: breast cancer screening from ultrasound images and multi-class ophthalmic disease classification from optical coherence tomography B-scan images. Experimental results demonstrate that the proposed method outperforms both the supervised baseline and other state-of-the-art methods by a large margin on all tasks.
View details for DOI 10.1016/j.media.2021.102010
View details for Web of Science ID 000639337100008
View details for PubMedID 33677262
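A minimal sketch of the VAT loss mentioned above, following the standard power-iteration approximation of the virtual adversarial direction; the hyperparameters `xi`, `eps`, and `n_power` are illustrative defaults, not values from the paper.

```python
import torch
import torch.nn.functional as F

def vat_loss(model, x, xi=1e-6, eps=1.0, n_power=1):
    """Virtual adversarial training (VAT) loss: estimate the small input
    perturbation that most changes the prediction (via power iteration),
    then penalize the KL divergence it induces. Works for labeled and
    unlabeled inputs alike, since no ground-truth label is needed."""
    with torch.no_grad():
        pred = F.softmax(model(x), dim=1)

    d = torch.randn_like(x)
    for _ in range(n_power):
        d = xi * F.normalize(d.flatten(1), dim=1).view_as(x)
        d.requires_grad_(True)
        adv_logp = F.log_softmax(model(x + d), dim=1)
        kl = F.kl_div(adv_logp, pred, reduction="batchmean")
        d = torch.autograd.grad(kl, d)[0].detach()

    r_adv = eps * F.normalize(d.flatten(1), dim=1).view_as(x)
    adv_logp = F.log_softmax(model(x + r_adv), dim=1)
    return F.kl_div(adv_logp, pred, reduction="batchmean")
```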
- Dual-path network with synergistic grouping loss and evidence driven risk stratification for whole slide cervical image analysis
MEDICAL IMAGE ANALYSIS
2021; 69: 101955
Abstract
Cervical cancer has been one of the most lethal cancers threatening women's health. Nevertheless, the incidence of cervical cancer can be effectively minimized with preventive clinical management strategies, including vaccines and regular screening examinations. Screening cervical smears under the microscope by cytologists is a widely used routine in regular examination, which consumes a large amount of cytologists' time and labour. Computerized cytology analysis appropriately caters to such an imperative need, alleviating cytologists' workload and reducing the potential misdiagnosis rate. However, automatic analysis of cervical smears via digitalized whole slide images (WSIs) remains a challenging problem, due to the extremely large image resolution, the existence of tiny lesions, noisy datasets, and the intricate clinical definition of classes with fuzzy boundaries. In this paper, we design an efficient deep convolutional neural network (CNN) with a dual-path (DP) encoder for lesion retrieval, which ensures inference efficiency and sensitivity to both tiny and large lesions. Incorporated with a synergistic grouping loss (SGL), the network can be effectively trained on noisy datasets with fuzzy inter-class boundaries. Inspired by the clinical diagnostic criteria of cytologists, a novel smear-level classifier, i.e., rule-based risk stratification (RRS), is proposed for accurate smear-level classification and risk stratification, which aligns reasonably with the intricate cytological definition of the classes. Extensive experiments on the largest dataset, including 19,303 WSIs from multiple medical centers, validate the robustness of our method. With a high sensitivity of 0.907 and specificity of 0.80 being achieved, our method manifests the potential to reduce the workload of cytologists in routine practice.
View details for DOI 10.1016/j.media.2021.101955
View details for Web of Science ID 000639620600003
View details for PubMedID 33588122
- UD-MIL: Uncertainty-Driven Deep Multiple Instance Learning for OCT Image Classification
IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS
2020; 24 (12): 3431-3442
Abstract
Deep learning has achieved remarkable success in the optical coherence tomography (OCT) image classification task when substantial labelled B-scan images are available. However, obtaining such fine-grained expert annotations is usually quite difficult and expensive, so leveraging volume-level labels to develop a robust classifier is very appealing. In this paper, we propose a weakly supervised deep learning framework with uncertainty estimation to address the macula-related disease classification problem from OCT images with only volume-level labels available. First, a convolutional neural network (CNN) based instance-level classifier is iteratively refined by using the proposed uncertainty-driven deep multiple instance learning scheme. To the best of our knowledge, we are the first to incorporate an uncertainty evaluation mechanism into multiple instance learning (MIL) for training a robust instance classifier. The classifier is able to detect suspicious abnormal instances and simultaneously extract the corresponding deep embeddings with high representation capability. Second, a recurrent neural network (RNN) takes instance features from the same bag as input and generates the final bag-level prediction by considering both the local instance information and the globally aggregated bag-level representation. For more comprehensive validation, we built two large diabetic macular edema (DME) OCT datasets from different devices and imaging protocols to evaluate the efficacy of our method, composed of 30,151 B-scans in 1,396 volumes from 274 patients (Heidelberg-DME dataset) and 38,976 B-scans in 3,248 volumes from 490 patients (Triton-DME dataset), respectively. We compare the proposed method with state-of-the-art approaches and experimentally demonstrate that our method is superior, achieving volume-level accuracy, F1-score, and area under the receiver operating characteristic curve (AUC) of 95.1%, 0.939, and 0.990 on Heidelberg-DME, and of 95.1%, 0.935, and 0.986 on Triton-DME, respectively. Furthermore, the proposed method also yields competitive results on another public age-related macular degeneration OCT dataset, indicating its high potential as an effective screening tool in clinical practice.
View details for DOI 10.1109/JBHI.2020.2983730
View details for Web of Science ID 000597173000010
View details for PubMedID 32248132
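The uncertainty-driven instance selection could look roughly like the following Monte-Carlo-dropout sketch; the two-logit binary classifier, the number of stochastic passes, and the variance threshold are assumptions for illustration, not the paper's exact mechanism.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def instance_uncertainty(model, bag, n_samples=10):
    """Monte-Carlo-dropout uncertainty for each B-scan in an OCT volume.

    `bag` is a tensor of instances (B-scans). Dropout is kept active at
    inference, so repeated forward passes disagree more on uncertain
    instances. Assumes a two-logit classifier and no BatchNorm updates
    of concern; returns mean abnormality probability and predictive
    variance per instance."""
    model.train()  # keep dropout stochastic
    probs = torch.stack(
        [F.softmax(model(bag), dim=1)[:, 1] for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)

def select_reliable_instances(mean_prob, variance, var_threshold=0.05):
    """Keep confidently predicted instances for the next round of
    instance-classifier refinement; uncertain ones are left out."""
    keep = (variance < var_threshold).nonzero(as_tuple=True)[0]
    return keep, mean_prob[keep]
```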
- Deep Mining External Imperfect Data for Chest X-Ray Disease Screening.
IEEE transactions on medical imaging
2020; 39 (11): 3583–94
Abstract
Deep learning approaches have demonstrated remarkable progress in automatic chest X-ray analysis. The data-driven nature of deep models requires training data that covers a large distribution, so it is essential to integrate knowledge from multiple datasets, especially for medical images. However, learning a disease classification model with extra chest X-ray (CXR) data is challenging. Recent research has demonstrated that a performance bottleneck exists in joint training on different CXR datasets, and few efforts have been made to address this obstacle. In this paper, we argue that incorporating an external CXR dataset leads to imperfect training data, which raises these challenges. Specifically, the imperfection is twofold: domain discrepancy, as the image appearances vary across datasets; and label discrepancy, as different datasets are partially labeled. To this end, we formulate the multi-label thoracic disease classification problem as weighted independent binary tasks according to the categories. For common categories shared across domains, we adopt task-specific adversarial training to alleviate the feature differences. For categories existing in a single dataset, we present uncertainty-aware temporal ensembling of model predictions to further mine the information from the missing labels. In this way, our framework simultaneously models and tackles the domain and label discrepancies, enabling superior knowledge mining ability. We conduct extensive experiments on three datasets with more than 360,000 chest X-ray images. Our method outperforms other competing models and sets state-of-the-art performance on the official NIH test set with 0.8349 AUC, demonstrating its effectiveness in utilizing the external dataset to improve the internal classification.
View details for DOI 10.1109/TMI.2020.3000949
View details for PubMedID 32746106
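A minimal sketch of uncertainty-aware temporal ensembling in the spirit of the abstract above: an exponential moving average of per-sample predictions serves as soft targets for categories with missing labels, and high-entropy (uncertain) targets are masked out of the loss. The momentum value and entropy threshold are illustrative assumptions.

```python
import torch

class TemporalEnsemble:
    """Exponential moving average of per-sample predictions, used as soft
    targets for categories whose labels are missing in one dataset."""

    def __init__(self, n_samples, n_classes, momentum=0.9):
        self.ensemble = torch.zeros(n_samples, n_classes)
        self.momentum = momentum

    def update(self, indices, probs):
        """Blend the latest predictions into the running ensemble."""
        self.ensemble[indices] = (
            self.momentum * self.ensemble[indices]
            + (1 - self.momentum) * probs.detach().cpu()
        )

    def targets(self, indices, entropy_threshold=0.2):
        """Return soft targets plus a mask that drops high-entropy
        (uncertain) ensembled predictions from the missing-label loss."""
        t = self.ensemble[indices]
        p = t.clamp_min(1e-8)
        entropy = -(p * p.log()).sum(dim=1)
        return t, (entropy < entropy_threshold).float()
```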
- Weakly Supervised Deep Learning for Whole Slide Lung Cancer Image Analysis
IEEE TRANSACTIONS ON CYBERNETICS
2020; 50 (9): 3950-3962
Abstract
Histopathology image analysis serves as the gold standard for cancer diagnosis, and efficient and precise diagnosis is quite critical for the subsequent therapeutic treatment of patients. So far, computer-aided diagnosis has not been widely applied in the pathological field, as the currently well-addressed tasks are only the tip of the iceberg. Whole slide image (WSI) classification is a quite challenging problem. First, the scarcity of annotations heavily impedes the pace of developing effective approaches: pixelwise delineated annotations on WSIs are time-consuming and tedious, which poses difficulties in building a large-scale training dataset. In addition, the variety of heterogeneous tumor patterns existing in the high-magnification field is actually the major obstacle. Furthermore, a gigapixel-scale WSI cannot be directly analyzed due to the immeasurable computational cost. Designing weakly supervised learning methods that maximize the use of WSI-level labels, which can be readily obtained in clinical practice, is therefore quite appealing. To overcome these challenges, we present a weakly supervised approach in this article for fast and effective classification of whole slide lung cancer images. Our method first takes advantage of a patch-based fully convolutional network (FCN) to retrieve discriminative blocks and provide representative deep features with high efficiency. Then, different context-aware block selection and feature aggregation strategies are explored to generate a globally holistic WSI descriptor, which is ultimately fed into a random forest (RF) classifier for the image-level prediction. To the best of our knowledge, this is the first study to exploit the potential of image-level labels along with some coarse annotations for weakly supervised learning. A large-scale lung cancer WSI dataset is constructed in this article for evaluation, which validates the effectiveness and feasibility of the proposed method. Extensive experiments demonstrate the superior performance of our method, which surpasses the state-of-the-art approaches by a significant margin with an accuracy of 97.3%. In addition, our method also achieves the best performance on the public lung cancer WSI dataset from The Cancer Genome Atlas (TCGA). We highlight that a small number of coarse annotations can contribute to further accuracy improvement. We believe that weakly supervised learning methods have great potential to assist pathologists in histology image diagnosis in the near future.
View details for DOI 10.1109/TCYB.2019.2935141
View details for Web of Science ID 000562306000011
View details for PubMedID 31484154
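A simplified sketch of the block selection and feature aggregation stage feeding the random forest, under the assumption that per-patch deep features and tumor probabilities have already been computed by the FCN; the top-k selection and mean/max pooling here are stand-ins for the paper's context-aware strategies, not its exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def slide_descriptor(patch_features, patch_scores, k=50):
    """Aggregate per-patch deep features into one WSI-level descriptor:
    keep the k most discriminative patches (highest tumor probability)
    and pool their features into a fixed-length vector."""
    top = np.argsort(patch_scores)[::-1][:k]
    selected = patch_features[top]
    return np.concatenate([selected.mean(axis=0), selected.max(axis=0)])

# Hypothetical usage: `slides` yields (features, scores) per slide and
# `y` holds the image-level labels obtained from clinical practice.
# X = np.stack([slide_descriptor(f, s) for f, s in slides])
# clf = RandomForestClassifier(n_estimators=500).fit(X, y)
```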
- Towards multi-center glaucoma OCT image screening with semi-supervised joint structure and function multi-task learning.
Medical image analysis
2020; 63: 101695
Abstract
Glaucoma is the leading cause of irreversible blindness in the world, and structure and function assessments play an important role in diagnosing it. Nowadays, Optical Coherence Tomography (OCT) imaging is gaining popularity for measuring the structural changes of eyes, but few automated methods have been developed based on OCT images to screen glaucoma. In this paper, we are the first to unify structure analysis and function regression to distinguish glaucoma patients from normal controls effectively. Specifically, our method works in two steps: a semi-supervised learning strategy with a smoothness assumption is first applied for the surrogate assignment of missing function regression labels. Subsequently, the proposed multi-task learning network explores the structure and function relationship between the OCT image and the visual field measurement simultaneously, which contributes to classification performance improvement. It is also worth noting that the proposed method is assessed on two large-scale multi-center datasets. In other words, we first build the largest glaucoma OCT image dataset (i.e., the HK dataset), involving 975,400 B-scans from 4,877 volumes, to develop and evaluate the proposed method; the model, without further fine-tuning, is then directly applied to another independent dataset (i.e., the Stanford dataset) containing 246,200 B-scans from 1,231 volumes. Extensive experiments are conducted to assess the contribution of each component within our framework. The proposed method outperforms the baseline methods and two glaucoma experts by a large margin, achieving a volume-level Area Under the ROC Curve (AUC) of 0.977 on the HK dataset and 0.933 on the Stanford dataset, respectively. The experimental results indicate the great potential of the proposed approach for automated diagnosis systems.
View details for DOI 10.1016/j.media.2020.101695
View details for PubMedID 32442866
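The joint structure-function model can be pictured as a shared encoder with a classification head and a regression head, as in this hedged PyTorch sketch; the encoder, feature dimension, and loss weighting are assumptions, and the regression target (e.g., a visual field index) may be a surrogate label produced by the semi-supervised step.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructureFunctionNet(nn.Module):
    """Shared encoder with two heads: glaucoma classification (structure)
    and visual-field measurement regression (function)."""

    def __init__(self, encoder, feat_dim, n_classes=2):
        super().__init__()
        self.encoder = encoder
        self.cls_head = nn.Linear(feat_dim, n_classes)
        self.reg_head = nn.Linear(feat_dim, 1)  # e.g., a visual field index

    def forward(self, x):
        z = self.encoder(x)
        return self.cls_head(z), self.reg_head(z).squeeze(1)

def multitask_loss(cls_logits, reg_pred, y_cls, y_reg, alpha=0.5):
    """Classification loss plus weighted regression loss; `alpha` balances
    the two tasks and is an illustrative hyperparameter."""
    return F.cross_entropy(cls_logits, y_cls) + alpha * F.mse_loss(reg_pred, y_reg)
```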
- Framework of Evolutionary Algorithm for Investigation of Influential Nodes in Complex Networks
IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION
2019; 23 (6): 1049-1063
View details for DOI 10.1109/TEVC.2019.2901012
View details for Web of Science ID 000501326800010
- Detection of glaucomatous optic neuropathy with spectral-domain optical coherence tomography: a retrospective training and validation deep-learning analysis.
The Lancet. Digital health
2019; 1 (4): e172-e182
Abstract
Spectral-domain optical coherence tomography (SDOCT) can be used to detect glaucomatous optic neuropathy, but human expertise in interpretation of SDOCT is limited. We aimed to develop and validate a three-dimensional (3D) deep-learning system using SDOCT volumes to detect glaucomatous optic neuropathy. We retrospectively collected a dataset including 4877 SDOCT volumes of optic disc cube for training (60%), testing (20%), and primary validation (20%) from electronic medical and research records at the Chinese University of Hong Kong Eye Centre (Hong Kong, China) and the Hong Kong Eye Hospital (Hong Kong, China). A residual network was used to build the 3D deep-learning system. Three independent datasets (two from Hong Kong and one from Stanford, CA, USA), including 546, 267, and 1231 SDOCT volumes, respectively, were used for external validation of the deep-learning system. Volumes were labelled as having or not having glaucomatous optic neuropathy according to the criteria of retinal nerve fibre layer thinning on reliable SDOCT images with position-correlated visual field defect. Heatmaps were generated for qualitative assessments. 6921 SDOCT volumes from 1,384,200 two-dimensional cross-sectional scans were studied. The 3D deep-learning system had an area under the receiver operation characteristics curve (AUROC) of 0.969 (95% CI 0.960-0.976), sensitivity of 89% (95% CI 83-93), specificity of 96% (92-99), and accuracy of 91% (89-93) in the primary validation, outperforming a two-dimensional deep-learning system that was trained on en face fundus images (AUROC 0.921 [0.905-0.937]; p<0.0001). The 3D deep-learning system performed similarly in the external validation datasets, with AUROCs of 0.893-0.897, sensitivities of 78-90%, specificities of 79-86%, and accuracies of 80-86%. The heatmaps of glaucomatous optic neuropathy showed that the features learned by the 3D deep-learning system for detection of glaucomatous optic neuropathy were similar to those used by clinicians. The proposed 3D deep-learning system performed well in detection of glaucomatous optic neuropathy in both primary and external validations. Further prospective studies are needed to estimate the incremental cost-effectiveness of incorporating an artificial intelligence-based model for glaucoma screening. Funding: Hong Kong Research Grants Council.
View details for DOI 10.1016/S2589-7500(19)30085-8
View details for Web of Science ID 000525870100011
View details for PubMedID 33323187
- A 3D Deep Learning System for Detecting Glaucomatous Optic Neuropathy from Volumetric and En Face Optical Coherence Tomography Scans
ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2019
View details for Web of Science ID 000488800705121
- Unifying Structure Analysis and Surrogate-Driven Function Regression for Glaucoma OCT Image Screening
SPRINGER INTERNATIONAL PUBLISHING AG. 2019: 39-47
View details for DOI 10.1007/978-3-030-32239-7_5
View details for Web of Science ID 000548734200005
- Deep Angular Embedding and Feature Correlation Attention for Breast MRI Cancer Analysis
SPRINGER INTERNATIONAL PUBLISHING AG. 2019: 504-512
View details for DOI 10.1007/978-3-030-32251-9_55
View details for Web of Science ID 000548735900055
- Optimization of targeted node set in complex networks under percolation and selection
PHYSICAL REVIEW E
2018; 98 (1): 012313
Abstract
Most of the existing methods for the robustness and targeted immunization problems can be viewed as greedy strategies, which are quite efficient but readily induce a local optimum. In this paper, starting from a percolation perspective, we develop two strategies, the relationship-related (RR) strategy and the prediction relationship (PR) strategy, that avoid a local optimum solely through the investigation of interrelationships among nodes. RR combines the sum rule and the product rule from explosive percolation, while PR rests on the assumption that nodes with high degree are usually more important than those with low degree. In this manner, our methods have a better capability to collapse or protect a network. Simulations performed on a number of networks also demonstrate their effectiveness, especially on large real-world networks, where RR fragments each network to the same giant-component size as the best existing methods while needing fewer than 90% of the nodes those methods require.
View details for DOI 10.1103/PhysRevE.98.012313
View details for Web of Science ID 000439414800006
View details for PubMedID 30110741
View details for PubMedCentralID PMC7217537
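For orientation, a small networkx sketch of the targeted-node-set setting the paper studies: a degree-based greedy baseline (the premise behind PR) and a helper that measures the giant component after removal. This illustrates the problem setup only, not the RR or PR algorithms themselves; the graph and budget in the usage note are hypothetical.

```python
import networkx as nx

def giant_component_after_removal(G, removed):
    """Size of the largest connected component once `removed` nodes are gone."""
    H = G.copy()
    H.remove_nodes_from(removed)
    return max((len(c) for c in nx.connected_components(H)), default=0)

def degree_targeted_set(G, budget):
    """Adaptive high-degree attack: greedily remove the current
    highest-degree node, recomputing degrees after every removal."""
    H, targets = G.copy(), []
    for _ in range(budget):
        if H.number_of_nodes() == 0:
            break
        node = max(H.degree, key=lambda kv: kv[1])[0]
        targets.append(node)
        H.remove_node(node)
    return targets

# Hypothetical usage on a random graph:
# G = nx.erdos_renyi_graph(1000, 0.01)
# S = degree_targeted_set(G, 50)
# print(giant_component_after_removal(G, S))
```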
- Automated Mitosis Detection with Deep Regression Networks
IEEE. 2016: 1204-1207
View details for DOI 10.1109/ISBI.2016.7493482
View details for Web of Science ID 000386377400284
- Mitosis Detection in Breast Cancer Histology Images via Deep Cascaded Networks
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2016: 1160-1166
View details for Web of Science ID 000485474201028
- 3D fully convolutional networks for intervertebral disc localization and segmentation
International Conference on Medical Imaging and Augmented Reality
2016: 375–382