My research lies at the intersection of machine learning, computer vision, medical image analysis, and computational neuroscience. I work on the automatic analysis of human activities and behaviors from videos, and further connect how humans perform actions to the brain by also analyzing magnetic resonance images. I explore explainable machine learning algorithms for understanding the underlying factors of neurodegenerative and neuropsychiatric diseases in the brain, as well as their ramifications for everyday life.
Instructor, Psychiatry and Behavioral Sciences
Associate Editor, IEEE Journal of Biomedical and Health Informatics (2020 - Present)
Associate Editor, Journal of Ambient Intelligence and Smart Environments (2019 - Present)
Honors & Awards
Young Investigator Award, Medical Image Computing and Computer Assisted Interventions (MICCAI) (2018)
NIH F32 Fellowship Award, NIAAA (2018-2019)
Postdoctoral Research Associate, University of North Carolina at Chapel Hill, Machine Learning and Medical Imaging (2017)
Research Scholar, Carnegie Mellon University, Computer Vision (2012)
Current Research and Scholarly Interests
My research lies at the intersection of machine learning, computer vision, neuroimaging, and computational neuroscience. In particular, my research focuses on investigating computational and statistical learning-based methods for processing both natural and biomedical images to extract semantics from the underlying visual content. Machine learning, statistics, signal and image processing, neuroscience, computer vision, and neuroimaging have conventionally evolved independently, tackling problems from different perspectives. At times these fields have neglected one another, even though they offer complementary viewpoints. In recent years they have begun to intertwine, and it is increasingly clear that multidisciplinary research is needed to better process large-scale visual data. I consider my research interests and direction to be located at the intersection of all these fields.
When I started my position at the Biomedical Research Imaging Center (BRIC) at the University of North Carolina-Chapel Hill, my main research focused on expanding my skill set and applying my knowledge of machine learning and visual data analysis to the diagnosis of neurodegenerative diseases and the prediction of brain development throughout the early years of life, based on neuroimaging data. Although neurodegenerative diseases manifest with diverse pathological features, their cellular-level processes share similar structure; data-driven machine learning methods are therefore well suited to these problems. I have contributed to critical studies on these diseases, including the Parkinson's Progression Markers Initiative (PPMI) and the Alzheimer's Disease Neuroimaging Initiative (ADNI). One of the goals of neuroscience and the cognitive sciences is to understand how the brain works; due to many factors, including technological limitations, this goal remains elusive. Over the past decade, remarkable advances in both hardware and software have opened new possibilities for understanding the brain.
Continuing my research at Stanford University, I believe my work advances computational science in identifying biomedical phenotypes that accelerate the detection, understanding, and treatment of medical diseases, and specifically neuropsychiatric disorders. Recently, I have begun to apply my knowledge and expertise in the multidisciplinary fields of machine learning and computational neuroscience to analyze brain images and gain more insight into human immunodeficiency virus (HIV) infection and alcoholism, along with their comorbidity. Each of these disorders carries liability for disruption of brain structural integrity. Furthermore, both HIV infection and alcoholism reduce health-related quality of life, and their co-occurrence is highly prevalent. However, few studies have examined the potentially heightened burden of disease comorbidity, which often leads to cognitive impairments. I have sought to create machine learning techniques that improve the mechanistic understanding of their comorbid effects in the brain.
- Segmenting the Future IEEE ROBOTICS AND AUTOMATION LETTERS 2020; 5 (3): 4202–9
Image-to-Images Translation for Multi-Task Organ Segmentation and Bone Suppression in Chest X-Ray Radiography
IEEE TRANSACTIONS ON MEDICAL IMAGING
2020; 39 (7): 2553–65
Chest X-ray radiography is one of the earliest medical imaging technologies and remains one of the most widely used for diagnosis, screening, and treatment follow-up of diseases related to the lungs and heart. The literature in this field reports many interesting studies dealing with the challenging tasks of bone suppression and organ segmentation, but performed separately, limiting any learning that comes with the consolidation of parameters that could optimize both processes. This study introduces, for the first time, a multitask deep learning model that simultaneously generates the bone-suppressed image and the organ-segmented image, enhancing the accuracy of both tasks, minimizing the number of parameters needed by the model, and optimizing the processing time, all by exploiting the interplay between the network parameters to benefit the performance of both tasks. The architectural design of this model, which relies on a conditional generative adversarial network, shows how the well-established pix2pix (image-to-image) network is modified to fit the need for multitasking and extended to the new image-to-images architecture. The source code of this multitask model is shared publicly on GitHub as the first attempt at providing a two-task pix2pix extension, a supervised/paired/aligned/registered image-to-images translation that would be useful in many multitask applications. Dilated convolutions are also used to improve the results through a more effective receptive field assessment. A comparison with state-of-the-art algorithms, an ablation study, and a demonstration video are provided to evaluate the efficacy and gauge the merits of the proposed approach.
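The shared-parameter idea behind this multitask design can be sketched as a single encoder feeding two task heads (a toy numpy forward pass with made-up shapes and random weights, not the paper's conditional-GAN architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy shapes: a flattened 64-pixel "radiograph", a shared 16-unit bottleneck.
x = rng.standard_normal(64)
W_enc = 0.1 * rng.standard_normal((16, 64))    # shared encoder weights
W_bone = 0.1 * rng.standard_normal((64, 16))   # bone-suppression head
W_seg = 0.1 * rng.standard_normal((64, 16))    # organ-segmentation head

z = np.maximum(W_enc @ x, 0.0)                 # shared features (ReLU)
bone_suppressed = W_bone @ z                   # task 1: regression output
organ_mask = 1 / (1 + np.exp(-(W_seg @ z)))    # task 2: per-pixel probability
```

Because both heads read the same features `z`, gradients from either task update the shared encoder, which is the interplay the abstract describes.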
View details for DOI 10.1109/TMI.2020.2974159
View details for Web of Science ID 000545410200024
View details for PubMedID 32078541
- Skeleton-based structured early activity prediction MULTIMEDIA TOOLS AND APPLICATIONS 2020
- Spatiotemporal Relationship Reasoning for Pedestrian Intent Prediction IEEE ROBOTICS AND AUTOMATION LETTERS 2020; 5 (2): 3485–92
- Mammographic mass segmentation using multichannel and multiscale fully convolutional networks INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY 2020
FCN Based Label Correction for Multi-Atlas Guided Organ Segmentation.
Segmentation of medical images using multiple atlases has recently gained immense attention due to its augmented robustness against variability across different subjects. These atlas-based methods typically comprise three steps: atlas selection, image registration, and finally label fusion. Image registration is one of the core steps in this process, and its accuracy directly affects the final labeling performance. However, due to inter-subject anatomical variations, registration errors are inevitable. The aim of this paper is to develop a deep learning-based confidence estimation method to alleviate the potential effects of registration errors. We first propose a fully convolutional network (FCN) with residual connections to learn the relationship between an image patch pair (i.e., patches from the target subject and the atlas) and the related label confidence patch. With the obtained label confidence patch, we can identify potential errors in the warped atlas labels and correct them. Then, we use two label fusion methods to fuse the corrected atlas labels. The proposed methods are validated on a publicly available dataset for hippocampus segmentation. Experimental results demonstrate that our proposed methods outperform state-of-the-art segmentation methods.
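The correction-then-fusion step can be illustrated with a tiny confidence-weighted vote (a sketch with made-up labels and confidences; in the paper the confidence patches come from the trained FCN):

```python
import numpy as np

# Warped labels from 3 atlases over 4 voxels (binary: hippocampus vs. not).
atlas_labels = np.array([[1, 0, 1, 1],
                         [1, 0, 0, 1],
                         [0, 0, 1, 1]])
# Hypothetical per-voxel confidences (higher = warped label more reliable).
confidence = np.array([[0.9, 0.8, 0.6, 0.9],
                       [0.7, 0.9, 0.2, 0.8],
                       [0.3, 0.7, 0.9, 0.9]])

# Confidence-weighted label fusion: each atlas votes in proportion to
# its estimated reliability at that voxel.
weights = confidence / confidence.sum(axis=0, keepdims=True)
fused = ((weights * atlas_labels).sum(axis=0) >= 0.5).astype(int)
print(fused)   # [1 0 1 1]
```

Down-weighting the low-confidence vote at voxel 2 flips the plain majority outcome there, which is the point of estimating registration reliability before fusing.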
View details for DOI 10.1007/s12021-019-09448-5
View details for PubMedID 31898145
Adolescent alcohol use disrupts functional neurodevelopment in sensation seeking girls.
Exogenous causes, such as alcohol use, and endogenous factors, such as temperament and sex, can modulate developmental trajectories of adolescent neurofunctional maturation. We examined how these factors affect sexual dimorphism in brain functional networks in youth drinking below diagnostic threshold for alcohol use disorder (AUD). Based on the 3-year, annually acquired, longitudinal resting-state functional magnetic resonance imaging (MRI) data of 526 adolescents (12-21 years at baseline) from the National Consortium on Alcohol and Neurodevelopment in Adolescence (NCANDA) cohort, developmental trajectories of 23 intrinsic functional networks (IFNs) were analyzed for (1) sexual dimorphism in 259 participants who were no-to-low drinkers throughout this period; (2) sex-alcohol interactions in two age- and sex-matched NCANDA subgroups (N = 76 each), half no-to-low, and half moderate-to-heavy drinkers; and (3) moderating effects of gender-specific alcohol dose effects and a multifactorial impulsivity measure on IFN connectivity in all NCANDA participants. Results showed that sex differences in no-to-low drinkers diminished with age in the inferior-occipital network, yet girls had weaker within-network connectivity than boys in six other networks. Effects of adolescent alcohol use were more pronounced in girls than boys in three IFNs. In particular, girls showed greater within-network connectivity in two motor networks with more alcohol consumption, and these effects were mediated by sensation-seeking only in girls. Our results implied that drinking might attenuate the naturally diminishing sexual differences by disrupting the maturation of network efficiency more severely in girls. The sex-alcohol-dose effect might explain why women are at higher risk of alcohol-related health and psychosocial consequences than men.
View details for DOI 10.1111/adb.12914
View details for PubMedID 32428984
- Population-guided large margin classifier for high-dimension low-sample-size problems PATTERN RECOGNITION 2020; 97
Confounder-Aware Visualization of ConvNets.
Machine learning in medical imaging. MLMI (Workshop)
2019; 11861: 328–36
With recent advances in deep learning, neuroimaging studies increasingly rely on convolutional networks (ConvNets) to predict diagnosis based on MR images. To gain a better understanding of how a disease impacts the brain, such studies visualize the saliency maps of the ConvNet, highlighting the voxels within the brain that contribute most to the prediction. However, these saliency maps are generally confounded, i.e., some salient regions are more predictive of confounding variables (such as age) than of the diagnosis. To avoid such misinterpretation, we propose in this paper an approach that aims to visualize confounder-free saliency maps that highlight only the voxels predictive of the diagnosis. The approach incorporates univariate statistical tests to identify confounding effects within the intermediate features learned by the ConvNet. The influence of the subset of confounded features is then removed by a novel partial back-propagation procedure. We use this two-step approach to visualize confounder-free saliency maps extracted from synthetic and two real datasets. These experiments reveal the potential of our visualization for producing unbiased model interpretation.
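The first step, a univariate screen for confounded intermediate features, can be sketched as follows (a toy numpy example with a simulated age confounder; the correlation test, threshold, and feature construction are illustrative assumptions, not the paper's exact statistics):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(10, 80, n)                 # the confounding variable

# Hypothetical intermediate features: the first tracks age, the second does not.
feats = np.column_stack([0.5 * age + rng.normal(0, 1, n),
                         rng.normal(0, 1, n)])

# Univariate screen: flag features whose correlation with the confounder
# exceeds a threshold; their contribution would then be excluded from the
# saliency back-propagation.
r = np.array([np.corrcoef(feats[:, j], age)[0, 1] for j in range(feats.shape[1])])
confounded = np.abs(r) > 0.3
saliency_mask = ~confounded                  # keep only confounder-free features
```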
View details for DOI 10.1007/978-3-030-32692-0_38
View details for PubMedID 32549051
High-Resolution Encoder-Decoder Networks for Low-Contrast Medical Image Segmentation.
IEEE TRANSACTIONS ON IMAGE PROCESSING
Automatic image segmentation is an essential step for many medical image analysis applications, including computer-aided radiation therapy, disease diagnosis, and treatment effect evaluation. One of the major challenges for this task is the blurry nature of medical images (e.g., CT, MR, and microscopic images), which can often result in low contrast and vanishing boundaries. With the recent advances in convolutional neural networks, vast improvements have been made in image segmentation, mainly based on skip-connection-linked encoder-decoder deep architectures. However, in many applications (with adjacent targets in blurry images), these models often fail to accurately locate complex boundaries and properly segment tiny isolated parts. In this paper, we aim to provide a method for blurry medical image segmentation and argue that skip connections alone are not enough to accurately locate indistinct boundaries. Accordingly, we propose a novel high-resolution multi-scale encoder-decoder network (HMEDN), in which multi-scale dense connections are introduced for the encoder-decoder structure to finely exploit comprehensive semantic information. Besides skip connections, extra deeply supervised high-resolution pathways (composed of densely connected dilated convolutions) are integrated to collect high-resolution semantic information for accurate boundary localization. These pathways are paired with a difficulty-guided cross-entropy loss function and a contour regression task to enhance the quality of boundary detection. Extensive experiments on a pelvic CT image dataset, a multi-modal brain tumor dataset, and a cell segmentation dataset show the effectiveness of our method for 2D/3D semantic segmentation and 2D instance segmentation, respectively. Our experimental results also show that, besides increasing the network complexity, raising the resolution of semantic feature maps can largely affect the overall model performance. For different tasks, finding a balance between these two factors can further improve the performance of the corresponding network.
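The appeal of the densely connected dilated convolutions mentioned above can be made concrete with a small receptive-field calculation (standard arithmetic for stride-1 convolution stacks, not code from the paper):

```python
def receptive_field(kernel_size, dilations):
    """Receptive field of a stack of stride-1 convolutions with the given
    per-layer dilation rates."""
    rf = 1
    for d in dilations:
        rf += (kernel_size - 1) * d
    return rf

# Four 3x3 layers: plain stack vs. a dilated stack with the same parameter count.
plain = receptive_field(3, [1, 1, 1, 1])     # 9 pixels
dilated = receptive_field(3, [1, 2, 4, 8])   # 31 pixels
```

With identical depth and parameters, dilation more than triples the context each output pixel sees, which is why it helps localize indistinct boundaries without downsampling the feature maps.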
View details for DOI 10.1109/TIP.2019.2919937
View details for PubMedID 31226074
Infant Brain Development Prediction With Latent Partial Multi-View Representation Learning
IEEE TRANSACTIONS ON MEDICAL IMAGING
2019; 38 (4): 909–18
The early postnatal period witnesses rapid and dynamic brain development. However, the relationship between brain anatomical structure and cognitive ability is still unknown. Currently, there is no explicit model to characterize this relationship in the literature. In this paper, we explore this relationship by investigating the mapping between morphological features of the cerebral cortex and cognitive scores. To this end, we introduce a multi-view multi-task learning approach to intuitively explore complementary information from different time-points and handle the missing data issue in longitudinal studies simultaneously. Accordingly, we establish a novel model, latent partial multi-view representation learning. Our approach regards data from different time-points as different views and constructs a latent representation to capture the complementary information from incomplete time-points. The latent representation explores the complementarity across different time-points and improves the accuracy of prediction. The minimization problem is solved by the alternating direction method of multipliers. Experimental results on both synthetic and real data validate the effectiveness of our proposed algorithm.
View details for DOI 10.1109/TMI.2018.2874964
View details for Web of Science ID 000463608000004
View details for PubMedID 30307859
View details for PubMedCentralID PMC6450718
Novel Machine Learning Identifies Brain Patterns Distinguishing Diagnostic Membership of Human Immunodeficiency Virus, Alcoholism, and Their Comorbidity of Individuals.
BIOLOGICAL PSYCHIATRY: COGNITIVE NEUROSCIENCE AND NEUROIMAGING
The incidence of alcohol use disorder (AUD) in human immunodeficiency virus (HIV) infection is twice that of the rest of the population. This study documents complex, radiologically identified neuroanatomical effects of AUD+HIV comorbidity by identifying structural brain systems that predicted diagnosis on an individual basis. Applying novel machine learning analysis to 549 participants (199 control subjects, 222 with AUD, 68 with HIV, 60 with AUD+HIV), 298 magnetic resonance imaging brain measurements were automatically reduced to small subsets per group. The significance of each diagnostic pattern was inferred from its accuracy in predicting diagnosis and performance on six cognitive measures. While all three diagnostic patterns predicted the learning and memory score, the AUD+HIV pattern was the largest and had the highest prediction accuracy (78.1%). Providing a roadmap for analyzing large, multimodal datasets, the machine learning analysis revealed imaging phenotypes that predicted the diagnostic membership of magnetic resonance imaging scans of individuals with AUD, HIV, and their comorbidity.
View details for DOI 10.1016/j.bpsc.2019.02.003
View details for PubMedID 30982583
Semi-Supervised Discriminative Classification Robust to Sample-Outliers and Feature-Noises
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
2019; 41 (2): 515–22
Discriminative methods commonly produce models with relatively good generalization abilities. However, this advantage is challenged in real-world applications (e.g., medical image analysis problems), in which there often exist outlier data points (sample-outliers) and noises in the predictor values (feature-noises). Methods robust to both types of deviation are somewhat overlooked in the literature. We further argue that denoising can be more effective if we learn the model using all the available labeled and unlabeled samples, as the intrinsic geometry of the sample manifold can be better constructed using more data points. In this paper, we propose a semi-supervised robust discriminative classification method based on the least-squares formulation of linear discriminant analysis to detect sample-outliers and feature-noises simultaneously, using both labeled training and unlabeled testing data. We conduct several experiments on a synthetic dataset, some benchmark semi-supervised learning datasets, and two brain neurodegenerative disease diagnosis datasets (for Parkinson's and Alzheimer's diseases). Specifically for the application of neurodegenerative disease diagnosis, incorporating robust machine learning methods can be of great benefit, due to the noisy nature of neuroimaging data. Our results show that our method outperforms the baseline and several state-of-the-art methods, in terms of both accuracy and the area under the ROC curve.
View details for DOI 10.1109/TPAMI.2018.2794470
View details for Web of Science ID 000456150600018
View details for PubMedID 29994560
View details for PubMedCentralID PMC6050136
Multi-task prediction of infant cognitive scores from longitudinal incomplete neuroimaging data
NEUROIMAGE
2019; 185: 783–92
The early postnatal brain undergoes a stunning period of development. Over the past few years, research on dynamic infant brain development has received increased attention, highlighting how important the early stages of a child's life are for brain development. To precisely chart early brain developmental trajectories, longitudinal studies with data acquired over a long-enough period of infants' early life are essential. However, in practice, missing data from one or more time points during data gathering is often inevitable. This leads to incomplete sets of longitudinal data, which pose a major challenge for such studies. In this paper, the prediction of multiple future cognitive scores from incomplete longitudinal imaging data is modeled as a multi-task machine learning framework. To efficiently learn this model, we account for the selection of informative features (i.e., neuroimaging morphometric measurements for different time points), while preserving the structural information and the interrelation between these multiple cognitive scores. Several experiments were conducted on a carefully acquired in-house dataset, and the results affirm that we can predict cognitive scores measured at four years of age using the imaging data of earlier time points, as early as 24 months of age, with reasonable performance (i.e., a root mean square error of 0.18).
View details for DOI 10.1016/j.neuroimage.2018.04.052
View details for Web of Science ID 000451628200066
View details for PubMedID 29709627
View details for PubMedCentralID PMC6204112
Difficulty-Aware Attention Network with Confidence Learning for Medical Image Segmentation
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2019: 1085–92
View details for Web of Science ID 000485292601012
Logistic Regression Confined by Cardinality-Constrained Sample and Feature Selection.
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE
Many vision-based applications rely on logistic regression for embedding classification within a probabilistic context, such as recognition in images and videos or identifying disease-specific image phenotypes from neuroimages. Logistic regression, however, often performs poorly when trained on data that is noisy, has irrelevant features, or when the samples are distributed across the classes in an imbalanced setting; a common occurrence in visual recognition tasks. To deal with these issues, researchers generally rely on ad-hoc regularization techniques or model only a subset of them. We instead propose a mathematically sound logistic regression model that selects a subset of (relevant) features and an (informative and balanced) set of samples during the training process. The model does so by applying cardinality constraints (via ℓ0-'norm' sparsity) on the features and samples. ℓ0 defines sparsity in mathematical settings but in practice has mostly been approximated (e.g., via ℓ1 or its variations) for computational simplicity. We prove that a local minimum to the non-convex optimization problems induced by cardinality constraints can be computed by combining block coordinate descent with penalty decomposition. On synthetic, image recognition, and neuroimaging datasets, we furthermore show that the accuracy of the method is higher than that of alternative methods and classifiers commonly used in the literature.
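The cardinality constraint amounts to a Euclidean projection onto the set of k-sparse vectors, which has the closed form "keep the k largest-magnitude entries" (a minimal sketch of that projection only, with a hypothetical function name, not the full penalty-decomposition solver):

```python
import numpy as np

def project_k_sparse(v, k):
    """Euclidean projection onto {x : ||x||_0 <= k}: zero all but the k
    largest-magnitude entries."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-k:]   # indices of the k largest |entries|
    out[keep] = v[keep]
    return out

w = np.array([0.1, -2.0, 0.05, 1.5, -0.3])
print(project_k_sparse(w, 2))   # [ 0.  -2.   0.   1.5  0. ]
```

Inside a block coordinate descent loop, such a projection is applied alternately to the feature weights and to the sample-selection variables.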
View details for DOI 10.1109/TPAMI.2019.2901688
View details for PubMedID 30835210
- Variational Autoencoder with Truncated Mixture of Gaussians for Functional Connectivity Analysis SPRINGER INTERNATIONAL PUBLISHING AG. 2019: 867–79
- Action-Agnostic Human Pose Forecasting IEEE. 2019: 1423–32
UNSUPERVISED FEATURE RANKING AND SELECTION BASED ON AUTOENCODERS
IEEE. 2019: 3172–76
View details for Web of Science ID 000482554003079
- AVID: Adversarial Visual Irregularity Detection SPRINGER INTERNATIONAL PUBLISHING AG. 2019: 488–505
- Chained regularization for identifying brain patterns specific to HIV infection NEUROIMAGE 2018; 183: 425–37
Chained regularization for identifying brain patterns specific to HIV infection.
Human Immunodeficiency Virus (HIV) infection continues to have major adverse public health and clinical consequences despite the effectiveness of combination Antiretroviral Therapy (cART) in reducing HIV viral load and improving immune function. As successfully treated individuals with HIV infection age, their cognition declines faster than reported for normal aging. This phenomenon underlines the importance of improving long-term care, which requires better understanding of the impact of HIV on the brain. In this paper, automated identification of patients and brain regions affected by HIV infection are modeled as a classification problem, whose solution is determined in two steps within our proposed Chained-Regularization framework. The first step focuses on selecting the HIV pattern (i.e., the most informative constellation of brain region measurements for distinguishing HIV infected subjects from healthy controls) by constraining the search for the optimal parameter setting of the classifier via group sparsity (ℓ2,1-norm). The second step improves classification accuracy by constraining the parameterization with respect to the selected measurements and the Euclidean regularization (ℓ2-norm). When applied to the cortical and subcortical structural Magnetic Resonance Images (MRI) measurements of 65 controls and 65 HIV infected individuals, this approach is more accurate in distinguishing the two cohorts than more common models. Finally, the brain regions of the identified HIV pattern concur with the HIV literature that uses traditional group analysis models.
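The first-step regularizer can be illustrated with the ℓ2,1-norm and its proximal operator, which shrinks whole rows (i.e., whole groups of brain-region measurements) toward zero (a toy numpy sketch with made-up numbers, not the paper's full chained-regularization classifier):

```python
import numpy as np

def l21_norm(W):
    """Sum of row-wise Euclidean norms; zeroing a row drops a whole measure."""
    return np.linalg.norm(W, axis=1).sum()

def l21_prox(W, t):
    """Proximal operator of t * l2,1-norm: shrinks each row's norm by t,
    zeroing rows whose norm is below t (group sparsity)."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1 - t / np.maximum(norms, 1e-12), 0.0)
    return W * scale

W = np.array([[3.0, 4.0],    # informative region: row norm 5
              [0.3, 0.4]])   # weak region: row norm 0.5
P = l21_prox(W, 1.0)         # weak row is zeroed entirely
```

This all-or-nothing behavior per row is what selects a constellation of regions rather than scattered individual coefficients.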
View details for PubMedID 30138676
Exploring diagnosis and imaging biomarkers of Parkinson's disease via iterative canonical correlation analysis based feature selection
COMPUTERIZED MEDICAL IMAGING AND GRAPHICS
2018; 67: 21–29
Parkinson's disease (PD) is a neurodegenerative disorder that progressively hampers brain functions and leads to various movement and non-motor symptoms. However, early-stage PD diagnosis is difficult to attain based on the subjective judgment of physicians in clinical routines. Therefore, automatic and accurate diagnosis of PD is in high demand, so that the corresponding treatment can be implemented more appropriately. In this paper, we focus on finding the most discriminative features from different brain regions in PD through T1-weighted MR images, which can help the subsequent PD diagnosis. Specifically, we propose a novel iterative canonical correlation analysis (ICCA) feature selection method, aiming to exploit MR images in a more comprehensive manner and fuse features of different types into a common space. Stated succinctly, we first extract feature vectors from the gray matter and the white matter tissues separately, representing two different anatomical feature spaces of the subject's brain. The ICCA feature selection method then iteratively finds the optimal feature subset from the two sets of features, which have inherently high correlation with each other. In our experiments, we thoroughly investigated the optimal feature set extracted by the ICCA method, and we demonstrate that using the proposed feature selection method further improves PD diagnosis performance, outperforming many state-of-the-art methods.
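One CCA step at the heart of such a method can be sketched in numpy via the SVD of the whitened cross-covariance (a minimal illustration with simulated "gray matter" and "white matter" features; the iteration and feature-subset selection of ICCA are omitted):

```python
import numpy as np

def leading_canonical_corr(X, Y, eps=1e-6):
    """Leading canonical correlation: top singular value of the whitened
    cross-covariance Sxx^{-1/2} Sxy Syy^{-1/2} (Cholesky whitening)."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    Lx = np.linalg.cholesky(Sxx)
    Ly = np.linalg.cholesky(Syy)
    K = np.linalg.solve(Lx, Sxy) @ np.linalg.inv(Ly).T
    return np.linalg.svd(K, compute_uv=False)[0]

rng = np.random.default_rng(2)
n = 100
gm = rng.standard_normal((n, 5))                          # "gray matter" features
wm = 0.8 * gm[:, :3] + 0.2 * rng.standard_normal((n, 3))  # correlated "white matter"
rho = leading_canonical_corr(gm, wm)                      # close to 1
```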
View details for DOI 10.1016/j.compmedimag.2018.04.002
View details for Web of Science ID 000447358800003
View details for PubMedID 29702348
- Adversarially Learned One-Class Classifier for Novelty Detection IEEE. 2018: 3379–88
- Multi-label Transduction for Identifying Disease Comorbidity Patterns SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 575–83
- End-To-End Alzheimer's Disease Diagnosis and Biomarker Identification SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 337–45
INFANT BRAIN DEVELOPMENT PREDICTION WITH LATENT PARTIAL MULTI-VIEW REPRESENTATION LEARNING.
Proceedings. IEEE International Symposium on Biomedical Imaging
2018; 2018: 1048–51
The early postnatal period witnesses rapid and dynamic brain development. Understanding the cognitive development patterns can help identify various disorders at early ages of life and is essential for the health and well-being of children. This inspires us to investigate the relation between cognitive ability and the cerebral cortex by exploiting brain images in a longitudinal study. Specifically, we aim to predict the infant brain development status based on the morphological features of the cerebral cortex. For this goal, we introduce a multi-view multi-task learning approach to dexterously explore complementary information from different time points and handle the missing data simultaneously. Specifically, we establish a novel model termed as Latent Partial Multi-view Representation Learning. The approach regards data of different time points as different views, and constructs a latent representation to capture the complementary underlying information from different and even incomplete time points. It uncovers the latent representation that can be jointly used to learn the prediction model. This formulation elegantly explores the complementarity, effectively reduces the redundancy of different views, and improves the accuracy of prediction. The minimization problem is solved by the Alternating Direction Method of Multipliers (ADMM). Experimental results on real data validate the proposed method.
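ADMM's alternating structure (a smooth subproblem, a proximal step, a dual update) can be illustrated on the classic lasso problem (a generic textbook sketch, not the paper's multi-view objective):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (element-wise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam, rho=1.0, iters=300):
    """ADMM for min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z."""
    d = A.shape[1]
    x, z, u = np.zeros(d), np.zeros(d), np.zeros(d)
    M = np.linalg.inv(A.T @ A + rho * np.eye(d))
    Atb = A.T @ b
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))         # x-update: ridge-like solve
        z = soft_threshold(x + u, lam / rho)  # z-update: l1 prox
        u = u + x - z                         # scaled dual-variable update
    return z

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[[1, 4]] = [2.0, -3.0]
x_hat = admm_lasso(A, A @ x_true, lam=0.1)    # recovers the sparse signal
```

In the paper's setting the same pattern applies, with the latent-representation term replacing the least-squares block and the appropriate proximal operators replacing soft thresholding.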
View details for PubMedID 30464798
View details for PubMedCentralID PMC6242279
Landmark-based deep multi-instance learning for brain disease diagnosis
MEDICAL IMAGE ANALYSIS
2018; 43: 157–68
In conventional Magnetic Resonance (MR) image based methods, two stages are often involved to capture brain structural information for disease diagnosis, i.e., 1) manually partitioning each MR image into a number of regions-of-interest (ROIs), and 2) extracting pre-defined features from each ROI for diagnosis with a certain classifier. However, these pre-defined features often limit the performance of the diagnosis, due to challenges in 1) defining the ROIs and 2) extracting effective disease-related features. In this paper, we propose a landmark-based deep multi-instance learning (LDMIL) framework for brain disease diagnosis. Specifically, we first adopt a data-driven learning approach to discover disease-related anatomical landmarks in the brain MR images, along with their nearby image patches. Then, our LDMIL framework learns an end-to-end MR image classifier for capturing both the local structural information conveyed by image patches located by landmarks and the global structural information derived from all detected landmarks. We have evaluated our proposed framework on 1526 subjects from three public datasets (i.e., ADNI-1, ADNI-2, and MIRIAD), and the experimental results show that our framework can achieve superior performance over state-of-the-art approaches.
View details for DOI 10.1016/j.media.2017.10.005
View details for Web of Science ID 000418627400012
View details for PubMedID 29107865
View details for PubMedCentralID PMC6203325
Multi-Layer Multi-View Classification for Alzheimer's Disease Diagnosis
ASSOC ADVANCEMENT ARTIFICIAL INTELLIGENCE. 2018: 4406–13
In this paper, we propose a novel multi-view learning method for Alzheimer's Disease (AD) diagnosis, using neuroimaging and genetics data. Generally, there are several major challenges associated with traditional classification methods on multi-source imaging and genetics data. First, the correlation between the extracted imaging features and class labels is generally complex, which often makes traditional linear models ineffective. Second, medical data may be collected from different sources (i.e., multiple modalities of neuroimaging data, clinical scores, or genetics measurements); therefore, how to effectively exploit the complementarity among multiple views is of great importance. In this paper, we propose a Multi-Layer Multi-View Classification (ML-MVC) approach, which regards the multi-view input as the first layer and constructs a latent representation to explore the complex correlation between the features and class labels. This captures the high-order complementarity among different views, as we exploit the underlying information with a low-rank tensor regularization. Intrinsically, our formulation elegantly explores the nonlinear correlation together with the complementarity among different views, and thus improves the accuracy of classification. Finally, the minimization problem is solved by the Alternating Direction Method of Multipliers (ADMM). Experimental results on Alzheimer's Disease Neuroimaging Initiative (ADNI) datasets validate the effectiveness of our proposed method.
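Low-rank regularization of this kind typically relies on the nuclear norm, whose proximal operator is singular value thresholding (a deterministic toy sketch of that operator, not the paper's tensor solver):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox of tau * (nuclear norm).
    Shrinks every singular value by tau, zeroing the small ones."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

M = np.diag([5.0, 3.0, 0.2, 0.0])    # singular values: 5, 3, 0.2, 0
L = svt(M, tau=1.0)                  # singular values become 4, 2, 0, 0
print(np.linalg.matrix_rank(L))      # 2
```

Applied mode-by-mode to an unfolded tensor inside an ADMM loop, this is the standard mechanism by which low-rank structure across views is enforced.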
View details for Web of Science ID 000485488904061
View details for PubMedID 30416868
View details for PubMedCentralID PMC6223635
- Fine-Grained Segmentation Using Hierarchical Dilated Neural Networks SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 488–96
- Predictive Modeling of Longitudinal Data for Alzheimer's Disease Diagnosis Using RNNs SPRINGER INTERNATIONAL PUBLISHING AG. 2018: 112–19
Structured Prediction with Short/Long-Range Dependencies for Human Activity Recognition from Depth Skeleton Data
IEEE. 2017: 560–67
View details for Web of Science ID 000426978200079