Daniel Rubin, Postdoctoral Faculty Sponsor
- Relation constraint self-attention for image captioning. NEUROCOMPUTING 2022; 501: 778-789
In vivo non-invasive confocal fluorescence imaging beyond 1,700 nm using superconducting nanowire single-photon detectors.
Light scattering by biological tissues sets a limit to the penetration depth of high-resolution optical microscopy imaging of live mammals in vivo. An effective approach to reduce light scattering and increase imaging depth is to extend the excitation and emission wavelengths to the second near-infrared window (NIR-II) at >1,000 nm, also called the short-wavelength infrared window. Here we show biocompatible core-shell lead sulfide/cadmium sulfide quantum dots emitting at ~1,880 nm and superconducting nanowire single-photon detectors for single-photon detection up to 2,000 nm, enabling a one-photon excitation fluorescence imaging window in the 1,700-2,000 nm (NIR-IIc) range with 1,650 nm excitation, the longest one-photon excitation and emission for in vivo mouse imaging so far. Confocal fluorescence imaging in NIR-IIc reached an imaging depth of ~1,100 μm through an intact mouse head, and enabled non-invasive cellular-resolution imaging in the inguinal lymph nodes of mice without any surgery. We achieve in vivo molecular imaging of high endothelial venules with diameters as small as ~6.6 μm, as well as CD169+ macrophages and CD3+ T cells in the lymph nodes, opening the possibility of non-invasive intravital imaging of immune trafficking in lymph nodes at the single-cell/vessel level longitudinally.
View details for DOI 10.1038/s41565-022-01130-3
View details for PubMedID 35606441
High-precision tumor resection down to few-cell level guided by NIR-IIb molecular fluorescence imaging.
Proceedings of the National Academy of Sciences of the United States of America
2022; 119 (15): e2123111119
Significance: Surgical removal of tumors has been performed to combat cancer for over a century by surgeons relying on visual inspection and experience to identify margins between malignant and healthy tissues. Herein, we present a rare-earth down-conversion nanoparticle-anti-CD105 conjugate for cancer targeting and a handheld imager capable of concurrent photographic imaging and fluorescence/luminescence imaging. An unprecedented tumor-to-muscle ratio was achieved by near-infrared-IIb (NIR-IIb, 1,500 to 1,700 nm) imaging during surgery, 100 times higher than previous organic dyes for unambiguous determination of tumor margin. The sensitivity/biocompatibility/safety of the probes and instrumentation developed here open a paradigm of imaging-guided surgery at the single-cell level, meeting all major requirements for clinical translation to combat cancer and save human lives.
View details for DOI 10.1073/pnas.2123111119
View details for PubMedID 35380898
- A cascaded nested network for 3T brain MR image segmentation guided by 7T labeling. PATTERN RECOGNITION 2022; 124
Handling data heterogeneity with generative replay in collaborative learning for medical imaging.
Medical image analysis
2022; 78: 102424
Collaborative learning, which enables collaborative and decentralized training of deep neural networks at multiple institutions in a privacy-preserving manner, is rapidly emerging as a valuable technique in healthcare applications. However, its distributed nature often leads to significant heterogeneity in data distributions across institutions. In this paper, we present a novel generative replay strategy to address the challenge of data heterogeneity in collaborative learning methods. Different from traditional methods that directly aggregate the model parameters, we leverage generative adversarial learning to aggregate the knowledge from all the local institutions. Specifically, instead of directly training a model for task performance, we develop a novel dual model architecture: a primary model learns the desired task, and an auxiliary "generative replay model" allows aggregating knowledge from the heterogeneous clients. The auxiliary model is then broadcast to the central server to regulate the training of the primary model with an unbiased target distribution. Experimental results demonstrate the capability of the proposed method in handling heterogeneous data across institutions. On highly heterogeneous data partitions, our model achieves a 4.88% improvement in prediction accuracy on a diabetic retinopathy classification dataset, and a 49.8% reduction in mean absolute error on a bone age prediction dataset, respectively, compared to state-of-the-art collaborative learning methods.
View details for DOI 10.1016/j.media.2022.102424
View details for PubMedID 35390737
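The generative-replay idea above can be illustrated with a deliberately simplified toy, not the paper's implementation: each client shares only a small generative model of its local data (here a per-class Gaussian stands in for the adversarially trained replay model), and the server trains the primary model on samples replayed from all clients, rebalancing the heterogeneous partitions. All names, the nearest-centroid "primary model", and the sampling scheme are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_generator(X, y):
    """Per-class mean/std -- a toy stand-in for the auxiliary replay model."""
    return {c: (X[y == c].mean(0), X[y == c].std(0) + 1e-6)
            for c in np.unique(y)}

def replay(gen, n_per_class):
    """Draw an equal number of synthetic samples per class from a generator."""
    Xs, ys = [], []
    for c, (mu, sd) in gen.items():
        Xs.append(rng.normal(mu, sd, size=(n_per_class, mu.size)))
        ys.append(np.full(n_per_class, c))
    return np.vstack(Xs), np.concatenate(ys)

# Two clients with heterogeneous (non-IID) label distributions.
client_data = []
for skew in (0, 1):  # client 0 holds mostly class 0, client 1 mostly class 1
    n0, n1 = (90, 10) if skew == 0 else (10, 90)
    X = np.vstack([rng.normal(-2, 1, (n0, 2)), rng.normal(+2, 1, (n1, 2))])
    y = np.concatenate([np.zeros(n0, int), np.ones(n1, int)])
    client_data.append((X, y))

# Server side: pool replayed samples from every client's generator, then fit
# a trivial nearest-centroid "primary model" on the unbiased mixture.
Xr, yr = zip(*(replay(fit_generator(X, y), 100) for X, y in client_data))
Xr, yr = np.vstack(Xr), np.concatenate(yr)
centroids = {c: Xr[yr == c].mean(0) for c in (0, 1)}

def predict(x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

print(predict(np.array([-2.0, -2.0])), predict(np.array([2.0, 2.0])))  # -> 0 1
```

The point of the sketch is only the flow of information: raw data never leaves a client, yet the server sees an approximately balanced training distribution instead of averaging biased model weights.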
Breast Tumor Segmentation in DCE-MRI With Tumor Sensitive Synthesis
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Segmenting breast tumors from dynamic contrast-enhanced magnetic resonance (DCE-MR) images is a critical step for early detection and diagnosis of breast cancer. However, variable shapes and sizes of breast tumors, as well as inhomogeneous background, make it challenging to accurately segment tumors in DCE-MR images. Therefore, in this article, we propose a novel tumor-sensitive synthesis module and demonstrate its use when integrated with tumor segmentation. To suppress false-positive segmentation with similar contrast enhancement characteristics to true breast tumors, our tumor-sensitive synthesis module can feed back the differential loss of the true and false breast tumors. Thus, by following the segmentation predictions with the tumor-sensitive synthesis module, false breast tumors with similar contrast enhancement characteristics to the true ones are effectively reduced in the learned segmentation model. Moreover, the synthesis module also helps improve boundary accuracy, since inaccurate predictions near the boundary incur higher loss. For the evaluation, we build a very large-scale breast DCE-MR image dataset with 422 subjects from different patients, and conduct comprehensive experiments and comparisons with other algorithms to justify the effectiveness, adaptability, and robustness of our proposed method.
View details for DOI 10.1109/TNNLS.2021.3129781
View details for Web of Science ID 000732239600001
View details for PubMedID 34874872
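The feedback idea described above, penalizing false-positive tumor regions and boundary errors more heavily than ordinary misclassification, can be sketched as a weighted per-pixel loss. This is a rough illustration under assumed weights, not the paper's synthesis-module formulation; the function name and the crude 4-neighbour boundary map are invented for the example.

```python
import numpy as np

def tumor_sensitive_loss(pred, true, fp_weight=3.0, boundary_weight=2.0):
    """Per-pixel squared error, up-weighted on false positives and near the
    true boundary (weights are illustrative, not from the paper)."""
    pred, true = pred.astype(float), true.astype(float)
    base = (pred - true) ** 2                      # per-pixel error
    fp = (pred > 0.5) & (true < 0.5)               # false-positive mask
    # crude boundary map: pixels whose 4-neighbourhood changes label
    pad = np.pad(true, 1, mode="edge")
    boundary = (np.abs(pad[1:-1, 2:] - true) + np.abs(pad[1:-1, :-2] - true)
                + np.abs(pad[2:, 1:-1] - true) + np.abs(pad[:-2, 1:-1] - true)) > 0
    w = 1.0 + fp_weight * fp + boundary_weight * boundary
    return float((w * base).mean())

true = np.zeros((8, 8)); true[2:5, 2:5] = 1        # toy ground-truth tumor mask
clean = true.copy()
noisy = true.copy(); noisy[6, 6] = 1               # one enhancing false positive
print(tumor_sensitive_loss(clean, true), tumor_sensitive_loss(noisy, true))
```

A lone bright pixel far from the tumor is charged four times the base error here, which mimics how the differential feedback pushes the segmentation network away from enhancement-mimicking false tumors.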
In vivo NIR-II structured-illumination light-sheet microscopy.
Proceedings of the National Academy of Sciences of the United States of America
2021; 118 (6)
Noninvasive optical imaging with deep tissue penetration depth and high spatiotemporal resolution is important for longitudinal single-cell-level studies of biology in live mammals, but has been challenging due to light scattering. Here, we developed near-infrared II (NIR-II) (1,000 to 1,700 nm) structured-illumination light-sheet microscopy (NIR-II SIM) with ultralong excitation and emission wavelengths up to 1,540 and 1,700 nm, respectively, suppressing light scattering to afford large volumetric three-dimensional (3D) imaging of tissues with deep-axial penetration depths. Integrating structured illumination into NIR-II light-sheet microscopy further diminished background and improved spatial resolution by approximately twofold. In vivo oblique NIR-II SIM was performed noninvasively for 3D volumetric multiplexed molecular imaging of the CT26 tumor microenvironment in mice, longitudinally mapping out CD4, CD8, and OX40 at the single-cell level in response to immunotherapy by cytosine-phosphate-guanine (CpG), a Toll-like receptor 9 (TLR-9) agonist, combined with OX40 antibody treatment. NIR-II SIM affords an additional tool for noninvasive volumetric molecular imaging of immune cells in live mammals.
View details for DOI 10.1073/pnas.2023888118
View details for PubMedID 33526701
Synthesized 7T MRI from 3T MRI via deep learning in spatial and wavelet domains
MEDICAL IMAGE ANALYSIS
2020; 62: 101663
Ultra-high field 7T MRI scanners, while producing images with exceptional anatomical details, are cost prohibitive and hence highly inaccessible. In this paper, we introduce a novel deep learning network that fuses complementary information from spatial and wavelet domains to synthesize 7T T1-weighted images from their 3T counterparts. Our deep learning network leverages wavelet transformation to facilitate effective multi-scale reconstruction, taking into account both low-frequency tissue contrast and high-frequency anatomical details. Our network utilizes a novel wavelet-based affine transformation (WAT) layer, which modulates feature maps from the spatial domain with information from the wavelet domain. Extensive experimental results demonstrate the capability of the proposed method in synthesizing high-quality 7T images with better tissue contrast and greater details, outperforming state-of-the-art methods.
View details for DOI 10.1016/j.media.2020.101663
View details for Web of Science ID 000534353000006
View details for PubMedID 32120269
View details for PubMedCentralID PMC7237331
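The wavelet-based affine transformation (WAT) layer described above can be caricatured as FiLM-style feature modulation: per-channel scale and shift parameters are derived from wavelet-domain statistics of the input and applied to spatial feature maps. The sketch below assumes a one-level Haar decomposition and random projection weights standing in for learned ones; it is an illustration of the modulation pattern, not the paper's network.

```python
import numpy as np

def haar_subbands(img):
    """One-level 2-D Haar transform -> (LL, LH, HL, HH) subbands."""
    a = (img[0::2, :] + img[1::2, :]) / 2      # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2      # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def wat_layer(feat, img, rng):
    """Modulate spatial feature maps (C, H, W) with wavelet-domain statistics.
    The random projections W_g, W_b stand in for learned weights."""
    stats = np.array([s.std() for s in haar_subbands(img)])  # 4 subband stats
    C = feat.shape[0]
    W_g, W_b = rng.normal(size=(C, 4)), rng.normal(size=(C, 4))
    gamma = 1.0 + W_g @ stats          # per-channel scale
    beta = W_b @ stats                 # per-channel shift
    return gamma[:, None, None] * feat + beta[:, None, None]

rng = np.random.default_rng(0)
img = rng.normal(size=(16, 16))        # stand-in for a 3T slice
feat = rng.normal(size=(8, 16, 16))    # stand-in feature maps (C, H, W)
out = wat_layer(feat, img, rng)
print(out.shape)                       # (8, 16, 16)
```

The low-frequency LL band carries tissue contrast while LH/HL/HH carry anatomical detail, which is why conditioning the spatial features on both helps the 3T-to-7T synthesis recover fine structure.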
Light-sheet microscopy in the near-infrared II window.
Non-invasive deep-tissue three-dimensional optical imaging of live mammals with high spatiotemporal resolution is challenging owing to light scattering. We developed near-infrared II (1,000-1,700 nm) light-sheet microscopy with excitation and emission of up to approximately 1,320 nm and 1,700 nm, respectively, for optical sectioning at a penetration depth of approximately 750 μm through live tissues without invasive surgery and at a depth of approximately 2 mm in glycerol-cleared brain tissues. Near-infrared II light-sheet microscopy in normal and oblique configurations enabled in vivo imaging of live mice through intact tissue, revealing abnormal blood flow and T-cell motion in tumor microcirculation and mapping out programmed-death ligand 1 and programmed cell death protein 1 in tumors with cellular resolution. Three-dimensional imaging through the intact mouse head resolved vascular channels between the skull and brain cortex, and allowed monitoring of recruitment of macrophages and microglia to the traumatic brain injury site.
View details for PubMedID 31086342
RGBD Salient Object Detection via Deep Fusion
IEEE TRANSACTIONS ON IMAGE PROCESSING
2017; 26 (5): 2274-2285
Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
View details for DOI 10.1109/TIP.2017.2682981
View details for Web of Science ID 000399396400015
View details for PubMedID 28320666
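The superpixel-based Laplacian propagation step described above can be sketched as solving a small linear system: initial CNN saliency scores are refined so that neighbouring superpixels receive similar saliency. The 4-node chain graph, the λ value, and the closed-form solve are illustrative assumptions for this toy, not the paper's exact formulation.

```python
import numpy as np

def laplacian_propagate(s0, W, lam=1.0):
    """Minimize ||s - s0||^2 + lam * s^T L s  ->  solve (I + lam*L) s = s0,
    where L is the graph Laplacian of the superpixel adjacency W."""
    L = np.diag(W.sum(1)) - W
    return np.linalg.solve(np.eye(len(s0)) + lam * L, s0)

# Chain of 4 superpixels; node 2 got a noisy low score from the "CNN" stage.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], float)
s0 = np.array([0.9, 0.8, 0.1, 0.8])
s = laplacian_propagate(s0, W)
print(np.round(s, 2))
```

The outlier score is pulled toward its neighbours while the overall ranking is preserved, which is the spatial-consistency effect the propagation framework contributes on top of the CNN prediction.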
- DeshadowNet: A Multi-context Embedding Deep Network for Shadow Removal. IEEE. 2017: 2308-2316