Bio


Dr. Xianjin Dai is a Clinical Assistant Professor and an American Board of Radiology-certified Medical Physicist in the Department of Radiation Oncology at Stanford University. Dr. Dai completed the CAMPEP-accredited Therapeutic Medical Physics residency at Emory University and earned his PhD in Biomedical Engineering at the University of Florida. His research focuses on developing and translating novel biomedical imaging techniques to enhance the diagnosis, management, and treatment of cancer. Dr. Dai's research interests encompass artificial intelligence in medicine, therapeutic physics, medical image analysis, multimodal imaging, biomedical optics, photoacoustic imaging, ultrasound imaging, and optical coherence tomography. He is a recipient of the DOD Prostate Cancer Research Program (PCRP) Early Investigator Research Award and the American Association of Physicists in Medicine (AAPM) Research Seed Funding Grant.

Academic Appointments


  • Clinical Assistant Professor, Department of Radiation Oncology, Stanford University

Administrative Appointments


  • Associate Editor, Medical Physics (2023 - Present)

Boards, Advisory Committees, Professional Organizations


  • Member, AAPM (American Association of Physicists in Medicine) (2018 - Present)
  • Member, SPIE (the international society for optics and photonics) (2017 - Present)
  • Member, Optica (formerly OSA) (2016 - Present)

Professional Education


  • DABR, The American Board of Radiology, Therapeutic Medical Physics
  • Residency, Emory University
  • Certificate, University of California, Los Angeles
  • PhD, University of Florida
  • MS, BS, University of Electronic Science and Technology of China

Current Research and Scholarly Interests


AI in Medicine
Biomedical Physics
Multimodal Imaging
Medical Devices
Biomedical Optics
Photoacoustic/Thermoacoustic Imaging
Optical Imaging (Microscopy, OCT, DOT, FMT)
Ultrasound Imaging


1. Artificial intelligence (AI) has great potential for improving the efficiency, precision, accuracy, and overall quality of radiation therapy for cancer patients. However, AI platforms are not yet widely adopted in clinical practice due to challenges associated with the clinical development and implementation of AI-based tools in radiation oncology. The goal of this project is to address these challenges with innovative concepts and strategic developments.

2. A multimodal imaging platform that combines the strengths of several different imaging modalities has the capability to characterize biological tissue more completely, offering improved diagnosis, management, and treatment of diseases. While multimodality images can be obtained by performing each individual modality separately without integrating them into a single platform, such a process is time-consuming, errors from the required complex image registration are difficult to avoid, and, more importantly, dynamic biological processes cannot be captured simultaneously. This project has demonstrated a multimodal imaging system that integrates three emerging biomedical imaging techniques: photoacoustic imaging (PAI), optical coherence tomography (OCT), and ultrasound imaging (USI). The system simultaneously obtains the optical absorption, optical scattering, and acoustic properties of tissue. Several applications of the multimodal imaging platform have been explored preclinically.

3. X-ray luminescence computed tomography (XLCT) has recently been proposed as a new imaging modality that detects the luminescent emission signals arising from the interaction between X-rays and the medium. Compared to clinically widespread X-ray CT (anatomical imaging), XLCT represents significant progress in X-ray-based imaging techniques, as it makes X-ray-based molecular and functional imaging achievable. Moreover, compared to conventional, purely optics-based molecular or functional imaging, XLCT offers two main advantages. First, autofluorescence, which is problematic for fluorescence imaging, can be avoided. Second, deep-tissue in vivo imaging with high optical contrast and spatial resolution becomes achievable. However, progress in this area is significantly hindered by two technological challenges: most current XLCT systems take a long time to acquire whole-body images (low speed), and XLCT has relied entirely on conventional nanophosphors emitting in the visible or near-infrared region (700-1000 nm), where photon absorption and scattering in biological tissue are high, limiting penetration depth and spatial resolution. This project has been focused on addressing these challenges with innovative concepts and strategic developments.
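
A brief sketch of the underlying image-formation problem may help here. In the XLCT literature, the luminescent source is commonly modeled as proportional to the local X-ray intensity and nanophosphor concentration, with the emitted light propagating diffusively; the notation below is an illustrative, commonly used form, not necessarily the exact formulation adopted in this project:

\[
S(\mathbf{r}) = \varepsilon\,\rho(\mathbf{r})\,X(\mathbf{r}),
\qquad
-\nabla\cdot\big[D(\mathbf{r})\,\nabla\Phi(\mathbf{r})\big] + \mu_a(\mathbf{r})\,\Phi(\mathbf{r}) = S(\mathbf{r}),
\]

where X is the X-ray intensity, ρ the nanophosphor concentration, ε the light yield, Φ the photon fluence measured at the tissue surface, and D and μ_a the optical diffusion and absorption coefficients; reconstruction inverts this model to recover ρ.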

All Publications


  • Ultrasound-guided needle tracking with deep learning: A novel approach with photoacoustic ground truth. Photoacoustics Hui, X., Rajendran, P., Ling, T., Dai, X., Xing, L., Pramanik, M. 2023; 34: 100575

    Abstract

    Accurate needle guidance is crucial for safe and effective clinical diagnosis and treatment procedures. Conventional ultrasound (US)-guided needle insertion often encounters challenges in consistently and precisely visualizing the needle, necessitating the development of reliable methods to track the needle. As a powerful tool in image processing, deep learning has shown promise for enhancing needle visibility in US images, although its dependence on manual annotation or simulated data as ground truth can lead to potential bias or difficulties in generalizing to real US images. Photoacoustic (PA) imaging has demonstrated its capability for high-contrast needle visualization. In this study, we explore the potential of PA imaging as a reliable ground truth for deep learning network training without the need for expert annotation. Our network (UIU-Net), trained on ex vivo tissue image datasets, has shown remarkable precision in localizing needles within US images. The evaluation of needle segmentation performance extends across previously unseen ex vivo data and in vivo human data (collected from an open-source data repository). Specifically, for human data, the Modified Hausdorff Distance (MHD) value stands at approximately 3.73, and the targeting error value is around 2.03, indicating strong similarity and small needle orientation deviation between the predicted and actual needle locations. A key advantage of our method is its applicability beyond US images captured from specific imaging systems, extending to images from other US imaging systems.

    View details for DOI 10.1016/j.pacs.2023.100575

    View details for PubMedID 38174105

    View details for PubMedCentralID PMC10761306
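
    As an illustration of the Modified Hausdorff Distance (MHD) reported above, the sketch below computes the symmetric MHD between two point sets following the standard Dubuisson-Jain definition; the coordinates are hypothetical stand-ins, not data from the study.

        import numpy as np

        def modified_hausdorff(a, b):
            """MHD between point sets a (N, 2) and b (M, 2)."""
            # Pairwise Euclidean distances between every point in a and in b.
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            # Directed distances: mean nearest-neighbor distance in each direction.
            return max(d.min(axis=1).mean(), d.min(axis=0).mean())

        # Hypothetical pixel coordinates along predicted vs. actual needle shafts.
        pred = np.array([[10.0, 12.0], [20.0, 22.0], [30.0, 33.0]])
        true = np.array([[11.0, 12.0], [21.0, 23.0], [31.0, 32.0]])
        print(round(modified_hausdorff(pred, true), 2))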

  • Landmark tracking in liver US images using cascade convolutional neural networks with long short-term memory. Measurement science & technology Zhang, Y., Dai, X., Tian, Z., Lei, Y., Wynne, J. F., Patel, P., Chen, Y., Liu, T., Yang, X. 2023; 34 (5): 054002

    Abstract

    Accurate tracking of anatomic landmarks is critical for motion management in liver radiation therapy. Ultrasound (US) is a safe, low-cost technology that is broadly available and offers real-time imaging capability. This study proposed a deep learning-based tracking method for US image-guided radiation therapy. The proposed cascade deep learning model is composed of an attention network, a mask region-based convolutional neural network (mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from a US image to a suspected area of landmark motion in order to reduce the search region. The mask R-CNN then produces multiple region-of-interest proposals in the reduced region and identifies the proposed landmark via three network heads: bounding box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship among the successive image frames for bounding box regression and proposal classification. To consolidate the final proposal, a selection method is designed according to the similarities between sequential frames. The proposed method was tested on the liver US tracking datasets used in the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenges, where the landmarks were annotated by three experienced observers to obtain their mean positions. Five-fold cross validation on the 24 given US sequences with ground truths shows that the mean tracking error for all landmarks is 0.65 ± 0.56 mm, and the errors of all landmarks are within 2 mm. We further tested the proposed model on 69 landmarks from the testing dataset that have image patterns similar to the training data, resulting in a mean tracking error of 0.94 ± 0.83 mm. The proposed deep-learning model was implemented on a graphics processing unit (GPU), tracking 47-81 frames per second. Our experimental results have demonstrated the feasibility and accuracy of our proposed method in tracking liver anatomic landmarks using US images, providing a potential solution for real-time liver tracking for active motion management during radiation therapy.

    View details for DOI 10.1088/1361-6501/acb5b3

    View details for PubMedID 36743834
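
    To illustrate the temporal-modeling idea described above, the minimal sketch below runs an LSTM over per-frame region features (random stand-ins here for the attention and mask R-CNN outputs) and regresses one bounding box per frame; layer sizes are assumptions for illustration, not the paper's configuration.

        import torch
        import torch.nn as nn

        class TemporalBoxRegressor(nn.Module):
            def __init__(self, feat_dim=256, hidden=128):
                super().__init__()
                self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
                self.head = nn.Linear(hidden, 4)   # (x, y, w, h) of the landmark box

            def forward(self, feats):              # feats: (batch, frames, feat_dim)
                out, _ = self.lstm(feats)           # temporal context across frames
                return self.head(out)               # one box estimate per frame

        feats = torch.randn(1, 8, 256)              # fake features for 8 US frames
        print(TemporalBoxRegressor()(feats).shape)  # torch.Size([1, 8, 4])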

  • Deformable CT image registration via a dual feasible neural network. Medical physics Lei, Y., Fu, Y., Tian, Z., Wang, T., Dai, X., Roper, J., Yu, D. S., McDonald, M., Bradley, J. D., Liu, T., Zhou, J., Yang, X. 2022

    Abstract

    Quality assurance (QA) CT scans are usually acquired during cancer radiotherapy to assess for any anatomical changes, which may cause an unacceptable dose deviation and therefore warrant a replan. Accurate and rapid deformable image registration (DIR) is needed to support contour propagation from the planning CT (pCT) to the QA CT to facilitate dose volume histogram (DVH) review. Further, the generated deformation maps are used to track the anatomical variations throughout the treatment course and calculate the corresponding accumulated dose from one or more treatment plans. In this study, we aim to develop a deep learning (DL)-based method for automatic deformable registration to align the pCT and the QA CT. Our proposed method, named the dual-feasible framework, was implemented by a mutual network that functions as both a forward module and a backward module. The mutual network was trained to predict two deformation vector fields (DVFs) simultaneously, which were then used to register the pCT and QA CT in both directions. A novel dual feasible loss was proposed to train the mutual network. The dual-feasible framework was able to provide additional DVF regularization during network training, which preserves the topology and reduces folding problems. We conducted experiments on 65 head-and-neck cancer patients (228 CTs in total), each with 1 pCT and 2-6 QA CTs. For evaluations, we calculated the mean absolute error (MAE), peak-signal-to-noise ratio (PSNR), structural similarity index (SSIM), and target registration error (TRE) between the deformed and target images, and the Jacobian determinant of the predicted DVFs. Within the body contour, the mean MAE, PSNR, SSIM, and TRE are 122.7 HU, 21.8 dB, 0.62 and 4.1 mm before registration and are 40.6 HU, 30.8 dB, 0.94, and 2.0 mm after registration using the proposed method. These results demonstrate the feasibility and efficacy of our proposed method for pCT and QA CT DIR. In summary, we proposed a DL-based method for automatic DIR to match the pCT to the QA CT. Such a DIR method would not only benefit the current workflow of evaluating DVHs on QA CTs but may also facilitate studies of treatment response assessment and radiomics that depend heavily on the accurate localization of tissues across longitudinal images.

    View details for DOI 10.1002/mp.15875

    View details for PubMedID 35869866
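
    The Jacobian-determinant evaluation mentioned above can be sketched compactly: determinants should stay positive everywhere, and non-positive values flag folded, topology-breaking deformations. The 2D finite-difference version below is an illustrative assumption; the paper registers 3D CT volumes.

        import numpy as np

        def jacobian_determinant_2d(dvf):
            """dvf: (H, W, 2) displacement field in voxels -> (H-1, W-1) det map."""
            dy = np.diff(dvf, axis=0)[:, :-1]   # derivatives along rows
            dx = np.diff(dvf, axis=1)[:-1, :]   # derivatives along columns
            # Deformation = identity + displacement, so add 1 on the diagonal.
            return (1.0 + dy[..., 0]) * (1.0 + dx[..., 1]) - dx[..., 0] * dy[..., 1]

        dvf = 0.1 * np.random.randn(64, 64, 2)  # hypothetical small random field
        det = jacobian_determinant_2d(dvf)
        print("folding voxels:", int((det <= 0).sum()))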

  • Cascaded Learning-Based Cone Beam CT Head-And-Neck Multi-Organ Segmentation Lei, Y., Dai, X., Tian, Z., Wang, T., Zhou, J., Roper, J., Ghavidel, B., McDonald, M., Yu, D., Bradley, J., Liu, T., Yang, X. WILEY. 2022: E551-E552
  • Deformable CT Image Registration Using Unsupervised Deep Learning Networks Lei, Y., Fu, Y., Tian, Z., Wang, T., Zhang, J., Dai, X., Zhou, J., Roper, J., McDonald, M., Yu, D., Bradley, J., Liu, T., Yang, X. WILEY. 2022: E527
  • Ultrasound-Based Motion Tracking Using Hybrid Learning Network Lei, Y., Axente, M., Dai, X., Roper, J., Dhabaan, A., Chen, Y., Bradley, J., Liu, T., Yang, X. WILEY. 2022: E225-E226
  • Multi-organ auto-delineation in head-and-neck MRI for radiation therapy using regional convolutional neural network. Physics in medicine and biology Dai, X., Lei, Y., Wang, T., Zhou, J., Rudra, S., McDonald, M., Curran, W. J., Liu, T., Yang, X. 2022; 67 (2)

    Abstract

    Magnetic resonance imaging (MRI) allows accurate and reliable organ delineation for many disease sites in radiation therapy because MRI is able to offer superb soft-tissue contrast. Manual organ-at-risk delineation is labor-intensive and time-consuming. This study aims to develop a deep-learning-based automated multi-organ segmentation method to reduce labor and accelerate the treatment planning process for head-and-neck (HN) cancer radiotherapy. A novel regional convolutional neural network (R-CNN) architecture, namely, mask scoring R-CNN, has been developed in this study. In the proposed model, a deep attention feature pyramid network is used as a backbone to extract the coarse features from MRI, followed by feature refinement using R-CNN. The final segmentation is obtained through mask and mask scoring networks taking those refined feature maps as input. With the mask scoring mechanism incorporated into conventional mask supervision, the classification error inherent in the conventional mask R-CNN architecture can be greatly reduced. A cohort of 60 HN cancer patients receiving external beam radiation therapy was used for experimental validation. Five-fold cross-validation was performed for the assessment of our proposed method. The Dice similarity coefficients of brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord were 0.89 ± 0.06, 0.68 ± 0.14/0.68 ± 0.18, 0.89 ± 0.07/0.89 ± 0.05, 0.90 ± 0.07, 0.67 ± 0.18/0.67 ± 0.10, 0.82 ± 0.10, 0.61 ± 0.14, 0.67 ± 0.11/0.68 ± 0.11, 0.92 ± 0.07, 0.85 ± 0.06/0.86 ± 0.05, 0.80 ± 0.13, and 0.77 ± 0.15, respectively. After model training, all OARs can be segmented within 1 min.

    View details for DOI 10.1088/1361-6560/ac3b34

    View details for PubMedID 34794138

    View details for PubMedCentralID PMC8811683
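
    The Dice similarity coefficient (DSC) reported above is the standard overlap metric for comparing an automated organ mask against a manual contour; a minimal sketch follows, with random masks standing in for real segmentations.

        import numpy as np

        def dice(pred, truth, eps=1e-8):
            """DSC between two boolean masks of identical shape."""
            pred, truth = pred.astype(bool), truth.astype(bool)
            inter = np.logical_and(pred, truth).sum()
            return 2.0 * inter / (pred.sum() + truth.sum() + eps)

        pred  = np.random.rand(128, 128, 64) > 0.5   # fake predicted organ mask
        truth = np.random.rand(128, 128, 64) > 0.5   # fake manual organ mask
        print(f"DSC = {dice(pred, truth):.3f}")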

  • Liver Motion Tracking in Ultrasound images Using Attention Guided Mask R-CNN with Long-Short-Term-Memory Network Zhang, Y., Dai, X., Tian, Z., Lei, Y., Chen, Y., Patel, P., Bradley, J. D., Liu, T., Yang, X., Bottenus, N., Ruiter, N. V. SPIE-INT SOC OPTICAL ENGINEERING. 2022

    View details for DOI 10.1117/12.2613013

    View details for Web of Science ID 000836325500021

  • Automated CT Segmentation for Rapid Assessment of Anatomical Variations in Head-and-Neck Radiation Therapy Dai, X., Lei, Y., Wang, T., Tian, Z., Zhou, J., McDonald, M., Yu, D. S., Ghavidel, B. B., Bradley, J. D., Liu, T., Yang, X., Linte, C. A., Siewerdsen, J. H. SPIE-INT SOC OPTICAL ENGINEERING. 2022

    View details for DOI 10.1117/12.2613060

    View details for Web of Science ID 000836300000043

  • Deep Learning-based Longitudinal CT Registration for Anatomy Variation Assessment during Radiotherapy Fu, Y., Lei, Y., Tian, Z., Wang, T., Dai, X., Zhou, J., McDonald, M., Bradley, J. D., Liu, T., Yang, X., Drukker, K., Iftekharuddin, K. M. SPIE-INT SOC OPTICAL ENGINEERING. 2022

    View details for DOI 10.1117/12.2611901

    View details for Web of Science ID 000838048600071

  • Deep learning-based motion tracking using ultrasound images. Medical physics Dai, X., Lei, Y., Roper, J., Chen, Y., Bradley, J. D., Curran, W. J., Liu, T., Yang, X. 2021; 48 (12): 7747-7756

    Abstract

    Ultrasound (US) imaging is an established imaging modality capable of offering video-rate volumetric images without ionizing radiation. It has the potential for intra-fraction motion tracking in radiation therapy. In this study, a deep learning-based method has been developed to tackle the challenges in motion tracking using US imaging. We present a Markov-like network, which is implemented via generative adversarial networks, to extract features from sequential US frames (one tracked frame followed by untracked frames) and thereby estimate a set of deformation vector fields (DVFs) through the registration of the tracked frame and the untracked frames. The positions of the landmarks in the untracked frames are finally determined by shifting landmarks in the tracked frame according to the estimated DVFs. The performance of the proposed method was evaluated on the testing dataset by calculating the tracking error (TE) between the predicted and ground truth landmarks on each frame. The proposed method was evaluated using the MICCAI CLUST 2015 dataset, which was collected using seven US scanners with eight types of transducers, and the Cardiac Acquisitions for Multi-structure Ultrasound Segmentation (CAMUS) dataset, which was acquired using GE Vivid E95 ultrasound scanners. The CLUST dataset contains 63 2D and 22 3D US image sequences respectively from 42 and 18 subjects, and the CAMUS dataset includes 2D US images from 450 patients. On the CLUST dataset, our proposed method achieved a mean tracking error of 0.70 ± 0.38 mm for the 2D sequences and 1.71 ± 0.84 mm for the 3D sequences for the publicly available annotations. On the CAMUS dataset, a mean tracking error of 0.54 ± 1.24 mm for the landmarks in the left atrium was achieved. A novel motion tracking algorithm using US images based on modern deep learning techniques has been demonstrated in this study. The proposed method can offer millimeter-level tumor motion prediction in real time, which has the potential to be adopted into routine tumor motion management in radiation therapy.

    View details for DOI 10.1002/mp.15321

    View details for PubMedID 34724712
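
    The final landmark-shifting step described above can be sketched simply: once a DVF is estimated between the tracked and an untracked frame, the displacement is sampled at each landmark and added to its position. The uniform toy field and bilinear sampling below are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def propagate_landmarks(landmarks, dvf):
            """landmarks: (N, 2) (row, col); dvf: (H, W, 2) displacement in pixels."""
            coords = landmarks.T                                # (2, N) sample points
            dr = map_coordinates(dvf[..., 0], coords, order=1)  # row displacement
            dc = map_coordinates(dvf[..., 1], coords, order=1)  # column displacement
            return landmarks + np.stack([dr, dc], axis=1)

        dvf = np.zeros((256, 256, 2))
        dvf[..., 0] = 1.5                                       # uniform 1.5-px shift
        pts = np.array([[100.0, 120.0], [80.0, 60.0]])
        print(propagate_landmarks(pts, dvf))                    # rows shifted by 1.5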

  • Synthetic CT-aided multiorgan segmentation for CBCT-guided adaptive pancreatic radiotherapy. Medical physics Dai, X., Lei, Y., Wynne, J., Janopaul-Naylor, J., Wang, T., Roper, J., Curran, W. J., Liu, T., Patel, P., Yang, X. 2021; 48 (11): 7063-7073

    Abstract

    The delineation of organs at risk (OARs) is fundamental to cone-beam CT (CBCT)-based adaptive radiotherapy treatment planning, but is time consuming, labor intensive, and subject to interoperator variability. We investigated a deep learning-based rapid multiorgan delineation method for use in CBCT-guided adaptive pancreatic radiotherapy. To improve the accuracy of OAR delineation, two innovative solutions have been proposed in this study. First, instead of directly segmenting organs on CBCT images, a pretrained cycle-consistent generative adversarial network (cycleGAN) was applied to generate synthetic CT images given CBCT images. Second, an advanced deep learning model called mask-scoring regional convolutional neural network (MS R-CNN) was applied to those synthetic CT images to detect the positions and shapes of multiple organs simultaneously for final segmentation. The OAR contours delineated by the proposed method were validated and compared with expert-drawn contours for geometric agreement using the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). Across eight abdominal OARs including duodenum, large bowel, small bowel, left and right kidneys, liver, spinal cord, and stomach, the geometric comparisons between automated and expert contours are as follows: 0.92 (0.89-0.97) mean DSC, 2.90 mm (1.63-4.19 mm) mean HD95, 0.89 mm (0.61-1.36 mm) mean MSD, and 1.43 mm (0.90-2.10 mm) mean RMS. Compared to the competing methods, our proposed method had significant improvements (p < 0.05) in all the metrics for all eight organs. Once the model was trained, the contours of eight OARs can be obtained on the order of seconds. We demonstrated the feasibility of a synthetic CT-aided deep learning framework for automated delineation of multiple OARs on CBCT. The proposed method could be implemented in the setting of pancreatic adaptive radiotherapy to rapidly contour OARs with high accuracy.

    View details for DOI 10.1002/mp.15264

    View details for PubMedID 34609745

    View details for PubMedCentralID PMC8595847
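
    Among the geometric metrics above, the 95th-percentile Hausdorff distance (HD95) discards the worst 5% of surface mismatches, making it robust to stray voxels; a minimal sketch with hypothetical surface point sets follows.

        import numpy as np

        def hd95(a, b):
            """a: (N, 3), b: (M, 3) contour surface points in mm."""
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
            return max(np.percentile(d.min(axis=1), 95),   # a -> nearest point in b
                       np.percentile(d.min(axis=0), 95))   # b -> nearest point in a

        a = np.random.rand(200, 3) * 50        # fake automated contour surface (mm)
        b = a + np.random.randn(200, 3)        # fake expert contour surface (mm)
        print(f"HD95 = {hd95(a, b):.2f} mm")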

  • Self-Supervised Learning-Based High-Resolution Ultrasound Imaging for Prostate Brachytherapy Yang, X., Lei, Y., Dai, X., Wang, T., Lin, J. Y., Axente, M., Roper, J. R., Bradley, J. D., Jani, A., Patel, P. R., Liu, T. ELSEVIER SCIENCE INC. 2021: E119-E120
  • Automated delineation of head and neck organs at risk using synthetic MRI-aided mask scoring regional convolutional neural network. Medical physics Dai, X., Lei, Y., Wang, T., Zhou, J., Roper, J., McDonald, M., Beitler, J. J., Curran, W. J., Liu, T., Yang, X. 2021; 48 (10): 5862-5873

    Abstract

    Auto-segmentation algorithms offer a potential solution to eliminate the labor-intensive, time-consuming, and observer-dependent manual delineation of organs-at-risk (OARs) in radiotherapy treatment planning. This study aimed to develop a deep learning-based automated OAR delineation method to tackle the current challenges remaining in achieving reliable expert performance with the state-of-the-art auto-delineation algorithms. The accuracy of OAR delineation is expected to be improved by utilizing the complementary contrasts provided by computed tomography (CT) (bony-structure contrast) and magnetic resonance imaging (MRI) (soft-tissue contrast). Given CT images, synthetic MR images were firstly generated by a pre-trained cycle-consistent generative adversarial network. The features of CT and synthetic MRI were then extracted and combined for the final delineation of organs using a mask scoring regional convolutional neural network. Both in-house and public datasets containing CT scans from head-and-neck (HN) cancer patients were adopted to quantitatively evaluate the performance of the proposed method against current state-of-the-art algorithms in metrics including the Dice similarity coefficient (DSC), 95th percentile Hausdorff distance (HD95), mean surface distance (MSD), and residual mean square distance (RMS). Across all 18 OARs in our in-house dataset, the proposed method achieved an average DSC, HD95, MSD, and RMS of 0.77 (0.58-0.90), 2.90 mm (1.32-7.63 mm), 0.89 mm (0.42-1.85 mm), and 1.44 mm (0.71-3.15 mm), respectively, outperforming the current state-of-the-art algorithms by 6%, 16%, 25%, and 36%, respectively. On public datasets, for all nine OARs, an average DSC of 0.86 (0.73-0.97) was achieved, 6% better than the competing methods. We demonstrated the feasibility of a synthetic MRI-aided deep learning framework for automated delineation of OARs in HN radiotherapy treatment planning. The proposed method could be adopted into routine HN cancer radiotherapy treatment planning to rapidly contour OARs with high accuracy.

    View details for DOI 10.1002/mp.15146

    View details for PubMedID 34342878

  • Self-supervised learning for accelerated 3D high-resolution ultrasound imaging. Medical physics Dai, X., Lei, Y., Wang, T., Axente, M., Xu, D., Patel, P., Jani, A. B., Curran, W. J., Liu, T., Yang, X. 2021; 48 (7): 3916-3926

    Abstract

    Ultrasound (US) imaging has been widely used in diagnosis, image-guided intervention, and therapy, where high-quality three-dimensional (3D) images are highly desired from sparsely acquired two-dimensional (2D) images. This study aims to develop a deep learning-based algorithm to reconstruct high-resolution (HR) 3D US images reliant only on the acquired sparsely distributed 2D images. We propose a self-supervised learning framework using cycle-consistent generative adversarial network (cycleGAN), where two independent cycleGAN models are trained with paired original US images and two sets of low-resolution (LR) US images, respectively. The two sets of LR US images are obtained through down-sampling the original US images along the two axes, respectively. In US imaging, in-plane spatial resolution is generally much higher than through-plane resolution. By learning the mapping from down-sampled in-plane LR images to original HR US images, cycleGAN can generate through-plane HR images from the original sparsely distributed 2D images. Finally, HR 3D US images are reconstructed by combining the generated 2D images from the two cycleGAN models. The proposed method was assessed on two different datasets. One is automatic breast ultrasound (ABUS) images from 70 breast cancer patients; the other is collected from 45 prostate cancer patients. By applying a spatial resolution enhancement factor of 3 to the breast cases, our proposed method achieved a mean absolute error (MAE) value of 0.90 ± 0.15, a peak signal-to-noise ratio (PSNR) value of 37.88 ± 0.88 dB, and a visual information fidelity (VIF) value of 0.69 ± 0.01, which significantly outperforms bicubic interpolation. Similar performances have been achieved using the enhancement factor of 5 in these breast cases and using the enhancement factors of 5 and 10 in the prostate cases. We have proposed and investigated a new deep learning-based algorithm for reconstructing HR 3D US images from sparsely acquired 2D images. Significant improvement on through-plane resolution has been achieved by only using the acquired 2D images without any external atlas images. Its self-supervision capability could accelerate HR US imaging.

    View details for DOI 10.1002/mp.14946

    View details for PubMedID 33993508
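
    The self-supervised pairing scheme described above can be sketched in a few lines: LR training inputs are made by keeping every k-th slice of the original volume along one in-plane axis, so the network learns an upsampling mapping without any external HR atlas. The factor of 3 echoes the breast experiments; the volume itself is a random stand-in.

        import numpy as np

        def make_lr_pair(volume, axis, factor):
            """Return (LR, HR) by keeping every `factor`-th slice along `axis`."""
            idx = [slice(None)] * volume.ndim
            idx[axis] = slice(None, None, factor)
            return volume[tuple(idx)], volume

        vol = np.random.rand(128, 128, 128)          # hypothetical US volume
        lr, hr = make_lr_pair(vol, axis=0, factor=3)
        print(lr.shape, hr.shape)                    # (43, 128, 128) (128, 128, 128)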

  • Synthetic MRI-Aided Delineation of Organs at Risk in Head-And-Neck Radiotherapy Dai, X., Lei, Y., Wang, T., Zhou, J., Roper, J., McDonald, M., Beitler, J., Bradley, J., Liu, T., Yang, X. WILEY. 2021
  • Rapid Organ-At-Risk Delineation in Pancreatic CBCT for CBCT-Guided Adaptive Radiotherapy Dai, X., Lei, Y., Janopaul-Naylor, J., Wynne, J., Wang, T., Zhou, J., Roper, J., Bradley, J., Patel, P., Liu, T., Yang, X. WILEY. 2021
  • An Unsupervised Ultrasound Liver Motion Tracking Using Deep Convolutional Neural Network Lei, Y., Dai, X., Momin, S., Roper, J., Schreibmann, E., Patel, P., Bradley, J., Liu, T., Yang, X. WILEY. 2021
  • High-Resolution Ultrasound Imaging Through Self-Supervised Learning Dai, X., Lei, Y., Wang, T., Axente, M., Roper, J., Xu, D., Lin, J., Bradley, J., Liu, T., Yang, X. WILEY. 2021
  • Head-and-neck organs-at-risk auto-delineation using dual pyramid networks for CBCT-guided adaptive radiotherapy. Physics in medicine and biology Dai, X., Lei, Y., Wang, T., Dhabaan, A. H., McDonald, M., Beitler, J. J., Curran, W. J., Zhou, J., Liu, T., Yang, X. 2021; 66 (4): 045021

    Abstract

    Organ-at-risk (OAR) delineation is a key step for cone-beam CT (CBCT) based adaptive radiotherapy planning that can be a time-consuming, labor-intensive, and subject-to-variability process. We aim to develop a fully automated approach aided by synthetic MRI for rapid and accurate CBCT multi-organ contouring in head-and-neck (HN) cancer patients. MRI has superb soft-tissue contrasts, while CBCT offers bony-structure contrasts. Using the complementary information provided by MRI and CBCT is expected to enable accurate multi-organ segmentation in HN cancer patients. In our proposed method, MR images are firstly synthesized using a pre-trained cycle-consistent generative adversarial network given CBCT. The features of CBCT and synthetic MRI (sMRI) are then extracted using dual pyramid networks for final delineation of organs. CBCT images and their corresponding manual contours were used as pairs to train and test the proposed model. Quantitative metrics including Dice similarity coefficient (DSC), Hausdorff distance 95% (HD95), mean surface distance, and residual mean square distance (RMS) were used to evaluate the proposed method. The proposed method was evaluated on a cohort of 65 HN cancer patients. CBCT images were collected from those patients who received proton therapy. Overall, DSC values of 0.87 ± 0.03, 0.79 ± 0.10/0.79 ± 0.11, 0.89 ± 0.08/0.89 ± 0.07, 0.90 ± 0.08, 0.75 ± 0.06/0.77 ± 0.06, 0.86 ± 0.13, 0.66 ± 0.14, 0.78 ± 0.05/0.77 ± 0.04, 0.96 ± 0.04, 0.89 ± 0.04/0.89 ± 0.04, 0.83 ± 0.02, and 0.84 ± 0.07 for commonly used OARs for treatment planning including brain stem, left/right cochlea, left/right eye, larynx, left/right lens, mandible, optic chiasm, left/right optic nerve, oral cavity, left/right parotid, pharynx, and spinal cord, respectively, were achieved. This study provides a rapid and accurate OAR auto-delineation approach, which can be used for adaptive radiation therapy.

    View details for DOI 10.1088/1361-6560/abd953

    View details for PubMedID 33412527

  • Synthetic MRI-aided Multi-Organ Segmentation in Head-and-Neck Cone Beam CT Dai, X., Lei, Y., Wang, T., Zhou, J., Curran, W., Liu, T., Yang, X., Linte, C. A., Siewerdsen, J. H. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581128

    View details for Web of Science ID 000850698500055

  • Intensity Non-uniformity Correction in MR Imaging using Deep Learning Dai, X., Lei, Y., Liu, Y., Wang, T., Curran, W. J., Patel, P., Liu, T., Yang, X., Krol, A., Gimi, B. S. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2549017

    View details for Web of Science ID 000672554700072

  • Multiparametric MRI-guided High-dose-rate Prostate Brachytherapy with Focal Dose Boost to Dominant Intraprostatic Lesions Wang, T., Giles, M., Press, R. H., Dai, X., Jani, A. B., Rossi, P., Lei, Y., Curran, W. J., Patel, P., Liu, T., Yang, X., Krol, A., Gimi, B. S. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2548152

    View details for Web of Science ID 000672554700069

  • Deep Attention Mask Regional Convolutional Neural Network for Head-and-Neck MRI Multi-Organ Auto-Delineation Dai, X., Lei, Y., Wang, T., Zhou, J., Curran, W. J., Liu, T., Yang, X., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581131

    View details for Web of Science ID 000672800100041

  • Synthetic CT-based Multi-Organ Segmentation in Cone Beam CT for Adaptive Pancreatic Radiotherapy Dai, X., Lei, Y., Janopaul-Naylor, J., Wang, T., Roper, J., Zhou, J., Curran, W. J., Liu, T., Patel, P., Yang, X., Isgum, Landman, B. A. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581132

    View details for Web of Science ID 000672800200068

  • Residual Mask Scoring Regional Convolutional Neural Network for Multi-Organ Segmentation in Head-and-Neck CT Dai, X., Lei, Y., Wang, T., Zhou, J., Roper, J., McDonald, M., Beitler, J. J., Curran, W. J., Liu, T., Yang, X., Gimi, B. S., Krol, A. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581126

    View details for Web of Science ID 000671880400052

  • Deep Learning-based Volumetric Image Generation from Projection Imaging for Prostate Radiotherapy Dai, X., Lei, Y., Tian, Z., Wang, T., Liu, T., Curran, W. J., Yang, X., Linte, C. A., Siewerdsen, J. H. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581053

    View details for Web of Science ID 000850698500060

  • Deep Learning-based Multi-catheter Reconstruction for MRI-guided HDR Prostate Brachytherapy Dai, X., Lei, Y., Zhang, Y., Wang, T., Curran, W. J., Patel, P., Liu, T., Yang, X., Linte, C. A., Siewerdsen, J. H. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581123

    View details for Web of Science ID 000850698500061

  • Region Proposal Network for Multi-Organ Segmentation in CT for Pancreatic Radiotherapy Dai, X., Lei, Y., Janopaul-Naylor, J., Wang, T., Roper, J., Liu, T., Curran, W. J., Patel, P., Yang, X., Linte, C. A., Siewerdsen, J. H. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581147

    View details for Web of Science ID 000850698500062

  • Multimodal MRI synthesis using unified generative adversarial networks. Medical physics Dai, X., Lei, Y., Fu, Y., Curran, W. J., Liu, T., Mao, H., Yang, X. 2020; 47 (12): 6343-6354

    Abstract

    Complementary information obtained from multiple tissue contrasts facilitates physicians in assessing, diagnosing, and planning treatment for a variety of diseases. However, acquiring multi-contrast magnetic resonance images (MRI) for every patient using multiple pulse sequences is time-consuming and expensive; medical image synthesis has been demonstrated as an effective alternative. The purpose of this study is to develop a unified framework for multimodal MR image synthesis. A unified generative adversarial network consisting of only a single generator and a single discriminator was developed to learn the mappings among images of four different modalities. The generator took an image and its modality label as inputs and learned to synthesize the image in the target modality, while the discriminator was trained to distinguish between real and synthesized images and classify them to their corresponding modalities. The network was trained and tested using multimodal brain MRI consisting of four different contrasts: T1-weighted (T1), T1-weighted and contrast-enhanced (T1c), T2-weighted (T2), and fluid-attenuated inversion recovery (Flair). Quantitative assessments of our proposed method were made through computing the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), visual information fidelity (VIF), and naturalness image quality evaluator (NIQE). The proposed model was trained and tested on a cohort of 274 glioma patients with well-aligned multi-type MRI scans. After the model was trained, tests were conducted by using each of T1, T1c, T2, and Flair as a single input modality to generate the respective remaining modalities. Our proposed method shows high accuracy and robustness for image synthesis with an arbitrary MRI modality available in the database as input. For example, with T1 as the input modality, the NMAEs for the generated T1c, T2, and Flair respectively are 0.034 ± 0.005, 0.041 ± 0.006, and 0.041 ± 0.006, the PSNRs respectively are 32.353 ± 2.525 dB, 30.016 ± 2.577 dB, and 29.091 ± 2.795 dB, the SSIMs are 0.974 ± 0.059, 0.969 ± 0.059, and 0.959 ± 0.059, the VIF values are 0.750 ± 0.087, 0.706 ± 0.097, and 0.654 ± 0.062, and the NIQE values are 1.396 ± 0.401, 1.511 ± 0.460, and 1.259 ± 0.358, respectively. We proposed a novel multimodal MR image synthesis method based on a unified generative adversarial network. The network takes an image and its modality label as inputs and synthesizes multimodal images in a single forward pass. The results demonstrate that the proposed method is able to accurately synthesize multimodal MR images from a single MR image.

    View details for DOI 10.1002/mp.14539

    View details for PubMedID 33053202

    View details for PubMedCentralID PMC7796974
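
    The single-generator conditioning described above can be sketched in the spirit of label-conditioned GANs: the target-modality label is broadcast to image size and concatenated channel-wise with the input, so one network maps any contrast to any other. The tiny architecture below is an illustrative assumption, not the paper's network.

        import torch
        import torch.nn as nn

        class LabelConditionedGenerator(nn.Module):
            def __init__(self, n_modalities=4):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(1 + n_modalities, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 1, 3, padding=1),
                )

            def forward(self, img, label):      # img: (B,1,H,W); label: (B, n_mod)
                b, _, h, w = img.shape
                lab = label[:, :, None, None].expand(b, label.shape[1], h, w)
                return self.net(torch.cat([img, lab], dim=1))

        g = LabelConditionedGenerator()
        t1 = torch.randn(2, 1, 64, 64)          # fake T1 slices
        to_flair = torch.eye(4)[[3, 3]]          # one-hot "Flair" target labels
        print(g(t1, to_flair).shape)             # torch.Size([2, 1, 64, 64])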

  • Intensity non-uniformity correction in MR imaging using residual cycle generative adversarial network. Physics in medicine and biology Dai, X., Lei, Y., Liu, Y., Wang, T., Ren, L., Curran, W. J., Patel, P., Liu, T., Yang, X. 2020; 65 (21): 215025

    Abstract

    Correcting or reducing the effects of voxel intensity non-uniformity (INU) within a given tissue type is a crucial issue for quantitative magnetic resonance (MR) image analysis in daily clinical practice. Although having no severe impact on visual diagnosis, INU can highly degrade the performance of automatic quantitative analysis such as segmentation, registration, feature extraction, and radiomics. In this study, we present an advanced deep learning-based INU correction algorithm called residual cycle generative adversarial network (res-cycle GAN), which integrates the residual block concept into a cycle-consistent GAN (cycle-GAN). In cycle-GAN, an inverse transformation was implemented between the INU uncorrected and corrected magnetic resonance imaging (MRI) images to constrain the model through forcing the calculation of both an INU corrected MRI and a synthetic corrected MRI. A fully convolutional neural network integrating residual blocks was applied in the generator of cycle-GAN to enhance end-to-end raw MRI to INU corrected MRI transformation. A cohort of 55 abdominal patients with T1-weighted MR INU images and their corrections with a clinically established and commonly used method, namely N4ITK, were used as pairs to evaluate the proposed res-cycle GAN based INU correction algorithm. Quantitative comparisons of the normalized mean absolute error (NMAE), peak signal-to-noise ratio (PSNR), normalized cross-correlation (NCC) indices, and spatial non-uniformity (SNU) were made between the proposed method and other approaches. Our res-cycle GAN based method achieved an NMAE of 0.011 ± 0.002, a PSNR of 28.0 ± 1.9 dB, an NCC of 0.970 ± 0.017, and an SNU of 0.298 ± 0.085. Our proposed method has significant improvements (p < 0.05) in NMAE, PSNR, NCC, and SNU over other algorithms including conventional GAN and U-net. Once the model is well trained, our approach can automatically generate the corrected MR images in a few minutes, eliminating the need for manual setting of parameters.

    View details for DOI 10.1088/1361-6560/abb31f

    View details for PubMedID 33245059

    View details for PubMedCentralID PMC7934018
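
    The residual-block idea integrated into the generator above is a natural fit for INU correction, where the corrected image is a smooth modulation of the raw input; a minimal sketch follows, with channel counts chosen only for illustration.

        import torch
        import torch.nn as nn

        class ResidualBlock(nn.Module):
            def __init__(self, ch=64):
                super().__init__()
                self.body = nn.Sequential(
                    nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(),
                    nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
                )

            def forward(self, x):
                return x + self.body(x)   # skip connection carries the raw signal

        x = torch.randn(1, 64, 96, 96)
        print(ResidualBlock()(x).shape)    # torch.Size([1, 64, 96, 96])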

  • Automatic multi-catheter detection using deeply supervised convolutional neural network in MRI-guided HDR prostate brachytherapy. Medical physics Dai, X., Lei, Y., Zhang, Y., Qiu, R. L., Wang, T., Dresser, S. A., Curran, W. J., Patel, P., Liu, T., Yang, X. 2020; 47 (9): 4115-4124

    Abstract

    High-dose-rate (HDR) brachytherapy is an established technique used as a monotherapy option or a focal boost in conjunction with external beam radiation therapy (EBRT) for treating prostate cancer. Radiation source path reconstruction is a critical procedure in HDR treatment planning. Manually identifying the source path is labor intensive and time inefficient. In recent years, magnetic resonance imaging (MRI) has become a valuable imaging modality for image-guided HDR prostate brachytherapy due to its superb soft-tissue contrast for target delineation and normal tissue contouring. The purpose of this study is to investigate a deep-learning-based method to automatically reconstruct multiple catheters in MRI for prostate cancer HDR brachytherapy treatment planning. An attention-gated U-Net incorporating a total variation (TV) regularization model was developed for multi-catheter segmentation in MRI. The attention gates were used to improve the accuracy of identifying small catheter points, while TV regularization was adopted to encode the natural spatial continuity of catheters into the model. The model was trained using the binary catheter annotation images offered by experienced physicists as ground truth paired with original MRI images. After the network was trained, MR images of a new prostate cancer patient receiving HDR brachytherapy were fed into the model to predict the locations and shapes of all the catheters. Quantitative assessments of our proposed method were based on catheter shaft and tip errors compared to the ground truth. Our method detected 299 catheters from 20 patients receiving HDR prostate brachytherapy with a catheter tip error of 0.37 ± 1.68 mm and a catheter shaft error of 0.93 ± 0.50 mm. For detection of catheter tips, our method resulted in 87% of the catheter tips within an error of less than ± 2.0 mm, and more than 71% of the tips could be localized within an absolute error of no more than 1.0 mm. For catheter shaft localization, 97% of catheters were detected with an error of less than 2.0 mm, while 63% were within 1.0 mm. In this study, we proposed a novel multi-catheter detection method to precisely localize the tips and shafts of catheters in three-dimensional MRI images of HDR prostate brachytherapy. It paves the way for elevating the quality and outcome of MRI-guided HDR prostate brachytherapy.

    View details for DOI 10.1002/mp.14307

    View details for PubMedID 32484573

    View details for PubMedCentralID PMC7708403
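
    The total-variation term described above can be sketched as a penalty on the spatial gradients of the predicted catheter probability map, encouraging the piecewise continuity expected of physical catheters; the weight and the random tensors are illustrative assumptions.

        import torch
        import torch.nn.functional as F

        def tv_loss(prob):
            """prob: (B, 1, H, W) predicted catheter probability map."""
            dh = (prob[:, :, 1:, :] - prob[:, :, :-1, :]).abs().mean()
            dw = (prob[:, :, :, 1:] - prob[:, :, :, :-1]).abs().mean()
            return dh + dw

        prob = torch.rand(1, 1, 128, 128, requires_grad=True)
        target = (torch.rand(1, 1, 128, 128) > 0.5).float()   # fake annotation mask
        loss = F.binary_cross_entropy(prob, target) + 0.1 * tv_loss(prob)
        print(loss.item())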

  • Multi-Organ Segmentation in Head-And-Neck CT Using Mask Scoring Regional Convolutional Neural Network (MS-RCNN) Dai, X., Lei, Y., Liu, Y., Wang, T., Jiang, X., Beitler, J., Curran, W., Liu, T., Yang, X. WILEY. 2020: E302
  • High-speed X-ray-induced luminescence computed tomography. Journal of biophotonics Dai, X., Cheng, K., Zhao, W., Xing, L. 2020

    Abstract

    X-ray-induced luminescence computed tomography (XLCT) is an emerging molecular imaging modality. Challenges in improving spatial resolution and reducing the scan time in a whole-body field of view (FOV) still remain for practical in vivo applications. In this study, we present a novel XLCT technique capable of obtaining three-dimensional (3D) images from a single snapshot. Specifically, a custom two-planar-mirror component is integrated into a cone beam XLCT imaging system to obtain multiple optical views of an object simultaneously. Furthermore, a compressive sensing-based algorithm is adopted to improve the efficiency of 3D XLCT image reconstruction. Numerical simulations and experiments were conducted to validate the single snapshot X-ray-induced luminescence computed tomography (SS-XLCT). The results show that the 3D distribution of the nanophosphor targets can be visualized much faster than with the conventional cone-beam XLCT imaging method used in our comparisons, while maintaining comparable spatial resolution. SS-XLCT has the potential to harness the power of XLCT for rapid whole-body in vivo molecular imaging of small animals.

    View details for DOI 10.1002/jbio.202000066

    View details for PubMedID 32445254
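
    The compressive sensing-based reconstruction mentioned above exploits the sparsity of nanophosphor targets; as a generic illustration, the ISTA sketch below solves min ||Ax - b||^2 + lam*||x||_1 with a random stand-in for the system matrix, not the optical forward model of the paper.

        import numpy as np

        def ista(A, b, lam=0.1, iters=200):
            L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
            x = np.zeros(A.shape[1])
            for _ in range(iters):
                g = x - (A.T @ (A @ x - b)) / L    # gradient step on data term
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
            return x

        rng = np.random.default_rng(0)
        A = rng.standard_normal((60, 200))         # under-determined measurements
        x_true = np.zeros(200)
        x_true[[20, 90, 150]] = 1.0                # three sparse "targets"
        x_hat = ista(A, A @ x_true)
        print(np.flatnonzero(np.abs(x_hat) > 0.5)) # approximately [20, 90, 150]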

  • X-ray-induced shortwave infrared luminescence computed tomography OPTICS LETTERS Dai, X., Cheng, K., Zhao, W., Xing, L. 2019; 44 (19): 4769–72

    Abstract

    X-ray luminescence computed tomography (XLCT) based on x-ray-excitable nanophosphors has been proposed as a new modality for molecular imaging. The technique has two main advantages compared to other modalities. First, autofluorescence, which is problematic for fluorescence imaging, can be substantially reduced. Second, deep-tissue in vivo imaging with high optical contrast and spatial resolution becomes achievable. Here, we extend the novel XLCT modality from the visible or infrared region to a shortwave infrared wavelength by developing an x-ray-induced shortwave infrared luminescence computed tomography (SWIR-XLCT). For this application, rare-earth nanophosphors (RENPs) were synthesized as core/shell structures consisting of a Ho-doped NaYbF4 core surrounded by a NaYF4 shell that emit light efficiently in the shortwave infrared spectral region under x-ray excitation. Through numerical simulations and phantom experiments, we showed the feasibility of SWIR-XLCT and demonstrated its potential for x-ray luminescence imaging with high spatial resolution and deep depth.

    View details for DOI 10.1364/OL.44.004769

    View details for Web of Science ID 000488503500037

    View details for PubMedID 31568438

  • Radiation Activatable Radiosensitizers for Image-Guided and Enhanced Radiation Therapy against Head and Neck Cancer Cheng, K., Schueler, E., Dai, X., Zhao, W., Sivasubramanian, K., Jia, M., Xing, L. ELSEVIER SCIENCE INC. 2019: E673-E674
  • Miniature fluorescence molecular tomography (FMT) endoscope based on a MEMS scanning mirror and an optical fiberscope. Physics in medicine and biology Yang, H., Wang, D., Shan, T., Dai, X., Xie, H., Yang, L., Jiang, H. 2019; 64 (12): 125015

    Abstract

    We present a novel FMT endoscope using a MEMS scanning mirror and an optical fiberscope. The diameter of this highly miniaturized FMT device is only 5 mm. To our knowledge, this is the smallest FMT device reported to date. Several phantom experiments based on indocyanine green (ICG) were conducted to demonstrate the imaging ability of this device. Two tumor-bearing mice were systemically injected with tumor-targeted NIR fluorescent probes (ATF-PEG-IO-830) and were then imaged to further demonstrate the ability of this FMT endoscope for imaging small animals.

    View details for DOI 10.1088/1361-6560/ab23b3

    View details for PubMedID 31117059

  • X-Ray-Induced Shortwave Infrared Luminescence Computed Tomography Dai, X., Cheng, K., Zhao, W., Xing, L. WILEY. 2019: E301-E302
  • Deep Learning-Based Dual-Energy CT Imaging Using Only a Single-Energy CT Data Zhao, W., Lv, T., Shen, L., Dai, X., Cheng, K., Jia, M., Chen, Y., Xing, L. WILEY. 2019: E276
  • Deep Learning-Based Tomographic Image Reconstruction with Ultra-Sparse Projection Views Shen, L., Zhao, W., Dai, X., Xing, L. WILEY. 2019: E436
  • Single Snapshot X-Ray-Induced Luminescence Computed Tomography (SS-XLCT) Dai, X., Cheng, K., Zhao, W., Xing, L. WILEY. 2019: E465-E466
  • Deep Learning for High Spatial Resolution X-Ray Luminescence Computed Tomography Dai, X., Cheng, K., Zhao, W., Xing, L. WILEY. 2019: E566
  • Reduced acquisition time for L-shell x-ray fluorescence computed tomography using polycapillary x-ray optics. Medical physics Vernekohl, D., Ahmad, M., Dai, X., Zhao, W., Cheng, K., Xing, L. 2019

    Abstract

    X-ray fluorescence computed tomography (XFCT) is an emerging molecular imaging modality for preclinical and clinical applications with high atomic number contrast agents. XFCT allows detection of molecular biomarkers at tissue depths of 4-9 mm at L-shell energies and several centimeters for K-shell energies, while maintaining high spatial resolution. This is typically not possible for other molecular imaging modalities. The purpose of this study is to demonstrate XFCT imaging with reduced acquisition times. To accomplish this, x-ray focusing polycapillary optics are utilized to simultaneously increase the x-ray fluence rate and spatial resolution in L-shell XFCT imaging. A prototype imaging system using a polycapillary focusing optic was demonstrated. The optic, which was custom-designed for this prototype, provided a high fluence rate with a focal spot size of 2.6 mm at a source-to-isocenter distance of 3 cm and a ten times higher fluence rate compared to standard collimation. The study evaluates three different phantoms to explore different trade-offs and limitations of L-shell XFCT imaging. A low contrast gold phantom and a high contrast gold phantom, each with three target regions with gold concentrations of 60, 80, and 100 μg/ml for low contrast and 200, 600, and 1000 μg/ml for high contrast, and a mouse-sized water phantom with gold concentrations between 300 and 500 μg/ml were imaged. X-ray fluorescence photons were measured using a silicon drift detector (SDD) with an energy resolution of 180 eV FWHM at an x-ray energy of 11 keV. Images were reconstructed with an iterative image reconstruction algorithm and analyzed for contrast-to-noise ratio (CNR) and signal-to-noise ratio (SNR). The XFCT data acquisition could be reduced from 17 h to under one hour. The polycapillary x-ray optic increases the x-ray fluence rate and lowers the amount of background scatter, which leads to reduced imaging time and improved sensitivity. The quantitative analysis of the reconstructed images validates that concentrations of 60 μg/ml of gold can be visualized with L-shell XFCT imaging. For a mouse-sized phantom, a concentration of 300 μg/ml gold was detected within a 66 min measurement. With a high fluence rate pencil beam from a polycapillary x-ray source, a reduction in signal integration time is achieved. We show that subtle amounts of contrast agents can be detected with L-shell XFCT within biologically relevant time frames. Our basic measurements show that the polycapillary x-ray source technology is appropriate to realize preclinical L-shell XFCT imaging. The integration of more SDDs into the system will lower the dose and increase the sensitivity.

    View details for DOI 10.1002/mp.13822

    View details for PubMedID 31512753
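
    As an illustration of the contrast-to-noise ratio (CNR) analysis mentioned above, a commonly used definition compares the mean of a target region against the background, normalized by the background noise; the ROI values below are hypothetical, not measurements from the study.

        import numpy as np

        def cnr(roi, background):
            """CNR as commonly defined: |mean difference| over background std."""
            return abs(roi.mean() - background.mean()) / background.std()

        rng = np.random.default_rng(1)
        roi = rng.normal(100.0, 10.0, 500)     # fake gold-target ROI voxel values
        bg  = rng.normal(60.0, 10.0, 5000)     # fake background voxel values
        print(f"CNR = {cnr(roi, bg):.1f}")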

  • High spatial resolution x-ray luminescence computed tomography and x-ray fluorescence computed tomography Dai, X., Sivasubramanian, K., Xing, L., Pogue, B. W., Gioux, S. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2511875

    View details for Web of Science ID 000492315000020

  • A deep learning approach for dual-energy CT imaging using a single-energy CT data Zhao, W., Lv, T., Gao, P., Shen, L., Dai, X., Cheng, K., Jia, M., Chen, Y., Xing, L., Matej, S., Metzler, S. D. SPIE-INT SOC OPTICAL ENGINEERING. 2019

    View details for DOI 10.1117/12.2534433

    View details for Web of Science ID 000535354300073

  • Full density fluorescence molecular tomography (FD-FMT) based on a dichroic mirror. Applied optics Yang, H., Dai, X., Jiang, H. 2018; 57 (27): 7938-7941

    Abstract

    We present a novel method called full density fluorescence molecular tomography (FD-FMT) that can considerably improve the performance of conventional FMT. By converting each source (or detector) to a detector (or source) through the use of a dichroic mirror, FD-FMT not only increases the number of optical projections by more than fourfold (compared to conventional FMT) to achieve high-resolution image reconstruction, but also offers the possibility of realizing miniaturized FMT systems.

    View details for DOI 10.1364/AO.57.007938

    View details for PubMedID 30462063

    View details for PubMedCentralID PMC6541215

  • Fast noninvasive functional diffuse optical tomography for brain imaging. Journal of biophotonics Dai, X., Zhang, T., Yang, H., Tang, J., Carney, P. R., Jiang, H. 2018; 11 (3)

    Abstract

    Advances in epilepsy studies have shown that specific changes in hemodynamics precede and accompany seizure onset and propagation. However, it has been challenging to noninvasively detect these changes in real time and in humans, due to the lack of fast functional neuroimaging tools. In this study, we present a functional diffuse optical tomography (DOT) method with the guidance of an anatomical human head atlas for 3-dimensionally mapping the brain in real time. Central to our DOT system is a human head interface coupled with a technique that can incorporate topological information of the brain surface into the DOT image reconstruction. The performance of the DOT system was tested by imaging motor task-related brain activity in N = 6 subjects (3 epilepsy patients and 3 healthy controls). We observed diffuse areas of activation in the reconstructed [HbT] images of patients, relative to more focal activations for healthy subjects. Moreover, significant pretask hemodynamic activations were also seen in the motor cortex of patients, indicating persistent abnormal activity in the epileptic brain. This work demonstrates that fast functional DOT is a valuable tool for noninvasive 3-dimensional mapping of brain hemodynamics.

    View details for DOI 10.1002/jbio.201600267

    View details for PubMedID 28696034

  • Targeted Molecular Imaging of Pancreatic Cancer with a Miniature Endoscope. Applied sciences (Basel, Switzerland) Dai, X., Qian, W., Yang, H., Yang, L., Jiang, H. 2017; 7 (12)

    Abstract

    It is highly desirable to develop novel approaches to improve the survival rate of pancreatic cancer patients through early detection. Here, we present such an approach based on photoacoustic and fluorescence molecular imaging of pancreatic tumor using a miniature multimodal endoscope in combination with targeted multifunctional iron oxide nanoparticles (IONPs). A novel fan-shaped scanning mechanism was developed to minimize the invasiveness for endoscopic imaging of pancreatic tumors. The results show that the enhancements in photoacoustic and fluorescence signals using amino-terminal fragment (ATF) targeted IONPs were ~four to six times higher compared to those using non-targeted IONPs. Our study indicates the potential of the combination of multimodal photoacoustic-fluorescence endoscopy and targeted multifunctional nanoparticles as an efficient tool to provide improved specificity and sensitivity for pancreatic cancer detection.

    View details for DOI 10.3390/app7121241

    View details for PubMedID 31205772

    View details for PubMedCentralID PMC6570408

  • In vivo photoacoustic imaging of vasculature with a low-cost miniature light emitting diode excitation. Optics letters Dai, X., Yang, H., Jiang, H. 2017; 42 (7): 1456-1459

    Abstract

    In this Letter, we present a photoacoustic imaging (PAI) system based on a low-cost high-power miniature light emitting diode (LED) that is capable of mapping vasculature networks in biological tissue in vivo. Overdriven with 200 ns pulses at a repetition rate of 40 kHz, a 1.2 W 405 nm LED with a radiation area of 1000 μm × 1000 μm and a size of 3.5 mm × 3.5 mm was used to excite photoacoustic signals in tissue. Phantoms including black stripes, lead, and hair were used to validate the system, in which a volumetric PAI image was obtained by scanning the transducer and the light beam in a two-dimensional x-y plane over the object. In vivo imaging of the vasculature of a mouse ear shows that LED-based PAI could have great potential for label-free biomedical imaging applications where the use of bulky and expensive pulsed lasers is impractical.

    View details for DOI 10.1364/OL.42.001456

    View details for PubMedID 28362791

  • Low-cost high-power light emitting diodes for photoacoustic imaging Dai, X., Yang, H., Jiang, H., Oraevsky, A. A., Wang, L. V. SPIE-INT SOC OPTICAL ENGINEERING. 2017

    View details for DOI 10.1117/12.2251524

    View details for Web of Science ID 000405954800123

  • Miniature Endoscope for Multimodal Imaging ACS PHOTONICS Dai, X., Yang, H., Shan, T., Xie, H., Berceli, S. A., Jiang, H. 2017; 4 (1): 174-180
  • Miniature multimodal endoscopic probe based on double-clad fiber Dai, X., Yang, H., Tang, J., Duan, C., Tanguy, Q., Xie, H., Jiang, H., Tearney, G. J., Wang, T. D. SPIE-INT SOC OPTICAL ENGINEERING. 2017

    View details for DOI 10.1117/12.2251510

    View details for Web of Science ID 000401132400015

  • A fast atlas-guided high density diffuse optical tomography system for brain imaging Dai, X., Zhang, T., Yang, H., Jiang, H., Tromberg, B. J., Yodh, A. G., Sevick-Muraca, E. M., Alfano, R. R. SPIE-INT SOC OPTICAL ENGINEERING. 2017

    View details for DOI 10.1117/12.2251534

    View details for Web of Science ID 000401132900025

  • Wearable scanning photoacoustic brain imaging in behaving rats. Journal of biophotonics Tang, J., Dai, X., Jiang, H. 2016; 9 (6): 570-5

    Abstract

    A wearable scanning photoacoustic imaging (wPAI) system is presented for noninvasive brain study in behaving rats. This miniaturized wPAI system consists of four pico linear servos and a single transducer-based PAI probe. It has dimensions of 50 mm × 35 mm × 40 mm and a weight of 26 g excluding cabling. Phantom evaluation shows that wPAI achieves a lateral resolution of ∼0.5 mm and an axial resolution of ∼0.1 mm at a depth of up to 11 mm. Its imaging ability was also tested in a behaving rat, and the results indicate that wPAI is able to image blood vessels at a depth of up to 5 mm with intact scalp and skull. With its noninvasive, deep-penetration, and functional imaging ability in behaving animals, wPAI can be used for behavior, cognition, and preclinical brain disease studies.

    View details for DOI 10.1002/jbio.201500311

    View details for PubMedID 26777064

  • Wearable 3-D Photoacoustic Tomography for Functional Brain Imaging in Behaving Rats. Scientific reports Tang, J., Coleman, J. E., Dai, X., Jiang, H. 2016; 6: 25470

    Abstract

    Understanding the relationship between brain function and behavior remains a major challenge in neuroscience. Photoacoustic tomography (PAT) is an emerging technique that allows for noninvasive in vivo brain imaging at micrometer-millisecond spatiotemporal resolution. In this article, a novel, miniaturized 3D wearable PAT (3D-wPAT) technique is described for brain imaging in behaving rats. 3D-wPAT has three layers of fully functional acoustic transducer arrays. Phantom imaging experiments revealed that the in-plane X-Y spatial resolutions were ~200 μm for each acoustic detection layer. The functional imaging capacity of 3D-wPAT was demonstrated by mapping the cerebral oxygen saturation via multi-wavelength irradiation in behaving hyperoxic rats. In addition, we demonstrated that 3D-wPAT could be used for monitoring sensory stimulus-evoked responses in behaving rats by measuring hemodynamic responses in the primary visual cortex during visual stimulation. Together, these results show the potential of 3D-wPAT for brain study in behaving rodents.

    View details for DOI 10.1038/srep25470

    View details for PubMedID 27146026

    View details for PubMedCentralID PMC4857106

  • Continuous-wave yellow-green laser at 0.56 μm based on frequency doubling of a diode-end-pumped ceramic Nd:YAG laser. Applied optics Yao, W., Gao, J., Zhang, L., Li, J., Tian, Y., Ma, Y., Wu, X., Ma, G., Yang, J., Pan, Y., Dai, X. 2015; 54 (18): 5817-21

    Abstract

    We present what is, to the best of our knowledge, the first report on yellow-green laser generation based on the frequency doubling of the 1.1 μm transitions in Nd:YAG ceramics. By employing an 885 nm diode laser as the end-pumping source and a lithium triborate crystal as the frequency doubler, the highest continuous wave output powers of 1.4, 0.5, and 1.1 W at 556, 558, and 561 nm are achieved, respectively. These result in optical-to-optical efficiencies of 6.9%, 2.5%, and 5.4% with respect to the absorbed pump power, respectively.

    View details for DOI 10.1364/AO.54.005817

    View details for PubMedID 26193034

  • Miniature probe integrating optical-resolution photoacoustic microscopy, optical coherence tomography, and ultrasound imaging: proof-of-concept. Optics letters Dai, X., Xi, L., Duan, C., Yang, H., Xie, H., Jiang, H. 2015; 40 (12): 2921-4

    Abstract

    In this Letter, we present a novel tri-modal miniature side-view probe, through which optical-resolution photoacoustic microscopy (OR-PAM), optical coherence tomography (OCT), and pulse-echo ultrasound (US) images can be coaxially acquired and displayed simultaneously. The probe consists of a common optical path for OR-PAM (light delivery) and OCT (light delivery/detection), and a 40-MHz unfocused ultrasound transducer for OR-PAM (photoacoustic detection) and US (ultrasound transmission/receiving) with an overall diameter of 2 mm. Combining OR-PAM, OCT, and US would provide complementary information including optical absorption (OR-PAM), optical back-scattering (OCT), and deep tissue structures (US) about biological tissue. Based on an integrated imaging system consisting of OR-PAM, time-domain OCT, and US, phantom images and in vivo images of rat ear were acquired to demonstrate the capabilities of the integrated tri-modality imaging probe. The probe yields a lateral resolution of 13.6 μm for OR-PAM and OCT, and an axial resolution of 43 μm for OR-PAM and US. Currently, for a scanning area of 1 × 1 mm, it took ∼25 min to acquire data for tri-modal volumetric imaging.

    View details for DOI 10.1364/OL.40.002921

    View details for PubMedID 26076296

  • FMTPen: A Miniaturized Handheld Fluorescence Molecular Tomography Probe for Image-Guided Cancer Surgery PHOTONICS Yang, H., He, B., Dai, X., Satpathy, M., Yang, L., Jiang, H. 2015; 2 (1): 279-287
  • High-power continuous-wave yellow-green laser at 558 nm under in-band pumping OPTICS COMMUNICATIONS Gao, J., Zhang, L., Sun, H., Dai, X., Wu, X. 2014; 319: 110-112
  • Highly efficient continuous-wave composite Nd:YAG laser at 1,112 nm under diode pumping directly into the emitting level APPLIED PHYSICS B-LASERS AND OPTICS Gao, J., Dai, X. J., Zhang, L., Sun, H. X., Wu, X. D. 2013; 111 (3): 407-413
  • A Continuous-Wave Medical Yellow Laser at 561 nm Gao, J., Dai, X. J., Zhang, L., Sun, H. X., Wu, X. D. IEEE. 2013
  • All-solid-state continuous-wave yellow laser at 561 nm under in-band pumping JOURNAL OF THE OPTICAL SOCIETY OF AMERICA B-OPTICAL PHYSICS Gao, J., Dai, X., Zhang, L., Sun, H., Wu, X. 2013; 30 (1): 95-98
  • Efficient continuous-wave 1112 nm Nd:YAG laser operation under direct diode pumping at 885 nm LASER PHYSICS LETTERS Gao, J., Dai, X. J., Zhang, L., Wu, X. D. 2013; 10 (1)
  • Quasi-three-level neodymium vanadate laser operation under polarized diode pumping: theoretical and experimental investigation LASER PHYSICS Gao, J., Yan, R. P., Dai, X. J., Yu, X., Zhang, L., Wu, X. D. 2012; 22 (8): 1279-1285