Honors & Awards


  • Competition winner, Automatic intervertebral discs segmentation, MICCAI (2015)
  • Competition winner, Automatic vertebra segmentation, MICCAI (2014)
  • Competition winner, Automatic cephalometric X-ray landmark detection, ISBI (2014)

Professional Education


  • Doctor of Philosophy, University of Ljubljana (2014)

All Publications


  • Segmentation of organs-at-risks in head and neck CT images using convolutional neural networks. Medical Physics. Ibragimov, B., Xing, L. 2017; 44 (2): 547-557

    Abstract

    Accurate segmentation of organs-at-risk (OARs) is the key step for efficient planning of radiation therapy for head and neck (HaN) cancer treatment. In this work, we proposed the first deep learning-based algorithm for segmentation of OARs in HaN CT images, and compared its performance against state-of-the-art automated segmentation algorithms, commercial software, and interobserver variability. Convolutional neural networks (CNNs), a concept from the field of deep learning, were used to learn consistent intensity patterns of OARs from training CT images and to segment the OAR in a previously unseen test CT image. For CNN training, we extracted a representative number of positive intensity patches around voxels that belong to the OAR of interest in training CT images, and negative intensity patches around voxels that belong to the surrounding structures. These patches then passed through a sequence of CNN layers that captured local image features such as corners, end-points, and edges, and combined them into more complex high-order features that can efficiently describe the OAR. The trained network was applied to classify voxels in a region of interest in the test image where the corresponding OAR is expected to be located. We then smoothed the obtained classification results using a Markov random fields algorithm. We finally extracted the largest connected component of the smoothed voxels classified as the OAR by the CNN and performed dilate-erode operations to remove cavities of the component, which resulted in segmentation of the OAR in the test image. The performance of CNNs was validated on segmentation of the spinal cord, mandible, parotid glands, submandibular glands, larynx, pharynx, eye globes, optic nerves, and optic chiasm using 50 CT images. The obtained segmentation results varied from a 37.4% Dice coefficient (DSC) for the chiasm to an 89.5% DSC for the mandible. We also analyzed the performance of state-of-the-art algorithms and commercial software reported in the literature, and observed that CNNs demonstrate similar or superior performance on segmentation of the spinal cord, mandible, parotid glands, larynx, pharynx, eye globes, and optic nerves, but inferior performance on segmentation of the submandibular glands and optic chiasm. We concluded that convolutional neural networks can accurately segment most OARs using a representative database of 50 HaN CT images. At the same time, inclusion of additional information, for example MR images, may be beneficial for OARs with poorly visible boundaries.

    View details for DOI 10.1002/mp.12045

    View details for PubMedID 28205307

    View details for PubMedCentralID PMC5383420
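The post-processing steps described in the abstract above, extracting the largest connected component of the CNN-classified voxels and then applying dilate-erode operations to remove cavities, can be sketched with SciPy. This is an illustrative sketch only: the simulated probability volume, the 0.5 threshold, and the 3x3x3 structuring element are assumptions, not values from the paper, and the Markov-random-field smoothing step is omitted.

```python
import numpy as np
from scipy import ndimage

def postprocess_oar_mask(voxel_probs, threshold=0.5):
    """Post-process per-voxel probabilities: threshold, keep the largest
    connected component, then close (dilate-erode) to remove cavities."""
    mask = voxel_probs > threshold
    labeled, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    # size of each connected component, labels 1..n
    sizes = ndimage.sum(mask, labeled, range(1, n + 1))
    largest = labeled == (np.argmax(sizes) + 1)
    # morphological closing (dilation followed by erosion) fills cavities
    return ndimage.binary_closing(largest, structure=np.ones((3, 3, 3)))

# toy volume: a 5x5x5 blob with a one-voxel cavity, plus an isolated speck
probs = np.zeros((10, 10, 10))
probs[2:7, 2:7, 2:7] = 0.9
probs[4, 4, 4] = 0.1   # cavity inside the "organ"
probs[9, 9, 9] = 0.9   # spurious isolated detection
seg = postprocess_oar_mask(probs)
```

The isolated speck is discarded by the largest-component step, and the internal cavity is filled by the closing operation.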

  • Augmenting atlas-based liver segmentation for radiotherapy treatment planning by incorporating image features proximal to the atlas contours. Physics in Medicine and Biology. Li, D., Liu, L., Chen, J., Li, H., Yin, Y., Ibragimov, B., Xing, L. 2017; 62 (1): 272-288

    Abstract

    Atlas-based segmentation utilizes a library of previously delineated contours of similar cases to facilitate automatic segmentation. The problem, however, remains challenging because of the limited information carried by the contours in the library. In this study, we developed a narrow-shell strategy to enhance the information of each contour in the library and to improve the accuracy of the existing atlas-based approach. This study presented a new concept for atlas-based segmentation: instead of using the complete volume of the target organs, only information along the organ contours from the atlas images was used to guide segmentation of the new image. In setting up the atlas library, we included not only the coordinates of contour points, but also the image features adjacent to the contour. In this work, 139 CT images with normal-appearing livers collected for radiotherapy treatment planning were used to construct the library. The CT images within the library were first registered to each other using affine registration. A nonlinear narrow shell was generated alongside the object contours of the registered images. Matching voxels were selected inside the common narrow shell of a library case and a new case using a speeded-up robust features (SURF) strategy. A deformable registration was then performed using a thin-plate splines (TPS) technique. The contour associated with the library case was propagated automatically onto the new image by exploiting the deformation field vectors. The liver contour was finally obtained by employing level-set-based energy optimization within the narrow shell. The performance of the proposed method was evaluated by quantitatively comparing the auto-segmentation results with those delineated by physicians. A novel atlas-based segmentation technique with inclusion of neighborhood image features, through the introduction of a narrow shell surrounding the target objects, was established. Application of the technique to 30 liver cases suggested that it was capable of reliably segmenting livers from CT, 4D-CT, and CBCT images with little human interaction. The accuracy and speed of the proposed method were quantitatively validated by comparing automatic segmentation results with manual delineation results. The Jaccard similarity metric between the automatically generated liver contours and the physician-delineated results was on average 90%-96% for planning images. Incorporation of image features into the library contours improves currently available atlas-based auto-contouring techniques and provides a clinically practical solution for auto-segmentation. The proposed narrow-shell atlas-based method can achieve efficient automatic liver contour propagation for CT, 4D-CT, and CBCT images for subsequent treatment planning and should find widespread application in future treatment planning systems.

    View details for DOI 10.1088/1361-6560/62/1/272

    View details for Web of Science ID 000391567700008

    View details for PubMedID 27991439
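The deformable step described in the abstract above, a thin-plate-splines warp fitted on matched voxels and then applied to the atlas contour, can be sketched with SciPy's `RBFInterpolator` and its `thin_plate_spline` kernel. The control points and contour below are made-up 2D stand-ins for the SURF matches inside the narrow shell, not data from the paper.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# matched control points: atlas-image coordinates -> new-image coordinates
# (in the paper these come from SURF matching; here, a simple translation)
atlas_pts = np.array([[0., 0.], [0., 10.], [10., 0.], [10., 10.], [5., 5.]])
new_pts = atlas_pts + np.array([2.0, -1.0])

# fit a thin-plate-spline deformation on the matches
tps = RBFInterpolator(atlas_pts, new_pts, kernel="thin_plate_spline")

# propagate an atlas contour into the new image via the fitted deformation
contour = np.array([[1., 1.], [9., 1.], [9., 9.], [1., 9.]])
warped = tps(contour)
```

Because a TPS includes an affine term, a purely translational set of matches is reproduced exactly, so the warped contour is simply the original shifted by (2, -1).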

  • Evaluation and comparison of 3D intervertebral disc localization and segmentation methods for 3D T2 MR data: A grand challenge. Medical Image Analysis. Zheng, G., Chu, C., Belavý, D. L., Ibragimov, B., Korez, R., Vrtovec, T., Hutt, H., Everson, R., Meakin, J., Andrade, I. L., Glocker, B., Chen, H., Dou, Q., Heng, P., Wang, C., Forsberg, D., Neubert, A., Fripp, J., Urschler, M., Stern, D., Wimmer, M., Novikov, A. A., Cheng, H., Armbrecht, G., Felsenberg, D., Li, S. 2017; 35: 327-344

    Abstract

    The evaluation of changes in intervertebral discs (IVDs) with 3D magnetic resonance imaging (MRI) can be of interest for many clinical applications. This paper presents the evaluation of both IVD localization and IVD segmentation methods submitted to the Automatic 3D MRI IVD Localization and Segmentation challenge, held at the 2015 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2015) with an on-site competition. With the construction of a manually annotated reference data set composed of 25 3D T2-weighted MR images acquired from two different studies and the establishment of a standard validation framework, a quantitative evaluation was performed to compare the results of the methods submitted to the challenge. Experimental results show that, overall, the best localization method achieves a mean localization distance of 0.8 mm, and the best segmentation method achieves a mean Dice of 91.8%, a mean average absolute distance of 1.1 mm, and a mean Hausdorff distance of 4.3 mm. The strengths and drawbacks of each method are discussed, which provides insights into the performance of different IVD localization and segmentation methods.

    View details for DOI 10.1016/j.media.2016.08.005

    View details for PubMedID 27567734
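The segmentation metrics named in the abstract above, Dice overlap and Hausdorff distance, can be computed for binary masks with NumPy and SciPy. The two toy 2D masks below are illustrative and unrelated to the challenge data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

# two overlapping 4x4 squares, offset by one voxel in each axis
a = np.zeros((8, 8), bool); a[2:6, 2:6] = True
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True
d = dice(a, b)                                  # 2*9 / (16+16) = 0.5625
h = hausdorff(np.argwhere(a), np.argwhere(b))   # sqrt(2)
```

In the challenge, such distances are reported in millimetres, so voxel coordinates would first be scaled by the image spacing.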

  • Fully automated quantitative cephalometry using convolutional neural networks. Journal of Medical Imaging (Bellingham, Wash.). Arik, S. Ö., Ibragimov, B., Xing, L. 2017; 4 (1): 014501-?

    Abstract

    Quantitative cephalometry plays an essential role in clinical diagnosis, treatment, and surgery. Development of fully automated techniques for these procedures is important to enable consistently accurate computerized analyses. We study the application of deep convolutional neural networks (CNNs) for fully automated quantitative cephalometry for the first time. The proposed framework utilizes CNNs for detection of landmarks that describe the anatomy of the depicted patient and yield quantitative estimation of pathologies in the jaws and skull base regions. We use a publicly available cephalometric x-ray image dataset to train CNNs for recognition of landmark appearance patterns. CNNs are trained to output probabilistic estimations of different landmark locations, which are combined using a shape-based model. We evaluate the overall framework on the test set and compare with other proposed techniques. We use the estimated landmark locations to assess anatomically relevant measurements and classify them into different anatomical types. Overall, our results demonstrate high anatomical landmark detection accuracy ([Formula: see text] to 2% higher success detection rate for a 2-mm range compared with the top benchmarks in the literature) and high anatomical type classification accuracy ([Formula: see text] average classification accuracy for test set). We demonstrate that CNNs, which merely input raw image patches, are promising for accurate quantitative cephalometry.

    View details for DOI 10.1117/1.JMI.4.1.014501

    View details for PubMedID 28097213

    View details for PubMedCentralID PMC5220585
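The abstract above describes CNNs that output probabilistic estimations of landmark locations, which are then combined with a shape-based model. A minimal sketch of the first step, reading a landmark location from a per-landmark probability heatmap, is below; the heatmap shapes and peak positions are made up for illustration, and the shape-model refinement is omitted.

```python
import numpy as np

def landmarks_from_heatmaps(heatmaps):
    """Pick the most probable (row, col) location per landmark from
    probability heatmaps of shape (n_landmarks, H, W)."""
    n, h, w = heatmaps.shape
    flat = heatmaps.reshape(n, -1).argmax(axis=1)
    # convert flat argmax indices back to (row, col) pairs
    return np.stack(np.unravel_index(flat, (h, w)), axis=1)

# two toy heatmaps with known peaks
hm = np.zeros((2, 50, 40))
hm[0, 10, 20] = 1.0
hm[1, 30, 5] = 1.0
pts = landmarks_from_heatmaps(hm)
```

The detected points would then feed the anatomical measurements and type classification described in the abstract.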