Bio


Daniel L. Rubin, MD, MS is Professor of Biomedical Data Science, Radiology, Medicine (Biomedical Informatics), and Ophthalmology (courtesy) at Stanford University. He is Principal Investigator of two centers in the National Cancer Institute's Quantitative Imaging Network (QIN) and is Director of Biomedical Informatics for the Stanford Cancer Institute. He also leads the Research Informatics Center (RIC) of the School of Medicine (https://med.stanford.edu/ric.html). He previously chaired the Informatics Committee of the ECOG-ACRIN cooperative group, the QIN Executive Committee, and the RadLex Steering Committee of the Radiological Society of North America. His NIH-funded research program focuses on quantitative imaging and on integrating imaging data with clinical and molecular data to discover imaging phenotypes that can predict the underlying biology, define disease subtypes, and personalize treatment. He is a Fellow of the American Institute for Medical and Biological Engineering (AIMBE), a Fellow of the American College of Medical Informatics (ACMI), and a Fellow of the Society of Imaging Informatics in Medicine (SIIM), and he received the Distinguished Investigator Award from the Academy for Radiology & Biomedical Imaging Research. He has published over 350 scientific publications in biomedical imaging informatics, data science, and radiology.

Academic Appointments


Administrative Appointments


  • Director of Biomedical Informatics, Stanford Cancer Institute (2016 - 2024)
  • Co-Director, Cancer Imaging and Early Detection Program, Stanford Cancer Institute (2018 - 2024)
  • Director, Scholarly Concentration in Informatics and Data Driven Medicine, Stanford School of Medicine (2011 - 2024)

Honors & Awards


  • Fellow (FSIIM), Society of Imaging Informatics in Medicine (2018)
  • Distinguished Investigator Award, Academy for Radiology & Biomedical Imaging Research (2017)
  • Fellow (FACMI), American College of Medical Informatics (ACMI) (2012)
  • Honored Educator Award, Radiological Society of North America (RSNA) (2012, 2013)
  • Cum Laude Award, Radiological Society of North America (2011)
  • caBIG Connecting Collaborators Award, National Cancer Institute (2010)
  • Certificate of Merit, Radiological Society of North America (2009)
  • Cum Laude Award, Radiological Society of North America (2008)
  • Cum Laude Award, Radiological Society of North America (2006)

Boards, Advisory Committees, Professional Organizations


  • Diplomate, American Board of Radiology (1990 - Present)
  • Certified Physician and Surgeon, California Board of Medical Quality Assurance (1986 - Present)

Current Research and Scholarly Interests


My research interest is imaging informatics: ways computers can work with images to leverage their rich information content and to help physicians use images to guide personalized care. Just as biology has been revolutionized by online genetic data, clinical medicine can now be transformed by mining huge image repositories and electronically correlating image data with pathology and molecular data. Work in our lab thus lies at the intersection of biomedical informatics and imaging science, and we are working in several major areas. We are developing methods to extract information and meaning from images for data mining. We are also developing statistical natural language processing methods to extract and summarize information in radiology reports and published articles. We are building resources to integrate images with related clinical and molecular data to discover novel image biomarkers of disease. Finally, we are translating these methods into practice by creating decision support applications that relate radiology findings to diagnoses and that will improve diagnostic accuracy and clinical effectiveness.

Clinical Trials


  • Genetic & Pathological Studies of BRCA1/BRCA2: Associated Tumors & Blood Samples (Recruiting)

    The purpose of this study is to try to understand the biology of development of breast, ovarian, fallopian tube, peritoneal or endometrial cancer from persons at high genetic risk for these diseases. The influence of environmental factors on cancer development in individuals and families will be studied. The efficacy of treatments for these diseases will be evaluated.

  • A Study of GDC-0853 in Patients With Resistant B-Cell Lymphoma or Chronic Lymphocytic Leukemia (Not Recruiting)

    This open-label, Phase I study will evaluate the safety, tolerability, and pharmacokinetics of increasing doses of GDC-0853 in patients with relapsed or refractory B-cell non-Hodgkin's lymphoma or chronic lymphocytic leukemia. In a dose-expansion part, GDC-0853 will be assessed in subsets of patients.

    Stanford is currently not accepting patients for this trial. For more information, please contact Sabata Lund, 650-725-6432.

  • A Study of the Bruton's Tyrosine Kinase Inhibitor, PCI-32765 (Ibrutinib), in Combination With Rituximab, Cyclophosphamide, Doxorubicin, Vincristine, and Prednisone in Patients With Newly Diagnosed Non-Germinal Center B-Cell Subtype of Diffuse Large B-Cell Lymphoma (Not Recruiting)

    The purpose of this study is to evaluate if ibrutinib administered in combination with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) improves the clinical outcome in newly diagnosed patients with non-germinal center B-cell subtype (GCB) of diffuse large B-cell lymphoma (DLBCL) selected by immunohistochemistry (IHC) or newly diagnosed patients with activated B cell-like (ABC) subtype of DLBCL identified by gene expression profiling (GEP) or both populations.

    Stanford is currently not accepting patients for this trial. For more information, please contact Sipra Choudhury, 650-736-2563.

  • A Study to Evaluate Safety, Tolerability, and Pharmacokinetics of Escalating Doses of AGS67E Given as Monotherapy in Subjects With Refractory or Relapsed Lymphoid Malignancies (Not Recruiting)

    The purpose of this study is to evaluate the safety, tolerability and pharmacokinetics of AGS67E both without and with myeloid growth factor (GF) in subjects with refractory or relapsed lymphoid malignancies. Immunogenicity and anticancer activity of AGS67E will also be assessed.

    Stanford is currently not accepting patients for this trial. For more information, please contact Sipra Choudhury, 650-736-2563.

  • Correlation of PET-CT Studies With Serum Protein Analysis (Not Recruiting)

    To correlate serum proteomics patterns with PET/CT findings to improve cancer diagnosis, staging, prognosis, and therapy monitoring.

    Stanford is currently not accepting patients for this trial. For more information, please contact Erik Mittra, 650-725-4711.

  • Ibrutinib With Rituximab in Adults With Waldenström's Macroglobulinemia (Not Recruiting)

    The purpose of this study is to evaluate the safety and efficacy of ibrutinib in combination with rituximab in participants with Waldenström's macroglobulinemia (WM).

    Stanford is currently not accepting patients for this trial. For more information, please contact Kelsey Walters, 650-725-6432.

  • Perfusion CT as a Predictor of Treatment Response in Patients With Rectal Cancer (Not Recruiting)

    A research study of rectal cancer perfusion (how blood flows to the rectum over time). We hope to learn whether perfusion characteristics of rectal masses may be predictive of response to treatment and whether rectal perfusion characteristics can be used to follow response to treatment.

    Stanford is currently not accepting patients for this trial. For more information, please contact Laura Gable, 650-736-0798.

Projects


  • Quantitative image analysis and machine learning, Stanford University

    We are developing machine learning methods for image detection, segmentation, classification and retrieval.

    Location

    Stanford, CA

  • The ePAD project, Stanford University

    In this project we are creating ePAD (the electronic Physician Annotation Device), a web-based semantic image annotation and analysis tool; see http://epad.stanford.edu.

    Location

    Stanford, CA

  • Automatic abstraction of imaging observations with their characteristics, Stanford University

    The goal of this project is to extract semantic imaging features from radiology texts to enable automated decision support and quality assurance.

    Location

    Stanford, CA

  • Natural language processing of radiology reports, Stanford University

    We are developing methods to extract imaging features and characteristics from free text radiology reports to enable decision support.

    Location

    Stanford, CA

  • AI-Enabled Cancer Tumor Boards, Stanford University

    This project uses AI methods to summarize patient data and guide decision making for patients reviewed during tumor boards.

    Location

    Stanford, CA

2023-24 Courses


Stanford Advisees


Graduate and Fellowship Programs


  • Biomedical Data Science (PhD Program)
  • Biomedical Data Science (Masters Program)

All Publications


  • Learning domain-agnostic visual representation for computational pathology using medically-irrelevant style transfer augmentation IEEE Transactions on Medical Imaging Yamashita, R., Long, J., Banda, S., Shen*, J., Rubin*, D. L., (*equal contribution) 2021: 3945-3954

    Abstract

    Suboptimal generalization of machine learning models on unseen data is a key challenge which hampers the clinical applicability of such models to medical imaging. Although various methods such as domain adaptation and domain generalization have evolved to combat this challenge, learning robust and generalizable representations is core to medical image understanding, and continues to be a problem. Here, we propose STRAP (Style TRansfer Augmentation for histoPathology), a form of data augmentation based on random style transfer from non-medical style sources such as artistic paintings, for learning domain-agnostic visual representations in computational pathology. Style transfer replaces the low-level texture content of an image with the uninformative style of a randomly selected style source image, while preserving the original high-level semantic content. This improves robustness to domain shift and can be used as a simple yet powerful tool for learning domain-agnostic representations. We demonstrate that STRAP leads to state-of-the-art performance, particularly in the presence of domain shifts, on two particular classification tasks in computational pathology. Our code is available at https://github.com/rikiyay/style-transfer-for-digital-pathology.

    View details for DOI 10.1109/TMI.2021.3101985
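
    The following is a minimal, illustrative Python sketch of the general augmentation idea described in the abstract above: with some probability, a training patch is re-rendered in the style of a randomly chosen non-medical image before being fed to the classifier. The apply_style_transfer hook is a hypothetical placeholder for a pretrained style-transfer network and is not the authors' implementation; their released code is at the GitHub link above.

        import random
        from pathlib import Path

        from PIL import Image


        def apply_style_transfer(content_img, style_img):
            """Hypothetical hook: swap the low-level texture of content_img for the
            style of style_img while preserving high-level semantic content.
            Plug in a pretrained style-transfer network here."""
            raise NotImplementedError


        class RandomStyleAugment:
            """Randomly restyles a histopathology patch with a non-medical style image."""

            def __init__(self, style_dir, probability=0.5):
                self.style_paths = list(Path(style_dir).glob("*.jpg"))  # e.g., paintings
                self.probability = probability

            def __call__(self, patch):
                if random.random() > self.probability or not self.style_paths:
                    return patch  # keep the original patch unchanged
                style = Image.open(random.choice(self.style_paths)).convert("RGB")
                return apply_style_transfer(patch, style)

    Such a transform could be composed with ordinary geometric and color augmentations in a training pipeline; the key design choice in the paper is that the style sources are deliberately irrelevant to pathology, so only texture, not semantics, is perturbed.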

  • A Probabilistic Model to Support Radiologists' Classification Decisions in Mammography Practice MEDICAL DECISION MAKING Zeng, J., Gimenez, F., Burnside, E. S., Rubin, D. L., Shachter, R. 2019; 39 (3): 208–16
  • Automatic inference of BI-RADS final assessment categories from narrative mammography report findings Journal of Biomedical Informatics Banerjee, I., Bozkurt, S., Alkim, E., Sagreiya, H., Kurian, A. W., Rubin, D. L. 2019
  • Deep learning enables automatic detection and segmentation of brain metastases on multisequence MRI. Journal of magnetic resonance imaging : JMRI Grøvik, E., Yi, D., Iv, M., Tong, E., Rubin, D., Zaharchuk, G. 2019

    Abstract

    Detecting and segmenting brain metastases is a tedious and time-consuming task for many radiologists, particularly with the growing use of multisequence 3D imaging. The purpose of this study was to demonstrate automated detection and segmentation of brain metastases on multisequence MRI using a deep-learning approach based on a fully convolutional neural network (CNN). Study type: retrospective. In all, 156 patients with brain metastases from several primary cancers were included. Field strength: 1.5T and 3T. Pretherapy MR images included pre- and postgadolinium T1-weighted 3D fast spin echo (CUBE), postgadolinium T1-weighted 3D axial IR-prepped FSPGR (BRAVO), and 3D CUBE fluid-attenuated inversion recovery (FLAIR). The ground truth was established by manual delineation by two experienced neuroradiologists. CNN training/development was performed using 100 and 5 patients, respectively, with a 2.5D network based on a GoogLeNet architecture. The results were evaluated in 51 patients, equally separated into those with few (1-3), multiple (4-10), and many (>10) lesions. Network performance was evaluated using precision, recall, Dice/F1 score, and receiver operating characteristic (ROC) curve statistics. For an optimal probability threshold, detection and segmentation performance was assessed on a per-metastasis basis. The Wilcoxon rank sum test was used to test the differences between patient subgroups. The area under the ROC curve (AUC), averaged across all patients, was 0.98 ± 0.04. The AUC in the subgroups was 0.99 ± 0.01, 0.97 ± 0.05, and 0.97 ± 0.03 for patients having 1-3, 4-10, and >10 metastases, respectively. Using an average optimal probability threshold determined by the development set, precision, recall, and Dice score were 0.79 ± 0.20, 0.53 ± 0.22, and 0.79 ± 0.12, respectively. At the same probability threshold, the network showed an average false-positive rate of 8.3/patient (no lesion-size limit) and 3.4/patient (10 mm3 lesion size limit). A deep-learning approach using multisequence MRI can automatically detect and segment brain metastases with high accuracy. Level of Evidence: 3. Technical Efficacy: Stage 2.

    View details for PubMedID 31050074

  • Geographic atrophy segmentation in SD-OCT images using synthesized fundus autofluorescence imaging. Computer methods and programs in biomedicine Wu, M., Cai, X., Chen, Q., Ji, Z., Niu, S., Leng, T., Rubin, D. L., Park, H. 2019; 182: 105101

    Abstract

    Accurate assessment of geographic atrophy (GA) is critical for diagnosis and therapy of non-exudative age-related macular degeneration (AMD). Herein, we propose a novel GA segmentation framework for spectral-domain optical coherence tomography (SD-OCT) images that employs synthesized fundus autofluorescence (FAF) images. An en-face OCT image is created via the restricted sub-volume projection of three-dimensional OCT data. A GA region-aware conditional generative adversarial network is employed to generate a plausible FAF image from the en-face OCT image. The network balances the consistency between the entire synthesized FAF image and the lesion. We use a fully convolutional deep network architecture to segment the GA region using the multimodal images, where the features of the en-face OCT and synthesized FAF images are fused on the front-end of the network. Experimental results for 56 SD-OCT scans with GA indicate that our synthesis algorithm can generate high-quality synthesized FAF images and that the proposed segmentation network achieves a dice similarity coefficient, an overlap ratio, and an absolute area difference of 87.2%, 77.9%, and 11.0%, respectively. We report an automatic GA segmentation method utilizing synthesized FAF images. Our method is effective for multimodal segmentation of the GA region and can improve AMD treatment.

    View details for DOI 10.1016/j.cmpb.2019.105101

    View details for PubMedID 31600644

  • Automated Survival Prediction in Metastatic Cancer Patients Using High-Dimensional Electronic Medical Record Data. Journal of the National Cancer Institute Gensheimer, M. F., Henry, A. S., Wood, D. J., Hastie, T. J., Aggarwal, S., Dudley, S. A., Pradhan, P., Banerjee, I., Cho, E., Ramchandran, K., Pollom, E., Koong, A. C., Rubin, D. L., Chang, D. T. 2018

    Abstract

    Background: Oncologists use patients' life expectancy to guide decisions and may benefit from a tool that accurately predicts prognosis. Existing prognostic models generally use only a few predictor variables. We used an electronic medical record dataset to train a prognostic model for patients with metastatic cancer. Methods: The model was trained and tested using 12,588 patients treated for metastatic cancer in the Stanford Health Care system from 2008 to 2017. Data sources included provider note text, labs, vital signs, procedures, medication orders, and diagnosis codes. Patients were divided randomly into a training set used to fit the model coefficients and a test set used to evaluate model performance (80%/20% split). A regularized Cox model with 4,126 predictor variables was used. A landmarking approach was used due to the multiple observations per patient, with t0 set to the time of metastatic cancer diagnosis. Performance was also evaluated using 399 palliative radiation courses in test set patients. Results: The C-index for overall survival was 0.786 in the test set (averaged across landmark times). For palliative radiation courses, the C-index was 0.745 (95% confidence interval [CI] = 0.715 to 0.775), compared with 0.635 (95% CI = 0.601 to 0.669) for a published model using performance status, primary tumor site, and treated site (two-sided P < .001). Our model's predictions were well-calibrated. Conclusions: The model showed high predictive performance, which will need to be validated using external data. Because it is fully automated, the model can be used to examine providers' practice patterns and could be deployed in a decision support tool to help improve quality of care.

    View details for PubMedID 30346554
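
    As a rough illustration of the landmarking plus regularized Cox regression approach described above, the sketch below (not the study's code) restricts the data to patients still at risk at a landmark time, resets the time origin, and fits a penalized Cox model with the lifelines package. The column names, landmark time, and penalty strength are assumptions for the example only.

        import pandas as pd
        from lifelines import CoxPHFitter


        def fit_landmark_cox(df, landmark_time, time_col="followup_days", event_col="death"):
            """Fit a penalized Cox model on patients still at risk at the landmark time."""
            at_risk = df[df[time_col] > landmark_time].copy()
            at_risk[time_col] = at_risk[time_col] - landmark_time  # reset the time origin

            cph = CoxPHFitter(penalizer=0.1)  # example penalty; tune by cross-validation
            cph.fit(at_risk, duration_col=time_col, event_col=event_col)
            return cph

        # Example usage: risk scores for new patients from a model landmarked at 90 days.
        # model = fit_landmark_cox(training_df, landmark_time=90)
        # scores = model.predict_partial_hazard(new_patients_df)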

  • Association of Tumor [18F]FDG Activity and Diffusion Restriction with Clinical Outcomes of Rhabdomyosarcomas. Molecular imaging and biology : MIB : the official publication of the Academy of Molecular Imaging Pourmehdi Lahiji, A., Jackson, T., Nejadnik, H., von Eyben, R., Rubin, D., Spunt, S. L., Quon, A., Daldrup-Link, H. 2018

    Abstract

    PURPOSE: To evaluate whether the extent of restricted diffusion and 2-deoxy-2-[18F] fluoro-D-glucose ([18F]FDG) uptake of pediatric rhabdomyosarcomas (RMS) on positron emission tomography (PET)/magnetic resonance (MR) images provides prognostic information. PROCEDURE: In a retrospective, IRB-approved study, we evaluated [18F]FDG PET/CT and diffusion-weighted (DW) MR imaging studies of 21 children and adolescents (age 1-20 years) with RMS of the head and neck. [18F]FDG PET and DW MR scans at the time of the initial tumor diagnosis were fused using MIM software. Quantitative measures of the tumor mass with restricted diffusion, [18F]FDG hypermetabolism, or both were dichotomized at the median and tested for significance using Gray's test. Data were analyzed using a survival analysis and competing risk model with death as the competing risk. RESULTS: [18F]FDG PET/MR images demonstrated a mismatch between tumor areas with increased [18F]FDG uptake and restricted diffusion. The DWI, PET, and DWI+PET tumor volumes were dichotomized at their median values, 23.7, 16.4, and 9.5 cm3, respectively, and were used to estimate survival. DWI, PET, and DWI+PET overlap tumor volumes above the cutoff values were associated with tumor recurrence, regardless of post-therapy COG stage (p=0.007, p=0.04, and p=0.07, respectively). CONCLUSION: The extent of restricted diffusion within RMS and overlap of hypermetabolism plus restricted diffusion predict unfavorable clinical outcomes.

    View details for PubMedID 30187233

  • Magnetic resonance imaging and molecular features associated with tumor-infiltrating lymphocytes in breast cancer. Breast cancer research : BCR Wu, J., Li, X., Teng, X., Rubin, D. L., Napel, S., Daniel, B. L., Li, R. 2018; 20 (1): 101

    Abstract

    BACKGROUND: We sought to investigate associations between dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) features and tumor-infiltrating lymphocytes (TILs) in breast cancer, as well as to study if MRI features are complementary to molecular markers of TILs. METHODS: In this retrospective study, we extracted 17 computational DCE-MRI features to characterize tumor and parenchyma in The Cancer Genome Atlas cohort (n=126). The percentage of stromal TILs was evaluated on H&E-stained histological whole-tumor sections. We first evaluated associations between individual imaging features and TILs. Multiple-hypothesis testing was corrected by the Benjamini-Hochberg method using false discovery rate (FDR). Second, we implemented LASSO (least absolute shrinkage and selection operator) and linear regression nested with tenfold cross-validation to develop an imaging signature for TILs. Next, we built a composite prediction model for TILs by combining imaging signature with molecular features. Finally, we tested the prognostic significance of the TIL model in an independent cohort (I-SPY 1; n=106). RESULTS: Four imaging features were significantly associated with TILs (P<0.05 and FDR<0.2), including tumor volume, cluster shade of signal enhancement ratio (SER), mean SER of tumor-surrounding background parenchymal enhancement (BPE), and proportion of BPE. Among molecular and clinicopathological factors, only cytolytic score was correlated with TILs (rho=0.51; 95% CI, 0.36-0.63; P=1.6E-9). An imaging signature that linearly combines five features showed correlation with TILs (rho=0.40; 95% CI, 0.24-0.54; P=4.2E-6). A composite model combining the imaging signature and cytolytic score improved correlation with TILs (rho=0.62; 95% CI, 0.50-0.72; P=9.7E-15). The composite model successfully distinguished low vs high, intermediate vs high, and low vs intermediate TIL groups, with AUCs of 0.94, 0.76, and 0.79, respectively. During validation (I-SPY 1), the predicted TILs from the imaging signature separated patients into two groups with distinct recurrence-free survival (RFS), with log-rank P=0.042 among triple-negative breast cancer (TNBC). The composite model further improved stratification of patients with distinct RFS (log-rank P=0.0008), where TNBC with no/minimal TILs had a worse prognosis. CONCLUSIONS: Specific MRI features of tumor and parenchyma are associated with TILs in breast cancer, and imaging may play an important role in the evaluation of TILs by providing key complementary information in equivocal cases or situations that are prone to sampling bias.

    View details for PubMedID 30176944

  • Automated dendritic spine detection using convolutional neural networks on maximum intensity projected microscopic volumes. Journal of neuroscience methods Xiao, X., Djurisic, M., Hoogi, A., Sapp, R. W., Shatz, C. J., Rubin, D. L. 2018

    Abstract

    BACKGROUND: Dendritic spines are structural correlates of excitatory synapses in the brain. Their density and structure are shaped by experience, pointing to their role in memory encoding. Dendritic spine imaging, followed by manual analysis, is a primary way to study spines. However, an approach that analyses dendritic spine images in an automated and unbiased manner is needed to fully capture how spines change with normal experience, as well as in disease. NEW METHOD: We propose an approach based on fully convolutional neural networks (FCNs) to detect dendritic spines in two-dimensional maximum-intensity projected images from confocal fluorescent micrographs. We experiment on both fractionally strided convolution and efficient sub-pixel convolutions. Dendritic spines far from the dendritic shaft are pruned by extraction of the shaft to reduce false positives. Performance of the proposed method is evaluated by comparing predicted spine positions to those manually marked by experts. RESULTS: The averaged distance between predicted and manually annotated spines is 2.81±2.63 pixels (0.082±0.076 microns) and 2.87±2.33 pixels (0.084±0.068 microns) based on two different experts. FCN-based detection achieves F scores > 0.80 for both sets of expert annotations. COMPARISON WITH EXISTING METHODS: Our method significantly outperforms two well-known software packages, NeuronStudio and Neurolucida (p-value < 0.02). CONCLUSIONS: FCN architectures used in this work allow for automated dendritic spine detection. Superior outcomes are possible even with small training datasets. The proposed method may generalize to other datasets on larger scales.

    View details for PubMedID 30130608

  • Distributed deep learning networks among institutions for medical imaging JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION Chang, K., Balachandar, N., Lam, C., Yi, D., Brown, J., Beers, A., Rosen, B., Rubin, D. L., Kalpathy-Cramer, J. 2018; 25 (8): 945–54

    Abstract

    Deep learning has become a promising approach for automated support for clinical diagnosis. When medical data samples are limited, collaboration among multiple institutions is necessary to achieve high algorithm performance. However, sharing patient data often has limitations due to technical, legal, or ethical concerns. In this study, we propose methods of distributing deep learning models as an attractive alternative to sharing patient data. We simulate the distribution of deep learning models across 4 institutions using various training heuristics and compare the results with a deep learning model trained on centrally hosted patient data. The training heuristics investigated include ensembling single institution models, single weight transfer, and cyclical weight transfer. We evaluated these approaches for image classification in 3 independent image collections (retinal fundus photos, mammography, and ImageNet). We find that cyclical weight transfer resulted in a performance that was comparable to that of centrally hosted patient data. We also found that there is an improvement in the performance of the cyclical weight transfer heuristic with a high frequency of weight transfer. We show that distributing deep learning models is an effective alternative to sharing patient data. This finding has implications for any collaborative deep learning study.

    View details for PubMedID 29617797

    View details for PubMedCentralID PMC6077811
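
    A minimal PyTorch sketch of the cyclical weight transfer heuristic described in the abstract is shown below: a single set of model weights visits each institution in turn, trains on that institution's local data, and then moves on, repeating the cycle. The model, data loaders, and hyperparameters are placeholders, and this is an illustration of the idea rather than the authors' code.

        import torch
        import torch.nn as nn


        def train_one_pass(model, loader, device, lr=1e-3):
            """Train the shared model for one pass over a single institution's data."""
            model.train()
            optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
            criterion = nn.CrossEntropyLoss()
            for images, labels in loader:
                images, labels = images.to(device), labels.to(device)
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()
                optimizer.step()


        def cyclical_weight_transfer(model, institution_loaders, cycles, device="cpu"):
            """Cycle the same weights through every institution `cycles` times.
            Only the weights move between sites; the patient data stay local."""
            model.to(device)
            for _ in range(cycles):
                for loader in institution_loaders:
                    train_one_pass(model, loader, device)  # weights carry to the next site
            return model

    In this framing, a higher weight transfer frequency corresponds to shorter local training passes and more cycles, which is the regime the abstract reports as performing best.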

  • Probabilistic Prognostic Estimates of Survival in Metastatic Cancer Patients (PPES-Met) Utilizing Free-Text Clinical Narratives. Scientific reports Banerjee, I., Gensheimer, M. F., Wood, D. J., Henry, S., Aggarwal, S., Chang, D. T., Rubin, D. L. 2018; 8 (1): 10037

    Abstract

    We propose a deep learning model, Probabilistic Prognostic Estimates of Survival in Metastatic Cancer Patients (PPES-Met), for estimating short-term life expectancy (>3 months) of patients by analyzing free-text clinical notes in the electronic medical record, while maintaining the temporal visit sequence. In a single framework, we integrated semantic data mapping and a neural embedding technique to produce a text processing method that extracts relevant information from heterogeneous types of clinical notes in an unsupervised manner, and we designed a recurrent neural network to model the temporal dependency of the patient visits. The model was trained on a large dataset (10,293 patients) and validated on a separate dataset (1,818 patients). Our method achieved an area under the ROC curve (AUC) of 0.89. To provide explainability, we developed an interactive graphical tool that may improve physician understanding of the basis for the model's predictions. The high accuracy and explainability of the PPES-Met model may enable it to be used as a decision support tool to personalize metastatic cancer treatment and provide valuable assistance to physicians.

    View details for PubMedID 29968730
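
    To make the temporal modeling concrete, here is an illustrative PyTorch sketch (not the published PPES-Met model): each visit's clinical notes are assumed to have already been mapped to a fixed-length embedding, and a recurrent network over the visit sequence outputs a probability of surviving beyond the short-term horizon. All dimensions and names are invented for the example.

        import torch
        import torch.nn as nn


        class VisitSequenceClassifier(nn.Module):
            """GRU over per-visit note embeddings -> probability of short-term survival."""

            def __init__(self, embed_dim=200, hidden_dim=128):
                super().__init__()
                self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)
                self.head = nn.Linear(hidden_dim, 1)

            def forward(self, visit_embeddings):  # shape: (batch, n_visits, embed_dim)
                _, last_hidden = self.gru(visit_embeddings)
                return torch.sigmoid(self.head(last_hidden[-1]))  # P(survival > 3 months)


        model = VisitSequenceClassifier()
        dummy_visits = torch.randn(4, 12, 200)  # 4 patients, 12 visits each
        print(model(dummy_visits).shape)        # torch.Size([4, 1])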

  • The LOINC RSNA radiology playbook - a unified terminology for radiology procedures JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION Vreeman, D. J., Abhyankar, S., Wang, K. C., Carr, C., Collins, B., Rubin, D. L., Langlotz, C. P. 2018; 25 (7): 885–92

    Abstract

    This paper describes the unified LOINC/RSNA Radiology Playbook and the process by which it was produced. The Regenstrief Institute and the Radiological Society of North America (RSNA) developed a unification plan consisting of six objectives: 1) develop a unified model for radiology procedure names that represents the attributes with an extensible set of values, 2) transform existing LOINC procedure codes into the unified model representation, 3) create a mapping between all the attribute values used in the unified model as coded in LOINC (ie, LOINC Parts) and their equivalent concepts in RadLex, 4) create a mapping between the existing procedure codes in the RadLex Core Playbook and the corresponding codes in LOINC, 5) develop a single integrated governance process for managing the unified terminology, and 6) publicly distribute the terminology artifacts. We developed a unified model and instantiated it in a new LOINC release artifact that contains the LOINC codes and display name (ie, LONG_COMMON_NAME) for each procedure, mappings between LOINC and the RSNA Playbook at the procedure code level, and connections between procedure terms and their attribute values that are expressed as LOINC Parts and RadLex IDs. We transformed all the existing LOINC content into the new model and publicly distributed it in standard releases. The organizations have also developed a joint governance process for ongoing maintenance of the terminology. The LOINC/RSNA Radiology Playbook provides a universal terminology standard for radiology orders and results.

    View details for PubMedID 29850823

    View details for PubMedCentralID PMC6016707

  • Longitudinal Data in Ophthalmic Imaging: Curation and Annotation Hallak, J., Yi, D., Noorozi, V., Lam, C., Mojab, N., Baker, J., Rubin, D., Azar, D. T., Rosenblatt, M. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2018
  • Proposing New RadLex Terms by Analyzing Free-Text Mammography Reports. Journal of digital imaging Bulu, H., Sippo, D. A., Lee, J. M., Burnside, E. S., Rubin, D. L. 2018

    Abstract

    After years of development, the RadLex terminology contains a large set of controlled terms for the radiology domain, but gaps still exist. We developed a data-driven approach to discover new terms for RadLex by mining a large corpus of radiology reports using natural language processing (NLP) methods. Our system, developed for mammography, discovers new candidate terms by analyzing noun phrases in free-text reports to extend the mammography part of RadLex. Our NLP system extracts noun phrases from free-text mammography reports and classifies these noun phrases as "Has Candidate RadLex Term" or "Does Not Have Candidate RadLex Term." We tested the performance of our algorithm using 100 free-text mammography reports. An expert radiologist determined the true positive and true negative RadLex candidate terms. We calculated precision/positive predictive value and recall/sensitivity metrics to judge the system's performance. Finally, to identify new candidate terms for enhancing RadLex, we applied our NLP method to 270,540 free-text mammography reports obtained from three academic institutions. Our method demonstrated a precision/positive predictive value of 0.77 (159/206 terms) and a recall/sensitivity of 0.94 (159/170 terms). The overall accuracy of the system is 0.80 (235/293 terms). When we ran our system on the set of 270,540 reports, it found 31,800 unique noun phrases that are potential candidates for RadLex. Our data-driven approach to mining radiology reports can identify new candidate terms for expanding the breast imaging lexicon portion of RadLex and may be a useful approach for discovering new candidate terms from other radiology domains.

    View details for PubMedID 29560542

  • Automatic information extraction from unstructured mammography reports using distributed semantics JOURNAL OF BIOMEDICAL INFORMATICS Gupta, A., Banerjee, I., Rubin, D. L. 2018; 78: 78–86

    Abstract

    To date, the methods developed for automated extraction of information from radiology reports are mainly rule-based or dictionary-based, and, therefore, require substantial manual effort to build these systems. Recent efforts to develop automated systems for entity detection have been undertaken, but little work has been done to automatically extract relations and their associated named entities in narrative radiology reports with accuracy comparable to rule-based methods. Our goal is to extract relations in an unsupervised way from radiology reports without specifying prior domain knowledge. We propose a hybrid approach for information extraction that combines a dependency-based parse tree with distributed semantics for generating structured information frames about particular findings/abnormalities from free-text mammography reports. The proposed IE system obtains an F1-score of 0.94 in terms of completeness of the content in the information frames, which outperforms a state-of-the-art rule-based system in this domain by a significant margin. The proposed system can be leveraged in a variety of applications, such as decision support and information retrieval, and may also easily scale to other radiology domains, since there is no need to tune the system with hand-crafted information extraction rules.

    View details for PubMedID 29329701

  • Expanding a radiology lexicon using contextual patterns in radiology reports. Journal of the American Medical Informatics Association : JAMIA Percha, B., Zhang, Y., Bozkurt, S., Rubin, D., Altman, R. B., Langlotz, C. P. 2018

    Abstract

    Distributional semantics algorithms, which learn vector space representations of words and phrases from large corpora, identify related terms based on contextual usage patterns. We hypothesize that distributional semantics can speed up lexicon expansion in a clinical domain, radiology, by unearthing synonyms from the corpus. We apply word2vec, a distributional semantics software package, to the text of radiology notes to identify synonyms for RadLex, a structured lexicon of radiology terms. We stratify performance by term category, term frequency, number of tokens in the term, vector magnitude, and the context window used in vector building. Ranking candidates based on distributional similarity to a target term results in high curation efficiency: on a ranked list of 775,249 terms, >50% of synonyms occurred within the first 25 terms. Synonyms are easier to find if the target term is a phrase rather than a single word, if it occurs at least 100× in the corpus, and if its vector magnitude is between 4 and 5. Some RadLex categories, such as anatomical substances, are easier to identify synonyms for than others. The unstructured text of clinical notes contains a wealth of information about human diseases and treatment patterns. However, searching and retrieving information from clinical notes often suffer due to variations in how similar concepts are described in the text. Biomedical lexicons address this challenge, but are expensive to produce and maintain. Distributional semantics algorithms can assist lexicon curation, saving researchers time and money.

    View details for PubMedID 29329435
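
    The core of the curation workflow described above can be sketched in a few lines with the gensim word2vec implementation: train word vectors on tokenized report text (with multi-word lexicon phrases pre-joined), then rank candidate synonyms for a target term by cosine similarity. The toy corpus, phrase joining, and parameters below are illustrative assumptions, not the study's configuration.

        from gensim.models import Word2Vec

        # `sentences` stands in for an iterable of tokenized radiology report sentences,
        # with multi-word terms pre-joined (e.g., "pleural_effusion").
        sentences = [
            ["small", "left", "pleural_effusion", "noted"],
            ["no", "pneumothorax", "or", "pleural_effusion", "identified"],
        ]

        model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, workers=4)

        # Candidate synonyms for a target term, ranked by distributional similarity;
        # in practice a curator reviews the top of this ranked list.
        for term, similarity in model.wv.most_similar("pleural_effusion", topn=25):
            print(f"{term}\t{similarity:.3f}")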

  • Relevance Feedback for Enhancing Content Based Image Retrieval and Automatic Prediction of Semantic Image Features: Application to Bone Tumor Radiographs. Journal of biomedical informatics Banerjee, I., Kurtz, C., Edward Devorah, A., Do, B., Rubin, D. L., Beaulieu, C. F. 2018

    Abstract

    The majority of current medical CBIR systems perform retrieval based only on "imaging signatures" generated by extracting pixel-level quantitative features, and only rarely has a feedback mechanism been incorporated to improve retrieval performance. In addition, current medical CBIR approaches do not routinely incorporate semantic terms that model the user's high-level expectations, and this can limit CBIR performance. We propose a retrieval framework that exploits a hybrid feature space (HFS) that is built by integrating low-level image features and high-level semantic terms, through rounds of relevance feedback (RF), and performs similarity-based retrieval to support semi-automatic image interpretation. The novelty of the proposed system is that it can impute the semantic features of the query image by reformulating the query vector representation in the HFS via user feedback. We implemented our framework as a prototype that performs the retrieval over a database of 811 radiographic images that contains 69 unique types of bone tumors. We evaluated the system performance by conducting independent reading sessions with two subspecialist musculoskeletal radiologists. For the test set, the proposed retrieval system at the fourth RF iteration of the sessions conducted with both radiologists achieved a mean average precision (MAP) value of about 0.90, whereas the initial MAP with baseline CBIR was 0.20. In addition, we also achieved high prediction accuracy (>0.8) for the majority of the semantic features automatically predicted by the system. Our proposed framework addresses some limitations of existing CBIR systems by incorporating user feedback and simultaneously predicting the semantic features of the query image. This obviates the need for the user to provide those terms and makes CBIR search more efficient for inexperienced users/trainees. Encouraging results achieved in the current study highlight possible new directions in radiological image interpretation employing semantic CBIR combined with relevance feedback of visual similarity.

    View details for PubMedID 29981490
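
    As a generic illustration of how relevance feedback can reshape a query vector (this is a classic Rocchio-style update, not the paper's exact formulation in its hybrid feature space), the sketch below pulls the query toward images the reader marked relevant and away from those marked irrelevant, then re-ranks the database.

        import numpy as np


        def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.25):
            """Move the query vector toward relevant feedback and away from irrelevant."""
            updated = alpha * query
            if len(relevant):
                updated = updated + beta * relevant.mean(axis=0)
            if len(irrelevant):
                updated = updated - gamma * irrelevant.mean(axis=0)
            return updated


        def retrieve(query, database, top_k=10):
            """Rank database rows by Euclidean distance to the query vector."""
            distances = np.linalg.norm(database - query, axis=1)
            return np.argsort(distances)[:top_k]

    Iterating update-then-retrieve for a few rounds mirrors the multiple feedback iterations reported in the study, with the additional twist there that the semantic portion of the query vector is imputed rather than supplied by the user.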

  • Beyond Retinal Layers: A Deep Voting Model for Automated Geographic Atrophy Segmentation in SD-OCT Images TRANSLATIONAL VISION SCIENCE & TECHNOLOGY Ji, Z., Chen, Q., Niu, S., Leng, T., Rubin, D. L. 2018; 7 (1): 1

    Abstract

    To automatically and accurately segment geographic atrophy (GA) in spectral-domain optical coherence tomography (SD-OCT) images by constructing a voting system with deep neural networks without the use of retinal layer segmentation. An automatic GA segmentation method for SD-OCT images based on the deep network was constructed. The structure of the deep network was composed of five layers, including one input layer, three hidden layers, and one output layer. During the training phase, the labeled A-scans with 1024 features were directly fed into the network as the input layer to obtain the deep representations. Then a soft-max classifier was trained to determine the label of each individual pixel. Finally, a voting decision strategy was used to refine the segmentation results among 10 trained models. Two image data sets with GA were used to evaluate the model. For the first dataset, our algorithm obtained a mean overlap ratio (OR) of 86.94% ± 8.75%, absolute area difference (AAD) of 11.49% ± 11.50%, and correlation coefficient (CC) of 0.9857; for the second dataset, the mean OR, AAD, and CC of the proposed method were 81.66% ± 10.93%, 8.30% ± 9.09%, and 0.9952, respectively. The proposed algorithm improved segmentation accuracy by over 5% and 10%, respectively, when compared with several state-of-the-art algorithms on the two data sets. Without retinal layer segmentation, the proposed algorithm could produce higher segmentation accuracy and was more stable when compared with state-of-the-art methods that relied on retinal layer segmentation results. Our model may provide reliable GA segmentations from SD-OCT images and be useful in the clinical diagnosis of advanced nonexudative AMD. Based on deep neural networks, this study presents an accurate GA segmentation method for SD-OCT images that does not use any retinal layer segmentation results, and it may contribute to improved understanding of advanced nonexudative AMD.

    View details for PubMedID 29302382
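
    The voting stage of the method can be illustrated with a short sketch (placeholder data, not the authors' code): each of the independently trained models assigns a label to every A-scan, and the final label is the majority vote across models.

        import numpy as np


        def majority_vote(predictions):
            """predictions: array of shape (n_models, n_ascans) with 0/1 labels.
            Returns the per-A-scan majority label; ties go to the lesion class."""
            n_models = predictions.shape[0]
            votes = predictions.sum(axis=0)
            return (2 * votes >= n_models).astype(np.uint8)


        # Example: 10 trained models labeling 6 A-scans (1 = GA, 0 = background).
        rng = np.random.default_rng(0)
        predictions = rng.integers(0, 2, size=(10, 6))
        print(majority_vote(predictions))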

  • Locally adaptive magnetic resonance intensity models for unsupervised segmentation of multiple sclerosis lesions JOURNAL OF MEDICAL IMAGING Galimzianova, A., Lesjak, Z., Rubin, D. L., Likar, B., Pernus, F., Spiclin, Z. 2018; 5 (1): 011007

    Abstract

    Multiple sclerosis (MS) is a neurological disease characterized by focal lesions and morphological changes in the brain captured on magnetic resonance (MR) images. However, extraction of the corresponding imaging markers requires accurate segmentation of normal-appearing brain structures (NABS) and the lesions in MR images. On MR images of healthy brains, the NABS can be accurately captured by MR intensity mixture models, which, in combination with regularization techniques, such as in Markov random field (MRF) models, are known to give reliable NABS segmentation. However, on MR images that also contain abnormalities such as MS lesions, obtaining an accurate and reliable estimate of NABS intensity models is a challenge. We propose a method for automated segmentation of normal-appearing and abnormal structures in brain MR images that is based on a locally adaptive NABS model, a robust model-parameter estimation method, and an MRF-based segmentation framework. Experiments on multisequence brain MR images of 30 MS patients show that, compared to a whole-brain MR intensity model and to four popular unsupervised lesion segmentation methods, the proposed method increases the accuracy of MS lesion segmentation.

    View details for PubMedID 29134190

    View details for PubMedCentralID PMC5665678

  • Retinal Lesion Detection With Deep Learning Using Image Patches INVESTIGATIVE OPHTHALMOLOGY & VISUAL SCIENCE Lam, C., Yu, C., Huang, L., Rubin, D. 2018; 59 (1): 590–96

    Abstract

    To develop an automated method of localizing and discerning multiple types of findings in retinal images using a limited set of training data without hard-coded feature extraction, as a step toward generalizing these methods to rare disease detection in which a limited number of training data are available. Two ophthalmologists verified 243 retinal images, labeling important subsections of the image to generate 1324 image patches containing either hemorrhages, microaneurysms, exudates, retinal neovascularization, or normal-appearing structures from the Kaggle dataset. These image patches were used to train one standard convolutional neural network to predict the presence of these five classes. A sliding window method was used to generate probability maps across the entire image. The method was validated on the eOphta dataset of 148 whole retinal images for microaneurysms and 47 for exudates. A pixel-wise classification area under the curve of the receiver operating characteristic of 0.94 and 0.95, as well as a lesion-wise area under the precision-recall curve of 0.86 and 0.64, was achieved for microaneurysms and exudates, respectively. Regionally trained convolutional neural networks can generate lesion-specific probability maps able to detect and distinguish between subtle pathologic lesions with only a few hundred training examples per lesion.

    View details for PubMedID 29372258
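
    The sliding-window step described in the abstract can be sketched as follows (illustrative only): a patch classifier is evaluated at regular offsets over the full retinal image and its per-class probabilities are assembled into coarse lesion probability maps. The predict_patch_probs function is a hypothetical stand-in for the trained patch CNN, and the patch size, stride, and class list are assumptions.

        import numpy as np


        def predict_patch_probs(patch, n_classes=5):
            """Hypothetical stand-in for a trained CNN patch classifier; returns probabilities
            for, e.g., [hemorrhage, microaneurysm, exudate, neovascularization, normal]."""
            return np.full(n_classes, 1.0 / n_classes)


        def probability_maps(image, patch_size=65, stride=16, n_classes=5):
            """Slide a window over the image and record class probabilities per position."""
            h, w = image.shape[:2]
            rows = (h - patch_size) // stride + 1
            cols = (w - patch_size) // stride + 1
            maps = np.zeros((n_classes, rows, cols))
            for i in range(rows):
                for j in range(cols):
                    patch = image[i * stride:i * stride + patch_size,
                                  j * stride:j * stride + patch_size]
                    maps[:, i, j] = predict_patch_probs(patch, n_classes)
            return maps  # one coarse probability map per lesion class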

  • Non-Small Cell Lung Cancer Radiogenomics Map Identifies Relationships between Molecular and Imaging Phenotypes with Prognostic Implications. Radiology Zhou, M., Leung, A., Echegaray, S., Gentles, A., Shrager, J. B., Jensen, K. C., Berry, G. J., Plevritis, S. K., Rubin, D. L., Napel, S., Gevaert, O. 2018; 286 (1): 307–15

    Abstract

    Purpose: To create a radiogenomic map linking computed tomographic (CT) image features and gene expression profiles generated by RNA sequencing for patients with non-small cell lung cancer (NSCLC). Materials and Methods: A cohort of 113 patients with NSCLC diagnosed between April 2008 and September 2014 who had preoperative CT data and tumor tissue available was studied. For each tumor, a thoracic radiologist recorded 87 semantic image features, selected to reflect radiologic characteristics of nodule shape, margin, texture, tumor environment, and overall lung characteristics. Next, total RNA was extracted from the tissue and analyzed with RNA sequencing technology. Ten highly coexpressed gene clusters, termed metagenes, were identified, validated in publicly available gene-expression cohorts, and correlated with prognosis. Next, a radiogenomics map was built that linked semantic image features to metagenes by using the t statistic and the Spearman correlation metric with multiple testing correction. Results: RNA sequencing analysis resulted in 10 metagenes that capture a variety of molecular pathways, including the epidermal growth factor (EGF) pathway. A radiogenomic map was created with 32 statistically significant correlations between semantic image features and metagenes. For example, nodule attenuation and margins are associated with the late cell-cycle genes, and a metagene that represents the EGF pathway was significantly correlated with the presence of ground-glass opacity and irregular nodules or nodules with poorly defined margins. Conclusion: Radiogenomic analysis of NSCLC showed multiple associations between semantic image features and metagenes that represented canonical molecular pathways, and it can result in noninvasive identification of molecular properties of NSCLC. Online supplemental material is available for this article.

    View details for PubMedID 28727543
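
    The association-map construction follows a familiar statistical pattern that can be sketched briefly (illustrative arrays and thresholds, not the study's pipeline): compute a Spearman correlation for every semantic image feature versus metagene pair, then apply Benjamini-Hochberg correction and keep the significant pairs as edges of the radiogenomic map.

        import numpy as np
        from scipy.stats import spearmanr
        from statsmodels.stats.multitest import multipletests


        def radiogenomic_map(image_features, metagenes, fdr=0.05):
            """image_features: (n_patients, n_features); metagenes: (n_patients, n_metagenes).
            Returns the correlation matrix and a boolean mask of FDR-significant pairs."""
            n_f, n_m = image_features.shape[1], metagenes.shape[1]
            rhos = np.zeros((n_f, n_m))
            pvals = np.zeros((n_f, n_m))
            for i in range(n_f):
                for j in range(n_m):
                    rhos[i, j], pvals[i, j] = spearmanr(image_features[:, i], metagenes[:, j])
            reject, _, _, _ = multipletests(pvals.ravel(), alpha=fdr, method="fdr_bh")
            return rhos, reject.reshape(n_f, n_m)  # significant links form the map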

  • Radiology report annotation using intelligent word embeddings: Applied to multi-institutional chest CT cohort JOURNAL OF BIOMEDICAL INFORMATICS Banerjee, I., Chen, M. C., Lungren, M. P., Rubin, D. L. 2018; 77: 11–20

    Abstract

    We proposed an unsupervised hybrid method, Intelligent Word Embedding (IWE), that combines a neural embedding method with a semantic dictionary mapping technique for creating a dense vector representation of unstructured radiology reports. We applied IWE to generate embeddings of chest CT radiology reports from two healthcare organizations and utilized the vector representations to semi-automate report categorization based on clinically relevant categories related to the diagnosis of pulmonary embolism (PE). We benchmarked the performance against a state-of-the-art rule-based tool, PeFinder, and against out-of-the-box word2vec. On the Stanford test set, the IWE model achieved an average F1 score of 0.97, whereas PeFinder scored 0.90 and the original word2vec scored 0.94. On the UPMC dataset, the IWE model's average F1 score was 0.94, whereas PeFinder scored 0.92 and word2vec scored 0.85. The IWE model had the lowest generalization error with the highest F1 scores. Of particular interest, the IWE model (trained on the Stanford dataset) outperformed PeFinder on the UPMC dataset, which was originally used to tailor the PeFinder model.

    View details for PubMedID 29175548

    View details for PubMedCentralID PMC5771955

  • Assessing treatment response in triple-negative breast cancer from quantitative image analysis in perfusion magnetic resonance imaging. Journal of medical imaging (Bellingham, Wash.) Banerjee, I., Malladi, S., Lee, D., Depeursinge, A., Telli, M., Lipson, J., Golden, D., Rubin, D. L. 2018; 5 (1): 011008

    Abstract

    Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is sensitive but not specific to determining treatment response in early stage triple-negative breast cancer (TNBC) patients. We propose an efficient computerized technique for assessing treatment response, specifically the residual tumor (RT) status and pathological complete response (pCR), in response to neoadjuvant chemotherapy. The proposed approach is based on Riesz wavelet analysis of pharmacokinetic maps derived from noninvasive DCE-MRI scans, obtained before and after treatment. We compared the performance of Riesz features with the traditional gray level co-occurrence matrices and a comprehensive characterization of the lesion that includes a wide range of quantitative features (e.g., shape and boundary). We investigated a set of predictive models ([Formula: see text]) incorporating distinct combinations of quantitative characterizations and statistical models at different time points of the treatment and some area under the receiver operating characteristic curve (AUC) values we reported are above 0.8. The most efficient models are based on first-order statistics and Riesz wavelets, which predicted RT with an AUC value of 0.85 and pCR with an AUC value of 0.83, improving results reported in a previous study by [Formula: see text]. Our findings suggest that Riesz texture analysis of TNBC lesions can be considered a potential framework for optimizing TNBC patient care.

    View details for PubMedID 29134191

    View details for PubMedCentralID PMC5668126

  • Intratumoral Spatial Heterogeneity at Perfusion MR Imaging Predicts Recurrence-free Survival in Locally Advanced Breast Cancer Treated with Neoadjuvant Chemotherapy. Radiology Wu, J., Cao, G., Sun, X., Lee, J., Rubin, D. L., Napel, S., Kurian, A. W., Daniel, B. L., Li, R. 2018: 172462

    Abstract

    Purpose: To characterize intratumoral spatial heterogeneity at perfusion magnetic resonance (MR) imaging and investigate intratumoral heterogeneity as a predictor of recurrence-free survival (RFS) in breast cancer. Materials and Methods: In this retrospective study, a discovery cohort (n = 60) and a multicenter validation cohort (n = 186) were analyzed. Each tumor was divided into multiple spatially segregated, phenotypically consistent subregions on the basis of perfusion MR imaging parameters. The authors first defined a multiregional spatial interaction (MSI) matrix and then, based on this matrix, calculated 22 image features. A network strategy was used to integrate all image features and classify patients into different risk groups. The prognostic value of imaging-based stratification was evaluated in relation to clinical-pathologic factors with multivariable Cox regression. Results: Three intratumoral subregions with high, intermediate, and low MR perfusion were identified and showed high consistency between the two cohorts. Patients in both cohorts were stratified according to network analysis of multiregional image features regarding RFS (log-rank test, P = .002 for both). Aggressive tumors were associated with a larger volume of the poorly perfused subregion as well as interaction between poorly and moderately perfused subregions and surrounding parenchyma. At multivariable analysis, the proposed MSI-based marker was independently associated with RFS (hazard ratio: 3.42; 95% confidence interval: 1.55, 7.57; P = .002), adjusting for age, estrogen receptor (ER) status, progesterone receptor status, human epidermal growth factor receptor type 2 (HER2) status, tumor volume, and pathologic complete response (pCR). Furthermore, imaging helped stratify patients for RFS within the ER-positive and HER2-positive subgroups (log-rank test, P = .007 and .004) and among patients without pCR after neoadjuvant chemotherapy (log-rank test, P = .003). Conclusion: Breast cancer consists of multiple spatially distinct subregions. Imaging heterogeneity is an independent prognostic factor beyond traditional risk predictors.

    View details for PubMedID 29714680

  • Association of Omics Features with Histopathology Patterns in Lung Adenocarcinoma CELL SYSTEMS Yu, K., Berry, G. J., Rubin, D. L., Re, C., Altman, R. B., Snyder, M. 2017; 5 (6): 620-+

    Abstract

    Adenocarcinoma accounts for more than 40% of lung malignancy, and microscopic pathology evaluation is indispensable for its diagnosis. However, how histopathology findings relate to molecular abnormalities remains largely unknown. Here, we obtained H&E-stained whole-slide histopathology images, pathology reports, RNA sequencing, and proteomics data of 538 lung adenocarcinoma patients from The Cancer Genome Atlas and used these to identify molecular pathways associated with histopathology patterns. We report cell-cycle regulation and nucleotide binding pathways underpinning tumor cell dedifferentiation, and we predicted histology grade using transcriptomics and proteomics signatures (area under curve >0.80). We built an integrative histopathology-transcriptomics model to generate better prognostic predictions for stage I patients (p = 0.0182 ± 0.0021) compared with gene expression or histopathology studies alone, and the results were replicated in an independent cohort (p = 0.0220 ± 0.0070). These results motivate the integration of histopathology and omics data to investigate molecular mechanisms of pathology findings and enhance clinical prognostic prediction.

    View details for PubMedID 29153840

    View details for PubMedCentralID PMC5746468

  • A curated mammography data set for use in computer-aided detection and diagnosis research SCIENTIFIC DATA Lee, R., Gimenez, F., Hoogi, A., Miyake, K., Gorovoy, M., Rubin, D. L. 2017; 4: 170177

    Abstract

    Published research results are difficult to replicate due to the lack of a standard evaluation data set in the area of decision support systems in mammography; most computer-aided diagnosis (CADx) and detection (CADe) algorithms for breast cancer in mammography are evaluated on private data sets or on unspecified subsets of public databases. This causes an inability to directly compare the performance of methods or to replicate prior results. We seek to resolve this substantial challenge by releasing an updated and standardized version of the Digital Database for Screening Mammography (DDSM) for evaluation of future CADx and CADe systems (sometimes referred to generally as CAD) research in mammography. Our data set, the CBIS-DDSM (Curated Breast Imaging Subset of DDSM), includes decompressed images, data selection and curation by trained mammographers, updated mass segmentation and bounding boxes, and pathologic diagnosis for training data, formatted similarly to modern computer vision data sets. The data set contains 753 calcification cases and 891 mass cases, providing a data-set size capable of analyzing decision support systems in mammography.

    View details for PubMedID 29257132

  • Automated detection of foveal center in SD-OCT images using the saliency of retinal thickness maps MEDICAL PHYSICS Niu, S., Chen, Q., de Sisternes, L., Leng, T., Rubin, D. L. 2017; 44 (12): 6390–6403

    View details for DOI 10.1002/mp.12614

    View details for Web of Science ID 000425379200027

  • Glioblastoma Tumor Segmentation Using Deep Convolutional Neural Networks Liu, T., Achrol, A., Rubin, D., Chang, S. OXFORD UNIV PRESS INC. 2017: 147
  • Quantitative Image Feature Engine (QIFE): an Open-Source, Modular Engine for 3D Quantitative Feature Extraction from Volumetric Medical Images. Journal of digital imaging Echegaray, S., Bakr, S., Rubin, D. L., Napel, S. 2017

    Abstract

    The aim of this study was to develop an open-source, modular, locally run or server-based system for 3D radiomics feature computation that can be used on any computer system and included in existing workflows for understanding associations and building predictive models between image features and clinical data, such as survival. The QIFE exploits various levels of parallelization for use on multiprocessor systems. It consists of a managing framework and four stages: input, pre-processing, feature computation, and output. Each stage contains one or more swappable components, allowing run-time customization. We benchmarked the engine using various levels of parallelization on a cohort of CT scans presenting 108 lung tumors. Two versions of the QIFE have been released: (1) the open-source MATLAB code posted to GitHub, and (2) a compiled version loaded in a Docker container, posted to DockerHub, which can be easily deployed on any computer. The QIFE processed 108 objects (tumors) in 2:12 (h:mm) using one core, and in 1:04 (h:mm) using four cores with object-level parallelization. We developed the Quantitative Image Feature Engine (QIFE), an open-source feature-extraction framework that focuses on modularity, standards, parallelism, provenance, and integration. Researchers can easily integrate it with their existing segmentation and imaging workflows by creating input and output components that implement their existing interfaces. Computational efficiency can be improved by parallelizing execution at the cost of memory usage. Different parallelization levels provide different trade-offs, and the optimal setting will depend on the size and composition of the dataset to be processed.

    View details for PubMedID 28993897
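
    The modular, swappable-stage design described above can be illustrated with a small Python sketch (the released QIFE itself is MATLAB; all names below are invented): a pipeline function wires together an input reader, a list of pre-processing steps, a dictionary of feature extractors, and an output writer, each of which can be replaced at run time.

        def run_pipeline(case_id, reader, preprocessors, extractors, writer):
            """Four-stage pipeline: input -> pre-processing -> feature computation -> output."""
            volume = reader(case_id)
            for step in preprocessors:  # each component is swappable
                volume = step(volume)
            features = {name: fn(volume) for name, fn in extractors.items()}
            writer(case_id, features)
            return features


        # Example wiring with trivial components; real components would load DICOM,
        # resample/mask the tumor, compute shape and texture features, and write CSV.
        features = run_pipeline(
            "tumor_001",
            reader=lambda cid: {"voxels": [0.0, 1.0, 2.0]},
            preprocessors=[lambda v: v],
            extractors={"mean_intensity": lambda v: sum(v["voxels"]) / len(v["voxels"])},
            writer=lambda cid, feats: print(cid, feats),
        )

    Object-level parallelization, as benchmarked in the abstract, would simply map run_pipeline over the list of cases with a process pool.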

  • Piecewise convexity of artificial neural networks NEURAL NETWORKS Rister, B., Rubin, D. L. 2017; 94: 34–45

    Abstract

    Although artificial neural networks have shown great promise in applications including computer vision and speech recognition, there remains considerable practical and theoretical difficulty in optimizing their parameters. The seemingly unreasonable success of gradient descent methods in minimizing these non-convex functions remains poorly understood. In this work we offer some theoretical guarantees for networks with piecewise affine activation functions, which have in recent years become the norm. We prove three main results. First, that the network is piecewise convex as a function of the input data. Second, that the network, considered as a function of the parameters in a single layer, all others held constant, is again piecewise convex. Third, that the network as a function of all its parameters is piecewise multi-convex, a generalization of biconvexity. From here we characterize the local minima and stationary points of the training objective, showing that they minimize the objective on certain subsets of the parameter space. We then analyze the performance of two optimization algorithms on multi-convex problems: gradient descent, and a method which repeatedly solves a number of convex sub-problems. We prove necessary convergence conditions for the first algorithm and both necessary and sufficient conditions for the second, after introducing regularization to the objective. Finally, we remark on the remaining difficulty of the global optimization problem. Under the squared error objective, we show that by varying the training data, a single rectifier neuron admits local minima arbitrarily far apart, both in objective value and parameter space.

    View details for PubMedID 28732233

  • Volumetric Image Registration From Invariant Keypoints IEEE TRANSACTIONS ON IMAGE PROCESSING Rister, B., Horowitz, M. A., Rubin, D. L. 2017; 26 (10): 4900–4910

    Abstract

    We present a method for image registration based on 3D scale- and rotation-invariant keypoints. The method extends the scale invariant feature transform (SIFT) to arbitrary dimensions by making key modifications to orientation assignment and gradient histograms. Rotation invariance is proven mathematically. Additional modifications are made to extrema detection and keypoint matching based on the demands of image registration. Our experiments suggest that the choice of neighborhood in discrete extrema detection has a strong impact on image registration accuracy. In head MR images, the brain is registered to a labeled atlas with an average Dice coefficient of 92%, outperforming registration from mutual information as well as an existing 3D SIFT implementation. In abdominal CT images, the spine is registered with an average error of 4.82 mm. Furthermore, keypoints are matched with high precision in simulated head MR images exhibiting lesions from multiple sclerosis. These results were achieved using only affine transforms, and with no change in parameters across a wide variety of medical images. This paper is freely available as a cross-platform software library.

    View details for DOI 10.1109/TIP.2017.2722689

    View details for Web of Science ID 000406329500024

    View details for PubMedID 28682256

    View details for PubMedCentralID PMC5581541

  • Age at Menarche and Late Adolescent Adiposity Associated with Mammographic Density on Processed Digital Mammograms in 24,840 Women CANCER EPIDEMIOLOGY BIOMARKERS & PREVENTION Alexeeff, S. E., Odo, N. U., Lipson, J. A., Achacosol, N., Rothstein, J. H., Yaffe, M. J., Liang, R. Y., Acton, L., McGuire, V., Whittemore, A. S., Rubin, D. L., Sieh, W., Habel, L. A. 2017; 26 (9): 1450–58

    Abstract

    Background: High mammographic density is strongly associated with increased breast cancer risk. Some, but not all, risk factors for breast cancer are also associated with higher mammographic density. Methods: The study cohort (N = 24,840) was drawn from the Research Program in Genes, Environment and Health of Kaiser Permanente Northern California and included non-Hispanic white females ages 40 to 74 years with a full-field digital mammogram (FFDM). Percent density (PD) and dense area (DA) were measured by a radiological technologist using Cumulus. The associations of age at menarche and late adolescent body mass index (BMI) with PD and DA were modeled using linear regression adjusted for confounders. Results: Age at menarche and late adolescent BMI were negatively correlated. Age at menarche was positively associated with PD (P value for trend <0.0001) and DA (P value for trend <0.0001) in fully adjusted models. Compared with the reference category of ages 12 to 13 years at menarche, menarche at age >16 years was associated with an increase in PD of 1.47% (95% CI, 0.69-2.25) and an increase in DA of 1.59 cm2 (95% CI, 0.48-2.70). Late adolescent BMI was inversely associated with PD (P < 0.0001) and DA (P < 0.0001) in fully adjusted models. Conclusions: Age at menarche and late adolescent BMI are both associated with Cumulus measures of mammographic density on processed FFDM images. Impact: Age at menarche and late adolescent BMI may act through different pathways. The long-term effects of age at menarche on cancer risk may be mediated through factors besides mammographic density. Cancer Epidemiol Biomarkers Prev; 26(9); 1450-8. ©2017 AACR.

    View details for PubMedID 28698185

    View details for PubMedCentralID PMC5659765

  • Mammographic Density: Is There a Public Health Significance Linked to Published Relative Risk Data? Response RADIOLOGY Sieh, W., Lipson, J. A., Whittemore, A. S., Rubin, D. L. 2017; 284 (3): 919
  • Use of Radiology Procedure Codes in Health Care: The Need for Standardization and Structure RADIOGRAPHICS Wang, K. C., Patel, J. B., Vyas, B., Toland, M., Collins, B., Vreeman, D. J., Abhyankar, S., Siegel, E. L., Rubin, D. L., Langlotz, C. P. 2017; 37 (4): 1099–1110

    Abstract

    Radiology procedure codes are a fundamental part of most radiology workflows, such as ordering, scheduling, billing, and image interpretation. Nonstandardized unstructured procedure codes have typically been used in radiology departments. Such codes may be sufficient for specific purposes, but they offer limited support for interoperability. As radiology workflows and the various forms of clinical data exchange have become more sophisticated, the need for more advanced interoperability with use of standardized structured codes has increased. For example, structured codes facilitate the automated identification of relevant prior imaging studies and the collection of data for radiation dose tracking. The authors review the role of imaging procedure codes in radiology departments and across the health care enterprise. Standards for radiology procedure coding are described, and the mechanisms of structured coding systems are reviewed. In particular, the structure of the RadLex™ Playbook coding system and examples of the use of this system are described. Harmonization of the RadLex Playbook system with the Logical Observation Identifiers Names and Codes standard, which is currently in progress, also is described. The benefits and challenges of adopting standardized codes-especially the difficulties in mapping local codes to standardized codes-are reviewed. Tools and strategies for mitigating these challenges, including the use of billing codes as an intermediate step in mapping, also are reviewed. In addition, the authors describe how to use the RadLex Playbook Web service application programming interface for partial automation of code mapping. © RSNA, 2017.

    View details for PubMedID 28696857

  • Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions. Journal of digital imaging Akkus, Z., Galimzianova, A., Hoogi, A., Rubin, D. L., Erickson, B. J. 2017

    Abstract

    Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends.

    View details for DOI 10.1007/s10278-017-9983-4

    View details for PubMedID 28577131

  • Opening the Black Box: Visualization of Deep Neural Network for Detection of Disease in Retinal Fundus Photographs Huang, L. C., Yu, C., Kleinman, R. A., Shields, R. A., Smith, R. G., Lam, C., Yi, D., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2017
  • Prediction of EGFR and KRAS mutation in non-small cell lung cancer using quantitative 18F FDG-PET/CT metrics. Oncotarget Minamimoto, R., Jamali, M., Gevaert, O., Echegaray, S., Khuong, A., Hoang, C. D., Shrager, J. B., Plevritis, S. K., Rubin, D. L., Leung, A. N., Napel, S., Quon, A. 2017

    Abstract

    This study investigated the relationship between epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma viral oncogene homolog (KRAS) mutations in non-small-cell lung cancer (NSCLC) and quantitative FDG-PET/CT parameters including tumor heterogeneity. A total of 131 patients with NSCLC underwent staging FDG-PET/CT followed by tumor resection and histopathological analysis that included testing for the EGFR and KRAS gene mutations. Patient and lesion characteristics, including smoking habits and FDG uptake parameters, were correlated to each gene mutation. Never-smoker (p < 0.001) or low pack-year smoking history (p = 0.002) and female gender (p = 0.047) were predictive factors for the presence of the EGFR mutations. Being a current or former smoker was a predictive factor for the KRAS mutations (p = 0.018). The maximum standardized uptake value (SUVmax) of FDG uptake in lung lesions was a predictive factor of the EGFR mutations (p = 0.029), while metabolic tumor volume and total lesion glycolysis were not predictive. Amongst several tumor heterogeneity metrics included in our analysis, inverse coefficient of variation (1/COV) was a predictive factor (p < 0.02) of EGFR mutation status, independent of metabolic tumor diameter. Multivariate analysis showed that being a never-smoker was the most significant factor (p < 0.001) for the EGFR mutations in lung cancer overall. The tumor heterogeneity metric 1/COV and SUVmax were both predictive for the EGFR mutations in NSCLC in a univariate analysis. Overall, smoking status was the most significant factor for the presence of the EGFR and KRAS mutations in lung cancer.
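
    The inverse coefficient of variation (1/COV) used above is simply the mean divided by the standard deviation of tracer uptake within the tumor; a minimal sketch with simulated SUV values (not the study's data or code):

      import numpy as np


      def inverse_cov(suv_values):
          """Inverse coefficient of variation: mean / standard deviation of SUV
          values inside a tumor region (higher values = more homogeneous uptake)."""
          suv_values = np.asarray(suv_values, dtype=float)
          return suv_values.mean() / suv_values.std()


      rng = np.random.default_rng(42)
      homogeneous_lesion = rng.normal(loc=6.0, scale=0.5, size=500)
      heterogeneous_lesion = rng.normal(loc=6.0, scale=2.0, size=500)
      print(f"1/COV (homogeneous):   {inverse_cov(homogeneous_lesion):.2f}")
      print(f"1/COV (heterogeneous): {inverse_cov(heterogeneous_lesion):.2f}")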

    View details for DOI 10.18632/oncotarget.17782

    View details for PubMedID 28538213

  • Transfer learning on fused multiparametric MR images for classifying histopathological subtypes of rhabdomyosarcoma. Computerized medical imaging and graphics Banerjee, I., Crawley, A., Bhethanabotla, M., Daldrup-Link, H. E., Rubin, D. L. 2017

    Abstract

    This paper presents a deep-learning-based CADx for the differential diagnosis of embryonal (ERMS) and alveolar (ARMS) subtypes of rhabdomyosarcoma (RMS) solely by analyzing multiparametric MR images. We formulated an automated pipeline that creates a comprehensive representation of the tumor by performing a fusion of diffusion-weighted MR scans (DWI) and gadolinium chelate-enhanced T1-weighted MR scans (MRI). Finally, we adapted a transfer learning approach in which a pre-trained deep convolutional neural network was fine-tuned on the fused images to classify the two RMS subtypes. We achieved 85% cross-validation prediction accuracy from the fine-tuned deep CNN model. Our system can be exploited to provide a fast, efficient and reproducible diagnosis of RMS subtypes with less human interaction. The framework offers an efficient integration between advanced image processing methods and cutting-edge deep learning techniques which can be extended to deal with other clinical domains that involve multimodal imaging for disease diagnosis.
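
    A minimal transfer-learning sketch in PyTorch (torchvision 0.13+ API assumed), illustrating the general pattern of freezing a pre-trained backbone and fine-tuning a new two-class head; the backbone choice, input size, and random tensors standing in for fused MR slices are assumptions, not the published pipeline.

      import torch
      import torch.nn as nn
      from torchvision import models

      # Load an ImageNet-pretrained backbone (downloads weights on first use)
      # and replace its classifier head with a new two-class layer.
      model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for param in model.parameters():
          param.requires_grad = False                # freeze pre-trained features
      model.fc = nn.Linear(model.fc.in_features, 2)  # e.g., ERMS vs. ARMS

      optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()

      # One toy training step on random tensors standing in for fused image slices.
      images = torch.randn(8, 3, 224, 224)
      labels = torch.randint(0, 2, (8,))
      optimizer.zero_grad()
      loss = criterion(model(images), labels)
      loss.backward()
      optimizer.step()
      print(f"toy loss: {loss.item():.3f}")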

    View details for DOI 10.1016/j.compmedimag.2017.05.002

    View details for PubMedID 28515009

  • Software for Distributed Computation on Medical Databases: A Demonstration Project JOURNAL OF STATISTICAL SOFTWARE Narasimhan, B., Rubin, D. L., Gross, S. M., Bendersky, M., Lavori, P. W. 2017; 77 (13): 1-22
  • Adaptive local window for level set segmentation of CT and MRI liver lesions. Medical image analysis Hoogi, A., Beaulieu, C. F., Cunha, G. M., Heba, E., Sirlin, C. B., Napel, S., Rubin, D. L. 2017; 37: 46-55

    Abstract

    We propose a novel method, the adaptive local window, for improving level set segmentation technique. The window is estimated separately for each contour point, over iterations of the segmentation process, and for each individual object. Our method considers the object scale, the spatial texture, and the changes of the energy functional over iterations. Global and local statistics are considered by calculating several gray level co-occurrence matrices. We demonstrate the capabilities of the method in the domain of medical imaging for segmenting 233 images with liver lesions. To illustrate the strength of our method, those lesions were screened by either Computed Tomography or Magnetic Resonance Imaging. Moreover, we analyzed images using three different energy models. We compared our method to a global level set segmentation, to a local framework that uses predefined fixed-size square windows and to a local region-scalable fitting model. The results indicate that our proposed method outperforms the other methods in terms of agreement with the manual marking and dependence on contour initialization or the energy model used. In case of complex lesions, such as low contrast lesions, heterogeneous lesions, or lesions with a noisy background, our method shows significantly better segmentation with an improvement of 0.25 ± 0.13 in Dice similarity coefficient, compared with state of the art fixed-size local windows (Wilcoxon, p < 0.001).
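
    A small sketch of the kind of local gray-level co-occurrence statistics such a window adaptation can draw on, using scikit-image (version 0.19 or later assumed for the function names); it is illustrative only and not the authors' implementation.

      import numpy as np
      from skimage.feature import graycomatrix, graycoprops


      def local_texture(patch, levels=32):
          """Quantize a local window and summarize it with GLCM-derived statistics."""
          bins = np.linspace(patch.min(), patch.max() + 1e-6, levels)
          q = (np.digitize(patch, bins) - 1).astype(np.uint8)
          glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                              levels=levels, symmetric=True, normed=True)
          return {prop: float(graycoprops(glcm, prop).mean())
                  for prop in ("contrast", "homogeneity", "energy")}


      rng = np.random.default_rng(1)
      window = rng.normal(size=(21, 21))   # stand-in for a local CT/MRI window
      print(local_texture(window))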

    View details for DOI 10.1016/j.media.2017.01.002

    View details for PubMedID 28157660

    View details for PubMedCentralID PMC5393306

  • Revealing cancer subtypes with higher-order correlations applied to imaging and omics data BMC MEDICAL GENOMICS Graim, K., Liu, T. T., Achrol, A. S., Paull, E. O., Newton, Y., Chang, S. D., Harsh, G. R., Cordero, S. P., Rubin, D. L., Stuart, J. M. 2017; 10

    Abstract

    Patient stratification to identify subtypes with different disease manifestations, severity, and expected survival time is a critical task in cancer diagnosis and treatment. While stratification approaches using various biomarkers (including high-throughput gene expression measurements) for patient-to-patient comparisons have been successful in elucidating previously unseen subtypes, there remains an untapped potential of incorporating various genotypic and phenotypic data to discover novel or improved groupings. Here, we present HOCUS, a unified analytical framework for patient stratification that uses a community detection technique to extract subtypes out of sparse patient measurements. HOCUS constructs a patient-to-patient network from similarities in the data and iteratively groups and reconstructs the network into higher order clusters. We investigate the merits of using higher-order correlations to cluster samples of cancer patients in terms of their associations with survival outcomes. In an initial test of the method, the approach identifies cancer subtypes in mutation data of glioblastoma, ovarian, breast, prostate, and bladder cancers. In several cases, HOCUS provides an improvement over using the molecular features directly to compare samples. Application of HOCUS to glioblastoma images reveals a size and location classification of tumors that improves over human expert-based stratification. Subtypes based on higher order features can reveal comparable or distinct groupings. The distinct solutions can provide biologically- and treatment-relevant solutions that are just as significant as solutions based on the original data.

    View details for DOI 10.1186/s12920-017-0256-3

    View details for Web of Science ID 000397792900001

    View details for PubMedID 28359308

  • Automated intraretinal segmentation of SD-OCT images in normal and age-related macular degeneration eyes BIOMEDICAL OPTICS EXPRESS de Sisternes, L., Jonna, G., Moss, J., Marmor, M. F., Leng, T., Rubin, D. L. 2017; 8 (3): 1926-1949

    Abstract

    This work introduces and evaluates an automated intra-retinal segmentation method for spectral-domain optical coherence tomography (SD-OCT) retinal images. While quantitative assessment of retinal features in SD-OCT data is important, manual segmentation is extremely time-consuming and subjective. We address challenges that have hindered prior automated methods, including poor performance with diseased retinas relative to healthy retinas, and data smoothing that obscures image features such as small retinal drusen. Our novel segmentation approach is based on the iterative adaptation of a weighted median process, wherein a three-dimensional weighting function is defined according to image intensity and gradient properties, and a set of smoothness constraints and pre-defined rules are considered. We compared the segmentation results for 9 segmented outlines associated with intra-retinal boundaries to those drawn by hand by two retinal specialists and to those produced by an independent state-of-the-art automated software tool in a set of 42 clinical images (from 14 patients). These images were obtained with a Zeiss Cirrus SD-OCT system, including healthy, early or intermediate AMD, and advanced AMD eyes. As a qualitative evaluation of accuracy, a highly experienced third independent reader blindly rated the quality of the outlines produced by each method. The accuracy and image detail of our method were superior in healthy and early or intermediate AMD eyes (98.15% and 97.78% of results not needing substantial editing) to the automated method we compared against. While the performance was not as good in advanced AMD (68.89%), it was still better than the manual outlines or the comparison method (which failed in such cases). We also tested our method's performance on images acquired with a different SD-OCT manufacturer, collected from a large publicly available data set (114 healthy and 255 AMD eyes), and compared the data quantitatively to reference standard markings of the internal limiting membrane and inner boundary of retinal pigment epithelium, producing a mean unsigned positioning error of 6.04 ± 7.83 µm (mean under 2 pixels). Our automated method should be applicable to data from different OCT manufacturers and offers detailed layer segmentations in healthy and AMD eyes.

    View details for DOI 10.1364/BOE.8.001926

    View details for Web of Science ID 000395942600047

    View details for PubMedCentralID PMC5480589

  • Adaptive Estimation of Active Contour Parameters Using Convolutional Neural Networks and Texture Analysis IEEE TRANSACTIONS ON MEDICAL IMAGING Hoogi, A., Subramaniam, A., Veerapaneni, R., Rubin, D. L. 2017; 36 (3): 781-791

    Abstract

    In this paper, we propose a generalization of the level set segmentation approach by supplying a novel method for adaptive estimation of active contour parameters. The presented segmentation method is fully automatic once the lesion has been detected. First, the location of the level set contour relative to the lesion is estimated using a convolutional neural network (CNN). The CNN has two convolutional layers for feature extraction, which lead into dense layers for classification. Second, the output CNN probabilities are then used to adaptively calculate the parameters of the active contour functional during the segmentation process. Finally, the adaptive window size surrounding each contour point is re-estimated by an iterative process that considers lesion size and spatial texture. We demonstrate the capabilities of our method on a dataset of 164 MRI and 112 CT images of liver lesions that includes low contrast and heterogeneous lesions as well as noisy images. To illustrate the strength of our method, we evaluated it against state-of-the-art CNN-based and active contour techniques. For all cases, our method, as assessed by Dice similarity coefficients, performed significantly better than currently available methods. An average Dice improvement of 0.27 was found across the entire dataset over all comparisons. We also analyzed two challenging subsets of lesions and obtained a significant Dice improvement of 0.24 with our method (p < 0.001, Wilcoxon).
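
    A tiny PyTorch sketch in the spirit of "two convolutional layers feeding dense layers"; the filter counts, patch size, and three output classes are illustrative assumptions rather than the exact published architecture.

      import torch
      import torch.nn as nn


      class ContourLocationCNN(nn.Module):
          """Two conv layers for feature extraction, then dense layers for classification."""

          def __init__(self, n_classes=3):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
              )
              self.classifier = nn.Sequential(
                  nn.Flatten(),
                  nn.Linear(32 * 14 * 14, 64), nn.ReLU(),
                  nn.Linear(64, n_classes),
              )

          def forward(self, x):
              return self.classifier(self.features(x))


      # Patches around contour points; the class probabilities could then be used
      # to adapt the energy-functional parameters during the level set evolution.
      patches = torch.randn(8, 1, 64, 64)
      probs = torch.softmax(ContourLocationCNN()(patches), dim=1)
      print(probs.shape)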

    View details for DOI 10.1109/TMI.2016.2628084

    View details for Web of Science ID 000396117300009

    View details for PubMedCentralID PMC5510759

  • Dynamic Strategy for Personalized Medicine: An Application to Metastatic Breast Cancer. Journal of biomedical informatics Chen, X., Shachter, R., Kurian, A., Rubin, D. 2017

    Abstract

    We compare methods to develop an adaptive strategy for therapy choice in a class of breast cancer patients, as an example of approaches to personalize therapies for individual characteristics and each patient's response to therapy. Our model maintains a Markov belief about the effectiveness of the different therapies and updates it as therapies are administered and tumor images are observed, reflecting tumor response. We compare three different approximate methods to solve our analytical model against standard medical practice and show significant potential benefit of the computed dynamic strategies to limit tumor growth and to reduce the number of time periods patients are given chemotherapy, with its attendant side effects.

    View details for DOI 10.1016/j.jbi.2017.02.012

    View details for PubMedID 28232241

  • Breast Cancer Risk and Mammographic Density Assessed with Semiautomated and Fully Automated Methods and BI-RADS. Radiology Jeffers, A. M., Sieh, W., Lipson, J. A., Rothstein, J. H., McGuire, V., Whittemore, A. S., Rubin, D. L. 2017; 282 (2): 348-355

    Abstract

    Purpose To compare three metrics of breast density on full-field digital mammographic (FFDM) images as predictors of future breast cancer risk. Materials and Methods This institutional review board-approved study included 125 women with invasive breast cancer and 274 age- and race-matched control subjects who underwent screening FFDM during 2004-2013 and provided informed consent. The percentage of density and dense area were assessed semiautomatically with software (Cumulus 4.0; University of Toronto, Toronto, Canada), and volumetric percentage of density and dense volume were assessed automatically with software (Volpara; Volpara Solutions, Wellington, New Zealand). Clinical Breast Imaging Reporting and Data System (BI-RADS) classifications of breast density were extracted from mammography reports. Odds ratios and 95% confidence intervals (CIs) were estimated by using conditional logistic regression stratified according to age and race and adjusted for body mass index, parity, and menopausal status, and the area under the receiver operating characteristic curve (AUC) was computed. Results The adjusted odds ratios and 95% CIs for each standard deviation increment of the percentage of density, dense area, volumetric percentage of density, and dense volume were 1.61 (95% CI: 1.19, 2.19), 1.49 (95% CI: 1.15, 1.92), 1.54 (95% CI: 1.12, 2.10), and 1.41 (95% CI: 1.11, 1.80), respectively. Odds ratios for women with extremely dense breasts compared with those with scattered areas of fibroglandular density were 2.06 (95% CI: 0.85, 4.97) and 2.05 (95% CI: 0.90, 4.64) for BI-RADS and Volpara density classifications, respectively. Clinical BI-RADS was more accurate (AUC, 0.68; 95% CI: 0.63, 0.74) than Volpara (AUC, 0.64; 95% CI: 0.58, 0.70) and continuous measures of percentage of density (AUC, 0.66; 95% CI: 0.60, 0.72), dense area (AUC, 0.66; 95% CI: 0.60, 0.72), volumetric percentage of density (AUC, 0.64; 95% CI: 0.58, 0.70), and density volume (AUC, 0.65; 95% CI: 0.59, 0.71), although the AUC differences were not statistically significant. Conclusion Mammographic density on FFDM images was positively associated with breast cancer risk by using the computer assisted methods and BI-RADS. BI-RADS classification was as accurate as computer-assisted methods for discrimination of patients from control subjects. (©) RSNA, 2016.

    View details for DOI 10.1148/radiol.2016152062

    View details for PubMedID 27598536

  • Individual Drusen Segmentation and Repeatability and Reproducibility of Their Automated Quantification in Optical Coherence Tomography Images. Translational vision science & technology de Sisternes, L., Jonna, G., Greven, M. A., Chen, Q., Leng, T., Rubin, D. L. 2017; 6 (1): 12

    Abstract

    To introduce a novel method to segment individual drusen in spectral-domain optical coherence tomography (SD-OCT), and evaluate its accuracy, and repeatability/reproducibility of drusen quantifications extracted from the segmentation results. Our method uses a smooth interpolation of the retinal pigment epithelium (RPE) outer boundary, fitted to candidate locations in proximity to Bruch's Membrane, to identify regions of substantial lifting in the inner-RPE or inner-segment boundaries, and then separates and evaluates individual druse independently. The study included 192 eyes from 129 patients. Accuracy of drusen segmentations was evaluated measuring the overlap ratio (OR) with manual markings, also comparing the results to a previously proposed method. Repeatability and reproducibility across scanning protocols of automated drusen quantifications were investigated in repeated SD-OCT volume pairs and compared with those measured by a commercial tool (Cirrus HD-OCT). Our segmentation method produced higher accuracy than a previously proposed method, showing similar differences to manual markings (0.72 ± 0.09 OR) as the measured intra- and interreader variability (0.78 ± 0.09 and 0.77 ± 0.09, respectively). The automated quantifications displayed high repeatability and reproducibility, showing a more stable behavior across scanning protocols in drusen area and volume measurements than the commercial software. Measurements of drusen slope and mean intensity showed significant differences across protocols. Automated drusen outlines produced by our method show promising accurate results that seem relatively stable in repeated scans using the same or different scanning protocols. The proposed method represents a viable tool to measure and track drusen measurements in early or intermediate age-related macular degeneration patients.

    View details for DOI 10.1167/tvst.6.1.12

    View details for PubMedID 28275527

    View details for PubMedCentralID PMC5338477

  • Building and Querying RDF/OWL Database of Semantically Annotated Nuclear Medicine Images JOURNAL OF DIGITAL IMAGING Hwang, K. H., Lee, H., Koh, G., Willrett, D., Rubin, D. L. 2017; 30 (1): 4-10
  • Predictive radiogenomics modeling of EGFR mutation status in lung cancer SCIENTIFIC REPORTS Gevaert, O., Echegaray, S., Khuong, A., Hoang, C. D., Shrager, J. B., Jensen, K. C., Berry, G. J., Guo, H. H., Lau, C., Plevritis, S. K., Rubin, D. L., Napel, S., Leung, A. N. 2017; 7

    Abstract

    Molecular analysis of the mutation status for EGFR and KRAS is now routine in the management of non-small cell lung cancer. Radiogenomics, the linking of medical images with the genomic properties of human tumors, provides exciting opportunities for non-invasive diagnostics and prognostics. We investigated whether EGFR and KRAS mutation status can be predicted using imaging data. To accomplish this, we studied 186 cases of NSCLC with preoperative thin-slice CT scans. A thoracic radiologist annotated 89 semantic image features of each patient's tumor. Next, we built a decision tree to predict the presence of EGFR and KRAS mutations. We found a statistically significant model for predicting EGFR but not for KRAS mutations. The test set area under the ROC curve for predicting EGFR mutation status was 0.89. The final decision tree used four variables: emphysema, airway abnormality, the percentage of ground glass component and the type of tumor margin. The presence of either of the first two features predicts a wild-type status for EGFR while the presence of any ground glass component indicates EGFR mutations. These results show the potential of quantitative imaging to predict molecular properties in a non-invasive manner, as CT imaging is more readily available than biopsies.
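
    A minimal scikit-learn sketch of a decision tree over the four semantic features named above; the data are random placeholders, not the 186-patient cohort, and the feature encodings are assumptions.

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)
      n = 200
      X = pd.DataFrame({
          "emphysema": rng.integers(0, 2, n),
          "airway_abnormality": rng.integers(0, 2, n),
          "ground_glass_pct": rng.uniform(0, 100, n),
          "margin_type": rng.integers(0, 4, n),     # coded margin categories
      })
      y = rng.integers(0, 2, n)                     # 1 = EGFR mutant (placeholder labels)

      tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
      print(export_text(tree, feature_names=list(X.columns)))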

    View details for DOI 10.1038/srep41674

    View details for PubMedID 28139704

  • A Convolutional Neural Network for Automatic Characterization of Plaque Composition in Carotid Ultrasound IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS Lekadir, K., Galimzianova, A., Betriu, A., del Mar Vila, M., Igual, L., Rubin, D. L., Fernandez, E., Radeva, P., Napel, S. 2017; 21 (1): 48-55

    Abstract

    Characterization of carotid plaque composition, more specifically the amount of lipid core, fibrous tissue, and calcified tissue, is an important task for the identification of plaques that are prone to rupture, and thus for early risk estimation of cardiovascular and cerebrovascular events. Due to its low costs and wide availability, carotid ultrasound has the potential to become the modality of choice for plaque characterization in clinical practice. However, its significant image noise, coupled with the small size of the plaques and their complex appearance, makes it difficult for automated techniques to discriminate between the different plaque constituents. In this paper, we propose to address this challenging problem by exploiting the unique capabilities of the emerging deep learning framework. More specifically, and unlike existing works which require a priori definition of specific imaging features or thresholding values, we propose to build a convolutional neural network (CNN) that will automatically extract from the images the information that is optimal for the identification of the different plaque constituents. We used approximately 90 000 patches extracted from a database of images and corresponding expert plaque characterizations to train and to validate the proposed CNN. The results of cross-validation experiments show a correlation of about 0.90 with the clinical assessment for the estimation of lipid core, fibrous cap, and calcified tissue areas, indicating the potential of deep learning for the challenging task of automatic characterization of plaque composition in carotid ultrasound.

    View details for DOI 10.1109/JBHI.2016.2631401

    View details for Web of Science ID 000395538500006

    View details for PubMedID 27893402

  • Computerized Prediction of Radiological Observations Based on Quantitative Feature Analysis: Initial Experience in Liver Lesions Journal of Digital Imaging Banerjee, I. 2017: 506–18

    Abstract

    We propose a computerized framework that, given a region of interest (ROI) circumscribing a lesion, not only predicts radiological observations related to the lesion characteristics with 83.2% average prediction accuracy but also derives explicit association between low-level imaging features and high-level semantic terms by exploiting their statistical correlation. Such direct association between semantic concepts and low-level imaging features can be leveraged to build a powerful annotation system for radiological images that not only allows the computer to infer the semantics from diverse medical images and run automatic reasoning for making diagnostic decision but also provides "human-interpretable explanation" of the system output to facilitate better end user understanding of computer-based diagnostic decisions. The core component of our framework is a radiological observation detection algorithm that maximizes the low-level imaging feature relevancy for each high-level semantic term. On a liver lesion CT dataset, we have implemented our framework by incorporating a large set of state-of-the-art low-level imaging features. Additionally, we included a novel feature that quantifies lesion(s) present within the liver that have a similar appearance as the primary lesion identified by the radiologist. Our framework achieved a high prediction accuracy (83.2%), and the derived association between semantic concepts and imaging features closely correlates with human expectation. The framework has been only tested on liver lesion CT images, but it is capable of being applied to other imaging domains.

    View details for DOI 10.1007/s10278-017-9987-0

    View details for PubMedCentralID PMC5537098

  • Heterogeneous Enhancement Patterns of Tumor-adjacent Parenchyma at MR Imaging Are Associated with Dysregulated Signaling Pathways and Poor Survival in Breast Cancer. Radiology Wu, J., Li, B., Sun, X., Cao, G., Rubin, D. L., Napel, S., Ikeda, D. M., Kurian, A. W., Li, R. 2017: 162823

    Abstract

    Purpose To identify the molecular basis of quantitative imaging characteristics of tumor-adjacent parenchyma at dynamic contrast material-enhanced magnetic resonance (MR) imaging and to evaluate their prognostic value in breast cancer. Materials and Methods In this institutional review board-approved, HIPAA-compliant study, 10 quantitative imaging features depicting tumor-adjacent parenchymal enhancement patterns were extracted and screened for prognostic features in a discovery cohort of 60 patients. By using data from The Cancer Genome Atlas (TCGA), a radiogenomic map for the tumor-adjacent parenchymal tissue was created and molecular pathways associated with prognostic parenchymal imaging features were identified. Furthermore, a multigene signature of the parenchymal imaging feature was built in a training cohort (n = 126), and its prognostic relevance was evaluated in two independent cohorts (n = 879 and 159). Results One image feature measuring heterogeneity (ie, information measure of correlation) was significantly associated with prognosis (false-discovery rate < 0.1), and at a cutoff of 0.57 stratified patients into two groups with different recurrence-free survival rates (log-rank P = .024). The tumor necrosis factor signaling pathway was identified as the top enriched pathway (hypergeometric P < .0001) among genes associated with the image feature. A 73-gene signature based on the tumor profiles in TCGA achieved good association with the tumor-adjacent parenchymal image feature (R(2) = 0.873), which stratified patients into groups regarding recurrence-free survival (log-rank P = .029) and overall survival (log-rank P = .042) in an independent TCGA cohort. The prognostic value was confirmed in another independent cohort (Gene Expression Omnibus GSE 1456), with log-rank P = .00058 for recurrence-free survival and log-rank P = .0026 for overall survival. Conclusion Heterogeneous enhancement patterns of tumor-adjacent parenchyma at MR imaging are associated with the tumor necrosis signaling pathway and poor survival in breast cancer. (©) RSNA, 2017 Online supplemental material is available for this article.

    View details for PubMedID 28708462

  • Perioperative Retinal Artery Occlusion: Risk Factors in Cardiac Surgery from the United States National Inpatient Sample 1998-2013. Ophthalmology Calway, T., Rubin, D. S., Moss, H. E., Joslin, C. E., Beckmann, K., Roth, S. 2017; 124 (2): 189–96

    Abstract

    To study the incidence and risk factors for retinal artery occlusion (RAO) in cardiac surgery. Retrospective study using the National Inpatient Sample (NIS). The NIS was searched for cardiac surgery. Retinal artery occlusion was identified by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes. Postulated risk factors based on literature review were included in multivariate logistic models. Diagnosis of RAO. A total of 5 872 833 cardiac operative procedures were estimated in the United States from 1998 to 2013, with 4564 RAO cases (95% confidence interval [95% CI], 4282-4869). Nationally estimated RAO incidence was 7.77/10 000 cardiac operative procedures from 1998 to 2013 (95% CI, 7.29-8.29). Associated with increased RAO were giant cell arteritis (odds ratio [OR], 7.73; CI, 2.78-21.52; P < 0.001), transient cerebral ischemia (OR, 7.67; CI, 5.31-11.07; P < 0.001), carotid artery stenosis (OR, 7.52; CI, 6.22-9.09; P < 0.001), embolic stroke (OR, 4.43; CI, 3.05-6.42; P < 0.001), hypercoagulability (OR, 2.90; CI, 1.56-5.39; P < 0.001), myxoma (OR, 2.43; CI, 1.39-4.26; P = 0.002), diabetes mellitus (DM) with ophthalmic complications (OR, 1.89; CI, 1.10-3.24; P = 0.02), and aortic insufficiency (OR, 1.85; CI, 1.26-2.71; P = 0.002). Perioperative bleeding, aortic and mitral valve surgery, and septal surgery increased the odds of RAO. Negatively associated with RAO were female gender (OR, 0.77; CI, 0.66-0.89; P < 0.001), thrombocytopenia (OR, 0.79; CI, 0.62-1.00; P = 0.049), acute coronary syndrome (OR, 0.72; CI, 0.58-0.89; P = 0.003), atrial fibrillation (OR, 0.82; CI, 0.70-0.95; P = 0.01), congestive heart failure (OR, 0.73; CI, 0.60-0.88; P < 0.001), DM 2 (OR, 0.74; CI, 0.61-0.89; P = 0.001), and smoking (OR, 0.82; CI, 0.70-0.97; P = 0.02). Risk factors for RAO in cardiac surgery include giant cell arteritis, carotid stenosis, stroke, hypercoagulable state, and DM with ophthalmic complications; associated with lower risk were female gender, thrombocytopenia, acute coronary syndrome, atrial fibrillation, congestive heart failure, DM 2, and smoking. Surgery in which the heart was opened (e.g., septal repair) versus surgery in which it was not (e.g., CABG) and perioperative bleeding increased the risk of RAO.

    View details for PubMedID 27914836

  • Ischemic Optic Neuropathy in Cardiac Surgery: Incidence and Risk Factors in the United States from the National Inpatient Sample 1998 to 2013. Anesthesiology Rubin, D. S., Matsumoto, M. M., Moss, H. E., Joslin, C. E., Tung, A., Roth, S. 2017

    Abstract

    Ischemic optic neuropathy is the most common form of perioperative visual loss, with highest incidence in cardiac and spinal fusion surgery. To date, potential risk factors have been identified in cardiac surgery by only small, single-institution studies. To determine the preoperative risk factors for ischemic optic neuropathy, the authors used the National Inpatient Sample, a database of inpatient discharges for nonfederal hospitals in the United States. Adults aged 18 yr or older admitted for coronary artery bypass grafting, heart valve repair or replacement surgery, or left ventricular assist device insertion in the National Inpatient Sample from 1998 to 2013 were included. Risk of ischemic optic neuropathy was evaluated by multivariable logistic regression. A total of 5,559,395 discharges met inclusion criteria with 794 (0.014%) cases of ischemic optic neuropathy. The average yearly incidence was 1.43 of 10,000 cardiac procedures, with no change during the study period (P = 0.57). Conditions increasing risk were carotid artery stenosis (odds ratio, 2.70), stroke (odds ratio, 3.43), diabetic retinopathy (odds ratio, 3.83), hypertensive retinopathy (odds ratio, 30.09), macular degeneration (odds ratio, 4.50), glaucoma (odds ratio, 2.68), and cataract (odds ratio, 5.62). Female sex (odds ratio, 0.59) and uncomplicated diabetes mellitus type 2 (odds ratio, 0.51) decreased risk. The incidence of ischemic optic neuropathy in cardiac surgery did not change during the study period. Development of ischemic optic neuropathy after cardiac surgery is associated with carotid artery stenosis, stroke, and degenerative eye conditions.

    View details for PubMedID 28244936

  • Web-Based Tools for Exploring the Potential of Quantitative Imaging Biomarkers in Radiology: Intensity and Texture Analysis on the ePAD Platform BIOMEDICAL TEXTURE ANALYSIS: FUNDAMENTALS, TOOLS AND CHALLENGES Schaer, R., Cid, Y., Alkim, E., John, S., Rubin, D. L., Depeursinge, A. 2017: 379–410
  • Differential Data Augmentation Techniques for Medical Imaging Classification Tasks. AMIA ... Annual Symposium proceedings. AMIA Symposium Hussain, Z., Gimenez, F., Yi, D., Rubin, D. 2017; 2017: 979–84

    Abstract

    Data augmentation is an essential part of training discriminative Convolutional Neural Networks (CNNs). A variety of augmentation strategies, including horizontal flips, random crops, and principal component analysis (PCA), have been proposed and shown to capture important characteristics of natural images. However, while data augmentation has been commonly used for deep learning in medical imaging, little work has been done to determine which augmentation strategies best capture medical image statistics, leading to more discriminative models. This work compares augmentation strategies and shows that the extent to which an augmented training set retains properties of the original medical images determines model performance. Specifically, augmentation strategies such as flips and gaussian filters lead to validation accuracies of 84% and 88%, respectively. On the other hand, a less effective strategy such as adding noise leads to a significantly worse validation accuracy of 66%. Finally, we show that the augmentation affects mass generation.
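
    A short NumPy/SciPy sketch of the augmentation strategies compared (flips, Gaussian filtering, additive noise); the parameters and the random stand-in patch are assumptions.

      import numpy as np
      from scipy.ndimage import gaussian_filter


      def augment(image, rng, strategy="flip"):
          """Apply one of the augmentation strategies discussed above to a 2D image."""
          if strategy == "flip":
              return image[:, ::-1]                      # horizontal flip
          if strategy == "gaussian":
              return gaussian_filter(image, sigma=1.0)   # mild Gaussian smoothing
          if strategy == "noise":
              return image + rng.normal(scale=0.1, size=image.shape)  # additive noise
          raise ValueError(f"unknown strategy: {strategy}")


      rng = np.random.default_rng(7)
      patch = rng.random((128, 128))                     # stand-in for an image patch
      for s in ("flip", "gaussian", "noise"):
          out = augment(patch, rng, strategy=s)
          print(s, out.shape, round(float(out.mean()), 3))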

    View details for PubMedID 29854165

  • Intelligent Word Embeddings of Free-Text Radiology Reports. AMIA ... Annual Symposium proceedings. AMIA Symposium Banerjee, I., Madhavan, S., Goldman, R. E., Rubin, D. L. 2017; 2017: 411–20

    Abstract

    Radiology reports are a rich resource for advancing deep learning applications in medicine by leveraging the large volume of data continuously being updated, integrated, and shared. However, there are significant challenges as well, largely due to the ambiguity and subtlety of natural language. We propose a hybrid strategy that combines semantic-dictionary mapping and word2vec modeling for creating dense vector embeddings of free-text radiology reports. Our method leverages the benefits of both semantic-dictionary mapping as well as unsupervised learning. Using the vector representation, we automatically classify the radiology reports into three classes denoting confidence in the diagnosis of intracranial hemorrhage by the interpreting radiologist. We performed experiments with varying hyperparameter settings of the word embeddings and a range of different classifiers. Best performance achieved was a weighted precision of 88% and weighted recall of 90%. Our work offers the potential to leverage unstructured electronic health record data by allowing direct analysis of narrative clinical notes.
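
    A toy sketch of the word2vec half of such a pipeline using gensim (version 4 API assumed) and scikit-learn: train embeddings on tokenized report text, average the word vectors per report, and classify. The example reports and labels are invented, and the semantic-dictionary mapping step is omitted.

      import numpy as np
      from gensim.models import Word2Vec
      from sklearn.linear_model import LogisticRegression

      reports = [
          "no evidence of intracranial hemorrhage",
          "acute intraparenchymal hemorrhage in the left basal ganglia",
          "findings suspicious for subarachnoid hemorrhage",
          "unremarkable noncontrast head ct",
      ]
      labels = [0, 1, 1, 0]                # 1 = hemorrhage affirmed (placeholder)
      tokens = [r.split() for r in reports]

      w2v = Word2Vec(sentences=tokens, vector_size=50, window=3, min_count=1, seed=0)


      def report_vector(words):
          """Average the word vectors of one report (unseen words are skipped)."""
          vecs = [w2v.wv[w] for w in words if w in w2v.wv]
          return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)


      X = np.vstack([report_vector(t) for t in tokens])
      clf = LogisticRegression(max_iter=1000).fit(X, labels)
      print(clf.predict(X))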

    View details for PubMedID 29854105

  • Toward Automated Pre-Biopsy Thyroid Cancer Risk Estimation in Ultrasound. AMIA ... Annual Symposium proceedings. AMIA Symposium Galimzianova, A. n., Siebert, S. M., Kamaya, A. n., Desser, T. S., Rubin, D. L. 2017; 2017: 734–41

    Abstract

    We propose a computational framework for automated cancer risk estimation of thyroid nodules visualized in ultrasound (US) images. Our framework estimates the probability of nodule malignancy using random forests on a rich set of computational features. An expert radiologist annotated thyroid nodules in 93 biopsy-confirmed patients using semantic image descriptors derived from standardized lexicon. On our dataset, the AUC of the proposed method was 0.70, which was comparable to five baseline expert annotation-based classifiers with AUC values from 0.72 to 0.81. Moreover, the use of the framework for decision making on nodule biopsy could have spared five out of 46 benign nodule biopsies at no cost to the health of patients with malignancies. Our results confirm the feasibility of computer-aided tools for noninvasive malignancy risk estimation in patients with thyroid nodules that could help to decrease the number of unnecessary biopsies and surgeries.

    View details for PubMedID 29854139

  • Mining Electronic Health Records to Extract Patient-Centered Outcomes Following Prostate Cancer Treatment. AMIA ... Annual Symposium proceedings. AMIA Symposium Hernandez-Boussard, T., Kourdis, P. D., Seto, T., Ferrari, M., Blayney, D. W., Rubin, D., Brooks, J. D. 2017; 2017: 876–82

    Abstract

    The clinical, granular data in electronic health record (EHR) systems provide opportunities to improve patient care using informatics retrieval methods. However, it is well known that many methodological obstacles exist in accessing data within EHRs. In particular, clinical notes routinely stored in EHR are composed from narrative, highly unstructured and heterogeneous biomedical text. This inherent complexity hinders the ability to perform automated large-scale medical knowledge extraction tasks without the use of computational linguistics methods. The aim of this work was to develop and validate a Natural Language Processing (NLP) pipeline to detect important patient-centered outcomes (PCOs) as interpreted and documented by clinicians in their dictated notes for male patients receiving treatment for localized prostate cancer at an academic medical center.

    View details for PubMedID 29854154

  • Robust noise region-based active contour model via local similarity factor for image segmentation PATTERN RECOGNITION Niu, S., Chen, Q., de Sisternes, L., Ji, Z., Zhou, Z., Rubin, D. L. 2017; 61: 104-119
  • A 3-D Riesz-Covariance Texture Model for Prediction of Nodule Recurrence in Lung CT IEEE TRANSACTIONS ON MEDICAL IMAGING Cirujeda, P., Cid, Y. D., Muller, H., Rubin, D., Aguilera, T. A., Loo, B. W., Diehn, M., Binefa, X., Depeursinge, A. 2016; 35 (12): 2620-2630

    Abstract

    This paper proposes a novel imaging biomarker of lung cancer relapse from 3-D texture analysis of CT images. Three-dimensional morphological nodular tissue properties are described in terms of 3-D Riesz-wavelets. The responses of the latter are aggregated within nodular regions by means of feature covariances, which leverage rich intra- and inter-variations of the feature space dimensions. When compared to the classical use of the average for feature aggregation, feature covariances preserve spatial co-variations between features. The obtained Riesz-covariance descriptors lie on a manifold governed by Riemannian geometry allowing geodesic measurements and differentiations. The latter property is incorporated both into a kernel for support vector machines (SVM) and a manifold-aware sparse regularized classifier. The effectiveness of the presented models is evaluated on a dataset of 110 patients with non-small cell lung carcinoma (NSCLC) and cancer recurrence information. Disease recurrence within a timeframe of 12 months could be predicted with an accuracy of 81.3-82.7%. The anatomical location of recurrence could be discriminated between local, regional and distant failure with an accuracy of 78.3-93.3%. The obtained results open novel research perspectives by revealing the importance of the nodular regions used to build the predictive models.
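
    A compact sketch of the covariance-aggregation idea: each region is summarized by the covariance of its feature responses, and the matrix logarithm maps these symmetric positive-definite descriptors into a vector space where a standard kernel classifier applies. The random "features" stand in for Riesz-wavelet responses, which are not computed here.

      import numpy as np
      from scipy.linalg import logm
      from sklearn.svm import SVC


      def covariance_descriptor(features):
          """features: (n_voxels, n_features) responses inside one nodule region."""
          cov = np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])
          log_cov = logm(cov).real                        # log-Euclidean mapping
          return log_cov[np.triu_indices_from(log_cov)]   # vectorized upper triangle


      rng = np.random.default_rng(0)
      X = np.vstack([covariance_descriptor(rng.normal(size=(500, 6))) for _ in range(40)])
      y = rng.integers(0, 2, 40)                          # placeholder recurrence labels
      clf = SVC(kernel="rbf").fit(X, y)
      print(clf.score(X, y))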

    View details for DOI 10.1109/TMI.2016.2591921

    View details for Web of Science ID 000391547700011

    View details for PubMedID 27429433

  • Computational Challenges and Collaborative Projects in the NCI Quantitative Imaging Network TOMOGRAPHY Farahani, K., Kalpathy-Cramer, J., Chenevert, T. L., Rubin, D. L., Sunderland, J. J., Nordstrom, R. J., Buatti, J., Hylton, N. 2016; 2 (4): 242–49

    Abstract

    The Quantitative Imaging Network (QIN) of the National Cancer Institute (NCI) conducts research in development and validation of imaging tools and methods for predicting and evaluating clinical response to cancer therapy. Members of the network are involved in examining various imaging and image assessment parameters through network-wide cooperative projects. To more effectively use the cooperative power of the network in conducting computational challenges in benchmarking of tools and methods and collaborative projects in analytical assessment of imaging technologies, the QIN Challenge Task Force has developed policies and procedures to enhance the value of these activities by developing guidelines and leveraging NCI resources to help their administration and manage dissemination of results. Challenges and Collaborative Projects (CCPs) are further divided into technical and clinical CCPs. As the first NCI network to engage in CCPs, we anticipate a variety of CCPs to be conducted by QIN teams in the coming years. These will be aimed to benchmark advanced software tools for clinical decision support, explore new imaging biomarkers for therapeutic assessment, and establish consensus on a range of methods and protocols in support of the use of quantitative imaging to predict and assess response to cancer therapy.

    View details for PubMedID 28798963

  • Radiomics of Lung Nodules: A Multi-Institutional Study of Robustness and Agreement of Quantitative Imaging Features. Tomography : a journal for imaging research Kalpathy-Cramer, J., Mamomov, A., Zhao, B., Lu, L., Cherezov, D., Napel, S., Echegaray, S., Rubin, D., McNitt-Gray, M., Lo, P., Sieren, J. C., Uthoff, J., Dilger, S. K., Driscoll, B., Yeung, I., Hadjiiski, L., Cha, K., Balagurunathan, Y., Gillies, R., Goldgof, D. 2016; 2 (4): 430-437

    Abstract

    Radiomics aims to provide quantitative descriptors of normal and abnormal tissues for classification and prediction tasks in radiology and oncology. Quantitative Imaging Network members are developing radiomic "feature" sets to characterize tumors, capturing the size, shape, texture, intensity, margin, and other aspects of the imaging features of nodules and lesions. Efforts are ongoing for developing an ontology to describe radiomic features for lung nodules, with the main classes consisting of size, local and global shape descriptors, margin, intensity, and texture-based features, which are based on wavelets, Laplacian of Gaussians, Law's features, gray-level co-occurrence matrices, and run-length features. The purpose of this study is to investigate the sensitivity of quantitative descriptors of pulmonary nodules to segmentations and to illustrate comparisons across different feature types and features computed by different implementations of feature extraction algorithms. We calculated the concordance correlation coefficients of the features as a measure of their stability with the underlying segmentation; 68% of the 830 features in this study had a concordance correlation coefficient of ≥0.75. Pairwise correlation coefficients between pairs of features were used to uncover associations between features, particularly as measured by different participants. A graphical model approach was used to enumerate the number of uncorrelated feature groups at given thresholds of correlation. At thresholds of 0.75 and 0.95, there were 75 and 246 subgroups, respectively, providing a measure for the features' redundancy.
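
    The concordance correlation coefficient used as the stability measure can be computed directly; a minimal sketch of Lin's CCC on toy data (not the study's features):

      import numpy as np


      def concordance_cc(x, y):
          """Lin's concordance correlation coefficient between two measurements."""
          x, y = np.asarray(x, float), np.asarray(y, float)
          mx, my = x.mean(), y.mean()
          cov = ((x - mx) * (y - my)).mean()
          return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)


      rng = np.random.default_rng(0)
      feature_seg1 = rng.normal(10, 2, 100)                  # feature from one segmentation
      feature_seg2 = feature_seg1 + rng.normal(0, 0.5, 100)  # same feature, alternative segmentation
      print(f"CCC = {concordance_cc(feature_seg1, feature_seg2):.3f}")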

    View details for DOI 10.18383/j.tom.2016.00235

    View details for PubMedID 28149958

    View details for PubMedCentralID PMC5279995

  • Common Data Elements in Radiology. Radiology Rubin, D. L., Kahn, C. E. 2016: 161553

    Abstract

    Diagnostic radiologists generally produce unstructured information in the form of images and narrative text reports. Although designed for human consumption, radiologic reports contain a wealth of information that could be valuable for clinical care, research, and quality improvement if that information could be extracted by automated systems. Unfortunately, the lack of structure in radiologic reports limits the ability of information systems to share information easily with other systems. A common data element (CDE), a unit of information used in a shared, predefined fashion, can improve the ability to exchange information seamlessly among information systems. In this article, a model and a repository of radiologic CDEs are described, and three important applications are highlighted. CDEs can help advance radiologic practice, research, and performance improvement, and thus, it is crucial that CDEs be adopted widely in radiologic information systems. (©) RSNA, 2016.

    View details for DOI 10.1148/radiol.2016161553

    View details for PubMedID 27831831

  • Improved Patch-Based Automated Liver Lesion Classification by Separate Analysis of the Interior and Boundary Regions IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS Diamant, I., Hoogi, A., Beaulieu, C. F., Safdari, M., Klang, E., Amitai, M., Greenspan, H., Rubin, D. L. 2016; 20 (6): 1585-1594

    Abstract

    The bag-of-visual-words (BoVW) method with construction of a single dictionary of visual words has been used previously for a variety of classification tasks in medical imaging, including the diagnosis of liver lesions. In this paper, we describe a novel method for automated diagnosis of liver lesions in portal-phase computed tomography (CT) images that improves over single-dictionary BoVW methods by using an image patch representation of the interior and boundary regions of the lesions. Our approach captures characteristics of the lesion margin and of the lesion interior by creating two separate dictionaries for the margin and the interior regions of lesions ("dual dictionaries" of visual words). Based on these dictionaries, visual word histograms are generated for each region of interest within the lesion and its margin. For validation of our approach, we used two datasets from two different institutions, containing CT images of 194 liver lesions (61 cysts, 80 metastasis, and 53 hemangiomas). The final diagnosis of each lesion was established by radiologists. The classification accuracy for the images from the two institutions was 99% and 88%, respectively, and 93% for a combined dataset. Our new BoVW approach that uses dual dictionaries shows promising results. We believe the benefits of our approach may generalize to other application domains within radiology.
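
    A compact sketch of the dual-dictionary idea: separate k-means codebooks for interior and boundary patches, with the two word histograms concatenated per lesion. Patch extraction is omitted and the patches below are random stand-ins, not the study's CT data.

      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      K = 32                                        # visual words per dictionary

      # Flattened training patches pooled over many lesions (placeholders).
      interior_train = rng.normal(size=(2000, 81))  # e.g., 9x9 interior patches
      boundary_train = rng.normal(size=(2000, 81))  # e.g., 9x9 boundary patches
      dict_interior = KMeans(n_clusters=K, n_init=4, random_state=0).fit(interior_train)
      dict_boundary = KMeans(n_clusters=K, n_init=4, random_state=0).fit(boundary_train)


      def lesion_descriptor(interior_patches, boundary_patches):
          """Concatenated, normalized word histograms from the two dictionaries."""
          h_in = np.bincount(dict_interior.predict(interior_patches), minlength=K)
          h_bd = np.bincount(dict_boundary.predict(boundary_patches), minlength=K)
          h = np.concatenate([h_in, h_bd]).astype(float)
          return h / h.sum()


      desc = lesion_descriptor(rng.normal(size=(150, 81)), rng.normal(size=(60, 81)))
      print(desc.shape, desc.sum())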

    View details for DOI 10.1109/JBHI.2015.2478255

    View details for Web of Science ID 000389846700014

    View details for PubMedID 26372661

    View details for PubMedCentralID PMC5164871

  • Early-Stage Non-Small Cell Lung Cancer: Quantitative Imaging Characteristics of (18)F Fluorodeoxyglucose PET/CT Allow Prediction of Distant Metastasis. Radiology Wu, J., Aguilera, T., Shultz, D., Gudur, M., Rubin, D. L., Loo, B. W., Diehn, M., Li, R. 2016; 281 (1): 270-278

    Abstract

    Purpose To identify quantitative imaging biomarkers at fluorine 18 ((18)F) positron emission tomography (PET) for predicting distant metastasis in patients with early-stage non-small cell lung cancer (NSCLC). Materials and Methods In this institutional review board-approved HIPAA-compliant retrospective study, the pretreatment (18)F fluorodeoxyglucose PET images in 101 patients treated with stereotactic ablative radiation therapy from 2005 to 2013 were analyzed. Data for 70 patients who were treated before 2011 were used for discovery purposes, while data from the remaining 31 patients were used for independent validation. Quantitative PET imaging characteristics including statistical, histogram-related, morphologic, and texture features were analyzed, from which 35 nonredundant and robust features were further evaluated. Cox proportional hazards regression model coupled with the least absolute shrinkage and selection operator was used to predict distant metastasis. Whether histologic type provided complementary value to imaging by combining both in a single prognostic model was also assessed. Results The optimal prognostic model included two image features that allowed quantification of intratumor heterogeneity and peak standardized uptake value. In the independent validation cohort, this model showed a concordance index of 0.71, which was higher than those of the maximum standardized uptake value and tumor volume, with concordance indexes of 0.67 and 0.64, respectively. The prognostic model also allowed separation of groups with low and high risk for developing distant metastasis (hazard ratio, 4.8; P = .0498, log-rank test), which compared favorably with maximum standardized uptake value and tumor volume (hazard ratio, 1.5 and 2.0, respectively; P = .73 and 0.54, log-rank test, respectively). When combined with histologic types, the prognostic power was further improved (hazard ratio, 6.9; P = .0289, log-rank test; and concordance index, 0.80). Conclusion PET imaging characteristics associated with distant metastasis that could potentially help practitioners to tailor appropriate therapy for individual patients with early-stage NSCLC were identified. (©) RSNA, 2016 Online supplemental material is available for this article.
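
    A minimal sketch of a LASSO-penalized Cox model over candidate image features; lifelines is an assumed library choice, and the simulated features and survival times are placeholders for the study's PET descriptors.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      rng = np.random.default_rng(0)
      n = 120
      df = pd.DataFrame(rng.normal(size=(n, 5)),
                        columns=["suv_peak", "heterogeneity", "suv_max",
                                 "tumor_volume", "texture_energy"])
      # Simulated time-to-distant-metastasis driven mostly by two of the features.
      risk = 0.8 * df["suv_peak"] + 0.6 * df["heterogeneity"]
      df["time"] = rng.exponential(scale=np.exp(-risk.to_numpy()))
      df["event"] = rng.integers(0, 2, n)

      cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)   # pure L1 (LASSO) penalty
      cph.fit(df, duration_col="time", event_col="event")
      print(cph.params_.round(3))                      # shrunken coefficients
      print(f"concordance index: {cph.concordance_index_:.2f}")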

    View details for DOI 10.1148/radiol.2016151829

    View details for PubMedID 27046074

  • Intratumor Partitioning of Serial Computed Tomography and FDG Positron Emission Tomography Images Identifies High-Risk Tumor Subregions and Predicts Patterns of Failure in Non-Small Cell Lung Cancer After Radiation Therapy 58th Annual Meeting of the American-Society-for-Radiation-Oncology (ASTRO) Wu, J., Gensheimer, M. F., Dong, X., Rubin, D. L., Napel, S., Diehn, M., Loo, B. W., Li, R. ELSEVIER SCIENCE INC. 2016: S100–S100
  • Accuracy, repeatability and reproducibility of a novel approach to quantify individual drusen in spectral-domain optical coherence tomography images De Sisternes, L., Jonna, G., Greven, M., Leng, T., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2016
  • Automated quantitative analysis of SD-OCT scans to predict visual outcome after epiretinal membrane (ERM) removal surgery Au, T. J., De Sisternes, L., Leng, T., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2016
  • Fully automated prediction of geographic atrophy growth using quantitative SD-OCT imaging biomarkers Leng, T., Niu, S., De Sisternes, L., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2016
  • Robust Intratumor Partitioning to Identify High-Risk Subregions in Lung Cancer: A Pilot Study. International journal of radiation oncology, biology, physics Wu, J., Gensheimer, M. F., Dong, X., Rubin, D. L., Napel, S., Diehn, M., Loo, B. W., Li, R. 2016; 95 (5): 1504-1512

    Abstract

    To develop an intratumor partitioning framework for identifying high-risk subregions from (18)F-fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) imaging and to test whether tumor burden associated with the high-risk subregions is prognostic of outcomes in lung cancer. In this institutional review board-approved retrospective study, we analyzed the pretreatment FDG-PET and CT scans of 44 lung cancer patients treated with radiation therapy. A novel, intratumor partitioning method was developed, based on a 2-stage clustering process: first at the patient level, each tumor was over-segmented into many superpixels by k-means clustering of integrated PET and CT images; next, tumor subregions were identified by merging previously defined superpixels via population-level hierarchical clustering. The volume associated with each of the subregions was evaluated using Kaplan-Meier analysis regarding its prognostic capability in predicting overall survival (OS) and out-of-field progression (OFP). Three spatially distinct subregions were identified within each tumor that were highly robust to uncertainty in PET/CT co-registration. Among these, the volume of the most metabolically active and metabolically heterogeneous solid component of the tumor was predictive of OS and OFP on the entire cohort, with a concordance index or CI of 0.66-0.67. When restricting the analysis to patients with stage III disease (n=32), the same subregion achieved an even higher CI of 0.75 (hazard ratio 3.93, log-rank P=.002) for predicting OS, and a CI of 0.76 (hazard ratio 4.84, log-rank P=.002) for predicting OFP. In comparison, conventional imaging markers, including tumor volume, maximum standardized uptake value, and metabolic tumor volume using threshold of 50% standardized uptake value maximum, were not predictive of OS or OFP, with CI mostly below 0.60 (log-rank P>.05). We propose a robust intratumor partitioning method to identify clinically relevant, high-risk subregions in lung cancer. We envision that this approach will be applicable to identifying useful imaging biomarkers in many cancer types.

    View details for DOI 10.1016/j.ijrobp.2016.03.018

    View details for PubMedID 27212196
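
    The two-stage clustering idea in the abstract above (per-patient k-means over-segmentation into superpixels, followed by population-level hierarchical clustering of superpixel summaries) can be sketched roughly as follows; cluster counts, feature summaries, and the synthetic voxel data are assumptions for illustration only.

```python
# Hedged sketch of two-stage intratumor partitioning: k-means superpixels per
# patient, then Ward hierarchical clustering across the pooled superpixels.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Toy stand-in: 10 "patients", each with 5000 voxels of (PET SUV, CT HU) values.
patient_voxel_list = [rng.normal(size=(5000, 2)) for _ in range(10)]

def oversegment(patient_voxels, n_superpixels=40):
    """Stage 1: over-segment one tumor into superpixels; return their mean features."""
    km = KMeans(n_clusters=n_superpixels, n_init=10, random_state=0).fit(patient_voxels)
    return np.vstack([patient_voxels[km.labels_ == k].mean(axis=0)
                      for k in range(n_superpixels)])

# Stage 2: pool superpixel summaries from all patients and merge them into a few
# population-level subregions (3 here, matching the number reported above).
all_superpixels = np.vstack([oversegment(v) for v in patient_voxel_list])
tree = linkage(all_superpixels, method="ward")
subregion_labels = fcluster(tree, t=3, criterion="maxclust")
print(np.bincount(subregion_labels))
```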

  • Using automatically extracted information from mammography reports for decision-support. Journal of biomedical informatics Bozkurt, S., Gimenez, F., Burnside, E. S., Gulkesen, K. H., Rubin, D. L. 2016; 62: 224-231

    Abstract

    To evaluate a system we developed that connects natural language processing (NLP) for information extraction from narrative text mammography reports with a Bayesian network for decision-support about breast cancer diagnosis. The ultimate goal of this system is to provide decision support as part of the workflow of producing the radiology report. We built a system that uses an NLP information extraction system (which extracts BI-RADS descriptors and clinical information from mammography reports) to provide the necessary inputs to a Bayesian network (BN) decision support system (DSS) that estimates lesion malignancy from BI-RADS descriptors. We used this integrated system to predict diagnosis of breast cancer from radiology text reports and evaluated it with a reference standard of 300 mammography reports. We collected two different outputs from the DSS: (1) the probability of malignancy and (2) the BI-RADS final assessment category. Since NLP may produce imperfect inputs to the DSS, we compared the difference between using perfect ("reference standard") structured inputs to the DSS ("RS-DSS") vs NLP-derived inputs ("NLP-DSS") on the output of the DSS using the concordance correlation coefficient. We measured the classification accuracy of the BI-RADS final assessment category when using NLP-DSS, compared with the ground truth category established by the radiologist. The NLP-DSS and RS-DSS had closely matched probabilities, with a mean paired difference of 0.004±0.025. The concordance correlation of these paired measures was 0.95. The accuracy of the NLP-DSS to predict the correct BI-RADS final assessment category was 97.58%. The accuracy of the information extracted from mammography reports using the NLP system was sufficient to provide accurate DSS results. We believe our system could ultimately reduce the variation in practice in mammography related to assessment of malignant lesions and improve management decisions.

    View details for DOI 10.1016/j.jbi.2016.07.001

    View details for PubMedID 27388877
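
    For reference, the concordance correlation coefficient used above to compare NLP-derived and reference-standard DSS outputs can be computed directly; the paired probabilities below are made-up values, not study data.

```python
# Hedged sketch: Lin's concordance correlation coefficient for paired probabilities.
import numpy as np

def concordance_correlation(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Illustrative paired malignancy probabilities (NLP-driven vs. reference-standard inputs).
p_nlp = [0.12, 0.80, 0.45, 0.05, 0.33]
p_ref = [0.10, 0.82, 0.47, 0.06, 0.35]
print(round(concordance_correlation(p_nlp, p_ref), 3))
```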

  • Fully Automated Prediction of Geographic Atrophy Growth Using Quantitative Spectral-Domain Optical Coherence Tomography Biomarkers. Ophthalmology Niu, S., de Sisternes, L., Chen, Q., Rubin, D. L., Leng, T. 2016; 123 (8): 1737-1750

    Abstract

    Purpose: To develop a predictive model based on quantitative characteristics of geographic atrophy (GA) to estimate future potential regions of GA growth. Design: Progression study and predictive model. Participants: One hundred eighteen spectral-domain (SD) optical coherence tomography (OCT) scans of 38 eyes in 29 patients. Methods: Imaging features of GA quantifying its extent and location, as well as characteristics at each topographic location related to individual retinal layer thickness and reflectivity, the presence of pathologic features (like reticular pseudodrusen or loss of photoreceptors), and other known risk factors of GA growth, were extracted automatically from 118 SD OCT scans of 38 eyes from 29 patients collected over a median follow-up of 2.25 years. We developed and evaluated a model to predict the magnitude and location of GA growth at given future times using the quantitative features as predictors in 3 possible scenarios. Main Outcome Measures: Potential regions of GA growth. Results: In descending order of out-of-bag feature importance, the most predictive SD OCT biomarkers for predicting the future regions of GA growth were thickness loss of bands 11 through 14 (5.66), reflectivity of bands 11 and 12 (5.37), thickness of reticular pseudodrusen (5.01), thickness of bands 5 through 11 (4.82), reflectivity of bands 7 through 11 (4.78), GA projection image (4.73), increased minimum retinal intensity map (4.59), and GA eccentricity (4.49). The predicted GA regions in the 3 tested scenarios resulted in a Dice index mean ± standard deviation of 0.81±0.12, 0.84±0.10, and 0.87±0.06, respectively, when compared with the observed ground truth. Considering only the regions without evidence of GA at baseline, predicted regions of future GA growth showed relatively high Dice indices of 0.72±0.18, 0.74±0.17, and 0.72±0.22, respectively. Predictions and actual values of GA growth rate and future GA involvement in the central fovea showed high correlations. Conclusions: Experimental results demonstrated the potential of our predictive model to predict future regions where GA is likely to grow and to identify the most discriminant early indicator (thickness loss of bands 11 through 14) of regions susceptible to GA growth.

    View details for DOI 10.1016/j.ophtha.2016.04.042

    View details for PubMedID 27262765
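
    The Dice index used above to score predicted against observed GA regions is simple to compute from two binary masks; the tiny masks below are illustrative only.

```python
# Hedged sketch: Dice index between a predicted and an observed binary GA mask.
import numpy as np

def dice(pred_mask, true_mask):
    pred_mask, true_mask = pred_mask.astype(bool), true_mask.astype(bool)
    intersection = np.logical_and(pred_mask, true_mask).sum()
    denom = pred_mask.sum() + true_mask.sum()
    return 2.0 * intersection / denom if denom else 1.0

pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
truth = np.array([[1, 0, 0],
                  [0, 1, 1]])
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```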

  • Case-control study of mammographic density and breast cancer risk using processed digital mammograms BREAST CANCER RESEARCH Habel, L. A., Lipson, J. A., Achacoso, N., Rothstein, J. H., Yaffe, M. J., Liang, R. Y., Acton, L., McGuire, V., Whittemore, A. S., Rubin, D. L., Sieh, W. 2016; 18

    Abstract

    Full-field digital mammography (FFDM) has largely replaced film-screen mammography in the US. Breast density assessed from film mammograms is strongly associated with breast cancer risk, but data are limited for processed FFDM images used for clinical care. We conducted a case-control study nested among non-Hispanic white female participants of the Research Program in Genes, Environment and Health of Kaiser Permanente Northern California who were aged 40 to 74 years and had screening mammograms acquired on Hologic FFDM machines. Cases (n = 297) were women with a first invasive breast cancer diagnosed after a screening FFDM. For each case, up to five controls (n = 1149) were selected, matched on age and year of FFDM and image batch number, and who were still under follow-up and without a history of breast cancer at the age of diagnosis of the matched case. Percent density (PD) and dense area (DA) were assessed by a radiological technologist using Cumulus. Conditional logistic regression was used to estimate odds ratios (ORs) for breast cancer associated with PD and DA, modeled continuously in standard deviation (SD) increments and categorically in quintiles, after adjusting for body mass index, parity, first-degree family history of breast cancer, breast area, and menopausal hormone use. Median intra-reader reproducibility was high with a Pearson's r of 0.956 (range 0.902 to 0.983) for replicate PD measurements across 23 image batches. The overall mean was 20.02 (SD, 14.61) for PD and 27.63 cm(2) (18.22 cm(2)) for DA. The adjusted ORs for breast cancer associated with each SD increment were 1.70 (95 % confidence interval, 1.41-2.04) for PD, and 1.54 (1.34-1.77) for DA. The adjusted ORs for each quintile were: 1.00 (ref.), 1.49 (0.91-2.45), 2.57 (1.54-4.30), 3.22 (1.91-5.43), 4.88 (2.78-8.55) for PD, and 1.00 (ref.), 1.43 (0.85-2.40), 2.53 (1.53-4.19), 2.85 (1.73-4.69), 3.48 (2.14-5.65) for DA. PD and DA measured using Cumulus on processed FFDM images are positively associated with breast cancer risk, with similar magnitudes of association as previously reported for film-screen mammograms. Processed digital mammograms acquired for routine clinical care in a general practice setting are suitable for breast density and cancer research.

    View details for DOI 10.1186/s13058-016-0715-3

    View details for Web of Science ID 000377273200001

    View details for PubMedID 27209070

    View details for PubMedCentralID PMC4875652
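
    As a worked illustration of the "odds ratio per standard-deviation increment" reporting used above, a fitted log-odds coefficient for a standardized predictor converts to an OR and 95% CI as follows; the coefficient and standard error are made-up values chosen only to show the arithmetic.

```python
# Hedged sketch: odds ratio per 1-SD increment from a logistic-regression coefficient.
import numpy as np

beta = 0.53   # hypothetical log-odds per 1-SD increase in percent density
se = 0.094    # hypothetical standard error of beta

or_per_sd = np.exp(beta)
ci_low, ci_high = np.exp(beta - 1.96 * se), np.exp(beta + 1.96 * se)
print(f"OR per SD = {or_per_sd:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```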

  • Analysis of Inner and Outer Retinal Thickness in Patients Using Hydroxychloroquine Prior to Development of Retinopathy JAMA OPHTHALMOLOGY de Sisternes, L., Hu, J., Rubin, D. L., Marmor, M. F. 2016; 134 (5): 511-519
  • Automated classification of brain tumor type in whole-slide digital pathology images using local representative tiles MEDICAL IMAGE ANALYSIS Barker, J., Hoogi, A., Depeursinge, A., Rubin, D. L. 2016; 30: 60-71

    Abstract

    Computerized analysis of digital pathology images offers the potential of improving clinical care (e.g. automated diagnosis) and catalyzing research (e.g. discovering disease subtypes). There are two key challenges thwarting computerized analysis of digital pathology images: first, whole slide pathology images are massive, making computerized analysis inefficient, and second, diverse tissue regions in whole slide images that are not directly relevant to the disease may mislead computerized diagnosis algorithms. We propose a method to overcome both of these challenges that utilizes a coarse-to-fine analysis of the localized characteristics in pathology images. An initial surveying stage analyzes the diversity of coarse regions in the whole slide image. This includes extraction of spatially localized features of shape, color and texture from tiled regions covering the slide. Dimensionality reduction of the features assesses the image diversity in the tiled regions and clustering creates representative groups. A second stage provides a detailed analysis of a single representative tile from each group. An Elastic Net classifier produces a diagnostic decision value for each representative tile. A weighted voting scheme aggregates the decision values from these tiles to obtain a diagnosis at the whole slide level. We evaluated our method by automatically classifying 302 brain cancer cases into two possible diagnoses (glioblastoma multiforme (N = 182) versus lower grade glioma (N = 120)) with an accuracy of 93.1% (p < 0.001). We also evaluated our method in the dataset provided for the 2014 MICCAI Pathology Classification Challenge, in which our method, trained and tested using 5-fold cross validation, produced a classification accuracy of 100% (p < 0.001). Our method showed high stability and robustness to parameter variation, with accuracy varying between 95.5% and 100% when evaluated for a wide range of parameters. Our approach may be useful to automatically differentiate between the two cancer subtypes.

    View details for DOI 10.1016/j.media.2015.12.002

    View details for Web of Science ID 000373546800005

    View details for PubMedID 26854941
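
    The coarse-to-fine scheme above (tile-level features classified with an elastic-net model, then aggregated to a slide-level call by weighted voting) might be sketched as follows; the synthetic features, class labels, and weights are placeholders, and the published pipeline's tile selection and feature extraction are not reproduced.

```python
# Hedged sketch: elastic-net tile classifier plus weighted voting at the slide level.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
tile_features_train = rng.normal(size=(200, 50))    # toy tile feature vectors
tile_labels_train = rng.integers(0, 2, size=200)    # 1 = GBM, 0 = lower-grade glioma

clf = LogisticRegression(penalty="elasticnet", solver="saga",
                         l1_ratio=0.5, C=1.0, max_iter=5000)
clf.fit(tile_features_train, tile_labels_train)

def classify_slide(representative_tiles, tile_weights):
    """representative_tiles: (k, n_features); tile_weights: e.g., cluster sizes."""
    decision_values = clf.decision_function(representative_tiles)
    slide_score = np.average(decision_values, weights=tile_weights)
    return "GBM" if slide_score > 0 else "lower-grade glioma"

print(classify_slide(rng.normal(size=(5, 50)), tile_weights=[30, 25, 20, 15, 10]))
```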

  • Computational Identification of Tumor Anatomic Location Associated with Survival in 2 Large Cohorts of Human Primary Glioblastomas AMERICAN JOURNAL OF NEURORADIOLOGY Liu, T. T., Achrol, A. S., MITCHELL, L. A., Du, W. A., Loya, J. J., Rodriguez, S. A., Feroze, A., Westbroek, E. M., Yeom, K. W., Stuart, J. M., Chang, S. D., Harsh, G. R., Rubin, D. L. 2016; 37 (4): 621-628

    Abstract

    Tumor location has been shown to be a significant prognostic factor in patients with glioblastoma. The purpose of this study was to characterize glioblastoma lesions by identifying MR imaging voxel-based tumor location features that are associated with tumor molecular profiles, patient characteristics, and clinical outcomes. Preoperative T1 anatomic MR images of 384 patients with glioblastomas were obtained from 2 independent cohorts (n = 253 from the Stanford University Medical Center for training and n = 131 from The Cancer Genome Atlas for validation). An automated computational image-analysis pipeline was developed to determine the anatomic locations of tumor in each patient. Voxel-based differences in tumor location between good (overall survival of >17 months) and poor (overall survival of <11 months) survival groups identified in the training cohort were used to classify patients in The Cancer Genome Atlas cohort into 2 brain-location groups, for which clinical features, messenger RNA expression, and copy number changes were compared to elucidate the biologic basis of tumors located in different brain regions. Tumors in the right occipitotemporal periventricular white matter were significantly associated with poor survival in both training and test cohorts (both, log-rank P < .05) and had larger tumor volume compared with tumors in other locations. Tumors in the right periatrial location were associated with hypoxia pathway enrichment and PDGFRA amplification, making them potential targets for subgroup-specific therapies. Voxel-based location in glioblastoma is associated with patient outcome and may have a potential role for guiding personalized treatment.

    View details for DOI 10.3174/ajnr.A4631

    View details for Web of Science ID 000373346900014

  • Toward rapid learning in cancer treatment selection: An analytical engine for practice-based clinical data. Journal of biomedical informatics Finlayson, S. G., Levy, M., Reddy, S., Rubin, D. L. 2016; 60: 104-113

    Abstract

    Wide-scale adoption of electronic medical records (EMRs) has created an unprecedented opportunity for the implementation of Rapid Learning Systems (RLSs) that leverage primary clinical data for real-time decision support. In cancer, where large variations among patient features leave gaps in traditional forms of medical evidence, the potential impact of an RLS is particularly promising. We developed the Melanoma Rapid Learning Utility (MRLU), a component of the RLS, providing an analytical engine and user interface that enables physicians to gain clinical insights by rapidly identifying and analyzing cohorts of patients similar to their own. A new approach for clinical decision support in Melanoma was developed and implemented, in which patient-centered cohorts are generated from practice-based evidence and used to power on-the-fly stratified survival analyses. A database to underlie the system was generated from clinical, pharmaceutical, and molecular data from 237 patients with metastatic melanoma from two academic medical centers. The system was assessed in two ways: (1) ability to rediscover known knowledge and (2) potential clinical utility and usability through a user study of 13 practicing oncologists. The MRLU enables physician-driven cohort selection and stratified survival analysis. The system successfully identified several known clinical trends in melanoma, including frequency of BRAF mutations, survival rate of patients with BRAF mutant tumors in response to BRAF inhibitor therapy, and sex-based trends in prevalence and survival. Surveyed physician users expressed great interest in using such on-the-fly evidence systems in practice (mean response from relevant survey questions 4.54/5.0), and generally found the MRLU in particular to be both useful (mean score 4.2/5.0) and useable (4.42/5.0). The MRLU is an RLS analytical engine and user interface for Melanoma treatment planning that presents design principles useful in building RLSs. Further research is necessary to evaluate when and how to best use this functionality within the EMR clinical workflow for guiding clinical decision making. The MRLU is an important component in building an RLS for data driven precision medicine in Melanoma treatment that could be generalized to other clinical disorders.

    View details for DOI 10.1016/j.jbi.2016.01.005

    View details for PubMedID 26836975

    View details for PubMedCentralID PMC4836997

  • A combinatorial radiographic phenotype may stratify patient survival and be associated with invasion and proliferation characteristics in glioblastoma JOURNAL OF NEUROSURGERY Rao, A., Rao, G., Gutman, D. A., Flanders, A. E., Hwang, S. N., Rubin, D. L., Colen, R. R., Zinn, P. O., Jain, R., Wintermark, M., Kirby, J. S., Jaffe, C. C., Freymann, J. 2016; 124 (4): 1008-1017

    Abstract

    Individual MRI characteristics (e.g., volume) are routinely used to identify survival-associated phenotypes for glioblastoma (GBM). This study investigated whether combinations of MRI features can also stratify survival. Furthermore, the molecular differences between phenotype-induced groups were investigated. Ninety-two patients with imaging, molecular, and survival data from the TCGA (The Cancer Genome Atlas)-GBM collection were included in this study. For combinatorial phenotype analysis, hierarchical clustering was used. Groups were defined based on a cutpoint obtained via tree-based partitioning. Furthermore, differential expression analysis of microRNA (miRNA) and mRNA expression data was performed using GenePattern Suite. Functional analysis of the resulting genes and miRNAs was performed using Ingenuity Pathway Analysis. Pathway analysis was performed using Gene Set Enrichment Analysis. Clustering analysis reveals that image-based grouping of the patients is driven by 3 features: volume-class, hemorrhage, and T1/FLAIR-envelope ratio. A combination of these features stratifies survival in a statistically significant manner. A cutpoint analysis yields a significant survival difference in the training set (median survival difference: 12 months, p = 0.004) as well as a validation set (p = 0.0001). Specifically, a low value for any of these 3 features indicates favorable survival characteristics. Differential expression analysis between cutpoint-induced groups suggests that several immune-associated (natural killer cell activity, T-cell lymphocyte differentiation) and metabolism-associated (mitochondrial activity, oxidative phosphorylation) pathways underlie the transition of this phenotype. Integrating data for mRNA and miRNA suggests the roles of several genes regulating proliferation and invasion. A 3-way combination of MRI phenotypes may be capable of stratifying survival in GBM. Examination of molecular processes associated with groups created by this combinatorial phenotype suggests the role of biological processes associated with growth and invasion characteristics.

    View details for DOI 10.3171/2015.4.JNS142732

    View details for Web of Science ID 000372669100015

    View details for PubMedCentralID PMC4990448

  • Automated geographic atrophy segmentation for SD-OCT images using region-based C-V model via local similarity factor BIOMEDICAL OPTICS EXPRESS Niu, S., de Sisternes, L., Chen, Q., Leng, T., Rubin, D. L. 2016; 7 (2): 581-600

    Abstract

    Age-related macular degeneration (AMD) is the leading cause of blindness among elderly individuals. Geographic atrophy (GA) is a phenotypic manifestation of the advanced stages of non-exudative AMD. Determination of GA extent in SD-OCT scans allows the quantification of GA-related features, such as radius or area, which could be of important value to monitor AMD progression and possibly identify regions of future GA involvement. The purpose of this work is to develop an automated algorithm to segment GA regions in SD-OCT images. An en face GA fundus image is generated by averaging the axial intensity within an automatically detected sub-volume of the three dimensional SD-OCT data, where an initial coarse GA region is estimated by an iterative threshold segmentation method and an intensity profile set, and subsequently refined by a region-based Chan-Vese model with a local similarity factor. Two image data sets, consisting on 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to quantitatively evaluate the automated segmentation algorithm. We compared results obtained by the proposed algorithm, manual segmentation by graders, a previously proposed method, and experimental commercial software. When compared to a manually determined gold standard, our algorithm presented a mean overlap ratio (OR) of 81.86% and 70% for the first and second data sets, respectively, while the previously proposed method OR was 72.60% and 65.88% for the first and second data sets, respectively, and the experimental commercial software OR was 62.40% for the second data set.

    View details for DOI 10.1364/BOE.7.000581

    View details for Web of Science ID 000369247000029

    View details for PubMedCentralID PMC4771473
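
    A rough sketch of the segmentation flow described above is given below using the standard region-based (morphological) Chan-Vese model in scikit-image; the published method's local similarity factor and iterative-threshold initialization are not reproduced, and the inputs are assumed to be an en face projection image plus a coarse initial mask.

```python
# Hedged sketch: standard morphological Chan-Vese segmentation of an en face GA image.
import numpy as np
from scipy.ndimage import median_filter
from skimage.segmentation import morphological_chan_vese

def segment_ga(en_face_image, coarse_mask, iterations=200):
    """en_face_image: 2-D float array; coarse_mask: boolean initial GA estimate."""
    smoothed = median_filter(en_face_image, size=3)   # suppress speckle noise
    level_set = morphological_chan_vese(smoothed, iterations,
                                        init_level_set=coarse_mask.astype(np.int8),
                                        smoothing=2)
    return level_set.astype(bool)
```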

  • Predicting non-small cell lung cancer prognosis by fully automated microscopic pathology image features. Nature communications Yu, K., Zhang, C., Berry, G. J., Altman, R. B., Ré, C., Rubin, D. L., Snyder, M. 2016; 7: 12474

    Abstract

    Lung cancer is the most prevalent cancer worldwide, and histopathological assessment is indispensable for its diagnosis. However, human evaluation of pathology slides cannot accurately predict patients' prognoses. In this study, we obtain 2,186 haematoxylin and eosin stained histopathology whole-slide images of lung adenocarcinoma and squamous cell carcinoma patients from The Cancer Genome Atlas (TCGA), and 294 additional images from Stanford Tissue Microarray (TMA) Database. We extract 9,879 quantitative image features and use regularized machine-learning methods to select the top features and to distinguish shorter-term survivors from longer-term survivors with stage I adenocarcinoma (P<0.003) or squamous cell carcinoma (P=0.023) in the TCGA data set. We validate the survival prediction framework with the TMA cohort (P<0.036 for both tumour types). Our results suggest that automatically derived image features can predict the prognosis of lung cancer patients and thereby contribute to precision oncology. Our methods are extensible to histopathology images of other organs.

    View details for DOI 10.1038/ncomms12474

    View details for PubMedID 27527408

  • A Rapid Segmentation-Insensitive 'Digital Biopsy' Method for Radiomic Feature Extraction: Method and Pilot Study Using CT Images of Non-Small Cell Lung Cancer Tomography Echegaray, S., Nair, V., Kadoch, M., Leung, A., Rubin, D., Gevaert, O., Napel, S., et al. 2016; 2 (4): 283–94

    Abstract

    Quantitative imaging approaches compute features within images' regions of interest. Segmentation is rarely completely automatic, requiring time-consuming editing by experts. We propose a new paradigm, called "digital biopsy," that allows for the collection of intensity- and texture-based features from these regions at least 1 order of magnitude faster than the current manual or semiautomated methods. A radiologist reviewed automated segmentations of lung nodules from 100 preoperative volume computed tomography scans of patients with non-small cell lung cancer, and manually adjusted the nodule boundaries in each section, to be used as a reference standard, requiring up to 45 minutes per nodule. We also asked a different expert to generate a digital biopsy for each patient using a paintbrush tool to paint a contiguous region of each tumor over multiple cross-sections, a procedure that required an average of <3 minutes per nodule. We simulated additional digital biopsies using morphological procedures. Finally, we compared the features extracted from these digital biopsies with our reference standard using intraclass correlation coefficient (ICC) to characterize robustness. Comparing the reference standard segmentations to our digital biopsies, we found that 84/94 features had an ICC >0.7; comparing erosions and dilations, using a sphere of 1.5-mm radius, of our digital biopsies to the reference standard segmentations resulted in 41/94 and 53/94 features, respectively, with ICCs >0.7. We conclude that many intensity- and texture-based features remain consistent between the reference standard and our method while substantially reducing the amount of operator time required.

    View details for DOI 10.18383/j.tom.2016.00163

    View details for PubMedCentralID PMC5466872
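
    The robustness check described above (re-extracting features after eroding and dilating the painted region with a sphere of 1.5 mm radius) can be approximated as below; the voxel spacing, mask, and isotropic structuring element are simplifying assumptions.

```python
# Hedged sketch: erode/dilate a painted "digital biopsy" mask by roughly 1.5 mm.
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation
from skimage.morphology import ball

spacing_mm = (0.7, 0.7, 1.25)                    # hypothetical CT voxel spacing
radius_vox = int(round(1.5 / min(spacing_mm)))   # ~1.5 mm expressed in voxels
structure = ball(radius_vox)

biopsy_mask = np.zeros((40, 40, 40), dtype=bool)
biopsy_mask[10:30, 10:30, 10:30] = True          # toy painted region

eroded = binary_erosion(biopsy_mask, structure=structure)
dilated = binary_dilation(biopsy_mask, structure=structure)
# Features extracted from `eroded` / `dilated` would then be compared against the
# reference-standard features with an intraclass correlation coefficient.
print(biopsy_mask.sum(), eroded.sum(), dilated.sum())
```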

  • Magnetic resonance perfusion image features uncover an angiogenic subgroup of glioblastoma patients with poor survival and better response to antiangiogenic treatment. Neuro-Oncology Liu, T. T., Achrol, A. S., Mitchell, L. A., Rodriguez, S. A., Feroze, A., Iv, M., Kim, C., Chaudhary, N., Gevaert, O., Stuart, J. M., Harsh, G. R., Chang, S. D., Rubin, D. L. 2016

    Abstract

    In previous clinical trials, antiangiogenic therapies such as bevacizumab did not show efficacy in patients with newly diagnosed glioblastoma (GBM). This may be a result of the heterogeneity of GBM, which has a variety of imaging-based phenotypes and gene expression patterns. In this study, we sought to identify a phenotypic subtype of GBM patients who have distinct tumor-image features and molecular activities and who may benefit from antiangiogenic therapies. Quantitative image features characterizing subregions of tumors and the whole tumor were extracted from preoperative and pretherapy perfusion magnetic resonance (MR) images of 117 GBM patients in 2 independent cohorts. Unsupervised consensus clustering was performed to identify robust clusters of GBM in each cohort. Cox survival and gene set enrichment analyses were conducted to characterize the clinical significance and molecular pathway activities of the clusters. The differential treatment efficacy of antiangiogenic therapy between the clusters was evaluated. A subgroup of patients with elevated perfusion features was identified and was significantly associated with poor patient survival after accounting for other clinical covariates (P values <.01; hazard ratios > 3) consistently found in both cohorts. Angiogenesis and hypoxia pathways were enriched in this subgroup of patients, suggesting the potential efficacy of antiangiogenic therapy. Patients of the angiogenic subgroups pooled from both cohorts, who had chemotherapy information available, had significantly longer survival when treated with antiangiogenic therapy (log-rank P=.022). Our findings suggest that an angiogenic subtype of GBM patients may benefit from antiangiogenic therapy with improved overall survival.

    View details for DOI 10.1093/neuonc/now270

  • A method for normalizing pathology images to improve feature extraction for quantitative pathology. Medical physics Tam, A., Barker, J., Rubin, D. 2016; 43 (1): 528

    Abstract

    With the advent of digital slide scanning technologies and the potential proliferation of large repositories of digital pathology images, many research studies can leverage these data for biomedical discovery and to develop clinical applications. However, quantitative analysis of digital pathology images is impeded by batch effects generated by varied staining protocols and staining conditions of pathological slides. To overcome this problem, this paper proposes a novel, fully automated stain normalization method to reduce batch effects and thus aid research in digital pathology applications. Their method, intensity centering and histogram equalization (ICHE), normalizes a diverse set of pathology images by first scaling the centroids of the intensity histograms to a common point and then applying a modified version of contrast-limited adaptive histogram equalization. Normalization was performed on two datasets of digitized hematoxylin and eosin (H&E) slides of different tissue slices from the same lung tumor, and one immunohistochemistry dataset of digitized slides created by restaining one of the H&E datasets. The ICHE method was evaluated based on image intensity values, quantitative features, and the effect on downstream applications, such as a computer aided diagnosis. For comparison, three methods from the literature were reimplemented and evaluated using the same criteria. The authors found that ICHE not only improved performance compared with un-normalized images, but in most cases showed improvement compared with previous methods for correcting batch effects in the literature. ICHE may be a useful preprocessing step in a digital pathology image processing pipeline.

    View details for DOI 10.1118/1.4939130

    View details for PubMedID 26745946
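
    A loose sketch of the two steps named above, shifting the intensity-histogram centroid to a common point and then applying contrast-limited adaptive histogram equalization, is shown below; it is not the published ICHE implementation, only an illustration of the idea on a single grayscale channel.

```python
# Hedged sketch: intensity centering followed by CLAHE on a grayscale slide image.
import numpy as np
from skimage.exposure import equalize_adapthist

def normalize_slide(gray, target_center=0.5):
    """gray: 2-D float image scaled to [0, 1]."""
    # Step 1: shift the histogram centroid (here the mean) to a common point.
    shifted = np.clip(gray + (target_center - gray.mean()), 0.0, 1.0)
    # Step 2: contrast-limited adaptive histogram equalization.
    return equalize_adapthist(shifted, clip_limit=0.01)
```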

  • Automated segmentation of optic disc in SD-OCT images and cup-to-disc ratios quantification by patch searching-based neural canal opening detection OPTICS EXPRESS Wu, M., Leng, T., de Sisternes, L., Rubin, D. L., Chen, Q. 2015; 23 (24): 31216-31229

    Abstract

    Glaucoma is one of the most common causes of blindness worldwide. Early detection of glaucoma is traditionally based on assessment of the cup-to-disc (C/D) ratio, an important indicator of structural changes to the optic nerve head. Here, we present an automated optic disc segmentation algorithm in 3-D spectral domain optical coherence tomography (SD-OCT) volumes to quantify this ratio. The proposed algorithm utilizes a two-stage strategy. First, it detects the neural canal opening (NCO) by finding the points with maximum curvature on the retinal pigment epithelium (RPE) boundary with a spatial correlation smoothness constraint on consecutive B-scans, and it approximately locates the coarse disc margin in the projection image using convex hull fitting. Then, a patch searching procedure using a probabilistic support vector machine (SVM) classifier finds the most likely patch with the NCO in its center in order to refine the segmentation result. Thus, a reference plane can be determined to calculate the C/D radio. Experimental results on 42 SD-OCT volumes from 17 glaucoma patients demonstrate that the proposed algorithm can achieve high segmentation accuracy and a low C/D ratio evaluation error. The unsigned border error for optic disc segmentation and the evaluation error for C/D ratio comparing with manual segmentation are 2.216 ± 1.406 pixels (0.067 ± 0.042 mm) and 0.045 ± 0.033, respectively.

    View details for DOI 10.1364/OE.23.031216

    View details for Web of Science ID 000366614100094

    View details for PubMedID 26698750

  • Multicenter imaging outcomes study of The Cancer Genome Atlas glioblastoma patient cohort: imaging predictors of overall and progression-free survival. Neuro-oncology Wangaryattawanich, P., Hatami, M., Wang, J., Thomas, G., Flanders, A., Kirby, J., Wintermark, M., Huang, E. S., Bakhtiari, A. S., Luedi, M. M., Hashmi, S. S., Rubin, D. L., Chen, J. Y., Hwang, S. N., Freymann, J., Holder, C. A., Zinn, P. O., Colen, R. R. 2015; 17 (11): 1525-1537

    Abstract

    Despite an aggressive therapeutic approach, the prognosis for most patients with glioblastoma (GBM) remains poor. The aim of this study was to determine the significance of preoperative MRI variables, both quantitative and qualitative, with regard to overall and progression-free survival in GBM. We retrospectively identified 94 untreated GBM patients from the Cancer Imaging Archive who had pretreatment MRI and corresponding patient outcomes and clinical information in The Cancer Genome Atlas. Qualitative imaging assessments were based on the Visually Accessible Rembrandt Images feature-set criteria. Volumetric parameters were obtained of the specific tumor components: contrast enhancement, necrosis, and edema/invasion. Cox regression was used to assess prognostic and survival significance of each image. Univariable Cox regression analysis demonstrated 10 imaging features and 2 clinical variables to be significantly associated with overall survival. Multivariable Cox regression analysis showed that tumor-enhancing volume (P = .03) and eloquent brain involvement (P < .001) were independent prognostic indicators of overall survival. In the multivariable Cox analysis of the volumetric features, the edema/invasion volume of more than 85 000 mm(3) and the proportion of enhancing tumor were significantly correlated with higher mortality (Ps = .004 and .003, respectively). Preoperative MRI parameters have a significant prognostic role in predicting survival in patients with GBM, thus making them useful for patient stratification and endpoint biomarkers in clinical trials.

    View details for DOI 10.1093/neuonc/nov117

    View details for PubMedID 26203066

  • Magnetic resonance image features identify glioblastoma phenotypic subtypes with distinct molecular pathway activities. Science translational medicine Itakura, H., Achrol, A. S., Mitchell, L. A., Loya, J. J., Liu, T., Westbroek, E. M., Feroze, A. H., Rodriguez, S., Echegaray, S., Azad, T. D., Yeom, K. W., Napel, S., Rubin, D. L., Chang, S. D., Harsh, G. R., Gevaert, O. 2015; 7 (303): 303ra138

    Abstract

    Glioblastoma (GBM) is the most common and highly lethal primary malignant brain tumor in adults. There is a dire need for easily accessible, noninvasive biomarkers that can delineate underlying molecular activities and predict response to therapy. To this end, we sought to identify subtypes of GBM, differentiated solely by quantitative magnetic resonance (MR) imaging features, that could be used for better management of GBM patients. Quantitative image features capturing the shape, texture, and edge sharpness of each lesion were extracted from MR images of 121 single-institution patients with de novo, solitary, unilateral GBM. Three distinct phenotypic "clusters" emerged in the development cohort using consensus clustering with 10,000 iterations on these image features. These three clusters--pre-multifocal, spherical, and rim-enhancing, names reflecting their image features--were validated in an independent cohort consisting of 144 multi-institution patients with similar tumor characteristics from The Cancer Genome Atlas (TCGA). Each cluster mapped to a unique set of molecular signaling pathways using pathway activity estimates derived from the analysis of TCGA tumor copy number and gene expression data with the PARADIGM (Pathway Recognition Algorithm Using Data Integration on Genomic Models) algorithm. Distinct pathways, such as c-Kit and FOXA, were enriched in each cluster, indicating differential molecular activities as determined by the image features. Each cluster also demonstrated differential probabilities of survival, indicating prognostic importance. Our imaging method offers a noninvasive approach to stratify GBM patients and also provides unique sets of molecular signatures to inform targeted therapy and personalized treatment of GBM.

    View details for DOI 10.1126/scitranslmed.aaa7582

    View details for PubMedID 26333934

    View details for PubMedCentralID PMC4666025

  • Restricted Summed-Area Projection for Geographic Atrophy Visualization in SD-OCT Images TRANSLATIONAL VISION SCIENCE & TECHNOLOGY Chen, Q., Niu, S., Shen, H., Leng, T., de Sisternes, L., Rubin, D. L. 2015; 4 (5)

    Abstract

    To enhance the rapid assessment of geographic atrophy (GA) across the macula in a single projection image generated from three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) scans by introducing a novel restricted summed-area projection (RSAP) technique. We describe a novel en face GA visualization technique, the RSAP, by restricting the axial projection of SD-OCT images to the regions beneath the Bruch's membrane (BM) boundary and also considering the choroidal vasculature's influence on GA visualization. The technique analyzes the intensity distribution beneath the retinal pigment epithelium (RPE) layer to fit a cross-sectional surface in the sub-RPE region. The area is taken as the primary GA projection. A median filter is then adopted to smooth the generated GA projection image. The RSAP technique was evaluated in 99 3D SD-OCT data sets from 27 eyes of 21 patients presenting with advanced nonexudative age-related macular degeneration and GA. We used the mean difference between GA and background regions and GA separability metric to measure GA contrast and distinction in the generated images, respectively. We compared our results with two existing GA projection techniques, the summed-voxel projection (SVP) and Sub-RPE Slab techniques. Comparative results demonstrate that the RSAP technique is more effective in displaying GA than the SVP and Sub-RPE Slab. The average of the mean difference between GA and background regions and the GA separability based on SVP, Sub-RPE Slab, and RSAP were 0.129/0.880, 0.238/0.919, and 0.276/0.938, respectively. The RSAP technique was more effective for GA visualization than the conventional SVP and Sub-RPE Slab techniques. Our technique decreases choroidal vasculature influence on GA projection images by analyzing the intensity distribution characteristics in sub-RPE regions. The generated GA projection image with the RSAP technique has improved contrast and distinction. Our method for automated generation of GA projection images from SD-OCT images may improve the visualization of the macular abnormalities and the management of GA.

    View details for DOI 10.1167/tvst.4.5.2

    View details for Web of Science ID 000388661700002

    View details for PubMedID 26347016

    View details for PubMedCentralID PMC4559218
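
    The restricted projection described above can be approximated by averaging intensity only within a band of voxels beneath the segmented Bruch's membrane and median-filtering the result; the band depth, array layout, and BM segmentation below are assumptions, and the published surface-fitting step that accounts for choroidal vessels is omitted.

```python
# Hedged sketch: restricted axial projection beneath Bruch's membrane (RSAP-like).
import numpy as np
from scipy.ndimage import median_filter

def restricted_projection(volume, bm_depth, band=40):
    """volume: (n_bscans, depth, width) SD-OCT cube;
    bm_depth: (n_bscans, width) axial index of Bruch's membrane per A-scan."""
    n_bscans, depth, width = volume.shape
    en_face = np.zeros((n_bscans, width), dtype=float)
    for b in range(n_bscans):
        for x in range(width):
            z0 = min(int(bm_depth[b, x]), depth - 1)
            en_face[b, x] = volume[b, z0:z0 + band, x].mean()
    return median_filter(en_face, size=3)   # smooth the en face GA projection
```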

  • Comparing image search behaviour in the ARRS GoldMiner search engine and a clinical PACS/RIS JOURNAL OF BIOMEDICAL INFORMATICS De-Arteaga, M., Eggel, I., Do, B., Rubin, D., Kahn, C. E., Mueller, H. 2015; 56: 57-64

    Abstract

    Information search has changed the way we manage knowledge and the ubiquity of information access has made search a frequent activity, whether via Internet search engines or increasingly via mobile devices. Medical information search is in this respect no different and much research has been devoted to analyzing the way in which physicians aim to access information. Medical image search is a much smaller domain but has gained much attention as it has different characteristics than search for text documents. While web search log files have been analysed many times to better understand user behaviour, the log files of hospital internal systems for search in a PACS/RIS (Picture Archival and Communication System, Radiology Information System) have rarely been analysed. Such a comparison between a hospital PACS/RIS search and a web system for searching images of the biomedical literature is the goal of this paper. Objectives are to identify similarities and differences in search behaviour of the two systems, which could then be used to optimize existing systems and build new search engines. Log files of the ARRS GoldMiner medical image search engine (freely accessible on the Internet) containing 222,005 queries, and log files of Stanford's internal PACS/RIS search called radTF containing 18,068 queries were analysed. Each query was preprocessed and all query terms were mapped to the RadLex (Radiology Lexicon) terminology, a comprehensive lexicon of radiology terms created and maintained by the Radiological Society of North America, so the semantic content in the queries and the links between terms could be analysed, and synonyms for the same concept could be detected. RadLex was mainly created for the use in radiology reports, to aid structured reporting and the preparation of educational material (Lanlotz, 2006) [1]. In standard medical vocabularies such as MeSH (Medical Subject Headings) and UMLS (Unified Medical Language System) specific terms of radiology are often underrepresented, therefore RadLex was considered to be the best option for this task. The results show a surprising similarity between the usage behaviour in the two systems, but several subtle differences can also be noted. The average number of terms per query is 2.21 for GoldMiner and 2.07 for radTF, the used axes of RadLex (anatomy, pathology, findings, …) have almost the same distribution with clinical findings being the most frequent and the anatomical entity the second; also, combinations of RadLex axes are extremely similar between the two systems. Differences include a longer length of the sessions in radTF than in GoldMiner (3.4 and 1.9 queries per session on average). Several frequent search terms overlap but some strong differences exist in the details. In radTF the term "normal" is frequent, whereas in GoldMiner it is not. This makes intuitive sense, as in the literature normal cases are rarely described whereas in clinical work the comparison with normal cases is often a first step. The general similarity in many points is likely due to the fact that users of the two systems are influenced by their daily behaviour in using standard web search engines and follow this behaviour in their professional search. This means that many results and insights gained from standard web search can likely be transferred to more specialized search systems. Still, specialized log files can be used to find out more on reformulations and detailed strategies of users to find the right content.

    View details for DOI 10.1016/j.jbi.2015.04.013

    View details for Web of Science ID 000359752100005

    View details for PubMedID 26002820

  • 3D Riesz-wavelet based Covariance descriptors for texture classification of lung nodule tissue in CT. Conference proceedings : ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual Conference Cirujeda, P., Muller, H., Rubin, D., Aguilera, T. A., Loo, B. W., Diehn, M., Binefa, X., Depeursinge, A. 2015; 2015: 7909-7912

    Abstract

    In this paper we present a novel technique for characterizing and classifying 3D textured volumes belonging to different lung tissue types in 3D CT images. We build a volume-based 3D descriptor, robust to changes of size, rigid spatial transformations and texture variability, thanks to the integration of Riesz-wavelet features within a Covariance-based descriptor formulation. 3D Riesz features characterize the morphology of tissue density due to their response to changes in intensity in CT images. These features are encoded in a Covariance-based descriptor formulation: this provides a compact and flexible representation thanks to the use of feature variations rather than dense features themselves and adds robustness to spatial changes. Furthermore, the particular symmetric definite positive matrix form of these descriptors causes them to lay in a Riemannian manifold. Thus, descriptors can be compared with analytical measures, and accurate techniques from machine learning and clustering can be adapted to their spatial domain. Additionally we present a classification model following a "Bag of Covariance Descriptors" paradigm in order to distinguish three different nodule tissue types in CT: solid, ground-glass opacity, and healthy lung. The method is evaluated on top of an acquired dataset of 95 patients with manually delineated ground truth by radiation oncology specialists in 3D, and quantitative sensitivity and specificity values are presented.

    View details for DOI 10.1109/EMBC.2015.7320226

    View details for PubMedID 26738126

  • Radiogenomics of clear cell renal cell carcinoma: preliminary findings of The Cancer Genome Atlas-Renal Cell Carcinoma (TCGA-RCC) Imaging Research Group ABDOMINAL IMAGING Shinagare, A. B., Vikram, R., Jaffe, C., Akin, O., Kirby, J., Huang, E., Freymann, J., Sainani, N. I., Sadow, C. A., Bathala, T. K., Rubin, D. L., Oto, A., Heller, M. T., Surabhi, V. R., Katabathina, V., Silverman, S. G. 2015; 40 (6): 1684-1692

    Abstract

    To investigate associations between imaging features and mutational status of clear cell renal cell carcinoma (ccRCC). This multi-institutional, multi-reader study included 103 patients (77 men; median age 59 years, range 34-79) with ccRCC examined with CT in 81 patients, MRI in 19, and both CT and MRI in three; images were downloaded from The Cancer Imaging Archive, an NCI-funded project for genome-mapping and analyses. Imaging features [size (mm), margin (well-defined or ill-defined), composition (solid or cystic), necrosis (for solid tumors: 0%, 1%-33%, 34%-66% or >66%), growth pattern (endophytic, <50% exophytic, or ≥50% exophytic), and calcification (present, absent, or indeterminate)] were reviewed independently by three readers blinded to mutational data. The association of imaging features with mutational status (VHL, BAP1, PBRM1, SETD2, KDM5C, and MUC4) was assessed. Median tumor size was 49 mm (range 14-162 mm), 73 (71%) tumors had well-defined margins, 98 (95%) tumors were solid, 95 (92%) showed presence of necrosis, 46 (45%) had ≥50% exophytic component, and 18 (19.8%) had calcification. VHL (n = 52) and PBRM1 (n = 24) were the most common mutations. BAP1 mutation was associated with ill-defined margin and presence of calcification (p = 0.02 and 0.002, respectively, Pearson's χ² test); MUC4 mutation was associated with an exophytic growth pattern (p = 0.002, Mann-Whitney U test). BAP1 mutation was associated with ill-defined tumor margins and presence of calcification; MUC4 mutation was associated with exophytic growth. Given the known prognostic implications of BAP1 and MUC4 mutations, these results support using radiogenomics to aid in prognostication and management.

    View details for DOI 10.1007/s00261-015-0386-z

    View details for Web of Science ID 000359435300036

    View details for PubMedID 25753955

    View details for PubMedCentralID PMC4534327
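
    The association tests named above (Pearson's χ² for a categorical imaging feature versus mutation status, and Mann-Whitney U for an ordinal feature) are illustrated below with made-up counts, not the study's data.

```python
# Hedged sketch: chi-square and Mann-Whitney U tests for imaging-genomic associations.
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu

# Rows: mutation present / absent; columns: calcification present / absent (toy counts).
table = np.array([[7, 3],
                  [15, 60]])
chi2, p_calcification, dof, expected = chi2_contingency(table)

# Exophytic growth scored 0-2 in mutant vs. wild-type tumors (toy values).
growth_mutant = [0, 0, 1, 0]
growth_wildtype = [2, 1, 2, 2, 1]
u_stat, p_growth = mannwhitneyu(growth_mutant, growth_wildtype)

print(round(p_calcification, 4), round(p_growth, 4))
```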

  • Addition of MR imaging features and genetic biomarkers strengthens glioblastoma survival prediction in TCGA patients. Journal of neuroradiology. Journal de neuroradiologie Nicolasjilwan, M., Hu, Y., Yan, C., Meerzaman, D., Holder, C. A., Gutman, D., Jain, R., Colen, R., Rubin, D. L., Zinn, P. O., Hwang, S. N., Raghavan, P., Hammoud, D. A., Scarpace, L. M., Mikkelsen, T., Chen, J., Gevaert, O., Buetow, K., Freymann, J., Kirby, J., Flanders, A. E., Wintermark, M. 2015; 42 (4): 212-221

    Abstract

    The purpose of our study was to assess whether a model combining clinical factors, MR imaging features, and genomics would better predict overall survival of patients with glioblastoma (GBM) than either individual data type. The study was conducted leveraging The Cancer Genome Atlas (TCGA) effort supported by the National Institutes of Health. Six neuroradiologists reviewed MRI images from The Cancer Imaging Archive (http://cancerimagingarchive.net) of 102 GBM patients using the VASARI scoring system. The patients' clinical and genetic data were obtained from the TCGA website (http://www.cancergenome.nih.gov/). Patient outcome was measured in terms of overall survival time. The association between different categories of biomarkers and survival was evaluated using Cox analysis. The features that were significantly associated with survival were: (1) clinical factors: chemotherapy; (2) imaging: proportion of tumor contrast enhancement on MRI; and (3) genomics: HRAS copy number variation. The combination of these three biomarkers resulted in an incremental increase in the strength of prediction of survival, with the model that included clinical, imaging, and genetic variables having the highest predictive accuracy (area under the curve 0.679±0.068, Akaike's information criterion 566.7, P<0.001). A combination of clinical factors, imaging features, and HRAS copy number variation best predicts survival of patients with GBM.

    View details for DOI 10.1016/j.neurad.2014.02.006

    View details for PubMedID 24997477

  • Visual Prognosis of Eyes Recovering From Macular Hole Surgery Through Automated Quantitative Analysis of Spectral-Domain Optical Coherence Tomography (SD-OCT) Scans. Investigative ophthalmology & visual science de Sisternes, L., Hu, J., Rubin, D. L., Leng, T. 2015; 56 (8): 4631-4643

    Abstract

    To determine the value of topographic spectral-domain optical coherence tomography (SD-OCT) imaging features assessed after macular hole repair surgery in predicting visual acuity (VA) outcomes. An automated algorithm was developed to topographically outline and quantify area, extent, and location of defects in the ellipsoid zone (EZ) band and inner retina layers in SD-OCT scans. We analyzed the correlation of these values with VA in longitudinal observations from 35 patients who underwent successful macular hole surgery, in their first observation after surgery (within 2 months), and in a single observation within 6 to 12 months after surgery. Image features assessed at the first visit after surgery were also investigated as possible predictors of future VA improvement. Significant correlation with longitudinal VA was found for the extent, circularity, and ratio of defects in EZ band at the fovea and parafoveal regions. The ratio of defects in EZ band at the fovea, temporal-inner, and inferior-inner macula regions showed significant strong correlation with VA within 6 to 12 months post surgery. Patients with worse vision outcome at such time also had a significantly higher rate of inner retinal defects in the superior-outer region in their first postsurgery observation. A lowering extent of EZ band defects in the foveal and parafoveal regions is a good indicator of postsurgery VA recovery. Attention should also be given to postsurgical alterations in the inner retina, as patients with more extensive atrophic changes appear to have slower or worse VA recovery despite closure of the macular hole.

    View details for DOI 10.1167/iovs.14-16344

    View details for PubMedID 26200503

    View details for PubMedCentralID PMC4515949

  • Application of Improved Homogeneity Similarity-Based Denoising in Optical Coherence Tomography Retinal Images JOURNAL OF DIGITAL IMAGING Chen, Q., de Sisternes, L., Leng, T., Rubin, D. L. 2015; 28 (3): 346-361

    Abstract

    Image denoising is a fundamental preprocessing step of image processing in many applications developed for optical coherence tomography (OCT) retinal imaging, a high-resolution modality for evaluating disease in the eye. To make a homogeneity similarity-based image denoising method more suitable for OCT image noise removal, we improve it by considering the noise and retinal characteristics of OCT images in two respects: (1) median filtering preprocessing is used to make the noise distribution of OCT images more suitable for patch-based methods; (2) a rectangle neighborhood and region restriction are adopted to accommodate the horizontal stretching of retinal structures when observed in OCT images. As a performance measurement of the proposed technique, we tested the method on real and synthetic noisy retinal OCT images and compared the results with other well-known spatial denoising methods, including bilateral filtering, five partial differential equation (PDE)-based methods, and three patch-based methods. Our results indicate that our proposed method seems suitable for retinal OCT imaging denoising, and that, in general, patch-based methods can achieve better visual denoising results than point-based methods in this type of imaging, because the image patch can better represent the structured information in the images than a single pixel. However, the time complexity of the patch-based methods is substantially higher than that of the others.

    View details for DOI 10.1007/s10278-014-9742-8

    View details for Web of Science ID 000354950200014

    View details for PubMedID 25404105

  • Topographic OCT Segmentation of Inner and Outer Retina in Progressive Hydroxychloroquine Retinopathy Marmor, M. F., de Sisternes, L., Hu, J., Rubin, D. L. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2015
  • Automated classification of usual interstitial pneumonia using regional volumetric texture analysis in high-resolution computed tomography. Investigative radiology Depeursinge, A., Chin, A. S., Leung, A. N., Terrone, D., Bristow, M., Rosen, G., Rubin, D. L. 2015; 50 (4): 261-267

    Abstract

    We propose a novel computational approach for the automated classification of classic versus atypical usual interstitial pneumonia (UIP). Thirty-three patients with UIP were enrolled in this study. They were classified as classic versus atypical UIP by a consensus of 2 thoracic radiologists with more than 15 years of experience using the American Thoracic Society evidence-based guidelines for computed tomography diagnosis of UIP. Two cardiothoracic fellows with 1 year of subspecialty training provided independent readings. The system is based on regional characterization of the morphological tissue properties of lung using volumetric texture analysis of multiple-detector computed tomography images. A simple digital atlas with 36 lung subregions is used to locate texture properties, from which the responses of multidirectional Riesz wavelets are obtained. Machine learning is used to aggregate and to map the regional texture attributes to a simple score that can be used to stratify patients with UIP into classic and atypical subtypes. We compared the predictions on the basis of regional volumetric texture analysis with the ground truth established by expert consensus. The area under the receiver operating characteristic curve of the proposed score was estimated to be 0.81 using a leave-one-patient-out cross-validation, with high specificity for classic UIP. The performance of our automated method was found to be similar to that of the 2 fellows and to the agreement between experienced chest radiologists reported in the literature. However, the errors of our method and the fellows occurred on different cases, which suggests that combining human and computerized evaluations may be synergistic. Our results are encouraging and suggest that an automated system may be useful in routine clinical practice as a diagnostic aid for identifying patients with complex lung disease such as classic UIP, obviating the need for invasive surgical lung biopsy and its associated risks.

    View details for DOI 10.1097/RLI.0000000000000127

    View details for PubMedID 25551822

    View details for PubMedCentralID PMC4355184
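
    The evaluation in the preceding abstract rests on a leave-one-patient-out cross-validation summarized by the area under the ROC curve. The sketch below shows only that evaluation loop with scikit-learn, assuming the regional Riesz-wavelet texture features have already been computed into a feature matrix; the classifier settings are illustrative, not the paper's.

        # Sketch of the evaluation scheme only (leave-one-patient-out CV + ROC AUC);
        # the Riesz-wavelet texture features are assumed to be precomputed.
        import numpy as np
        from sklearn.model_selection import LeaveOneGroupOut
        from sklearn.svm import SVC
        from sklearn.metrics import roc_auc_score

        def loo_patient_auc(X, y, patient_ids):
            """X: (n_samples, n_features) regional texture features (hypothetical),
            y: 0/1 labels (atypical vs classic UIP), patient_ids: grouping variable."""
            scores = np.zeros(len(y), dtype=float)
            for train, test in LeaveOneGroupOut().split(X, y, groups=patient_ids):
                clf = SVC(kernel="linear", probability=True).fit(X[train], y[train])
                scores[test] = clf.predict_proba(X[test])[:, 1]
            return roc_auc_score(y, scores)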

  • Content-based image retrieval in radiology: analysis of variability in human perception of similarity. Journal of medical imaging (Bellingham, Wash.) Faruque, J., Beaulieu, C. F., Rosenberg, J., Rubin, D. L., Yao, D., Napel, S. 2015; 2 (2): 025501

    Abstract

    We aim to develop a better understanding of perception of similarity in focal computed tomography (CT) liver images to determine the feasibility of techniques for developing reference sets for training and validating content-based image retrieval systems. In an observer study, four radiologists and six nonradiologists assessed overall similarity and similarity in 5 image features in 136 pairs of focal CT liver lesions. We computed intra- and inter-reader agreements in these similarity ratings and viewed the distributions of the ratings. The readers' ratings of overall similarity and similarity in each feature primarily appeared to be bimodally distributed. Median Kappa scores for intra-reader agreement ranged from 0.57 to 0.86 in the five features and from 0.72 to 0.82 for overall similarity. Median Kappa scores for inter-reader agreement ranged from 0.24 to 0.58 in the five features and were 0.39 for overall similarity. There was no significant difference in agreement for radiologists and nonradiologists. Our results show that developing perceptual similarity reference standards is a complex task. Moderate to high inter-reader variability precludes ease of dividing up the workload of rating perceptual similarity among many readers, while low intra-reader variability may make it possible to acquire large volumes of data by asking readers to view image pairs over many sessions.

    View details for DOI 10.1117/1.JMI.2.2.025501

    View details for PubMedID 26158112

    View details for PubMedCentralID PMC4478987
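
    Intra- and inter-reader agreement in the study above is summarized with Cohen's kappa. A small sketch of pairwise inter-reader kappa is given below; binarizing the 9-point ratings at a threshold of 5 is an assumption motivated by the bimodal rating distributions the abstract reports, not the authors' exact protocol.

        # Sketch: inter-reader agreement on similarity ratings via Cohen's kappa.
        # The binarization threshold (>= 5 means "similar") is an assumption.
        import itertools
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        def pairwise_kappa(ratings):
            """ratings: dict reader_name -> array of similarity scores (1-9)."""
            kappas = {}
            for (r1, s1), (r2, s2) in itertools.combinations(ratings.items(), 2):
                kappas[(r1, r2)] = cohen_kappa_score(np.asarray(s1) >= 5,
                                                     np.asarray(s2) >= 5)
            return kappas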

  • Automatic abstraction of imaging observations with their characteristics from mammography reports. Journal of the American Medical Informatics Association Bozkurt, S., Lipson, J. A., Senol, U., Rubin, D. L., Bulu, H. 2015; 22 (e1): e81-92

    Abstract

    Radiology reports are usually narrative, unstructured text, a format which hinders the ability to input report contents into decision support systems. In addition, reports often describe multiple lesions, and it is challenging to automatically extract information on each lesion and its relationships to characteristics, anatomic locations, and other information that describes it. The goal of our work is to develop natural language processing (NLP) methods to recognize each lesion in free-text mammography reports and to extract its corresponding relationships, producing a complete information frame for each lesion. We built an NLP information extraction pipeline in the General Architecture for Text Engineering (GATE) NLP toolkit. Sequential processing modules are executed, producing an output information frame required for a mammography decision support system. Each lesion described in the report is identified by linking it with its anatomic location in the breast. In order to evaluate our system, we selected 300 mammography reports from a hospital report database. The gold standard contained 797 lesions, and our system detected 815 lesions (780 true positives, 35 false positives, and 17 false negatives). The precision of detecting all the imaging observations with their modifiers was 94.9, recall was 90.9, and the F measure was 92.8. Our NLP system extracts each imaging observation and its characteristics from mammography reports. Although our application focuses on the domain of mammography, we believe our approach can generalize to other domains and may narrow the gap between unstructured clinical report text and structured information extraction needed for data mining and decision support.

    View details for DOI 10.1136/amiajnl-2014-003009

    View details for PubMedID 25352567
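
    The system described above is a multi-module GATE pipeline, which is not reproduced here. Purely to illustrate what a per-lesion "information frame" (imaging observation, its modifiers, and its anatomic location) can look like, the following toy sketch uses a single regular expression over one hypothetical report sentence; real mammography reports require far more robust NLP.

        # Toy illustration only -- the published system is a GATE pipeline, not a regex.
        # All terms and the sentence pattern below are hypothetical.
        import re

        SENTENCE = ("There is an irregular spiculated mass in the upper outer "
                    "quadrant of the left breast.")

        PATTERN = re.compile(
            r"\ban?\s+(?P<modifiers>(?:\w+\s+)*?)(?P<observation>mass|calcification|asymmetry)"
            r".*?\b(?P<location>(?:upper|lower)\s+(?:outer|inner)\s+quadrant"
            r"\s+of\s+the\s+(?:left|right)\s+breast)",
            re.IGNORECASE)

        match = PATTERN.search(SENTENCE)
        if match:
            frame = {
                "observation": match.group("observation"),
                "modifiers": match.group("modifiers").split(),
                "location": match.group("location"),
            }
            print(frame)
            # {'observation': 'mass', 'modifiers': ['irregular', 'spiculated'],
            #  'location': 'upper outer quadrant of the left breast'}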

  • Predicting adenocarcinoma recurrence using computational texture models of nodule components in lung CT MEDICAL PHYSICS Depeursinge, A., Yanagawa, M., Leung, A. N., Rubin, D. L. 2015; 42 (4): 2054-2063

    Abstract

    To investigate the importance of presurgical computed tomography (CT) intensity and texture information from ground-glass opacities (GGO) and solid nodule components for the prediction of adenocarcinoma recurrence. For this study, 101 patients with surgically resected stage I adenocarcinoma were selected. During the follow-up period, 17 patients had disease recurrence with six associated cancer-related deaths. GGO and solid tumor components were delineated on presurgical CT scans by a radiologist. Computational texture models of GGO and solid regions were built using linear combinations of steerable Riesz wavelets learned with linear support vector machines (SVMs). Unlike other traditional texture attributes, the proposed texture models are designed to encode local image scales and directions that are specific to GGO and solid tissue. The responses of the locally steered models were used as texture attributes and compared to the responses of unaligned Riesz wavelets. The texture attributes were combined with CT intensities to predict tumor recurrence and patient hazard according to disease-free survival (DFS) time. Two families of predictive models were compared: LASSO and SVMs, and their survival counterparts: Cox-LASSO and survival SVMs. The best-performing predictive model of patient hazard was associated with a concordance index (C-index) of 0.81 ± 0.02 and was based on the combination of the steered models and CT intensities with survival SVMs. The same feature group and the LASSO model yielded the highest area under the receiver operating characteristic curve (AUC) of 0.8 ± 0.01 for predicting tumor recurrence, although no statistically significant difference was found when compared to using intensity features solely. For all models, the performance was found to be significantly higher when image attributes were based on the solid components solely versus using the entire tumors (p < 3.08 × 10⁻⁵). This study constitutes a novel perspective on how to interpret imaging information from CT examinations by suggesting that most of the information related to adenocarcinoma aggressiveness is related to the intensity and morphological properties of solid components of the tumor. The prediction of adenocarcinoma relapse was found to have low specificity but very high sensitivity. Our results could be useful in clinical practice to identify patients for which no recurrence is expected with a very high confidence using a presurgical CT scan only. It also provided an accurate estimation of the risk of recurrence after a given duration t from surgical resection (i.e., C-index = 0.81 ± 0.02).

    View details for DOI 10.1118/1.4916088

    View details for Web of Science ID 000352273200059

    View details for PubMedID 25832095

    View details for PubMedCentralID PMC4385100
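
    The hazard models in the preceding abstract are scored with the concordance index (C-index). Below is a small pure-NumPy sketch of the usual Harrell's C-index definition for right-censored data (the fraction of comparable pairs whose predicted risks are correctly ordered); it illustrates the metric only, not the survival-SVM or Cox-LASSO pipeline used in the study.

        # Sketch: Harrell's concordance index for right-censored survival data.
        import numpy as np

        def concordance_index(time, risk, event):
            """time: follow-up times; risk: predicted risk scores (higher = worse);
            event: 1 if recurrence observed, 0 if censored."""
            time, risk, event = map(np.asarray, (time, risk, event))
            concordant, comparable = 0.0, 0
            n = len(time)
            for i in range(n):
                if not event[i]:
                    continue                  # pairs are anchored on observed events
                for j in range(n):
                    if time[j] > time[i]:     # j outlived i -> comparable pair
                        comparable += 1
                        if risk[i] > risk[j]:
                            concordant += 1.0
                        elif risk[i] == risk[j]:
                            concordant += 0.5  # ties in predicted risk count as half
            return concordant / comparable

        # e.g. concordance_index([5, 8, 12, 20], [0.9, 0.4, 0.6, 0.1], [1, 1, 0, 0]) -> 0.8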

  • Ontology-based Image Navigation: Exploring 3.0-T MR Neurography of the Brachial Plexus Using AIM and RadLex RADIOGRAPHICS Wang, K. C., Salunkhe, A. R., Morrison, J. J., Lee, P. P., Mejino, J. L., Detwiler, L. T., Brinkley, J. F., Siegel, E. L., Rubin, D. L., Carrino, J. A. 2015; 35 (1): 142-151

    Abstract

    Disorders of the peripheral nervous system have traditionally been evaluated using clinical history, physical examination, and electrodiagnostic testing. In selected cases, imaging modalities such as magnetic resonance (MR) neurography may help further localize or characterize abnormalities associated with peripheral neuropathies, and the clinical importance of such techniques is increasing. However, MR image interpretation with respect to peripheral nerve anatomy and disease often presents a diagnostic challenge because the relevant knowledge base remains relatively specialized. Using the radiology knowledge resource RadLex®, a series of RadLex queries, the Annotation and Image Markup standard for image annotation, and a Web services-based software architecture, the authors developed an application that allows ontology-assisted image navigation. The application provides an image browsing interface, allowing users to visually inspect the imaging appearance of anatomic structures. By interacting directly with the images, users can access additional structure-related information that is derived from RadLex (eg, muscle innervation, muscle attachment sites). These data also serve as conceptual links to navigate from one portion of the imaging atlas to another. With 3.0-T MR neurography of the brachial plexus as the initial area of interest, the resulting application provides support to radiologists in the image interpretation process by allowing efficient exploration of the MR imaging appearance of relevant nerve segments, muscles, bone structures, vascular landmarks, anatomic spaces, and entrapment sites, and the investigation of neuromuscular relationships.

    View details for DOI 10.1148/rg.351130072

    View details for PubMedID 25590394

  • Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Ertosun, M. G., Rubin, D. L. 2015; 2015: 1899-1908

    Abstract

    Brain glioma is the most common primary malignant brain tumor in adults, with different pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. Survival and treatment options are highly dependent on the glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic quality statistics, such as precision, sensitivity and specificity, of the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional neural networks were trained for each module and sub-task, achieving more than 90% classification accuracy on the validation data set, 96% accuracy for the task of GBM vs. LGG classification, and 71% accuracy for further identifying the grade of LGG as Grade II or Grade III on an independent data set from new patients in the multi-institutional repository.

    View details for PubMedID 26958289

  • The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation Model JOURNAL OF DIGITAL IMAGING Mongkolwat, P., Kleper, V., Talbot, S., Rubin, D. 2014; 27 (6): 692-701

    Abstract

    Knowledge contained within in vivo imaging annotated by human experts or computer programs is typically stored as unstructured text and separated from other associated information. The National Cancer Informatics Program (NCIP) Annotation and Image Markup (AIM) Foundation information model is an evolution of the National Institutes of Health's (NIH) National Cancer Institute's (NCI) Cancer Bioinformatics Grid (caBIG®) AIM model. The model applies to various image types created by various techniques and disciplines. It has evolved in response to the feedback and changing demands from the imaging community at NCI. The foundation model serves as a base for other imaging disciplines that want to extend the type of information the model collects. The model captures physical entities and their characteristics, imaging observation entities and their characteristics, markups (two- and three-dimensional), AIM statements, calculations, image source, inferences, annotation role, task context or workflow, audit trail, AIM creator details, equipment used to create AIM instances, subject demographics, and adjudication observations. An AIM instance can be stored as a Digital Imaging and Communications in Medicine (DICOM) structured reporting (SR) object or Extensible Markup Language (XML) document for further processing and analysis. An AIM instance consists of one or more annotations and associated markups of a single finding along with other ancillary information in the AIM model. An annotation describes information about the meaning of pixel data in an image. A markup is a graphical drawing placed on the image that depicts a region of interest. This paper describes fundamental AIM concepts and how to use and extend AIM for various imaging disciplines.

    View details for DOI 10.1007/s10278-014-9710-3

    View details for Web of Science ID 000344805600002

    View details for PubMedCentralID PMC4391072

  • A FALSE COLOR FUSION STRATEGY FOR DRUSEN AND GEOGRAPHIC ATROPHY VISUALIZATION IN OPTICAL COHERENCE TOMOGRAPHY IMAGES RETINA-THE JOURNAL OF RETINAL AND VITREOUS DISEASES Chen, Q., Leng, T., Niu, S., Shi, J., de Sisternes, L., Rubin, D. L. 2014; 34 (12): 2346-2358

    Abstract

    To display drusen and geographic atrophy (GA) in a single projection image from three-dimensional spectral domain optical coherence tomography images based on a novel false color fusion strategy. We present a false color fusion strategy to combine drusen and GA projection images. The drusen projection image is generated with a restricted summed-voxel projection (axial sum of the reflectivity values in a spectral domain optical coherence tomography cube, limited to the region where drusen is present). The GA projection image is generated by incorporating two GA characteristics: bright choroid and thin retinal pigment epithelium. The false color fusion method was evaluated in 82 three-dimensional optical coherence tomography data sets obtained from 7 patients, for which 2 readers independently identified drusen and GA as the gold standard. The mean drusen and GA overlap ratio was used as the metric to determine accuracy of visualization of the proposed method when compared with the conventional summed-voxel projection (axial sum of the reflectivity values in the complete spectral domain optical coherence tomography cube) technique and color fundus photographs. Comparative results demonstrate that the false color image is more effective in displaying drusen and GA than summed-voxel projection and CFP. The mean drusen/GA overlap ratios based on the conventional summed-voxel projection method, color fundus photographs, and the false color fusion method were 6.4%/100%, 64.1%/66.7%, and 85.6%/100%, respectively. The false color fusion method was more effective for simultaneous visualization of drusen and GA than the conventional summed-voxel projection method and color fundus photographs, and it seems promising as an alternative method for visualizing drusen and GA in the retinal fundus, which commonly occur together and can be confusing to differentiate without methods such as this proposed one.

    View details for Web of Science ID 000345911300010

    View details for PubMedID 25062439

  • Quantitative SD-OCT imaging biomarkers as indicators of age-related macular degeneration progression. Investigative ophthalmology & visual science de Sisternes, L., Simon, N., Tibshirani, R., Leng, T., Rubin, D. L. 2014; 55 (11): 7093-7103

    Abstract

    Purpose: We developed a statistical model based on quantitative characteristics of drusen to estimate the likelihood of conversion from early and intermediate age-related macular degeneration (AMD) to its advanced exudative form (AMD progression) in the short term (less than 5 years), a crucial task to enable early intervention and improve outcomes. Methods: Image features of drusen quantifying their number, morphology, and reflectivity properties, as well as the longitudinal evolution in these characteristics, were automatically extracted from 2146 spectral domain optical coherence tomography (SD-OCT) scans of 330 AMD eyes in 244 patients collected over a period of 5 years, with 36 eyes showing progression during clinical follow-up. We developed and evaluated a statistical model to predict the likelihood of progression at pre-determined times using clinical and image features as predictors. Results: Area, volume, height, and reflectivity of drusen were informative features distinguishing between progressing and non-progressing cases. Discerning progression at follow-up (mean 6.16 months) resulted in a mean area under the receiver operating characteristic curve (AUC) of 0.74 ((0.58, 0.85) 95% confidence interval (CI)). The maximum predictive performance was observed at 11 months after a patient's first early AMD diagnosis, with mean AUC 0.92 ((0.83, 0.98) 95% CI). Those eyes predicted to progress showed a much higher progression rate than those predicted not to progress at any given time from the initial visit. Conclusions: Our results demonstrate the potential ability of our model to identify those AMD patients at risk of progressing to exudative AMD from an early or intermediate stage.

    View details for DOI 10.1167/iovs.14-14918

    View details for PubMedID 25301882

  • Automated retinal layers segmentation in SD-OCT images using dual-gradient and spatial correlation smoothness constraint COMPUTERS IN BIOLOGY AND MEDICINE Niu, S., Chen, Q., de Sisternes, L., Rubin, D. L., Zhang, W., Liu, Q. 2014; 54: 116-128

    Abstract

    Automatic segmentation of retinal layers in spectral domain optical coherence tomography (SD-OCT) images plays a vital role in the quantitative assessment of retinal disease, because it provides detailed information which is hard to process manually. A number of algorithms to automatically segment retinal layers have been developed; however, accurate edge detection is challenging. We developed an automatic algorithm for segmenting retinal layers based on a dual-gradient and spatial correlation smoothness constraint. The proposed algorithm utilizes a customized edge flow to produce the edge map and a convolution operator to obtain a local gradient map in the axial direction. A valid search region is then defined to identify layer boundaries. Finally, a spatial correlation smoothness constraint is applied to remove anomalous points at the layer boundaries. Our approach was tested on two datasets including 10 cubes from 10 healthy eyes and 15 cubes from 6 patients with age-related macular degeneration. A quantitative evaluation of our method was performed on more than 600 images from cubes obtained in five healthy eyes. Experimental results demonstrated that the proposed method can estimate six layer boundaries accurately. Mean absolute boundary positioning differences and mean absolute thickness differences (mean±SD) were 4.43±3.32 μm and 0.22±0.24 μm, respectively.

    View details for DOI 10.1016/j.compbiomed.2014.08.028

    View details for Web of Science ID 000345189800014

  • On combining image-based and ontological semantic dissimilarities for medical image retrieval applications. Medical image analysis Kurtz, C., Depeursinge, A., Napel, S., Beaulieu, C. F., Rubin, D. L. 2014; 18 (7): 1082-1100

    Abstract

    Computer-assisted image retrieval applications can assist radiologists by identifying similar images in archives as a means to providing decision support. In the classical case, images are described using low-level features extracted from their contents, and an appropriate distance is used to find the best matches in the feature space. However, using low-level image features to fully capture the visual appearance of diseases is challenging and the semantic gap between these features and the high-level visual concepts in radiology may impair the system performance. To deal with this issue, the use of semantic terms to provide high-level descriptions of radiological image contents has recently been advocated. Nevertheless, most of the existing semantic image retrieval strategies are limited by two factors: they require manual annotation of the images using semantic terms and they ignore the intrinsic visual and semantic relationships between these annotations during the comparison of the images. Based on these considerations, we propose an image retrieval framework based on semantic features that relies on two main strategies: (1) automatic "soft" prediction of ontological terms that describe the image contents from multi-scale Riesz wavelets and (2) retrieval of similar images by evaluating the similarity between their annotations using a new term dissimilarity measure, which takes into account both image-based and ontological term relations. The combination of these strategies provides a means of accurately retrieving similar images in databases based on image annotations and can be considered as a potential solution to the semantic gap problem. We validated this approach in the context of the retrieval of liver lesions extracted from computed tomographic (CT) images and annotated with semantic terms of the RadLex ontology. The relevance of the retrieval results was assessed using two protocols: evaluation relative to a dissimilarity reference standard defined for pairs of images on a 25-image dataset, and evaluation relative to the diagnoses of the retrieved images on a 72-image dataset. A normalized discounted cumulative gain (NDCG) score of more than 0.92 was obtained with the first protocol, while AUC scores of more than 0.77 were obtained with the second protocol. This automatic approach could provide real-time decision support to radiologists by showing them similar images with associated diagnoses and, where available, responses to therapies.

    View details for DOI 10.1016/j.media.2014.06.009

    View details for PubMedID 25036769

    View details for PubMedCentralID PMC4173098
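
    Retrieval quality in the study above is reported as a normalized discounted cumulative gain (NDCG). The sketch below implements the common form of that metric (linear gains with a log2 rank discount) for a single ranked result list; the relevance grades in the example are hypothetical, and published NDCG variants differ slightly in the gain function.

        # Sketch: standard NDCG for one ranked retrieval result (log2 discount).
        import numpy as np

        def dcg(relevances):
            relevances = np.asarray(relevances, dtype=float)
            ranks = np.arange(1, len(relevances) + 1)
            return np.sum(relevances / np.log2(ranks + 1))

        def ndcg(retrieved_relevances):
            ideal = sorted(retrieved_relevances, reverse=True)
            return dcg(retrieved_relevances) / dcg(ideal)

        # e.g. ndcg([3, 2, 3, 0, 1]) is about 0.97 relative to the ideal ordering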

  • Predicting Visual Semantic Descriptive Terms From Radiological Image Data: Preliminary Results With Liver Lesions in CT. IEEE transactions on medical imaging Depeursinge, A., Kurtz, C., Beaulieu, C., Napel, S., Rubin, D. 2014; 33 (8): 1669-1676

    Abstract

    We describe a framework to model visual semantics of liver lesions in CT images in order to predict the visual semantic terms (VST) reported by radiologists in describing these lesions. Computational models of VST are learned from image data using linear combinations of high-order steerable Riesz wavelets and support vector machines (SVM). In a first step, these models are used to predict the presence of each semantic term that describes liver lesions. In a second step, the distances between all VST models are calculated to establish a nonhierarchical computationally-derived ontology of VST containing inter-term synonymy and complementarity. A preliminary evaluation of the proposed framework was carried out using 74 liver lesions annotated with a set of 18 VSTs from the RadLex ontology. A leave-one-patient-out cross-validation resulted in an average area under the ROC curve of 0.853 for predicting the presence of each VST. The proposed framework is expected to foster human-computer synergies for the interpretation of radiological images while using rotation-covariant computational models of VSTs to 1) quantify their local likelihood and 2) explicitly link them with pixel-based image content in the context of a given imaging domain.

    View details for DOI 10.1109/TMI.2014.2321347

    View details for PubMedID 24808406

    View details for PubMedCentralID PMC4129229

  • Imaging genomic mapping of an invasive MRI phenotype predicts patient outcome and metabolic dysfunction: a TCGA glioma phenotype research group project BMC MEDICAL GENOMICS Colen, R. R., Vangel, M., Wang, J., Gutman, D. A., Hwang, S. N., Wintermark, M., Jain, R., Jilwan-Nicolas, M., Chen, J. Y., Raghavan, P., Holder, C. A., Rubin, D., Huang, E., Kirby, J., Freymann, J., Jaffe, C. C., Flanders, A., Zinn, P. O. 2014; 7

    Abstract

    Invasion of tumor cells into adjacent brain parenchyma is a major cause of treatment failure in glioblastoma. Furthermore, invasive tumors are shown to have a different genomic composition and metabolic abnormalities that allow for a more aggressive GBM phenotype and resistance to therapy. We thus seek to identify those genomic abnormalities associated with a highly aggressive and invasive GBM imaging-phenotype. We retrospectively identified 104 treatment-naïve glioblastoma patients from The Cancer Genome Atlas (TCGA) who had gene expression profiles and corresponding MR imaging available in The Cancer Imaging Archive (TCIA). The standardized VASARI feature-set criteria were used for the qualitative visual assessments of invasion. Patients were assigned to classes based on the presence (Class A) or absence (Class B) of statistically significant invasion parameters to create an invasive imaging signature; imaging genomic analysis was subsequently performed using the GenePattern Comparative Marker Selection module (Broad Institute). Our results show that patients with a combination of deep white matter tracts and ependymal invasion (Class A) on imaging had a significant decrease in overall survival as compared to patients with absence of such invasive imaging features (Class B) (8.7 versus 18.6 months, p < 0.001). Mitochondrial dysfunction was the top canonical pathway associated with the Class A gene expression signature. The MYC oncogene was predicted to be the top activation regulator in Class A. We demonstrate that MRI biomarker signatures can identify distinct GBM phenotypes associated with highly significant survival differences and specific molecular pathways. This study identifies mitochondrial dysfunction as the top canonical pathway in a very aggressive GBM phenotype. Thus, imaging-genomic analyses may prove invaluable in detecting novel targetable genomic pathways.

    View details for DOI 10.1186/1755-8794-7-30

    View details for Web of Science ID 000338464600001

    View details for PubMedCentralID PMC4057583

  • A hierarchical knowledge-based approach for retrieving similar medical images described with semantic annotations JOURNAL OF BIOMEDICAL INFORMATICS Kurtz, C., Beaulieu, C. F., Napel, S., Rubin, D. L. 2014; 49: 227-244

    Abstract

    Computer-assisted image retrieval applications could assist radiologist interpretations by identifying similar images in large archives as a means to providing decision support. However, the semantic gap between low-level image features and their high level semantics may impair system performance. Indeed, it can be challenging to comprehensively characterize the images using low-level imaging features to fully capture the visual appearance of diseases on images, and recently the use of semantic terms has been advocated to provide semantic descriptions of the visual contents of images. However, most of the existing image retrieval strategies do not consider the intrinsic properties of these terms during the comparison of the images beyond treating them as simple binary (presence/absence) features. We propose a new framework that includes semantic features in images and that enables retrieval of similar images in large databases based on their semantic relations. It is based on two main steps: (1) annotation of the images with semantic terms extracted from an ontology, and (2) evaluation of the similarity of image pairs by computing the similarity between the terms using the Hierarchical Semantic-Based Distance (HSBD) coupled to an ontological measure. The combination of these two steps provides a means of capturing the semantic correlations among the terms used to characterize the images that can be considered as a potential solution to deal with the semantic gap problem. We validate this approach in the context of the retrieval and the classification of 2D regions of interest (ROIs) extracted from computed tomographic (CT) images of the liver. Under this framework, retrieval accuracy of more than 0.96 was obtained on a 30-image dataset using the Normalized Discounted Cumulative Gain (NDCG) index, which is a standard technique used to measure the effectiveness of information retrieval algorithms when a separate reference standard is available. Classification results of more than 95% were obtained on a 77-image dataset. For comparison purposes, the use of the Earth Mover's Distance (EMD), which is an alternative distance metric that considers all the existing relations among the terms, led to retrieval accuracy of 0.95 and classification results of 93% with a higher computational cost. The results provided by the presented framework are competitive with the state-of-the-art and emphasize the usefulness of the proposed methodology for radiology image retrieval and classification.

    View details for DOI 10.1016/j.jbi.2014.02.018

    View details for Web of Science ID 000337772200023

    View details for PubMedCentralID PMC4058405

  • An improved optical coherence tomography-derived fundus projection image for drusen visualization. Retina (Philadelphia, Pa.) Chen, Q., Leng, T., Zheng, L. L., Kutzscher, L., de Sisternes, L., Rubin, D. L. 2014; 34 (5): 996-1005

    Abstract

    To develop and evaluate an improved method of generating en face fundus images from three-dimensional optical coherence tomography images which enhances the visualization of drusen. We describe a novel approach, the restricted summed-voxel projection (RSVP), to generate en face projection images of the retinal surface combined with an image processing method to enhance drusen visualization. The RSVP approach is an automated method that restricts the projection to the retinal pigment epithelium layer neighborhood. Additionally, drusen visualization is improved through an image processing technique that fills drusen with bright pixels. The choroid layer is also excluded when creating the RSVP to eliminate bright pixels beneath drusen that could be confused with drusen when geographic atrophy is present. The RSVP method was evaluated in 46 patients, and three-dimensional optical coherence tomography data sets were obtained from 8 patients, for which 2 readers independently identified drusen as the gold standard. The mean drusen overlap ratio was used as the metric to determine the accuracy of visualization of the RSVP method when compared with the conventional summed-voxel projection technique. Comparative results demonstrate that the RSVP method was more effective than the conventional summed-voxel projection in displaying drusen and retinal vessels, and was more useful in detecting drusen. The mean drusen overlap ratios based on the conventional summed-voxel projection method and the RSVP method were 2.1% and 89.3%, respectively. The RSVP method was more effective for drusen visualization than the conventional summed-voxel projection method, and it may be useful for macular assessment in patients with nonexudative age-related macular degeneration.

    View details for DOI 10.1097/IAE.0000000000000018

    View details for PubMedID 24177190
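
    The core of the method above is the restricted summed-voxel projection: summing reflectivity along depth only within a band around the RPE rather than over the whole cube. A minimal NumPy sketch of that restriction is shown below; the RPE segmentation, drusen filling, and choroid exclusion described in the abstract are assumed to be available or are omitted, and the axis convention is an assumption.

        # Sketch: restricted summed-voxel projection (RSVP) given an RPE depth map.
        # Axis convention assumed: cube indexed (B-scan, depth, A-scan).
        import numpy as np

        def restricted_svp(cube, rpe_depth, band=(40, 5)):
            """cube: (n_bscans, n_depth, n_ascans) OCT reflectivity volume.
            rpe_depth: (n_bscans, n_ascans) RPE depth index per A-scan.
            band: number of voxels to include (above, below) the RPE surface."""
            n_bscans, n_depth, n_ascans = cube.shape
            depth = np.arange(n_depth)[None, :, None]      # broadcast over depth axis
            rpe = rpe_depth[:, None, :]
            mask = (depth >= rpe - band[0]) & (depth <= rpe + band[1])
            return (cube * mask).sum(axis=1)               # en face projection

        # conventional (unrestricted) SVP for comparison: cube.sum(axis=1)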

  • A Robust Classifier to Distinguish Noise from fMRI Independent Components PLOS ONE Sochat, V., Supekar, K., Bustillo, J., Calhoun, V., Turner, J. A., Rubin, D. L. 2014; 9 (4)

    Abstract

    Analyzing Functional Magnetic Resonance Imaging (fMRI) of resting brains to determine the spatial location and activity of intrinsic brain networks--a novel and burgeoning research field--is limited by the lack of ground truth and the tendency of analyses to overfit the data. Independent Component Analysis (ICA) is commonly used to separate the data into signal and Gaussian noise components, and then map these components on to spatial networks. Identifying noise from this data, however, is a tedious process that has proven hard to automate, particularly when data from different institutions, subjects, and scanners is used. Here we present an automated method to delineate noisy independent components in ICA using a data-driven infrastructure that queries a database of 246 spatial and temporal features to discover a computational signature of different types of noise. We evaluated the performance of our method to detect noisy components from healthy control fMRI (sensitivity = 0.91, specificity = 0.82, cross validation accuracy (CVA) = 0.87, area under the curve (AUC) = 0.93), and demonstrate its generalizability by showing equivalent performance on (1) an age- and scanner-matched cohort of schizophrenia patients from the same institution (sensitivity = 0.89, specificity = 0.83, CVA = 0.86), (2) an age-matched cohort on an equivalent scanner from a different institution (sensitivity = 0.88, specificity = 0.88, CVA = 0.88), and (3) an age-matched cohort on a different scanner from a different institution (sensitivity = 0.72, specificity = 0.92, CVA = 0.79). We additionally compare our approach with a recently published method. Our results suggest that our method is robust to noise variations due to population as well as scanner differences, thereby making it well suited to the goal of automatically distinguishing noise from functional networks to enable investigation of human brain function.

    View details for DOI 10.1371/journal.pone.0095493

    View details for Web of Science ID 000335226500115

    View details for PubMedID 24748378

    View details for PubMedCentralID PMC3991682

  • Classification of hepatic lesions using the matching metric COMPUTER VISION AND IMAGE UNDERSTANDING Adcock, A., Rubin, D., Carlsson, G. 2014; 121: 36-42
  • Automated measurement of longitudinal IS/OS junction abnormalities on SD-OCT in postoperative macular holes Leng, T., de Sisternes, L., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2014
  • Automated Segmentation and Quantification of Retinal Layers in Patients with Hydroxychloroquine Toxicity de Sisternes, L., Marmor, M. F., Leng, T., Rubin, D. ASSOC RESEARCH VISION OPHTHALMOLOGY INC. 2014
  • Automated Tracking of Quantitative Assessments of Tumor Burden in Clinical Trials TRANSLATIONAL ONCOLOGY Rubin, D. L., Willrett, D., O'Connor, M. J., Hage, C., Kurtz, C., Moreira, D. A. 2014; 7 (1): 23-35

    Abstract

    There are two key challenges hindering effective use of quantitative assessment of imaging in cancer response assessment: 1) Radiologists usually describe the cancer lesions in imaging studies subjectively and sometimes ambiguously, and 2) it is difficult to repurpose imaging data, because lesion measurements are not recorded in a format that permits machine interpretation and interoperability. We have developed a freely available software platform on the basis of open standards, the electronic Physician Annotation Device (ePAD), to tackle these challenges in two ways. First, ePAD facilitates the radiologist in carrying out cancer lesion measurements as part of routine clinical trial image interpretation workflow. Second, ePAD records all image measurements and annotations in a data format that permits repurposing image data for analyses of alternative imaging biomarkers of treatment response. To determine the impact of ePAD on radiologist efficiency in quantitative assessment of imaging studies, a radiologist evaluated computed tomography (CT) imaging studies from 20 subjects having one baseline and three consecutive follow-up imaging studies with and without ePAD. The radiologist made measurements of target lesions in each imaging study using Response Evaluation Criteria in Solid Tumors 1.1 criteria, initially with the aid of ePAD, and then after a 30-day washout period, the exams were reread without ePAD. The mean total time required to review the images and summarize measurements of target lesions was 15% (P < .039) shorter using ePAD than without using this tool. In addition, it was possible to rapidly reanalyze the images to explore lesion cross-sectional area as an alternative imaging biomarker to linear measure. We conclude that ePAD appears promising to potentially improve reader efficiency for quantitative assessment of CT examinations, and it may enable discovery of future novel image-based biomarkers of cancer treatment response.

    View details for DOI 10.1593/tlo.13796

    View details for Web of Science ID 000342684300004

    View details for PubMedID 24772204

    View details for PubMedCentralID PMC3998692
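
    ePAD records target-lesion measurements under RECIST 1.1, in which response is derived from the sum of the longest diameters of target lesions. The sketch below shows that arithmetic using the standard RECIST 1.1 thresholds (roughly, a 30% decrease from baseline for partial response and a 20% and at least 5 mm increase over the nadir for progression); it is an illustration of the criteria only, not ePAD code, and it omits non-target and new-lesion rules.

        # Sketch: RECIST 1.1 response from sums of target-lesion longest diameters (mm).
        # PR: >= 30% decrease vs baseline; PD: >= 20% and >= 5 mm increase vs nadir.
        def recist_response(baseline_sum, followup_sums):
            nadir = baseline_sum
            calls = []
            for current in followup_sums:
                change_from_baseline = (current - baseline_sum) / baseline_sum
                change_from_nadir = (current - nadir) / nadir
                if current == 0:
                    calls.append("CR")   # complete response: target lesions gone
                elif change_from_nadir >= 0.20 and (current - nadir) >= 5:
                    calls.append("PD")   # progressive disease
                elif change_from_baseline <= -0.30:
                    calls.append("PR")   # partial response
                else:
                    calls.append("SD")   # stable disease
                nadir = min(nadir, current)
            return calls

        # e.g. recist_response(100, [80, 65, 90]) -> ['SD', 'PR', 'PD']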

  • Errors in Quantitative Image Analysis due to Platform-Dependent Image Scaling. Translational oncology Chenevert, T. L., Malyarenko, D. I., Newitt, D., Li, X., Jayatilake, M., Tudorica, A., Fedorov, A., Kikinis, R., Liu, T. T., Muzi, M., Oborski, M. J., Laymon, C. M., Li, X., Thomas, Y., Jayashree, K., Mountz, J. M., Kinahan, P. E., Rubin, D. L., Fennessy, F., Huang, W., Hylton, N., Ross, B. D. 2014; 7 (1): 65-71

    Abstract

    To evaluate the ability of various software (SW) tools used for quantitative image analysis to properly account for source-specific image scaling employed by magnetic resonance imaging manufacturers. A series of gadoteridol-doped distilled water solutions (0%, 0.5%, 1%, and 2% volume concentrations) was prepared for manual substitution into one (of three) phantom compartments to create "variable signal," whereas the other two compartments (containing mineral oil and 0.25% gadoteridol) were held unchanged. Pseudodynamic images were acquired over multiple series using four scanners such that the histogram of pixel intensities varied enough to provoke variable image scaling from series to series. Additional diffusion-weighted images were acquired of an ice-water phantom to generate scanner-specific apparent diffusion coefficient (ADC) maps. The resulting pseudodynamic images and ADC maps were analyzed by eight centers of the Quantitative Imaging Network using 16 different SW tools to measure compartment-specific region-of-interest intensity. Images generated by one of the scanners appeared to have additional intensity scaling that was not accounted for by the majority of tested quantitative image analysis SW tools. Incorrect image scaling leads to intensity measurement bias near 100%, compared to nonscaled images. Corrective actions for image scaling are suggested for manufacturers and the quantitative imaging community.

    View details for PubMedID 24772209
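
    The scaling errors described above arise when analysis software ignores intensity-scaling information stored with the images. As a minimal illustration, the sketch below applies the standard DICOM rescale (RescaleSlope/RescaleIntercept) with pydicom; the additional vendor-specific or private scaling factors implicated in the paper are not handled here.

        # Sketch: apply the standard DICOM rescale to stored pixel values.
        # Platform-specific private scaling fields are deliberately not handled.
        import pydicom

        def rescaled_pixels(path):
            ds = pydicom.dcmread(path)
            slope = float(getattr(ds, "RescaleSlope", 1.0))
            intercept = float(getattr(ds, "RescaleIntercept", 0.0))
            return ds.pixel_array * slope + intercept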

  • Neuroanatomical domain of the foundational model of anatomy ontology JOURNAL OF BIOMEDICAL SEMANTICS Nichols, B. N., Mejino, J. L., Detwiler, L. T., Nilsen, T. T., Martone, M. E., Turner, J. A., Rubin, D. L., Brinkley, J. F. 2014; 5

    Abstract

    The diverse set of human brain structure and function analysis methods represents a difficult challenge for reconciling multiple views of neuroanatomical organization. While different views of organization are expected and valid, no widely adopted approach exists to harmonize different brain labeling protocols and terminologies. Our approach uses the natural organizing framework provided by anatomical structure to correlate terminologies commonly used in neuroimaging. The Foundational Model of Anatomy (FMA) Ontology provides a semantic framework for representing the anatomical entities and relationships that constitute the phenotypic organization of the human body. In this paper we describe recent enhancements to the neuroanatomical content of the FMA that models cytoarchitectural and morphological regions of the cerebral cortex, as well as white matter structure and connectivity. This modeling effort is driven by the need to correlate and reconcile the terms used in neuroanatomical labeling protocols. By providing an ontological framework that harmonizes multiple views of neuroanatomical organization, the FMA provides developers with reusable and computable knowledge for a range of biomedical applications. A requirement for facilitating the integration of basic and clinical neuroscience data from diverse sources is a well-structured ontology that can incorporate, organize, and associate neuroanatomical data. We applied the ontological framework of the FMA to align the vocabularies used by several human brain atlases, and to encode emerging knowledge about structural connectivity in the brain. We highlighted several use cases of these extensions, including ontology reuse, neuroimaging data annotation, and organizing 3D brain models.

    View details for DOI 10.1186/2041-1480-5-1

    View details for Web of Science ID 000343707200001

    View details for PubMedID 24398054

    View details for PubMedCentralID PMC3944952

  • A novel method to assess incompleteness of mammography reports. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Gimenez, F. J., Wu, Y., Burnside, E. S., Rubin, D. L. 2014; 2014: 1758-1767

    Abstract

    Mammography has been shown to improve outcomes of women with breast cancer, but it is subject to inter-reader variability. One well-documented source of such variability is in the content of mammography reports. The mammography report is of crucial importance, since it documents the radiologist's imaging observations, interpretation of those observations in terms of likelihood of malignancy, and suggested patient management. In this paper, we define an incompleteness score to measure how incomplete the information content is in the mammography report and provide an algorithm to calculate this metric. We then show that the incompleteness score can be used to predict errors in interpretation. This method has 82.6% accuracy at predicting errors in interpretation and can possibly reduce total diagnostic errors by up to 21.7%. Such a method can easily be modified to suit other domains that depend on quality reporting.

    View details for PubMedID 25954448

  • Automated drusen segmentation and quantification in SD-OCT images. Medical image analysis Chen, Q., Leng, T., Zheng, L., Kutzscher, L., Ma, J., de Sisternes, L., Rubin, D. L. 2013; 17 (8): 1058-1072

    Abstract

    Spectral domain optical coherence tomography (SD-OCT) is a useful tool for the visualization of drusen, a retinal abnormality seen in patients with age-related macular degeneration (AMD); however, objective assessment of drusen is thwarted by the lack of a method to robustly quantify these lesions on serial OCT images. Here, we describe an automatic drusen segmentation method for SD-OCT retinal images, which leverages a priori knowledge of normal retinal morphology and anatomical features. The highly reflective and locally connected pixels located below the retinal nerve fiber layer (RNFL) are used to generate a segmentation of the retinal pigment epithelium (RPE) layer. The observed and expected contours of the RPE layer are obtained by interpolating and fitting the shape of the segmented RPE layer, respectively. The areas located between the interpolated and fitted RPE shapes (which have nonzero area when drusen occurs) are marked as drusen. To enhance drusen quantification, we also developed a novel method of retinal projection to generate an en face retinal image based on the RPE extraction, which improves the quality of drusen visualization over the current approach to producing retinal projections from SD-OCT images based on a summed-voxel projection (SVP), and it provides a means of obtaining quantitative features of drusen in the en face projection. Visualization of the segmented drusen is refined through several post-processing steps: drusen detection to eliminate false positive detections on consecutive slices, drusen refinement on a projection view of drusen, and drusen smoothing. Experimental evaluation results demonstrate that our method is effective for drusen segmentation. In a preliminary analysis of the potential clinical utility of our methods, quantitative drusen measurements, such as area and volume, can be correlated with the drusen progression in non-exudative AMD, suggesting that our approach may produce useful quantitative imaging biomarkers to follow this disease and predict patient outcome.

    View details for DOI 10.1016/j.media.2013.06.003

    View details for PubMedID 23880375

  • A picture is worth a thousand words: needs assessment for multimedia radiology reports in a large tertiary care medical center. Academic radiology Nayak, L., Beaulieu, C. F., Rubin, D. L., Lipson, J. A. 2013; 20 (12): 1577-1583

    Abstract

    Radiology reports are the major, and often only, means of communication between radiologists and their referring clinicians. The purposes of this study are to identify referring physicians' preferences about radiology reports and to quantify their perceived value of multimedia reports (with embedded images) compared with narrative text reports. We contacted 1800 attending physicians from a range of specialties at a large tertiary care medical center via e-mail and a hospital newsletter linking to a 24-question electronic survey between July and November 2012. One hundred sixty physicians responded, yielding a response rate of 8.9%. Survey results were analyzed using Statistical Analysis Software (SAS Institute Inc, Cary, NC). Of the 160 referring physician respondents, 142 (89%) indicated a general interest in reports with embedded images and completed the remainder of the survey questions. Of 142 respondents, 103 (73%) agreed or strongly agreed that reports with embedded images could improve the quality of interactions with radiologists; 129 respondents (91%) agreed or strongly agreed that having access to significant images enhances understanding of a text-based report; 110 respondents (77%) agreed or strongly agreed that multimedia reports would significantly improve referring physician satisfaction; and 85 respondents (60%) felt strongly or very strongly that multimedia reports would significantly improve patient care and outcomes. Creating accessible, readable, and automatic multimedia reports should be a high priority to enhance the practice and satisfaction of referring physicians, improve patient care, and emphasize the critical role radiology plays in current medical care.

    View details for DOI 10.1016/j.acra.2013.09.002

    View details for PubMedID 24200485

  • Semi-automatic geographic atrophy segmentation for SD-OCT images BIOMEDICAL OPTICS EXPRESS Chen, Q., de Sisternes, L., Leng, T., Zheng, L., Kutzscher, L., Rubin, D. L. 2013; 4 (12): 2729-2750

    Abstract

    Geographic atrophy (GA) is a condition that is associated with retinal thinning and loss of the retinal pigment epithelium (RPE) layer. It appears in advanced stages of non-exudative age-related macular degeneration (AMD) and can lead to vision loss. We present a semi-automated GA segmentation algorithm for spectral-domain optical coherence tomography (SD-OCT) images. The method first identifies and segments a surface between the RPE and the choroid to generate retinal projection images in which the projection region is restricted to a sub-volume of the retina where the presence of GA can be identified. Subsequently, a geometric active contour model is employed to automatically detect and segment the extent of GA in the projection images. Two image data sets, consisting of 55 SD-OCT scans from twelve eyes in eight patients with GA and 56 SD-OCT scans from 56 eyes in 56 patients with GA, respectively, were utilized to qualitatively and quantitatively evaluate the proposed GA segmentation method. Experimental results suggest that the proposed algorithm can achieve high segmentation accuracy. The mean GA overlap ratios between our proposed method and outlines drawn in the SD-OCT scans, our method and outlines drawn in the fundus auto-fluorescence (FAF) images, and the commercial software (Carl Zeiss Meditec proprietary software, Cirrus version 6.0) and outlines drawn in FAF images were 72.60%, 65.88% and 59.83%, respectively.

    View details for DOI 10.1364/BOE.4.002729

    View details for Web of Science ID 000328078300002

    View details for PubMedID 24409376

    View details for PubMedCentralID PMC3862151

  • Dynamic contrast-enhanced MRI-based biomarkers of therapeutic response in triple-negative breast cancer. Journal of the American Medical Informatics Association Golden, D. I., Lipson, J. A., Telli, M. L., Ford, J. M., Rubin, D. L. 2013; 20 (6): 1059-1066

    Abstract

    To predict the response of breast cancer patients to neoadjuvant chemotherapy (NAC) using features derived from dynamic contrast-enhanced (DCE) MRI. Sixty patients with triple-negative early-stage breast cancer receiving NAC were evaluated. Features assessed included clinical data, patterns of tumor response to treatment determined by DCE-MRI, MRI breast imaging-reporting and data system descriptors, and quantitative lesion kinetic texture derived from the gray-level co-occurrence matrix (GLCM). All features except for patterns of response were derived before chemotherapy; GLCM features were determined before and after chemotherapy. Treatment response was defined by the presence of residual invasive tumor and/or positive lymph nodes after chemotherapy. Statistical modeling was performed using Lasso logistic regression. Pre-chemotherapy imaging features predicted all measures of response except for residual tumor. Feature sets varied in effectiveness at predicting different definitions of treatment response, but in general, pre-chemotherapy imaging features were able to predict pathological complete response with area under the curve (AUC)=0.68, residual lymph node metastases with AUC=0.84 and residual tumor with lymph node metastases with AUC=0.83. Imaging features assessed after chemotherapy yielded significantly improved model performance over those assessed before chemotherapy for predicting residual tumor, but no other outcomes. DCE-MRI features can be used to predict whether triple-negative breast cancer patients will respond to NAC. Models such as the ones presented could help to identify patients not likely to respond to treatment and to direct them towards alternative therapies.

    View details for DOI 10.1136/amiajnl-2012-001460

    View details for PubMedID 23785100
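
    Two ingredients named in the abstract above are gray-level co-occurrence matrix (GLCM) kinetic texture features and Lasso (L1-penalized) logistic regression. The sketch below shows generic versions of both with scikit-image and scikit-learn; the kinetic maps, gray-level quantization, and feature set are assumptions rather than the study's actual feature pipeline.

        # Sketch: generic GLCM texture features + L1 (lasso) logistic regression.
        # Kinetic maps, labels, and feature choices are hypothetical.
        import numpy as np
        from skimage.feature import graycomatrix, graycoprops
        from sklearn.linear_model import LogisticRegression

        def glcm_features(lesion_img, levels=32):
            """lesion_img: 2-D array of kinetic-map values rescaled to [0, levels)."""
            img = np.clip(lesion_img, 0, levels - 1).astype(np.uint8)
            glcm = graycomatrix(img, distances=[1], angles=[0, np.pi / 2],
                                levels=levels, symmetric=True, normed=True)
            props = ("contrast", "homogeneity", "energy", "correlation")
            return np.array([graycoprops(glcm, p).mean() for p in props])

        # X: one feature row per lesion, y: 0/1 response labels (both hypothetical)
        # model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)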

  • Imaging Informatics: Essential Tools for the Delivery of Imaging Services ACADEMIC RADIOLOGY Mendelson, D. S., Rubin, D. L. 2013; 20 (10): 1195-1212

    Abstract

    There are rapid changes occurring in the health care environment. Radiologists face new challenges but also new opportunities. The purpose of this report is to review how new informatics tools and developments can help the radiologist respond to the drive for safety, quality, and efficiency. These tools will be of assistance in conducting research and education. They not only provide greater efficiency in traditional operations but also open new pathways for the delivery of new services and imaging technologies. Our future as a specialty is dependent on integrating these informatics solutions into our daily practice.

    View details for DOI 10.1016/j.acra.2013.07.006

    View details for Web of Science ID 000325194600003

    View details for PubMedID 24029051

    View details for PubMedCentralID PMC4072254

  • Modeling Perceptual Similarity Measures in CT Images of Focal Liver Lesions JOURNAL OF DIGITAL IMAGING Faruque, J., Rubin, D. L., Beaulieu, C. F., Napel, S. 2013; 26 (4): 714-720

    Abstract

    Motivation: A gold standard for perceptual similarity in medical images is vital to content-based image retrieval, but inter-reader variability complicates development. Our objective was to develop a statistical model that predicts the number of readers (N) necessary to achieve acceptable levels of variability. Materials and Methods: We collected 3 radiologists' ratings of the perceptual similarity of 171 pairs of CT images of focal liver lesions rated on a 9-point scale. We modeled the readers' scores as bimodal distributions in additive Gaussian noise and estimated the distribution parameters from the scores using an expectation maximization algorithm. We (a) sampled 171 similarity scores to simulate a ground truth and (b) simulated readers by adding noise, with standard deviation σ between 0 and 5 for each reader. We computed the mean values of 2-50 readers' scores and calculated the agreement (AGT) between these means and the simulated ground truth, and the inter-reader agreement (IRA), using Cohen's Kappa metric. Results: IRA for the empirical data ranged from κ = 0.41 to 0.66. For σ between 1.5 and 2.5, IRA between three simulated readers was comparable to agreement in the empirical data. For these values of σ, AGT ranged from κ = 0.81 to 0.91. As expected, AGT increased with N, ranging from κ = 0.83 to 0.92 for N = 2 to 50, respectively, with σ = 2. Conclusion: Our simulations demonstrated that for moderate to good IRA, excellent AGT could nonetheless be obtained. This model may be used to predict the required N to accurately evaluate similarity in arbitrary size datasets.

    View details for DOI 10.1007/s10278-012-9557-4

    View details for Web of Science ID 000322434700017

    View details for PubMedID 23254627
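
    The simulation described in the abstract above, sampling bimodal ground-truth similarity scores, adding per-reader Gaussian noise, and comparing the averaged reader scores to the ground truth with Cohen's kappa, can be sketched as follows; the distribution parameters and the binarization step are illustrative assumptions, not the fitted values from the paper.

        # Sketch of the simulation logic: bimodal ground truth, per-reader Gaussian
        # noise, agreement of the N-reader mean with the truth via Cohen's kappa.
        import numpy as np
        from sklearn.metrics import cohen_kappa_score

        rng = np.random.default_rng(0)

        def simulate_agreement(n_pairs=171, n_readers=3, sigma=2.0):
            # bimodal ground truth on the 1-9 similarity scale (parameters illustrative)
            modes = rng.choice([2.0, 8.0], size=n_pairs)
            truth = np.clip(np.round(modes + rng.normal(0, 0.5, n_pairs)), 1, 9)
            # each reader = truth + Gaussian noise, clipped back to the rating scale
            readers = np.clip(truth + rng.normal(0, sigma, (n_readers, n_pairs)), 1, 9)
            mean_score = np.clip(np.round(readers.mean(axis=0)), 1, 9)
            # binarize (similar vs dissimilar) before computing kappa
            return cohen_kappa_score(truth >= 5, mean_score >= 5)

        # agreement with the simulated truth typically rises as n_readers grows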

  • Quantitative Imaging Biomarker Ontology (QIBO) for Knowledge Representation of Biomedical Imaging Biomarkers JOURNAL OF DIGITAL IMAGING Buckler, A. J., Ouellette, M., Danagoulian, J., Wernsing, G., Liu, T. T., Savig, E., Suzek, B. E., Rubin, D. L., Paik, D. 2013; 26 (4): 630-641

    Abstract

    A widening array of novel imaging biomarkers is being developed using ever more powerful clinical and preclinical imaging modalities. These biomarkers have demonstrated effectiveness in quantifying biological processes as they occur in vivo and in the early prediction of therapeutic outcomes. However, quantitative imaging biomarker data and knowledge are not standardized, representing a critical barrier to accumulating medical knowledge based on quantitative imaging data. We use an ontology to represent, integrate, and harmonize heterogeneous knowledge across the domain of imaging biomarkers. This advances the goal of developing applications to (1) improve precision and recall of storage and retrieval of quantitative imaging-related data using standardized terminology; (2) streamline the discovery and development of novel imaging biomarkers by normalizing knowledge across heterogeneous resources; (3) effectively annotate imaging experiments thus aiding comprehension, re-use, and reproducibility; and (4) provide validation frameworks through rigorous specification as a basis for testable hypotheses and compliance tests. We have developed the Quantitative Imaging Biomarker Ontology (QIBO), which currently consists of 488 terms spanning the following upper classes: experimental subject, biological intervention, imaging agent, imaging instrument, image post-processing algorithm, biological target, indicated biology, and biomarker application. We have demonstrated that QIBO can be used to annotate imaging experiments with standardized terms in the ontology and to generate hypotheses for novel imaging biomarker-disease associations. Our results established the utility of QIBO in enabling integrated analysis of quantitative imaging data.

    View details for DOI 10.1007/s10278-013-9599-2

    View details for Web of Science ID 000322434700006

    View details for PubMedID 23589184

    View details for PubMedCentralID PMC3705004

  • Snake model-based lymphoma segmentation for sequential CT images COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE Chen, Q., Quan, F., Xu, J., Rubin, D. L. 2013; 111 (2): 366-375

    Abstract

    The measurement of the size of lesions in follow-up CT examinations of cancer patients is important to evaluate the success of treatment. This paper presents an automatic algorithm for identifying and segmenting lymph nodes in CT images across longitudinal time points. Firstly, a two-step image registration method is proposed to locate the lymph nodes, including coarse registration based on body region detection and fine registration based on a double-template matching algorithm. Then, to make the initial segmentation approximate the boundaries of lymph nodes, the initial image registration result is refined with intensity and edge information. Finally, a snake model is used to evolve the refined initial curve and obtain segmentation results. Our algorithm was tested on 26 lymph nodes at multiple time points from 14 patients. The image at the earlier time point was used as the baseline image to be used in evaluating the follow-up image, resulting in 76 total test cases. Of the 76 test cases, we achieved 76 (100%) successful detections and 38/40 (95%) correct clinical assessments according to the Response Evaluation Criteria in Solid Tumors (RECIST). The quantitative evaluation based on several metrics, such as average Hausdorff distance, indicates that our algorithm produces good results. In addition, the proposed algorithm is fast, with an average computing time of 2.58 s. The proposed segmentation algorithm for lymph nodes is fast and can achieve high segmentation accuracy, which may be useful to automate the tracking and evaluation of cancer therapy.

    View details for DOI 10.1016/j.cmpb.2013.05.019

    View details for Web of Science ID 000321345400011

    View details for PubMedID 23787027
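
    A rough sketch of the two-stage idea described in the entry above (template-based localization of the node from the baseline scan, followed by active-contour refinement), using generic scikit-image routines rather than the authors' double-template matching and snake implementation; the function below and its parameter values are illustrative assumptions only.

      # Template-matched localization followed by active-contour ("snake")
      # refinement, loosely following the pipeline described above.
      import numpy as np
      from skimage.feature import match_template
      from skimage.filters import gaussian
      from skimage.segmentation import active_contour

      def locate_and_segment(followup_slice, baseline_patch, radius=15):
          """Find the baseline lesion patch in a follow-up slice, then refine
          a circular initial contour with a snake model."""
          # Coarse localization: normalized cross-correlation with the patch.
          response = match_template(followup_slice, baseline_patch, pad_input=True)
          cy, cx = np.unravel_index(np.argmax(response), response.shape)

          # Initial contour: a circle centered on the best template match.
          theta = np.linspace(0, 2 * np.pi, 200)
          init = np.column_stack([cy + radius * np.sin(theta),
                                  cx + radius * np.cos(theta)])

          # Snake evolution toward edges of the smoothed follow-up image.
          snake = active_contour(gaussian(followup_slice, sigma=2, preserve_range=True),
                                 init, alpha=0.015, beta=10.0, gamma=0.001)
          return (cy, cx), snake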

  • Comprehensivemolecular characterization of clear cell renal cell carcinoma NATURE Creighton, C. J., Morgan, M., Gunaratne, P. H., Wheeler, D. A., Gibbs, R. A., Robertson, A. G., Chu, A., Beroukhim, R., Cibulskis, K., Signoretti, S., Vandin, F., Wu, H., Raphael, B. J., Verhaak, R. G., Tamboli, P., Torres-Garcia, W., Akbani, R., Weinstein, J. N., Reuter, V., Hsieh, J. J., Brannon, A. R., Hakimi, A. A., Jacobsen, A., Ciriello, G., Reva, B., Ricketts, C. J., Linehan, W. M., Stuart, J. M., Rathmell, W. K., Shen, H., Laird, P. W., Muzny, D., Davis, C., Morgan, M., Xi, L., Chang, K., Kakkar, N., Trevino, L. R., Benton, S., Reid, J. G., Morton, D., Doddapaneni, H., Han, Y., Lewis, L., Dinh, H., Kovar, C., Zhu, Y., Santibanez, J., Wang, M., Hale, W., Kalra, D., Creighton, C. J., Wheeler, D. A., Gibbs, R. A., Getz, G., Cibulskis, K., Lawrence, M. S., Sougnez, C., Carter, S. L., Sivachenko, A., Lichtenstein, L., Stewart, C., Voet, D., Fisher, S., Gabriel, S. B., Lander, E., Beroukhim, R., Schumacher, S. E., Tabak, B., Saksena, G., Onofrio, R. C., Carter, S. L., Cherniack, A. D., Gentry, J., Ardlie, K., Sougnez, C., Getz, G., Gabriel, S. B., Meyerson, M., Robertson, A. G., Chu, A., Chun, H. E., Mungall, A. J., Sipahimalani, P., Stoll, D., Ally, A., Balasundaram, M., Butterfield, Y. S., Carlsen, R., Carter, C., Chuah, E., Coope, R. J., Dhalla, N., Gorski, S., Guin, R., Hirst, C., Hirst, M., Holt, R. A., Lebovitz, C., Lee, D., Li, H. I., Mayo, M., Moore, R. A., Pleasance, E., Plettner, P., Schein, J. E., Shafiei, A., Slobodan, J. R., Tam, A., Thiessen, N., Varhol, R. J., Wye, N., Zhao, Y., Birol, I., Jones, S. J., Marra, M. A., Auman, J. T., Tan, D., Jones, C. D., Hoadley, K. A., Mieczkowski, P. A., Mose, L. E., Jefferys, S. R., Topal, M., Liquori, C., Turman, Y. J., Shi, Y., Waring, S., Buda, E., Walsh, J., Wu, J., Bodenheimer, T., Hoyle, A. P., Simons, J. V., Soloway, M., Balu, S., Parker, J. S., Hayes, D. N., Perou, C. M., Kucherlapati, R., Park, P., Shen, H., Triche, T., Weisenberger, D. J., Lai, P. H., Bootwalla, M. S., Maglinte, D. T., Mahurkar, S., Berman, B. P., Van den Berg, D. J., Cope, L., Baylin, S. B., Laird, P. W., Creighton, C. J., Wheeler, D. A., Getz, G., Noble, M. S., DiCara, D., Zhang, H., Cho, J., Heiman, D. I., Gehlenborg, N., Voet, D., Mallard, W., Lin, P., Frazer, S., Stojanov, P., Liu, Y., Zhou, L., Kim, J., Lawrence, M. S., Chin, L., Vandin, F., Wu, H., Raphael, B. J., Benz, C., Yau, C., Reynolds, S. M., Shmulevich, I., Verhaak, R. G., Torres-Garcia, W., Vegesna, R., Kim, H., Zhang, W., Cogdell, D., Jonasch, E., Ding, Z., Lu, Y., Akbani, R., Zhang, N., Unruh, A. K., Casasent, T. D., Wakefield, C., Tsavachidou, D., Chin, L., Mills, G. B., Weinstein, J. N., Jacobsen, A., Brannon, A. R., Ciriello, G., Schultz, N., Hakimi, A. A., Reva, B., Antipin, Y., Gao, J., Cerami, E., Gross, B., Aksoy, B. A., Sinha, R., Weinhold, N., Sumer, S. O., Taylor, B. S., Shen, R., Ostrovnaya, I., Hsieh, J. J., Berger, M. F., Ladanyi, M., Sander, C., Fei, S. S., Stout, A., Spellman, P. T., Rubin, D. L., Liu, T. T., Stuart, J. M., Sam Ng, S., Paull, E. O., Carlin, D., Goldstein, T., Waltman, P., Ellrott, K., Zhu, J., Haussler, D., Gunaratne, P. H., Xiao, W., Shelton, C., Gardner, J., Penny, R., Sherman, M., Mallery, D., Morris, S., Paulauskis, J., Burnett, K., Shelton, T., Signoretti, S., Kaelin, W. G., Choueiri, T., Atkins, M. B., Penny, R., Burnett, K., Mallery, D., Curley, E., Tickoo, S., Reuter, V., Rathmell, W. K., Thorne, L., Boice, L., Huang, M., Fisher, J. C., Linehan, W. M., Vocke, C. 
D., Peterson, J., Worrell, R., Merino, M. J., Schmidt, L. S., Tamboli, P., Czerniak, B. A., Aldape, K. D., Wood, C. G., Boyd, J., Weaver, J., Iacocca, M. V., Petrelli, N., Witkin, G., Brown, J., Czerwinski, C., Huelsenbeck-Dill, L., Rabeno, B., Myers, J., Morrison, C., Bergsten, J., Eckman, J., Harr, J., Smith, C., Tucker, K., Zach, L. A., Bshara, W., Gaudioso, C., Morrison, C., Dhir, R., Maranchie, J., Nelson, J., Parwani, A., Potapova, O., Fedosenko, K., Cheville, J. C., Thompson, R. H., Signoretti, S., Kaelin, W. G., Atkins, M. B., Tickoo, S., Reuter, V., Linehan, W. M., Vocke, C. D., Peterson, J., Merino, M. J., Schmidt, L. S., Tamboli, P., Mosquera, J. M., Rubin, M. A., Blute, M. L., Rathmell, W. K., Pihl, T., Jensen, M., Sfeir, R., Kahn, A., Chu, A., Kothiyal, P., Snyder, E., Pontius, J., Ayala, B., Backus, M., Walton, J., Baboud, J., Berton, D., Nicholls, M., Srinivasan, D., Raman, R., Girshik, S., Kigonya, P., Alonso, S., Sanbhadti, R., Barletta, S., Pot, D., Sheth, M., Demchok, J. A., Davidsen, T., Wang, Z., Yang, L., Tarnuzzer, R. W., Zhang, J., Eley, G., Ferguson, M. L., Shaw, K. R., Guyer, M. S., Ozenberger, B. A., Sofia, H. J. 2013; 499 (7456): 43-?

    Abstract

    Genetic changes underlying clear cell renal cell carcinoma (ccRCC) include alterations in genes controlling cellular oxygen sensing (for example, VHL) and the maintenance of chromatin states (for example, PBRM1). We surveyed more than 400 tumours using different genomic platforms and identified 19 significantly mutated genes. The PI(3)K/AKT pathway was recurrently mutated, suggesting this pathway as a potential therapeutic target. Widespread DNA hypomethylation was associated with mutation of the H3K36 methyltransferase SETD2, and integrative analysis suggested that mutations involving the SWI/SNF chromatin remodelling complex (PBRM1, ARID1A, SMARCA4) could have far-reaching effects on other pathways. Aggressive cancers demonstrated evidence of a metabolic shift, involving downregulation of genes involved in the TCA cycle, decreased AMPK and PTEN protein levels, upregulation of the pentose phosphate pathway and the glutamine transporter genes, increased acetyl-CoA carboxylase protein, and altered promoter methylation of miR-21 (also known as MIR21) and GRB10. Remodelling cellular metabolism thus constitutes a recurrent pattern in ccRCC that correlates with tumour stage and severity and offers new views on the opportunities for disease treatment.

    View details for DOI 10.1038/nature12222

    View details for Web of Science ID 000321285600029

    View details for PubMedID 23792563

  • MR Imaging Predictors of Molecular Profile and Survival: Multi-institutional Study of the TCGA Glioblastoma Data Set RADIOLOGY Gutman, D. A., Cooper, L. A., Hwang, S. N., Holder, C. A., Gao, J., Aurora, T. D., Dunn, W. D., Scarpace, L., Mikkelsen, T., Jain, R., Wintermark, M., Jilwan, M., Raghavan, P., Huang, E., Clifford, R. J., Mongkolwat, P., Kleper, V., Freymann, J., Kirby, J., Zinn, P. O., Moreno, C. S., Jaffe, C., Colen, R., Rubin, D. L., Saltz, J., Flanders, A., Brat, D. J. 2013; 267 (2): 560-569

    Abstract

    To conduct a comprehensive analysis of radiologist-made assessments of glioblastoma (GBM) tumor size and composition by using a community-developed controlled terminology of magnetic resonance (MR) imaging visual features as they relate to genetic alterations, gene expression class, and patient survival. Because all study patients had been previously deidentified by the Cancer Genome Atlas (TCGA), a publicly available data set that contains no linkage to patient identifiers and that is HIPAA compliant, no institutional review board approval was required. Presurgical MR images of 75 patients with GBM with genetic data in the TCGA portal were rated by three neuroradiologists for size, location, and tumor morphology by using a standardized feature set. Interrater agreements were analyzed by using the Krippendorff α statistic and intraclass correlation coefficient. Associations between survival, tumor size, and morphology were determined by using multivariate Cox regression models; associations between imaging features and genomics were studied by using the Fisher exact test. Interrater analysis showed significant agreement in terms of contrast material enhancement, nonenhancement, necrosis, edema, and size variables. Contrast-enhanced tumor volume and longest axis length of tumor were strongly associated with poor survival (respectively, hazard ratio: 8.84, P = .0253, and hazard ratio: 1.02, P = .00973), even after adjusting for Karnofsky performance score (P = .0208). Proneural class GBM had significantly lower levels of contrast enhancement (P = .02) than other subtypes, while mesenchymal GBM showed lower levels of nonenhanced tumor (P < .01). This analysis demonstrates a method for consistent image feature annotation capable of reproducibly characterizing brain tumors; this study shows that radiologists' estimations of macroscopic imaging features can be combined with genetic alterations and gene expression subtypes to provide deeper insight into the underlying biologic properties of GBM subsets.

    View details for DOI 10.1148/radiol.13120118

    View details for Web of Science ID 000318069700028

    View details for PubMedID 23392431

    View details for PubMedCentralID PMC3632807
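
    The survival analysis reported above rests on multivariate Cox regression of imaging features against patient survival. A minimal sketch of that kind of model, assuming the lifelines package and an entirely hypothetical per-patient feature table (not the TCGA data):

      # Hypothetical multivariate Cox model relating imaging features to survival.
      import pandas as pd
      from lifelines import CoxPHFitter

      # Toy per-patient table: survival time, event indicator, and imaging features.
      df = pd.DataFrame({
          "survival_days":        [310, 540, 120, 800, 260,  90, 450, 700],
          "death_observed":       [  1,   0,   1,   0,   1,   1,   1,   0],
          "enhancing_volume_cm3": [42.0, 11.5, 60.2, 8.3, 35.7, 71.0, 25.4, 9.9],
          "longest_axis_mm":      [ 58,  32,  71,  28,  49,  77,  44,  30],
          "karnofsky_score":      [ 70,  90,  60, 100,  80,  60,  80,  90],
      })

      cph = CoxPHFitter()
      cph.fit(df, duration_col="survival_days", event_col="death_observed")
      cph.print_summary()   # hazard ratios (exp(coef)) and p-values per feature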

  • Quantitative evaluation of drusen on photographs. Ophthalmology Rubin, D. L., de Sisternes, L., Kutzscher, L., Chen, Q., Leng, T., Zheng, L. L. 2013; 120 (3): 644-644 e2

    View details for DOI 10.1016/j.ophtha.2012.09.052

    View details for PubMedID 23714606

  • ACR-AAPM-SIIM Practice Guideline for Determinants of Image Quality in Digital Mammography JOURNAL OF DIGITAL IMAGING Kanal, K. M., Krupinski, E., Berns, E. A., Geiser, W. R., Karellas, A., Mainiero, M. B., Martin, M. C., Patel, S. B., Rubin, D. L., Shepard, J. D., Siegel, E. L., Wolfman, J. A., Mian, T. A., Mahoney, M. C. 2013; 26 (1): 10-25

    View details for DOI 10.1007/s10278-012-9521-3

    View details for Web of Science ID 000314040500004

    View details for PubMedID 22992865

    View details for PubMedCentralID PMC3553374

  • Image patch-based method for automated classification and detection of focal liver lesions on CT Conference on Medical Imaging - Computer-Aided Diagnosis Safdari, M., Pasari, R., Rubin, D., Greenspan, H. SPIE-INT SOC OPTICAL ENGINEERING. 2013

    View details for DOI 10.1117/12.2008624

    View details for Web of Science ID 000322261500032

  • Qualitative and quantitative image-based biomarkers of therapeutic response in triple-negative breast cancer. AMIA Summits on Translational Science proceedings AMIA Summit on Translational Science Golden, D. I., Lipson, J. A., Telli, M. L., Ford, J. M., Rubin, D. L. 2013; 2013: 62-?

    Abstract

    Experimental targeted treatments for neoadjuvant chemotherapy for triple-negative breast cancer are currently underway, and a current challenge is predicting which patients will respond to these therapies. In this study, we use data from dynamic contrast-enhanced MRI (DCE-MRI) images to predict whether patients with triple negative breast cancer will respond to an experimental neoadjuvant chemotherapy regimen. Using pre-therapy image-based features that are both qualitative (e.g., morphological BI-RADS categories) and quantitative (e.g., lesion texture), we built a model that was able to predict whether patients will have residual invasive cancer with lymph node metastases following therapy (receiver operating characteristic area under the curve of 0.83, sensitivity=0.73, specificity=0.83). This model's performance is at a level that is potentially clinically valuable for predicting which patients may or may not benefit from similar treatments in the future.

    View details for PubMedID 24303300
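
    The headline numbers in the entry above (ROC area under the curve, sensitivity, specificity) come from a binary response-prediction model built on pre-therapy image features. A minimal sketch of computing those metrics for such a model, assuming scikit-learn and purely synthetic stand-in features; a real evaluation would use cross-validation rather than the training set:

      # Evaluate a response-prediction model by AUC, sensitivity, and specificity.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(60, 5))          # stand-in BI-RADS / texture features
      y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=60) > 0).astype(int)

      model = LogisticRegression().fit(X, y)
      scores = model.predict_proba(X)[:, 1]
      pred = (scores >= 0.5).astype(int)

      auc = roc_auc_score(y, scores)
      sensitivity = (pred[y == 1] == 1).mean()
      specificity = (pred[y == 0] == 0).mean()
      print(f"AUC={auc:.2f}  sensitivity={sensitivity:.2f}  specificity={specificity:.2f}")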

  • Informatics methods to enable sharing of quantitative imaging research data MAGNETIC RESONANCE IMAGING Levy, M. A., Freymann, J. B., Kirby, J. S., Fedorov, A., Fennessy, F. M., Eschrich, S. A., Berglund, A. E., Fenstermacher, D. A., Tan, Y., Guo, X., Casavant, T. L., Brown, B. J., Braun, T. A., Dekker, A., Roelofs, E., Mountz, J. M., Boada, F., Laymon, C., Oborski, M., Rubin, D. L. 2012; 30 (9): 1249-1256

    Abstract

    The National Cancer Institute Quantitative Imaging Network (QIN) is a collaborative research network whose goal is to share data, algorithms and research tools to accelerate quantitative imaging research. A challenge is the variability in tools and analysis platforms used in quantitative imaging. Our goal was to understand the extent of this variation and to develop an approach to enable sharing data and to promote reuse of quantitative imaging data in the community. We performed a survey of the current tools in use by the QIN member sites for representation and storage of their QIN research data including images, image meta-data and clinical data. We identified existing systems and standards for data sharing and their gaps for the QIN use case. We then proposed a system architecture to enable data sharing and collaborative experimentation within the QIN. There are a variety of tools currently used by each QIN institution. We developed a general information system architecture to support the QIN goals. We also describe the remaining architecture gaps we are developing to enable members to share research images and image meta-data across the network. As a research network, the QIN will stimulate quantitative imaging research by pooling data, algorithms and research tools. However, there are gaps in current functional requirements that will need to be met by future informatics development. Special attention must be given to the technical requirements needed to translate these methods into the clinical research workflow to enable validation and qualification of these novel imaging biomarkers.

    View details for DOI 10.1016/j.mri.2012.04.007

    View details for Web of Science ID 000309946000006

    View details for PubMedID 22770688

    View details for PubMedCentralID PMC3466343

  • Informatics in Radiology Improving Clinical Work Flow through an AIM Database: A Sample Web-based Lesion Tracking Application RADIOGRAPHICS Abajian, A. C., Levy, M., Rubin, D. L. 2012; 32 (5): 1543-1552

    Abstract

    Quantitative assessments on images are crucial to clinical decision making, especially in cancer patients, in whom measurements of lesions are tracked over time. However, the potential value of quantitative approaches to imaging is impeded by the difficulty and time-intensive nature of compiling this information from prior studies and reporting corresponding information on current studies. The authors believe that the quantitative imaging work flow can be automated by making temporal data computationally accessible. In this article, they demonstrate the utility of the Annotation and Image Markup standard in a World Wide Web-based application that was developed to automatically summarize prior and current quantitative imaging measurements. The system calculates the Response Evaluation Criteria in Solid Tumors metric, along with several alternative indicators of cancer treatment response, by using the data stored in the annotation files. The application also allows the user to overlay the recorded metrics on the original images for visual inspection. Clinical evaluation of the system demonstrates its potential utility in accelerating the standard radiology work flow and in providing a means to evaluate alternative response metrics that are difficult to compute by hand. The system, which illustrates the utility of capturing quantitative information in a standard format and linking it to the image from which it was derived, could enhance quantitative imaging in clinical practice without adversely affecting the current work flow.

    View details for DOI 10.1148/rg.325115752

    View details for Web of Science ID 000308632900027

    View details for PubMedID 22745220

    View details for PubMedCentralID PMC3439633
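
    The response metric that the application above computes from stored lesion measurements is, at its core, a comparison of sums of target-lesion longest diameters across time points. A toy sketch of that RECIST-style calculation, with the AIM parsing omitted and simplified rules (the cut-offs follow RECIST 1.1, but special cases such as lymph node size criteria are ignored):

      # Toy RECIST-style response assessment from longest-diameter measurements (mm).
      def recist_response(baseline_mm, followup_mm, nadir_mm=None):
          """Classify response from sums of target-lesion longest diameters."""
          baseline = sum(baseline_mm)
          current = sum(followup_mm)
          nadir = min(sum(nadir_mm) if nadir_mm else baseline, baseline)

          if current == 0:
              return "CR"   # complete response: all target lesions resolved
          if current <= 0.7 * baseline:
              return "PR"   # partial response: >= 30% decrease from baseline
          if current >= 1.2 * nadir and (current - nadir) >= 5:
              return "PD"   # progression: >= 20% and >= 5 mm increase from nadir
          return "SD"       # stable disease otherwise

      print(recist_response(baseline_mm=[34, 21], followup_mm=[20, 14]))   # PR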

  • Automatic classification of mammography reports by BI-RADS breast tissue composition class JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION Percha, B., Nassif, H., Lipson, J., Burnside, E., Rubin, D. 2012; 19 (5): 913-916

    Abstract

    Because breast tissue composition partially predicts breast cancer risk, classification of mammography reports by breast tissue composition is important from both a scientific and clinical perspective. A method is presented for using the unstructured text of mammography reports to classify them into BI-RADS breast tissue composition categories. An algorithm that uses regular expressions to automatically determine BI-RADS breast tissue composition classes for unstructured mammography reports was developed. The algorithm assigns each report to a single BI-RADS composition class: 'fatty', 'fibroglandular', 'heterogeneously dense', 'dense', or 'unspecified'. We evaluated its performance on mammography reports from two different institutions. The method achieves >99% classification accuracy on a test set of reports from the Marshfield Clinic (Wisconsin) and Stanford University. Since large-scale studies of breast cancer rely heavily on breast tissue composition information, this method could facilitate this research by helping mine large datasets to correlate breast composition with other covariates.

    View details for DOI 10.1136/amiajnl-2011-000607

    View details for Web of Science ID 000307934600032

    View details for PubMedID 22291166

    View details for PubMedCentralID PMC3422822
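
    A simplified illustration of the regular-expression approach described above; the patterns below are plausible stand-ins, not the published rule set, and real reports would also need negation handling and section detection:

      # Simplified regular-expression classifier for BI-RADS breast composition.
      import re

      PATTERNS = [
          ("fatty",                 r"(almost\s+entirely\s+fat|fatty)"),
          ("fibroglandular",        r"scattered\s+(areas\s+of\s+)?fibroglandular"),
          ("heterogeneously dense", r"heterogeneously\s+dense"),
          ("dense",                 r"extremely\s+dense|dense\s+breasts?"),
      ]

      def classify_composition(report_text):
          text = report_text.lower()
          # Check "heterogeneously dense" before the generic "dense" pattern.
          for label, pattern in PATTERNS:
              if re.search(pattern, text):
                  return label
          return "unspecified"

      print(classify_composition(
          "The breasts are heterogeneously dense, which may obscure small masses."))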

  • The Role of Informatics in Health Care Reform ACADEMIC RADIOLOGY Liu, Y. I., Rubin, D. L. 2012; 19 (9): 1094-1099

    Abstract

    Improving health care quality while simultaneously reducing cost has become a high priority of health care reform. Informatics is crucial in tackling this challenge. The American Recovery and Reinvestment Act of 2009 mandates adoption and "meaningful use" of health information technology. In this review, we will highlight several areas in which informatics can make significant contributions, with a focus on radiology. We also discuss informatics related to the increasing imperatives of state and local regulations (such as radiation dose tracking) and quality initiatives.

    View details for DOI 10.1016/j.acra.2012.05.006

    View details for PubMedID 22771052

  • Quantifying the margin sharpness of lesions on radiological images for content-based image retrieval MEDICAL PHYSICS Xu, J., Nadel, S., Greenspan, H., Beaulieu, C. F., Agrawal, N., Rubin, D. 2012; 39 (9): 5405-5418

    Abstract

    To develop a method to quantify the margin sharpness of lesions on CT and to evaluate it in simulations and CT scans of liver and lung lesions. The authors computed two attributes of margin sharpness: the intensity difference between a lesion and its surroundings, and the sharpness of the intensity transition across the lesion boundary. These two attributes were extracted from sigmoid curves fitted along lines automatically drawn orthogonal to the lesion margin. The authors then represented the margin characteristics for each lesion by a feature vector containing histograms of these parameters. The authors created 100 simulated CT scans of lesions over a range of intensity difference and margin sharpness, and used the concordance correlation between the known parameter and the corresponding computed feature as a measure of performance. The authors also evaluated their method in 79 liver lesions (44 patients: 23 M, 21 F, mean age 61) and 58 lung nodules (57 patients: 24 M, 33 F, mean age 66). The methodology presented takes into consideration the boundary of the liver and lung during feature extraction in clinical images to ensure that the margin features do not get contaminated by anatomy other than the normal organ surrounding the lesions. For evaluation in these clinical images, the authors created subjective independent reference standards for pairwise margin sharpness similarity in the liver and lung cohorts, and compared rank orderings of similarity obtained using our sharpness feature to those expected from the reference standards using mean normalized discounted cumulative gain (NDCG) over all query images. In addition, the authors compared their proposed feature with two existing techniques for lesion margin characterization using the simulated and clinical datasets. The authors also evaluated the robustness of their features against variations in delineation of the lesion margin by simulating five types of deformations of the lesion margin. Equivalence across deformations was assessed using Schuirmann's paired two one-sided tests. In simulated images, the concordance correlation between measured gradient and actual gradient was 0.994. The mean (standard deviation) NDCG scores for the retrieval of K images, K = 5, 10, and 15, were 84% (8%), 85% (7%), and 85% (7%) for CT images containing liver lesions, and 82% (7%), 84% (6%), and 85% (4%) for CT images containing lung nodules, respectively. The authors' proposed method outperformed the two existing margin characterization methods in average NDCG scores over all K, by 1.5% and 3% in datasets containing liver lesions, and 4.5% and 5% in datasets containing lung nodules. Equivalence testing showed that the authors' feature is more robust across all margin deformations (p < 0.05) than the two existing methods for margin sharpness characterization in both simulated and clinical datasets. The authors have described a new image feature to quantify the margin sharpness of lesions. It has strong correlation with known margin sharpness in simulated images and in clinical CT images containing liver lesions and lung nodules. This image feature has excellent performance for retrieving images with similar margin characteristics, suggesting potential utility, in conjunction with other lesion features, for content-based image retrieval applications.

    View details for DOI 10.1118/1.4739507

    View details for Web of Science ID 000309334500012

    View details for PubMedID 22957608

    View details for PubMedCentralID PMC3432101
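
    The core computation in the entry above is fitting a sigmoid to intensity profiles sampled along lines crossing the lesion boundary: the fitted plateau difference captures the lesion-to-background intensity difference, and the slope parameter captures the sharpness of the transition. A minimal sketch with SciPy on a synthetic one-dimensional profile (parameter names and starting values are illustrative assumptions):

      # Fit a sigmoid to a 1-D intensity profile crossing the lesion boundary.
      import numpy as np
      from scipy.optimize import curve_fit

      def sigmoid(x, inside, outside, center, slope):
          return outside + (inside - outside) / (1.0 + np.exp(-slope * (x - center)))

      def margin_parameters(profile):
          """Return (intensity difference, transition sharpness) for one profile."""
          x = np.arange(len(profile), dtype=float)
          p0 = [profile[0], profile[-1], len(profile) / 2.0,
                1.0 if profile[-1] >= profile[0] else -1.0]
          (inside, outside, center, slope), _ = curve_fit(
              sigmoid, x, profile, p0=p0, maxfev=5000)
          return abs(inside - outside), abs(slope)

      # Synthetic profile: bright lesion interior fading into darker parenchyma.
      x = np.arange(40, dtype=float)
      profile = sigmoid(x, inside=180.0, outside=60.0, center=20.0, slope=-0.8)
      print(margin_parameters(profile))   # roughly (120, 0.8)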

  • Prognostic PET F-18-FDG Uptake Imaging Features Are Associated with Major Oncogenomic Alterations in Patients with Resected Non-Small Cell Lung Cancer CANCER RESEARCH Nair, V. S., Gevaert, O., Davidzon, G., Napel, S., Graves, E. E., Hoang, C. D., Shrager, J. B., Quon, A., Rubin, D. L., Plevritis, S. K. 2012; 72 (15): 3725-3734

    Abstract

    Although 2[18F]fluoro-2-deoxy-d-glucose (FDG) uptake during positron emission tomography (PET) predicts post-surgical outcome in patients with non-small cell lung cancer (NSCLC), the biologic basis for this observation is not fully understood. Here, we analyzed 25 tumors from patients with NSCLCs to identify tumor PET-FDG uptake features associated with gene expression signatures and survival. Fourteen quantitative PET imaging features describing FDG uptake were correlated with gene expression for single genes and coexpressed gene clusters (metagenes). For each FDG uptake feature, an associated metagene signature was derived, and a prognostic model was identified in an external cohort and then tested in a validation cohort of patients with NSCLC. Four of eight single genes associated with FDG uptake (LY6E, RNF149, MCM6, and FAP) were also associated with survival. The most prognostic metagene signature was associated with a multivariate FDG uptake feature [maximum standard uptake value (SUV(max)), SUV(variance), and SUV(PCA2)], each highly associated with survival in the external [HR, 5.87; confidence interval (CI), 2.49-13.8] and validation (HR, 6.12; CI, 1.08-34.8) cohorts, respectively. Cell-cycle, proliferation, death, and self-recognition pathways were altered in this radiogenomic profile. Together, our findings suggest that leveraging tumor genomics with an expanded collection of PET-FDG imaging features may enhance our understanding of FDG uptake as an imaging biomarker beyond its association with glycolysis.

    View details for DOI 10.1158/0008-5472.CAN-11-3943

    View details for PubMedID 22710433

  • Non-Small Cell Lung Cancer: Identifying Prognostic Imaging Biomarkers by Leveraging Public Gene Expression Microarray Data-Methods and Preliminary Results RADIOLOGY Gevaert, O., Xu, J., Hoang, C. D., Leung, A. N., Xu, Y., Quon, A., Rubin, D. L., Napel, S., Plevritis, S. K. 2012; 264 (2): 387-396

    Abstract

    To identify prognostic imaging biomarkers in non-small cell lung cancer (NSCLC) by means of a radiogenomics strategy that integrates gene expression and medical images in patients for whom survival outcomes are not available by leveraging survival data in public gene expression data sets. A radiogenomics strategy for associating image features with clusters of coexpressed genes (metagenes) was defined. First, a radiogenomics correlation map is created for a pairwise association between image features and metagenes. Next, predictive models of metagenes are built in terms of image features by using sparse linear regression. Similarly, predictive models of image features are built in terms of metagenes. Finally, the prognostic significance of the predicted image features is evaluated in a public gene expression data set with survival outcomes. This radiogenomics strategy was applied to a cohort of 26 patients with NSCLC for whom gene expression and 180 image features from computed tomography (CT) and positron emission tomography (PET)/CT were available. There were 243 statistically significant pairwise correlations between image features and metagenes of NSCLC. Metagenes were predicted in terms of image features with an accuracy of 59%-83%. One hundred fourteen of 180 CT image features and the PET standardized uptake value were predicted in terms of metagenes with an accuracy of 65%-86%. When the predicted image features were mapped to a public gene expression data set with survival outcomes, tumor size, edge shape, and sharpness ranked highest for prognostic significance. This radiogenomics strategy for identifying imaging biomarkers may enable a more rapid evaluation of novel imaging modalities, thereby accelerating their translation to personalized medicine.

    View details for DOI 10.1148/radiol.12111607

    View details for PubMedID 22723499
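
    The sparse regression step described above (predicting metagene expression from image features) can be sketched with scikit-learn's Lasso; the data here are synthetic stand-ins for the 26-patient, 180-feature cohort, and the penalty strength is an arbitrary assumption:

      # Predict a metagene score from image features with L1-penalized regression.
      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(1)
      n_patients, n_features = 26, 180
      X = rng.normal(size=(n_patients, n_features))      # stand-in CT/PET features
      metagene = X[:, 3] - 0.8 * X[:, 17] + rng.normal(scale=0.3, size=n_patients)

      model = Lasso(alpha=0.1)
      held_out = cross_val_predict(model, X, metagene, cv=LeaveOneOut())
      model.fit(X, metagene)

      selected = np.flatnonzero(model.coef_)   # features kept by the L1 penalty
      print("selected image features:", selected)
      print("held-out correlation:", round(float(np.corrcoef(held_out, metagene)[0, 1]), 2))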

  • Informatics in Radiology An Open-Source and Open-Access Cancer Biomedical Informatics Grid Annotation and Image Markup Template Builder RADIOGRAPHICS Mongkolwat, P., Channin, D. S., Kleper, V., Rubin, D. L. 2012; 32 (4): 1223-?

    Abstract

    In a routine clinical environment or clinical trial, a case report form or structured reporting template can be used to quickly generate uniform and consistent reports. Annotation and image markup (AIM), a project supported by the National Cancer Institute's cancer biomedical informatics grid, can be used to collect information for a case report form or structured reporting template. AIM is designed to store, in a single information source, (a) the description of pixel data with use of markups or graphical drawings placed on the image, (b) calculation results (which may or may not be directly related to the markups), and (c) supplemental information. To facilitate the creation of AIM annotations with data entry templates, an AIM template schema and an open-source template creation application were developed to assist clinicians, image researchers, and designers of clinical trials to quickly create a set of data collection items, thereby ultimately making image information more readily accessible.

    View details for DOI 10.1148/rg.324115080

    View details for Web of Science ID 000306285600024

    View details for PubMedID 22556315

    View details for PubMedCentralID PMC3393884

  • Radiogenomic analysis indicates MR images are potentially predictive of EGFR mutation status in glioblastoma multiforme Gevaert, O., Mitchell, L., Xu, J., Yu, C., Rubin, D., Zaharchuk, G., Napel, S., Plevritis, S. AMER ASSOC CANCER RESEARCH. 2012
  • A Comprehensive Descriptor of Shape: Method and Application to Content-Based Retrieval of Similar Appearing Lesions in Medical Images JOURNAL OF DIGITAL IMAGING Xu, J., Faruque, J., Beaulieu, C. F., Rubin, D., Napel, S. 2012; 25 (1): 121-128

    Abstract

    We have developed a method to quantify the shape of liver lesions in CT images and to evaluate its performance for retrieval of images with similarly-shaped lesions. We employed a machine learning method to combine several shape descriptors and defined similarity measures for a pair of shapes as a weighted combination of distances calculated based on each feature. We created a dataset of 144 simulated shapes and established several reference standards for similarity and computed the optimal weights so that the retrieval result agrees best with the reference standard. Then we evaluated our method on a clinical database consisting of 79 portal-venous-phase CT liver images, where we derived a reference standard of similarity from radiologists' visual evaluation. Normalized Discounted Cumulative Gain (NDCG) was calculated to compare the retrieval ordering produced by our method with the expected ordering based on the reference standard. For the simulated lesions, the mean NDCG values ranged from 91% to 100%, indicating that our methods for combining features were very accurate in representing true similarity. For the clinical images, the mean NDCG values were still around 90%, suggesting a strong correlation between the computed similarity and the independent similarity reference derived from the radiologists.

    View details for DOI 10.1007/s10278-011-9388-8

    View details for Web of Science ID 000304113400018

    View details for PubMedID 21547518

    View details for PubMedCentralID PMC3264721
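
    Two ingredients of the method above lend themselves to a compact sketch: a similarity measure formed as a weighted combination of per-descriptor distances, and NDCG for scoring how well a retrieved ranking agrees with the reference standard. Both functions below are generic illustrations (toy data, linear-gain NDCG), not the paper's implementation:

      # Weighted combination of per-descriptor distances, and NDCG scoring.
      import numpy as np

      def combined_distance(desc_a, desc_b, weights):
          """Weighted sum of per-descriptor Euclidean distances between two lesions."""
          return sum(w * np.linalg.norm(np.asarray(a) - np.asarray(b))
                     for w, a, b in zip(weights, desc_a, desc_b))

      def ndcg(relevances_in_retrieved_order, k=None):
          """Normalized discounted cumulative gain for one query (linear gain)."""
          rel = np.asarray(relevances_in_retrieved_order, dtype=float)[:k]
          discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
          dcg = float((rel * discounts).sum())
          ideal = float((np.sort(rel)[::-1] * discounts).sum())
          return dcg / ideal if ideal > 0 else 0.0

      # Reference relevance of the items a system happened to rank 1st..5th:
      print(round(ndcg([3, 2, 3, 0, 1]), 2))   # ~0.97: close to the ideal ordering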

  • Integration of Imaging Signs into RadLex JOURNAL OF DIGITAL IMAGING Shore, M. W., Rubin, D. L., Kahn, C. E. 2012; 25 (1): 50-55

    Abstract

    Imaging signs form an important part of the language of radiology, but are not represented in established lexicons. We sought to incorporate imaging signs into RSNA's RadLex® ontology of radiology terms. Names of imaging signs and their definitions were culled from books, journal articles, dictionaries, and biomedical web sites. Imaging signs were added into RadLex as subclasses of the term "imaging sign," which was defined in RadLex as a subclass of "imaging observation." A total of 743 unique imaging signs were added to RadLex with their 392 synonyms to yield a total of 1,135 new terms. All included definitions and related RadLex terms, including imaging modality, anatomy, and disorder, when appropriate. The information will allow RadLex users to identify imaging signs by modality (e.g., ultrasound signs) and to find all signs related to specific pathophysiology. The addition of imaging signs to RadLex augments its use to index the radiology literature, create and interpret clinical radiology reports, and retrieve relevant cases and images.

    View details for DOI 10.1007/s10278-011-9386-x

    View details for Web of Science ID 000304113400009

    View details for PubMedID 21494902

    View details for PubMedCentralID PMC3264717

  • Automatic annotation of radiological observations in liver CT images. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Gimenez, F., Xu, J., Liu, Y., Liu, T., Beaulieu, C., Rubin, D., Napel, S. 2012; 2012: 257-263

    Abstract

    We aim to predict radiological observations using computationally-derived imaging features extracted from computed tomography (CT) images. We created a dataset of 79 CT images containing liver lesions identified and annotated by a radiologist using a controlled vocabulary of 76 semantic terms. Computationally-derived features were extracted describing intensity, texture, shape, and edge sharpness. Traditional logistic regression was compared to L1-regularized logistic regression (LASSO) in order to predict the radiological observations using computational features. The approach was evaluated by leave-one-out cross-validation. Informative radiological observations such as lesion enhancement, hypervascular attenuation, and homogeneous retention were predicted well by computational features. By exploiting relationships between computational and semantic features, this approach could lead to more accurate and efficient radiology reporting.

    View details for PubMedID 23304295
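
    A minimal sketch of the modeling step described above: predict a binary radiological observation from computational image features with L1-regularized logistic regression, evaluated by leave-one-out cross-validation. Data, feature counts, and the regularization strength are synthetic assumptions, not the study's dataset:

      # L1-regularized logistic regression with leave-one-out evaluation.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import LeaveOneOut, cross_val_predict

      rng = np.random.default_rng(7)
      X = rng.normal(size=(79, 30))    # stand-in intensity/texture/shape/edge features
      y = (X[:, 2] - X[:, 9] + rng.normal(scale=0.8, size=79) > 0).astype(int)

      lasso_logit = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
      pred = cross_val_predict(lasso_logit, X, y, cv=LeaveOneOut())
      print("leave-one-out accuracy:", round(accuracy_score(y, pred), 2))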

  • Using the Semantic Web and Web Apps to Connect Radiologists and Oncologists 21st IEEE International Workshop on Enabling Technologies - Infrastructure for Collaborative Enterprises (WETICE) Serique, K. A., Snyder, A., Willrett, D., Rubin, D. L., Moreira, D. A. IEEE. 2012: 480–485
  • Automated temporal tracking and segmentation of lymphoma on serial CT examinations MEDICAL PHYSICS Xu, J., Greenspan, H., Napel, S., Rubin, D. L. 2011; 38 (11): 5879-5886

    Abstract

    It is challenging to reproducibly measure and compare cancer lesions on numerous follow-up studies; the process is time-consuming and error-prone. In this paper, we show a method to automatically and reproducibly identify and segment abnormal lymph nodes in serial computed tomography (CT) exams. Our method leverages initial identification of enlarged (abnormal) lymph nodes in the baseline scan. We then identify an approximate region for the node in the follow-up scans using nonrigid image registration. The baseline scan is also used to locate regions of normal, non-nodal tissue surrounding the lymph node and to map them onto the follow-up scans, in order to reduce the search space to locate the lymph node on the follow-up scans. Adaptive region-growing and clustering algorithms are then used to obtain the final contours for segmentation. We applied our method to 24 distinct enlarged lymph nodes at multiple time points from 14 patients. The scan at the earlier time point was used as the baseline scan to be used in evaluating the follow-up scan, resulting in 70 total test cases (e.g., a series of scans obtained at 4 time points results in 3 test cases). For each of the 70 cases, a "reference standard" was obtained by manual segmentation by a radiologist. Assessment according to response evaluation criteria in solid tumors (RECIST) using our method agreed with RECIST assessments made using the reference standard segmentations in all test cases; segmentation accuracy was further evaluated by calculating the node overlap ratio and Hausdorff distance between the computer- and radiologist-generated contours. Compared to the reference standard, our method made the correct RECIST assessment for all 70 cases. The average overlap ratio was 80.7 ± 9.7% s.d., and the average Hausdorff distance was 3.2 ± 1.8 mm s.d. The concordance correlation between automated and manual segmentations was 0.978 (95% confidence interval 0.962, 0.984). The 100% agreement in our sample between our method and the standard with regard to RECIST classification suggests that the true disagreement rate is no more than 6%. Our automated lymph node segmentation method achieves excellent overall segmentation performance and provides equivalent RECIST assessment. It potentially will be useful to streamline and improve cancer lesion measurement and tracking and to improve assessment of cancer treatment response.

    View details for DOI 10.1118/1.3643027

    View details for Web of Science ID 000296534000008

    View details for PubMedID 22047352

    View details for PubMedCentralID PMC3210189
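
    The agreement metrics used above (overlap ratio between computer and radiologist segmentations, and the Hausdorff distance between their contours) are easy to state explicitly. A small sketch with NumPy/SciPy on toy masks; the intersection-over-union definition of overlap is an assumption, since the entry does not restate the paper's exact formula:

      # Overlap ratio between binary masks and Hausdorff distance between contours.
      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def overlap_ratio(mask_auto, mask_manual):
          """Intersection over union of two binary segmentation masks."""
          inter = np.logical_and(mask_auto, mask_manual).sum()
          union = np.logical_or(mask_auto, mask_manual).sum()
          return inter / union if union else 1.0

      def hausdorff_mm(points_auto, points_manual, pixel_spacing_mm=1.0):
          """Symmetric Hausdorff distance between two (N, 2) point sets, in mm."""
          d = max(directed_hausdorff(points_auto, points_manual)[0],
                  directed_hausdorff(points_manual, points_auto)[0])
          return d * pixel_spacing_mm

      auto = np.zeros((20, 20), dtype=bool)
      manual = np.zeros((20, 20), dtype=bool)
      auto[5:15, 5:15] = True
      manual[6:16, 6:16] = True
      print(round(overlap_ratio(auto, manual), 2))                              # 0.68
      print(round(hausdorff_mm(np.argwhere(auto), np.argwhere(manual), 0.7), 2))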

  • Informatics in Radiology Measuring and Improving Quality in Radiology: Meeting the Challenge with Informatics RADIOGRAPHICS Rubin, D. L. 2011; 31 (6): 1511-1527

    Abstract

    Quality is becoming a critical issue for radiology. Measuring and improving quality is essential not only to ensure optimum effectiveness of care and comply with increasing regulatory requirements, but also to combat current trends leading to commoditization of radiology services. A key challenge to implementing quality improvement programs is to develop methods to collect knowledge related to quality care and to deliver that knowledge to practitioners at the point of care. There are many dimensions to quality in radiology that need to be measured, monitored, and improved, including examination appropriateness, procedure protocol, accuracy of interpretation, communication of imaging results, and measuring and monitoring performance improvement in quality, safety, and efficiency. Informatics provides the key technologies that can enable radiologists to measure and improve quality. However, few institutions recognize the opportunities that informatics methods provide to improve safety and quality. The information technology infrastructure in most hospitals is limited, and they have suboptimal adoption of informatics techniques. Institutions can tackle the challenges of assessing and improving quality in radiology by means of informatics.

    View details for DOI 10.1148/rg.316105207

    View details for Web of Science ID 000295985200003

    View details for PubMedID 21997979

  • Managing Biomedical Image Metadata for Search and Retrieval of Similar Images JOURNAL OF DIGITAL IMAGING Korenblum, D., Rubin, D., Napel, S., Rodriguez, C., Beaulieu, C. 2011; 24 (4): 739-748

    Abstract

    Radiology images are generally disconnected from the metadata describing their contents, such as imaging observations ("semantic" metadata), which are usually described in text reports that are not directly linked to the images. We developed a system, the Biomedical Image Metadata Manager (BIMM), to (1) address the problem of managing biomedical image metadata and (2) facilitate the retrieval of similar images using semantic feature metadata. Our approach allows radiologists, researchers, and students to take advantage of the vast and growing repositories of medical image data by explicitly linking images to their associated metadata in a relational database that is globally accessible through a Web application. BIMM receives input in the form of standards-based metadata files via a Web service and parses and stores the metadata in a relational database, allowing efficient data query and maintenance capabilities. Upon querying BIMM for images, 2D regions of interest (ROIs) stored as metadata are automatically rendered onto preview images included in search results. The system's "match observations" function retrieves images with similar ROIs based on specific semantic features describing imaging observation characteristics (IOCs). We demonstrate that the system, using IOCs alone, can accurately retrieve images with diagnoses matching the query images, and we evaluate its performance on a set of annotated liver lesion images. BIMM has several potential applications, e.g., computer-aided detection and diagnosis, content-based image retrieval, automating medical analysis protocols, and gathering population statistics like disease prevalences. The system provides a framework for decision support systems, potentially improving their diagnostic accuracy and selection of appropriate therapies.

    View details for DOI 10.1007/s10278-010-9328-z

    View details for Web of Science ID 000292888700020

    View details for PubMedID 20844917

    View details for PubMedCentralID PMC3138941

  • Current and Future Trends in Imaging Informatics for Oncology CANCER JOURNAL Levy, M. A., Rubin, D. L. 2011; 17 (4): 203-210

    Abstract

    Clinical imaging plays an essential role in cancer care and research for diagnosis, prognosis, and treatment response assessment. Major advances in imaging informatics to support medical imaging have been made during the last several decades. More recent informatics advances focus on the special needs of oncologic imaging, yet gaps still remain. We review the current state, limitations, and future trends in imaging informatics for oncology care including clinical and clinical research systems. We review information systems to support cancer clinical workflows including oncologist ordering of radiology studies, radiologist review and reporting of image findings, and oncologist review and integration of imaging information for clinical decision making. We discuss informatics approaches to oncologic imaging including, but not limited to, controlled terminologies, image annotation, and image-processing algorithms. With the ongoing development of novel imaging modalities and imaging biomarkers, we expect these systems will continue to evolve and mature.

    View details for DOI 10.1097/PPO.0b013e3182272f04

    View details for Web of Science ID 000293265100003

    View details for PubMedID 21799326

  • A Bayesian Network for Differentiating Benign From Malignant Thyroid Nodules Using Sonographic and Demographic Features AMERICAN JOURNAL OF ROENTGENOLOGY Liu, Y. I., Kamaya, A., Desser, T. S., Rubin, D. L. 2011; 196 (5): W598-W605

    Abstract

    The objective of our study was to create a Bayesian network (BN) that incorporates a multitude of imaging features and patient demographic characteristics to guide radiologists in assessing the likelihood of malignancy in suspicious-appearing thyroid nodules. We built a BN to combine multiple indicators of the malignant potential of thyroid nodules including both imaging and demographic factors. The imaging features and conditional probabilities relating those features to diagnoses were compiled from an extensive literature review. To evaluate our network, we randomly selected 54 benign and 45 malignant nodules from 93 adult patients who underwent ultrasound-guided biopsy. The final diagnosis in each case was pathologically established. We compared the performance of our network with that of two radiologists who independently evaluated each case on a 5-point scale of suspicion for malignancy. Probability estimates of malignancy from the BN and radiologists were compared using receiver operating characteristic (ROC) analysis. The network performed comparably to the two expert radiologists. Using each radiologist's assessment of the imaging features as input to the network, the differences between the area under the ROC curve (A(z)) for the BN and for the radiologists were -0.03 (BN vs radiologist 1, 0.85 vs 0.88) and -0.01 (BN vs radiologist 2, 0.76 vs 0.77). We created a BN that incorporates a range of sonographic and demographic features and provides a probability about whether a thyroid nodule is benign or malignant. The BN distinguished between benign and malignant thyroid nodules as well as the expert radiologists did.

    View details for DOI 10.2214/AJR.09.4037

    View details for PubMedID 21512051
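
    A minimal naive-Bayes-style illustration of how sonographic findings can be combined into a posterior probability of malignancy, in the spirit of the network described above. The prior, the feature set, and all conditional probabilities below are made-up placeholders, not the published network or its literature-derived parameters:

      # Combine sonographic features into a probability of malignancy (toy numbers).
      PRIOR_MALIGNANT = 0.05

      # P(feature present | malignant), P(feature present | benign) -- placeholders.
      LIKELIHOODS = {
          "microcalcifications": (0.60, 0.10),
          "taller_than_wide":    (0.40, 0.05),
          "smooth_margins":      (0.15, 0.70),
      }

      def prob_malignant(observed):
          p_mal, p_ben = PRIOR_MALIGNANT, 1.0 - PRIOR_MALIGNANT
          for feature, present in observed.items():
              like_mal, like_ben = LIKELIHOODS[feature]
              p_mal *= like_mal if present else (1.0 - like_mal)
              p_ben *= like_ben if present else (1.0 - like_ben)
          return p_mal / (p_mal + p_ben)

      print(round(prob_malignant({"microcalcifications": True,
                                  "taller_than_wide": False,
                                  "smooth_margins": False}), 2))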

  • A practical method for transforming free-text eligibility criteria into computable criteria JOURNAL OF BIOMEDICAL INFORMATICS Tu, S. W., Peleg, M., Carini, S., Bobak, M., Ross, J., Rubin, D., Sim, I. 2011; 44 (2): 239-250

    Abstract

    Formalizing eligibility criteria in a computer-interpretable language would facilitate eligibility determination for study subjects and the identification of studies on similar patient populations. Because such formalization is extremely labor intensive, we transform the problem from one of fully capturing the semantics of criteria directly in a formal expression language to one of annotating free-text criteria in a format called ERGO annotation. The annotation can be done manually, or it can be partially automated using natural-language processing techniques. We evaluated our approach in three ways. First, we assessed the extent to which ERGO annotations capture the semantics of 1000 eligibility criteria randomly drawn from ClinicalTrials.gov. Second, we demonstrated the practicality of the annotation process in a feasibility study. Finally, we demonstrate the computability of ERGO annotation by using it to (1) structure a library of eligibility criteria, (2) search for studies enrolling specified study populations, and (3) screen patients for potential eligibility for a study. We therefore demonstrate a new and practical method for incrementally capturing the semantics of free-text eligibility criteria into computable form.

    View details for DOI 10.1016/j.jbi.2010.09.007

    View details for Web of Science ID 000289030100006

    View details for PubMedID 20851207

    View details for PubMedCentralID PMC3129371

  • Content-Based Image Retrieval in Radiology: Current Status and Future Directions JOURNAL OF DIGITAL IMAGING Akgul, C. B., Rubin, D. L., Napel, S., Beaulieu, C. F., Greenspan, H., Acar, B. 2011; 24 (2): 208-222

    Abstract

    Diagnostic radiology requires accurate interpretation of complex signals in medical images. Content-based image retrieval (CBIR) techniques could be valuable to radiologists in assessing medical images by identifying similar images in large archives that could assist with decision support. Many advances have occurred in CBIR, and a variety of systems have appeared in nonmedical domains; however, permeation of these methods into radiology has been limited. Our goal in this review is to survey CBIR methods and systems from the perspective of application to radiology and to identify approaches developed in nonmedical applications that could be translated to radiology. Radiology images pose specific challenges compared with images in the consumer domain; they contain varied, rich, and often subtle features that need to be recognized in assessing image similarity. Radiology images also provide rich opportunities for CBIR: rich metadata about image semantics are provided by radiologists, and this information is not yet being used to its fullest advantage in CBIR systems. By integrating pixel-based and metadata-based image feature analysis, substantial advances of CBIR in medicine could ensue, with CBIR systems becoming an important tool in radiology practice.

    View details for DOI 10.1007/s10278-010-9290-9

    View details for Web of Science ID 000288394700007

    View details for PubMedID 20376525

    View details for PubMedCentralID PMC3056970

  • Evaluation of Negation and Uncertainty Detection and its Impact on Precision and Recall in Search JOURNAL OF DIGITAL IMAGING Wu, A. S., Do, B. H., Kim, J., Rubin, D. L. 2011; 24 (2): 234-242

    Abstract

    Radiology reports contain information that can be mined using a search engine for teaching, research, and quality assurance purposes. Current search engines look for exact matches to the search term, but they do not differentiate between reports in which the search term appears in a positive context (i.e., being present) from those in which the search term appears in the context of negation and uncertainty. We describe RadReportMiner, a context-aware search engine, and compare its retrieval performance with a generic search engine, Google Desktop. We created a corpus of 464 radiology reports which described at least one of five findings (appendicitis, hydronephrosis, fracture, optic neuritis, and pneumonia). Each report was classified by a radiologist as positive (finding described to be present) or negative (finding described to be absent or uncertain). The same reports were then classified by RadReportMiner and Google Desktop. RadReportMiner achieved a higher precision (81%), compared with Google Desktop (27%; p < 0.0001). RadReportMiner had a lower recall (72%) compared with Google Desktop (87%; p = 0.006). We conclude that adding negation and uncertainty identification to a word-based radiology report search engine improves the precision of search results over a search engine that does not take this information into account. Our approach may be useful to adopt into current report retrieval systems to help radiologists to more accurately search for radiology reports.

    View details for DOI 10.1007/s10278-009-9250-4

    View details for Web of Science ID 000288394700009

    View details for PubMedID 19902298

    View details for PubMedCentralID PMC3056979
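
    The context-aware behavior described above can be approximated with a NegEx-style heuristic: count a report as a positive match only when the search term is not preceded by a nearby negation or uncertainty cue, then score retrieval with precision and recall. The cue list, window size, and example reports below are illustrative assumptions:

      # Negation/uncertainty-aware matching with precision and recall on toy reports.
      import re

      NEGATION_CUES = r"\b(no|without|negative for|rule out|possible|cannot exclude)\b"

      def positive_mention(report, term):
          for match in re.finditer(re.escape(term), report, re.IGNORECASE):
              window = report[max(0, match.start() - 40):match.start()]
              if not re.search(NEGATION_CUES, window, re.IGNORECASE):
                  return True      # at least one non-negated, non-hedged mention
          return False

      reports = {
          "r1": "Findings consistent with acute appendicitis.",
          "r2": "No evidence of appendicitis.",
          "r3": "Possible early appendicitis, clinical correlation advised.",
      }
      truly_positive = {"r1"}
      retrieved = {rid for rid, text in reports.items()
                   if positive_mention(text, "appendicitis")}

      precision = len(retrieved & truly_positive) / len(retrieved) if retrieved else 0.0
      recall = len(retrieved & truly_positive) / len(truly_positive)
      print(retrieved, round(precision, 2), round(recall, 2))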

  • Ontology-Assisted Analysis of Web Queries to Determine the Knowledge Radiologists Seek JOURNAL OF DIGITAL IMAGING Rubin, D. L., Flanders, A., Kim, W., Siddiqui, K. M., Kahn, C. E. 2011; 24 (1): 160-164

    Abstract

    Radiologists frequently search the Web to find information they need to improve their practice, and knowing the types of information they seek could be useful for evaluating Web resources. Our goal was to develop an automated method to categorize unstructured user queries using a controlled terminology and to infer the type of information users seek. We obtained the query logs from two commonly used Web resources for radiology. We created a computer algorithm to associate RadLex-controlled vocabulary terms with the user queries. Using the RadLex hierarchy, we determined the high-level category associated with each RadLex term to infer the type of information users were seeking. To test the hypothesis that the term category assignments to user queries are non-random, we compared the distributions of the term categories in RadLex with those in user queries using the chi-square test. Of the 29,669 unique search terms found in user queries, 15,445 (52%) could be mapped to one or more RadLex terms by our algorithm. Each query contained an average of one to two RadLex terms, and the dominant categories of RadLex terms in user queries were diseases and anatomy. While the same types of RadLex terms were predominant in both RadLex itself and user queries, the distributions of types of terms in user queries and in RadLex were significantly different (p < 0.0001). We conclude that RadLex can enable processing and categorization of user queries of Web resources and enable understanding the types of information users seek from radiology knowledge resources on the Web.

    View details for DOI 10.1007/s10278-010-9289-2

    View details for Web of Science ID 000286469600018

    View details for PubMedID 20354755

    View details for PubMedCentralID PMC3046796
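
    The analysis above has two computational steps: mapping free-text query terms to controlled-vocabulary terms with known top-level categories, and comparing the resulting category distribution against a reference distribution with a chi-square test. A toy sketch (the lookup table and expected proportions are invented; RadLex itself is not loaded here):

      # Map query words to categories and test the category distribution.
      from collections import Counter
      from scipy.stats import chisquare

      TERM_CATEGORY = {                 # tiny stand-in for term -> top-level class
          "pneumothorax": "disease", "glioblastoma": "disease",
          "femur": "anatomy", "liver": "anatomy",
          "mri": "imaging modality",
      }

      queries = ["liver mri protocol", "glioblastoma enhancement", "femur fracture",
                 "pneumothorax size", "liver lesion"]

      counts = Counter(TERM_CATEGORY[w] for q in queries for w in q.split()
                       if w in TERM_CATEGORY)
      categories = sorted(counts)
      observed = [counts[c] for c in categories]

      # Hypothetical expected proportions of these categories in the vocabulary.
      expected_share = {"anatomy": 0.4, "disease": 0.4, "imaging modality": 0.2}
      expected = [expected_share[c] * sum(observed) for c in categories]

      print(categories, observed, chisquare(observed, f_exp=expected))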

  • The Biomedical Resource Ontology (BRO) to enable resource discovery in clinical and translational research JOURNAL OF BIOMEDICAL INFORMATICS Tenenbaum, J. D., Whetzel, P. L., Anderson, K., Borromeo, C. D., Dinov, I. D., Gabriel, D., Kirschner, B., Mirel, B., Morris, T., Noy, N., Nyulas, C., Rubenson, D., Saxman, P. R., Singh, H., Whelan, N., Wright, Z., Athey, B. D., Becich, M. J., Ginsburg, G. S., Musen, M. A., Smith, K. A., Tarantal, A. F., Rubin, D. L., Lyster, P. 2011; 44 (1): 137-145

    Abstract

    The biomedical research community relies on a diverse set of resources, both within their own institutions and at other research centers. In addition, an increasing number of shared electronic resources have been developed. Without effective means to locate and query these resources, it is challenging, if not impossible, for investigators to be aware of the myriad resources available, or to effectively perform resource discovery when the need arises. In this paper, we describe the development and use of the Biomedical Resource Ontology (BRO) to enable semantic annotation and discovery of biomedical resources. We also describe the Resource Discovery System (RDS) which is a federated, inter-institutional pilot project that uses the BRO to facilitate resource discovery on the Internet. Through the RDS framework and its associated Biositemaps infrastructure, the BRO facilitates semantic search and discovery of biomedical resources, breaking down barriers and streamlining scientific research that will improve human health.

    View details for DOI 10.1016/j.jbi.2010.10.003

    View details for Web of Science ID 000288289900015

    View details for PubMedID 20955817

    View details for PubMedCentralID PMC3050430

  • Informatics in Radiology An Information Model of the DICOM Standard RADIOGRAPHICS Kahn, C. E., Langlotz, C. P., Channin, D. S., Rubin, D. L. 2011; 31 (1): 295-U356

    Abstract

    The Digital Imaging and Communications in Medicine (DICOM) Standard is a key foundational technology for radiology. However, its complexity creates challenges for information system developers because the current DICOM specification requires human interpretation and is subject to nonstandard implementation. To address this problem, a formally sound and computationally accessible information model of the DICOM Standard was created. The DICOM Standard was modeled as an ontology, a machine-accessible and human-interpretable representation that may be viewed and manipulated by information-modeling tools. The DICOM Ontology includes a real-world model and a DICOM entity model. The real-world model describes patients, studies, images, and other features of medical imaging. The DICOM entity model describes connections between real-world entities and the classes that model the corresponding DICOM information entities. The DICOM Ontology was created to support the Cancer Biomedical Informatics Grid (caBIG) initiative, and it may be extended to encompass the entire DICOM Standard and serve as a foundation of medical imaging systems for research and patient care.

    View details for DOI 10.1148/rg.311105085

    View details for Web of Science ID 000286608900024

    View details for PubMedID 20980665

    View details for PubMedCentralID PMC3399709

  • Informatics in Radiology RADTF: A Semantic Search-enabled, Natural Language Processor-generated Radiology Teaching File RADIOGRAPHICS Do, B. H., Wu, A., Biswal, S., Kamaya, A., Rubin, D. L. 2010; 30 (7): 2039-2048

    Abstract

    Storing and retrieving radiology cases is an important activity for education and clinical research, but this process can be time-consuming. In the process of structuring reports and images into organized teaching files, incidental pathologic conditions not pertinent to the primary teaching point can be omitted, as when a user saves images of an aortic dissection case but disregards the incidental osteoid osteoma. An alternate strategy for identifying teaching cases is text search of reports in radiology information systems (RIS), but retrieved reports are unstructured, teaching-related content is not highlighted, and patient identifying information is not removed. Furthermore, searching unstructured reports requires sophisticated retrieval methods to achieve useful results. An open-source, RadLex®-compatible teaching file solution called RADTF, which uses natural language processing (NLP) methods to process radiology reports, was developed to create a searchable teaching resource from the RIS and the picture archiving and communication system (PACS). The NLP system extracts and de-identifies teaching-relevant statements from full reports to generate a stand-alone database, thus converting existing RIS archives into an on-demand source of teaching material. Using RADTF, the authors generated a semantic search-enabled, Web-based radiology archive containing over 700,000 cases with millions of images. RADTF combines a compact representation of the teaching-relevant content in radiology reports and a versatile search engine with the scale of the entire RIS-PACS collection of case material.

    View details for DOI 10.1148/rg.307105083

    View details for Web of Science ID 000284094200021

    View details for PubMedID 20801868
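
    One concrete step in the NLP pipeline described above is scrubbing patient identifiers from report text before it is indexed for teaching. A toy de-identification sketch; these few regular expressions are illustrative only and nowhere near sufficient for real HIPAA de-identification, nor are they the RADTF implementation:

      # Toy scrubbing of obvious identifiers from report text before indexing.
      import re

      DEID_PATTERNS = [
          (r"\b\d{1,2}/\d{1,2}/\d{2,4}\b", "[DATE]"),               # dates
          (r"\bMRN[:#]?\s*\d+\b", "[MRN]"),                          # record numbers
          (r"\b(Dr\.|Mr\.|Mrs\.|Ms\.)\s+[A-Z][a-z]+\b", "[NAME]"),   # titled names
      ]

      def deidentify(report_text):
          for pattern, token in DEID_PATTERNS:
              report_text = re.sub(pattern, token, report_text)
          return report_text

      print(deidentify("Study of 3/14/2009 for Mr. Smith, MRN: 123456: "
                       "classic osteoid osteoma of the femur."))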

  • Automated Retrieval of CT Images of Liver Lesions on the Basis of Image Similarity: Method and Preliminary Results RADIOLOGY Napel, S. A., Beaulieu, C. F., Rodriguez, C., Cui, J., Xu, J., Gupta, A., Korenblum, D., Greenspan, H., Ma, Y., Rubin, D. L. 2010; 256 (1): 243-252

    Abstract

    To develop a system to facilitate the retrieval of radiologic images that contain similar-appearing lesions and to perform a preliminary evaluation of this system with a database of computed tomographic (CT) images of the liver and an external standard of image similarity. Institutional review board approval was obtained for retrospective analysis of deidentified patient images. Thereafter, 30 portal venous phase CT images of the liver exhibiting one of three types of liver lesions (13 cysts, seven hemangiomas, 10 metastases) were selected. A radiologist used a controlled lexicon and a tool developed for complete and standardized description of lesions to identify and annotate each lesion with semantic features. In addition, this software automatically computed image features on the basis of image texture and boundary sharpness. Semantic and computer-generated features were weighted and combined into a feature vector representing each image. An independent reference standard was created for pairwise image similarity. This was used in a leave-one-out cross-validation to train weights that optimized the rankings of images in the database in terms of similarity to query images. Performance was evaluated by using precision-recall curves and normalized discounted cumulative gain (NDCG), a common measure of the usefulness of information retrieval. When used individually, groups of semantic, texture, and boundary features resulted in various levels of performance in retrieving relevant lesions. However, combining all features produced the best overall results. Mean precision was greater than 90% at all values of recall, and mean, best-case, and worst-case retrieval accuracy was greater than 95%, 100%, and greater than 78%, respectively, with NDCG. Preliminary assessment of this approach shows excellent retrieval results for three types of liver lesions visible on portal venous CT images, warranting continued development and validation in a larger and more comprehensive database.

    View details for DOI 10.1148/radiol.10091694

    View details for Web of Science ID 000279106900029

    View details for PubMedID 20505065

    View details for PubMedCentralID PMC2897688

  • Learning a Bayesian Classifier for Thyroid Nodule Evaluation 110th Annual Meeting of the American-Roentgen-Ray-Society Liu, Y., Kamaya, A., Desser, T., Rubin, D. AMER ROENTGEN RAY SOC. 2010
  • A Systemic Search for Patterns for Thyroid Nodule Evaluation Using a Bayesian Classifier 110th Annual Meeting of the American-Roentgen-Ray-Society Liu, Y., Kamaya, A., Desser, T., Rubin, D. AMER ROENTGEN RAY SOC. 2010
  • The caBIG™ Annotation and Image Markup Project JOURNAL OF DIGITAL IMAGING Channin, D. S., Mongkolwat, P., Kleper, V., Sepukar, K., Rubin, D. L. 2010; 23 (2): 217-225

    Abstract

    Image annotation and markup are at the core of medical interpretation in both the clinical and the research setting. Digital medical images are managed with the DICOM standard format. While DICOM contains a large amount of meta-data about whom, where, and how the image was acquired, DICOM says little about the content or meaning of the pixel data. An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. An image markup is the graphical symbols placed over the image to depict an annotation. While DICOM is the standard for medical image acquisition, manipulation, transmission, storage, and display, there are no standards for image annotation and markup. Many systems expect annotation to be reported verbally, while markups are stored in graphical overlays or proprietary formats. This makes it difficult to extract and compute with both of them. The goal of the Annotation and Image Markup (AIM) project is to develop a mechanism for modeling, capturing, and serializing image annotation and markup data that can be adopted as a standard by the medical imaging community. The AIM project produces both human- and machine-readable artifacts. This paper describes the AIM information model, schemas, software libraries, and tools so as to prepare researchers and developers for their use of AIM.
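
    To give a concrete flavor of a serialized annotation, the sketch below builds a minimal XML record for an image annotation; the element and attribute names are hypothetical placeholders chosen for illustration and do not reproduce the actual AIM schema.

      import xml.etree.ElementTree as ET

      # Hypothetical, simplified structure; the published AIM schema is richer and standardized.
      annotation = ET.Element("ImageAnnotation", name="Liver lesion 1", dateTime="2010-01-15T10:30:00")
      ET.SubElement(annotation, "AnatomicEntity", codeMeaning="Liver", codingScheme="RadLex")
      ET.SubElement(annotation, "ImagingObservation", codeMeaning="Hemangioma", codingScheme="RadLex")
      shape = ET.SubElement(annotation, "GeometricShape", shapeType="Circle")
      ET.SubElement(shape, "SpatialCoordinate", x="211.5", y="98.0", referencedFrameNumber="1")

      print(ET.tostring(annotation, encoding="unicode"))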

    View details for DOI 10.1007/s10278-009-9193-9

    View details for Web of Science ID 000275551400014

    View details for PubMedID 19294468

    View details for PubMedCentralID PMC2837161

  • Imaging informatics: toward capturing and processing semantic information in radiology images. Yearbook of medical informatics Rubin, D. L., Napel, S. 2010: 34-42

    Abstract

    To identify challenges and opportunities in imaging informatics that can lead to the use of images for discovery, and that can potentially improve the diagnostic accuracy of imaging professionals. Recent articles on imaging informatics and related articles from PubMed were reviewed and analyzed. Some new developments and challenges that recent research in imaging informatics will meet are identified and discussed. While much literature continues to be devoted to traditional imaging informatics topics of image processing, visualization, and computerized detection, three new trends are emerging: (1) development of ontologies to describe radiology reports and images, (2) structured reporting and image annotation methods to make image semantics explicit and machine-accessible, and (3) applications that use semantic image information for decision support to improve radiologist interpretation performance. The informatics methods being developed have similarities and synergies with recent work in the biomedical informatics community that leverage large high-throughput data sets, and future research in imaging informatics will build on these advances to enable discovery by mining large image databases. Imaging informatics is beginning to develop and apply knowledge representation and analysis methods to image datasets. This type of work, already commonplace in biomedical research with large scale molecular and clinical datasets, will lead to new ways for computers to work with image data. The new advances hold promise for integrating imaging with the rest of the patient record as well as molecular data, for new data-driven discoveries in imaging analogous to that in bioinformatics, and for improved quality of radiology practice.

    View details for PubMedID 20938568

  • The Annotation and Image Mark-up Project RADIOLOGY Channin, D. S., Mongkolwat, P., Kleper, V., Rubin, D. L. 2009; 253 (3): 590-592

    View details for DOI 10.1148/radiol.2533090135

    View details for Web of Science ID 000272247300003

    View details for PubMedID 19952021

  • BioPortal: ontologies and integrated data resources at the click of a mouse NUCLEIC ACIDS RESEARCH Noy, N. F., Shah, N. H., Whetzel, P. L., Dai, B., Dorf, M., Griffith, N., Jonquet, C., Rubin, D. L., Storey, M., Chute, C. G., Musen, M. A. 2009; 37: W170-W173

    Abstract

    Biomedical ontologies provide essential domain knowledge to drive data integration, information retrieval, data annotation, natural-language processing and decision support. BioPortal (http://bioportal.bioontology.org) is an open repository of biomedical ontologies that provides access via Web services and Web browsers to ontologies developed in OWL, RDF, OBO format and Protégé frames. BioPortal functionality includes the ability to browse, search and visualize ontologies. The Web interface also facilitates community-based participation in the evaluation and evolution of ontology content by providing features to add notes to ontology terms, mappings between terms and ontology reviews based on criteria such as usability, domain coverage, quality of content, and documentation and support. BioPortal also enables integrated search of biomedical data resources such as the Gene Expression Omnibus (GEO), ClinicalTrials.gov, and ArrayExpress, through the annotation and indexing of these resources with ontologies in BioPortal. Thus, BioPortal not only provides investigators, clinicians, and developers 'one-stop shopping' to programmatically access biomedical ontologies, but also provides support to integrate data from a variety of biomedical resources.
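
    As an example of the programmatic access described here, the snippet below queries BioPortal's term-search Web service; the endpoint and response fields follow the current REST API at data.bioontology.org, which post-dates this paper and may differ from the 2009 interface, and the API key is a placeholder.

      import json
      import urllib.parse
      import urllib.request

      API_KEY = "YOUR_BIOPORTAL_API_KEY"  # placeholder; a real key is issued by BioPortal
      params = urllib.parse.urlencode({"q": "melanoma", "apikey": API_KEY})
      url = f"https://data.bioontology.org/search?{params}"

      with urllib.request.urlopen(url) as response:
          results = json.load(response)

      # Print the first few matching terms and their ontology identifiers.
      for term in results.get("collection", [])[:5]:
          print(term.get("prefLabel"), "-", term.get("@id"))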

    View details for DOI 10.1093/nar/gkp440

    View details for Web of Science ID 000267889100031

    View details for PubMedID 19483092

    View details for PubMedCentralID PMC2703982

  • Informatics Methods to Enable Patient-centered Radiology ACADEMIC RADIOLOGY Rubin, D. L. 2009; 16 (5): 524-534

    Abstract

    Informatics methods and systems in support of clinical care are well established in the health care enterprise. The new paradigm of patient-centered radiology creates new requirements and challenges that can be enabled by informatics. In particular, computer support can help referring physicians tailor their imaging requests to those procedures that would be most helpful for their patients' clinical context. Informatics methods can assist radiologists in recognizing important findings in images as well as helping them decide the best course of action for patients given the radiologic imaging results and other clinical data. Finally, informatics methods can help engage patients in their care by providing information about their imaging procedures and results. All of these informatics technologies share in common the ability to bring together critical knowledge filtered according to the specific requirements of patients undergoing radiologic imaging, a key component of patient-centered radiology. The goals of this article are to review the opportunities for informatics in supporting patient-centered radiology, to demonstrate the potential utility of these methods, and to point radiologists to the ways that informatics will help them provide care that is tailored to each patient.

    View details for DOI 10.1016/j.acra.2009.01.009

    View details for Web of Science ID 000265229500004

    View details for PubMedID 19345892

  • A Controlled Vocabulary to Represent Sonographic Features of the Thyroid and its Application in a Bayesian Network to Predict Thyroid Nodule Malignancy 109th Annual Meeting of the American-Roentgen-Ray-Society Liu, Y., Kamaya, A., Desser, T., Rubin, D. AMER ROENTGEN RAY SOC. 2009
  • Computational neuroanatomy: ontology-based representation of neural components and connectivity 1st Summit on Translational Bioinformatics Rubin, D. L., Talos, I., Halle, M., Musen, M. A., Kikinis, R. BIOMED CENTRAL LTD. 2009

    Abstract

    A critical challenge in neuroscience is organizing, managing, and accessing the explosion in neuroscientific knowledge, particularly anatomic knowledge. We believe that explicit knowledge-based approaches to make neuroscientific knowledge computationally accessible will be helpful in tackling this challenge and will enable a variety of applications exploiting this knowledge, such as surgical planning. We developed ontology-based models of neuroanatomy to enable symbolic lookup, logical inference and mathematical modeling of neural systems. We built a prototype model of the motor system that integrates descriptive anatomic and qualitative functional neuroanatomical knowledge. In addition to modeling normal neuroanatomy, our approach provides an explicit representation of abnormal neural connectivity in disease states, such as common movement disorders. The ontology-based representation encodes both structural and functional aspects of neuroanatomy. The ontology-based models can be evaluated computationally, enabling development of automated computer reasoning applications. Neuroanatomical knowledge can be represented in machine-accessible format using ontologies. Computational neuroanatomical approaches such as described in this work could become a key tool in translational informatics, leading to decision support applications that inform and guide surgical planning and personalized care for neurological disease in the future.

    View details for Web of Science ID 000265602500004

    View details for PubMedID 19208191

    View details for PubMedCentralID PMC2646240

  • A Controlled Vocabulary to Represent Sonographic Features of the Thyroid and its application in a Bayesian Network to Predict Thyroid Nodule Malignancy. Summit on translational bioinformatics Liu, Y. I., Kamaya, A., Desser, T. S., Rubin, D. L. 2009; 2009: 68-72

    Abstract

    It is challenging to distinguish benign from malignant thyroid nodules on high resolution ultrasound. Many ultrasound features have been studied individually as predictors for thyroid malignancy, none with a high degree of accuracy, and there is no consistent vocabulary used to describe the features. Our hypothesis is that a standard vocabulary will advance accuracy. We performed a systematic literature review and identified all the sonographic features that have been well studied in thyroid cancers. We built a controlled vocabulary to describe sonographic features and to unify the data in the literature on the predictive power of each feature. We used this terminology to build a Bayesian network to predict thyroid malignancy. Our Bayesian network performed similarly to or slightly better than experienced radiologists. Controlled terminology for describing thyroid radiology findings could be useful to characterize thyroid nodules and could enable decision support applications.

    View details for PubMedID 21347173

  • Annotation and Image Markup: Accessing and Interoperating with the Semantic Content in Medical Imaging IEEE INTELLIGENT SYSTEMS Rubin, D. L., Supekar, K., Mongkolwat, P., Kleper, V., Channin, D. S. 2009; 24 (1): 57-65
  • A semantic image annotation model to enable integrative translational research. Summit on translational bioinformatics Rubin, D. L., Mongkolwat, P., Channin, D. S. 2009; 2009: 106-110

    Abstract

    Integrating and relating images with clinical and molecular data is a crucial activity in translational research, but challenging because the information in images is not explicit in standard computer-accessible formats. We have developed an ontology-based representation of the semantic contents of radiology images called AIM (Annotation and Image Markup). AIM specifies the quantitative and qualitative content that researchers extract from images. The AIM ontology enables semantic image annotation and markup, specifying the entities and relations necessary to describe images. AIM annotations, represented as instances in the ontology, enable key use cases for images in translational research such as disease status assessment, query, and inter-observer variation analysis. AIM will enable ontology-based query and mining of images, and integration of images with data in other ontology-annotated bioinformatics databases. Our ultimate goal is to enable researchers to link images with related scientific data so they can learn the biological and physiological significance of the image content.

    View details for PubMedID 21347180

  • Computing Human Image Annotation Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society Channin, D. S., Mongkolwat, P., Kleper, V., Rubin, D. L. IEEE. 2009: 7065–7068

    Abstract

    An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human (or machine) observer. An image markup is the graphical symbols placed over the image to depict an annotation. In the majority of current, clinical and research imaging practice, markup is captured in proprietary formats and annotations are referenced only in free text radiology reports. This makes these annotations difficult to query, retrieve and compute upon, hampering their integration into other data mining and analysis efforts. This paper describes the National Cancer Institute's Cancer Biomedical Informatics Grid's (caBIG) Annotation and Image Markup (AIM) project, focusing on how to use AIM to query for annotations. The AIM project delivers an information model for image annotation and markup. The model uses controlled terminologies for important concepts. All of the classes and attributes of the model have been harmonized with the other models and common data elements in use at the National Cancer Institute. The project also delivers XML schemata necessary to instantiate AIMs in XML as well as a software application for translating AIM XML into DICOM S/R and HL7 CDA. Large collections of AIM annotations can be built and then queried as Grid or Web services. Using the tools of the AIM project, image annotations and their markup can be captured and stored in human and machine readable formats. This enables the inclusion of human image observation and inference as part of larger data mining and analysis activities.

    View details for Web of Science ID 000280543605223

    View details for PubMedID 19964202

  • Semantic reasoning with image annotations for tumor assessment. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Levy, M. A., O'Connor, M. J., Rubin, D. L. 2009; 2009: 359-363

    Abstract

    Identifying, tracking and reasoning about tumor lesions is a central task in cancer research and clinical practice that could potentially be automated. However, information about tumor lesions in imaging studies is not easily accessed by machines for automated reasoning. The Annotation and Image Markup (AIM) information model recently developed for the cancer Biomedical Informatics Grid provides a method for encoding the semantic information related to imaging findings, enabling their storage and transfer. However, it is currently not possible to apply automated reasoning methods to image information encoded in AIM. We have developed a methodology and a suite of tools for transforming AIM image annotations into OWL, and an ontology for reasoning with the resulting image annotations for tumor lesion assessment. Our methods enable automated inference of semantic information about cancer lesions in images.

    View details for PubMedID 20351880

  • Comparison of concept recognizers for building the Open Biomedical Annotator 2nd Summit on Translational Bioinformatics Shah, N. H., Bhatia, N., Jonquet, C., Rubin, D., Chiang, A. P., Musen, M. A. BIOMED CENTRAL LTD. 2009

    Abstract

    The National Center for Biomedical Ontology (NCBO) is developing a system for automated, ontology-based access to online biomedical resources (Shah NH, et al.: Ontology-driven indexing of public datasets for translational bioinformatics. BMC Bioinformatics 2009, 10(Suppl 2):S1). The system's indexing workflow processes the text metadata of diverse resources such as datasets from GEO and ArrayExpress to annotate and index them with concepts from appropriate ontologies. This indexing requires the use of a concept-recognition tool to identify ontology concepts in the resource's textual metadata. In this paper, we present a comparison of two concept recognizers - NLM's MetaMap and the University of Michigan's Mgrep. We utilize a number of data sources and dictionaries to evaluate the concept recognizers in terms of precision, recall, speed of execution, scalability and customizability. Our evaluations demonstrate that Mgrep has a clear edge over MetaMap for large-scale service oriented applications. Based on our analysis we also suggest areas of potential improvements for Mgrep. We have subsequently used Mgrep to build the Open Biomedical Annotator service. The Annotator service has access to a large dictionary of biomedical terms derived from the Unified Medical Language System (UMLS) and NCBO ontologies. The Annotator also leverages the hierarchical structure of the ontologies and their mappings to expand annotations. The Annotator service is available to the community as a REST Web service for creating ontology-based annotations of their data.

    View details for Web of Science ID 000270371700015

    View details for PubMedID 19761568

    View details for PubMedCentralID PMC2745685

  • Creating and Curating a Terminology for Radiology: Ontology Modeling and Analysis JOURNAL OF DIGITAL IMAGING Rubin, D. L. 2008; 21 (4): 355-362

    Abstract

    The radiology community has recognized the need to create a standard terminology to improve the clarity of reports, to reduce radiologist variation, to enable access to imaging information, and to improve the quality of practice. This need has recently led to the development of RadLex, a controlled terminology for radiology. The creation of RadLex has proved challenging in several respects: It has been difficult for users to peruse the large RadLex taxonomies and for curators to navigate the complex terminology structure to check it for errors and omissions. In this work, we demonstrate that the RadLex terminology can be translated into an ontology, a representation of terminologies that is both human-browsable and machine-processable. We also show that creating this ontology permits computational analysis of RadLex and enables its use in a variety of computer applications. We believe that adopting an ontology representation of RadLex will permit more widespread use of the terminology and make it easier to collect feedback from the community that will ultimately lead to improving RadLex.

    View details for DOI 10.1007/s10278-007-9073-0

    View details for Web of Science ID 000260689900001

    View details for PubMedID 17874267

    View details for PubMedCentralID PMC3043845

  • Network analysis of intrinsic functional brain connectivity in Alzheimer's disease PLOS COMPUTATIONAL BIOLOGY Supekar, K., Menon, V., Rubin, D., Musen, M., Greicius, M. D. 2008; 4 (6)

    Abstract

    Functional brain networks detected in task-free ("resting-state") functional magnetic resonance imaging (fMRI) have a small-world architecture that reflects a robust functional organization of the brain. Here, we examined whether this functional organization is disrupted in Alzheimer's disease (AD). Task-free fMRI data from 21 AD subjects and 18 age-matched controls were obtained. Wavelet analysis was applied to the fMRI data to compute frequency-dependent correlation matrices. Correlation matrices were thresholded to create 90-node undirected graphs of functional brain networks. Small-world metrics (characteristic path length and clustering coefficient) were computed using graph analytical methods. In the low frequency interval 0.01 to 0.05 Hz, functional brain networks in controls showed small-world organization of brain activity, characterized by a high clustering coefficient and a low characteristic path length. In contrast, functional brain networks in AD showed loss of small-world properties, characterized by a significantly lower clustering coefficient (p<0.01), indicative of disrupted local connectivity. Clustering coefficients for the left and right hippocampus were significantly lower (p<0.01) in the AD group compared to the control group. Furthermore, the clustering coefficient distinguished AD participants from the controls with a sensitivity of 72% and specificity of 78%. Our study provides new evidence that there is disrupted organization of functional brain networks in AD. Small-world metrics can characterize the functional organization of the brain in AD, and our findings further suggest that these network measures may be useful as an imaging-based biomarker to distinguish AD from healthy aging.
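
    For readers who want to see how the two small-world metrics are computed from a thresholded correlation matrix, the sketch below uses NetworkX on synthetic data; it is a generic illustration (random data, an arbitrary 90th-percentile threshold), not the study's wavelet-based, frequency-dependent analysis pipeline.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(0)
      n_nodes = 90
      # Synthetic stand-in for a correlation matrix between regional fMRI time series.
      corr = np.abs(rng.standard_normal((n_nodes, n_nodes)))
      corr = (corr + corr.T) / 2
      np.fill_diagonal(corr, 0.0)

      threshold = np.percentile(corr, 90)           # keep only the strongest connections
      graph = nx.from_numpy_array((corr >= threshold).astype(int))

      clustering = nx.average_clustering(graph)
      # Characteristic path length is defined on the largest connected component.
      largest_cc = graph.subgraph(max(nx.connected_components(graph), key=len))
      path_length = nx.average_shortest_path_length(largest_cc)
      print(f"clustering = {clustering:.3f}, characteristic path length = {path_length:.3f}")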

    View details for DOI 10.1371/journal.pcbi.1000100

    View details for Web of Science ID 000259786700013

    View details for PubMedID 18584043

    View details for PubMedCentralID PMC2435273

  • A prototype symbolic model of canonical functional neuroanatomy of the motor system JOURNAL OF BIOMEDICAL INFORMATICS Talos, I., Rubin, D. L., Halle, M., Musen, M., Kikinis, R. 2008; 41 (2): 251-263

    Abstract

    Recent advances in bioinformatics have opened entirely new avenues for organizing, integrating and retrieving neuroscientific data, in a digital, machine-processable format, which can be at the same time understood by humans, using ontological, symbolic data representations. Declarative information stored in ontological format can be perused and maintained by domain experts, interpreted by machines, and serve as a basis for a multitude of decision support, computerized simulation, data mining, and teaching applications. We have developed a prototype symbolic model of canonical neuroanatomy of the motor system. Our symbolic model is intended to support symbolic look up, logical inference and mathematical modeling by integrating descriptive, qualitative and quantitative functional neuroanatomical knowledge. Furthermore, we show how our approach can be extended to modeling impaired brain connectivity in disease states, such as common movement disorders. In developing our ontology, we adopted a disciplined modeling approach, relying on a set of declared principles, a high-level schema, Aristotelian definitions, and a frame-based authoring system. These features, along with the use of the Unified Medical Language System (UMLS) vocabulary, enable the alignment of our functional ontology with an existing comprehensive ontology of human anatomy, and thus allow for combining the structural and functional views of neuroanatomy for clinical decision support and neuroanatomy teaching applications. Although the scope of our current prototype ontology is limited to a particular functional system in the brain, it may be possible to adapt this approach for modeling other brain functional systems as well.

    View details for DOI 10.1016/j.jbi.2007.11.003

    View details for Web of Science ID 000255360000005

    View details for PubMedID 18164666

    View details for PubMedCentralID PMC2376098

  • A data warehouse for integrating radiologic and pathologic data. Journal of the American College of Radiology Rubin, D. L., Desser, T. S. 2008; 5 (3): 210-217

    Abstract

    Much of the information needed for radiology teaching and research is not in the picture archiving and communication system but distributed in hospital information systems throughout the medical enterprise. Our objective is to describe the design, methodology, and implementation of a data warehouse to integrate and make accessible the types of medical data pertinent to radiology research and teaching, and to encourage implementation of similar approaches throughout the radiologic community. We identified desiderata of radiology data warehouses and designed and implemented a prototype system (RadBank) to meet these needs. RadBank was built with open-source software tools on a Linux platform with a relational database. We created a text report parsing module that recognizes the structure of radiology reports and makes individual sections available for indexing and search. A database schema was designed to link radiology and pathology reports and to enable users to retrieve cases using flexible queries. Our system contains more than 2 million radiology and pathology reports, and allows full text search by patient history, findings, and diagnosis by radiology and pathology. RadBank has helped radiologists at our institution find teaching cases and identify research cohorts. Data warehouses can provide radiologists access to important clinical information contained in radiology and pathology reports, and supplement the image information in picture archiving and communication system workstations. We believe that data warehouses similar to our system can be implemented in other radiology departments within a reasonable budget to make their vast radiologic-pathologic case material accessible for education and research.
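
    The sketch below illustrates the general idea of linking radiology and pathology reports in a relational store and retrieving correlated cases with a free-text query; the table names, fields, and SQLite backend are assumptions made for illustration, not the RadBank schema or production stack.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.executescript("""
          CREATE TABLE radiology_reports (id INTEGER PRIMARY KEY, patient_id TEXT, findings TEXT, impression TEXT);
          CREATE TABLE pathology_reports (id INTEGER PRIMARY KEY, patient_id TEXT, diagnosis TEXT);
      """)
      conn.execute("INSERT INTO radiology_reports VALUES (1, 'P001', 'Hypodense hepatic lesion', 'Suspicious for metastasis')")
      conn.execute("INSERT INTO pathology_reports VALUES (1, 'P001', 'Metastatic adenocarcinoma')")

      # Find cases where a radiology impression can be correlated with a pathology diagnosis.
      rows = conn.execute("""
          SELECT r.patient_id, r.impression, p.diagnosis
          FROM radiology_reports r JOIN pathology_reports p ON r.patient_id = p.patient_id
          WHERE r.findings LIKE '%lesion%' AND p.diagnosis LIKE '%carcinoma%'
      """).fetchall()
      print(rows)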

    View details for DOI 10.1016/j.jacr.2007.09.004

    View details for PubMedID 18312970

  • Tool support to enable evaluation of the clinical response to treatment. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Levy, M. A., Rubin, D. L. 2008: 399-403

    Abstract

    Objective criteria for measuring response to cancer treatment are critical to clinical research and practice. The National Cancer Institute has developed the Response Evaluation Criteria in Solid Tumors (RECIST) method to quantify treatment response. RECIST evaluates response by assessing a set of measurable target lesions in baseline and follow-up radiographic studies. However, applying RECIST consistently is challenging due to inter-observer variability among oncologists and radiologists in choice and measurement of target lesions. We analyzed the radiologist-oncologist workflow to determine whether the information collected is sufficient for reliably applying RECIST. We evaluated radiology reports and image markup (radiologists), and clinical flow sheets (oncologists). We found current reporting of radiology results insufficient for consistent application of RECIST, compared with flow sheets. We identified use cases and functional requirements for an informatics tool that could improve consistency and accuracy in applying methods such as RECIST.
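
    For context, the snippet below shows a simplified computation of RECIST-style response categories from the sums of target-lesion longest diameters at baseline, nadir, and the current study; the percentage thresholds follow the published RECIST guideline, but the function is only an illustration (it omits details such as minimum absolute increase and non-target or new lesions) and is not the tool described in the paper.

      def recist_response(baseline_sum: float, nadir_sum: float, current_sum: float) -> str:
          """Simplified RECIST category from sums of target-lesion longest diameters (mm)."""
          if current_sum == 0:
              return "Complete response"
          change_from_baseline = (current_sum - baseline_sum) / baseline_sum
          change_from_nadir = (current_sum - nadir_sum) / nadir_sum if nadir_sum > 0 else 0.0
          if change_from_nadir >= 0.20:        # at least a 20% increase from the smallest sum on study
              return "Progressive disease"
          if change_from_baseline <= -0.30:    # at least a 30% decrease from baseline
              return "Partial response"
          return "Stable disease"

      print(recist_response(baseline_sum=100.0, nadir_sum=80.0, current_sum=65.0))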

    View details for PubMedID 18998923

  • iPad: Semantic annotation and markup of radiological images. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Rubin, D. L., Rodriguez, C., Shah, P., Beaulieu, C. 2008: 626-630

    Abstract

    Radiological images contain a wealth of information, such as anatomy and pathology, which is often not explicit and computationally accessible. Information schemes are being developed to describe the semantic content of images, but such schemes can be unwieldy to operationalize because there are few tools to enable users to capture structured information easily as part of the routine research workflow. We have created iPad, an open source tool enabling researchers and clinicians to create semantic annotations on radiological images. iPad hides the complexity of the underlying image annotation information model from users, permitting them to describe images and image regions using a graphical interface that maps their descriptions to structured ontologies semi-automatically. Image annotations are saved in a variety of formats, enabling interoperability among medical records systems, image archives in hospitals, and the Semantic Web. Tools such as iPad can help reduce the burden of collecting structured information from images, and it could ultimately enable researchers and physicians to exploit images on a very large scale and glean the biological and physiological significance of image content.

    View details for PubMedID 18999144

    View details for PubMedCentralID PMC2655990

  • A Bayesian classifier for differentiating benign versus malignant thyroid nodules using sonographic features. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Liu, Y. I., Kamaya, A., Desser, T. S., Rubin, D. L. 2008: 419-423

    Abstract

    Thyroid nodules are a common, yet challenging clinical problem. The vast majority of these nodules are benign; however, deciding which nodule should undergo biopsy is difficult because the imaging appearance of benign and malignant thyroid nodules overlap. High resolution ultrasound is the primary imaging modality for evaluating thyroid nodules. Many sonographic features have been studied individually as predictors for thyroid malignancy. There has been little work to create predictive models that combine multiple predictors, both imaging features and demographic factors. We have created a Bayesian classifier to predict whether a thyroid nodule is benign or malignant using sonographic and demographic findings. Our classifier performed similarly to or slightly better than experienced radiologists when evaluated using 41 thyroid nodules with known pathologic diagnosis. This classifier could be helpful in providing practitioners an objective basis for deciding whether to biopsy suspicious thyroid nodules.
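
    To illustrate the general form of such a classifier, the sketch below combines several binary sonographic features under a naive Bayes assumption; the feature names, conditional probabilities, and prior are made-up placeholders, not the probabilities learned in the study, and the real classifier need not assume feature independence.

      import math

      # Hypothetical P(feature present | class); not values from the study.
      P_FEATURE_GIVEN_CLASS = {
          "microcalcifications": {"malignant": 0.60, "benign": 0.15},
          "irregular_margins": {"malignant": 0.55, "benign": 0.20},
          "marked_hypoechogenicity": {"malignant": 0.40, "benign": 0.10},
      }
      PRIOR = {"malignant": 0.05, "benign": 0.95}  # illustrative prevalence

      def posterior_malignant(present_features: set[str]) -> float:
          """Naive Bayes posterior probability of malignancy given observed features."""
          log_post = {c: math.log(PRIOR[c]) for c in PRIOR}
          for feature, likelihood in P_FEATURE_GIVEN_CLASS.items():
              observed = feature in present_features
              for c in log_post:
                  p = likelihood[c] if observed else 1.0 - likelihood[c]
                  log_post[c] += math.log(p)
          normalizer = sum(math.exp(v) for v in log_post.values())
          return math.exp(log_post["malignant"]) / normalizer

      print(round(posterior_malignant({"microcalcifications", "irregular_margins"}), 3))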

    View details for PubMedID 18999209

  • Biomedical ontologies: a functional perspective BRIEFINGS IN BIOINFORMATICS Rubin, D. L., Shah, N. H., Noy, N. F. 2008; 9 (1): 75-90

    Abstract

    The information explosion in biology makes it difficult for researchers to stay abreast of current biomedical knowledge and to make sense of the massive amounts of online information. Ontologies--specifications of the entities, their attributes and relationships among the entities in a domain of discourse--are increasingly enabling biomedical researchers to accomplish these tasks. In fact, bio-ontologies are beginning to proliferate in step with accruing biological data. The myriad of ontologies being created enables researchers not only to solve some of the problems in handling the data explosion but also introduces new challenges. One of the key difficulties in realizing the full potential of ontologies in biomedical research is the isolation of various communities involved: some workers spend their career developing ontologies and ontology-related tools, while few researchers (biologists and physicians) know how ontologies can accelerate their research. The objective of this review is to give an overview of biomedical ontology in practical terms by providing a functional perspective--describing how bio-ontologies can and are being used. As biomedical scientists begin to recognize the many different ways ontologies enable biomedical research, they will drive the emergence of new computer applications that will help them exploit the wealth of research data now at their fingertips.

    View details for DOI 10.1093/bib/bbm059

    View details for Web of Science ID 000251864600008

    View details for PubMedID 18077472

  • BioPortal: ontologies and data resources with the click of a mouse. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Musen, M. A., Shah, N. H., Noy, N. F., Dai, B. Y., Dorf, M., Griffith, N., Buntrok, J., Jonquet, C., Montegut, M. J., Rubin, D. L. 2008: 1223-1224

    View details for PubMedID 18999306

  • Protege: A tool for managing and using terminology in radiology applications JOURNAL OF DIGITAL IMAGING Rubin, D. L., Noy, N. F., Musen, M. A. 2007; 20: 34-46

    Abstract

    The development of standard terminologies such as RadLex is becoming important in radiology applications, such as structured reporting, teaching file authoring, report indexing, and text mining. The development and maintenance of these terminologies are challenging, however, because there are few specialized tools to help developers to browse, visualize, and edit large taxonomies. Protégé ( http://protege.stanford.edu ) is an open-source tool that allows developers to create and to manage terminologies and ontologies. It is more than a terminology-editing tool, as it also provides a platform for developers to use the terminologies in end-user applications. There are more than 70,000 registered users of Protégé who are using the system to manage terminologies and ontologies in many different domains. The RadLex project has recently adopted Protégé for managing its radiology terminology. Protégé provides several features particularly useful to managing radiology terminologies: an intuitive graphical user interface for navigating large taxonomies, visualization components for viewing complex term relationships, and a programming interface so developers can create terminology-driven radiology applications. In addition, Protégé has an extensible plug-in architecture, and its large user community has contributed a rich library of components and extensions that provide much additional useful functionalities. In this report, we describe Protégé's features and its particular advantages in the radiology domain in the creation, maintenance, and use of radiology terminology.

    View details for DOI 10.1007/s10278-007-9065-0

    View details for Web of Science ID 000250825300004

    View details for PubMedID 17687607

    View details for PubMedCentralID PMC2039856

  • Annotation and query of tissue microarray data using the NCI Thesaurus BMC BIOINFORMATICS Shah, N. H., Rubin, D. L., Espinosa, I., Montgomery, K., Musen, M. A. 2007; 8

    Abstract

    The Stanford Tissue Microarray Database (TMAD) is a repository of data serving a consortium of pathologists and biomedical researchers. The tissue samples in TMAD are annotated with multiple free-text fields, specifying the pathological diagnoses for each sample. These text annotations are not structured according to any ontology, making future integration of this resource with other biological and clinical data difficult. We developed methods to map these annotations to the NCI thesaurus. Using the NCI-T we can effectively represent annotations for about 86% of the samples. We demonstrate how this mapping enables ontology driven integration and querying of tissue microarray data. We have deployed the mapping and ontology driven querying tools at the TMAD site for general use. We have demonstrated that we can effectively map the diagnosis-related terms describing a sample in TMAD to the NCI-T. The NCI thesaurus terms have a wide coverage and provide terms for about 86% of the samples. In our opinion the NCI thesaurus can facilitate integration of this resource with other biological data.
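
    A minimal sketch of the kind of dictionary-based mapping described here is shown below; the terminology slice, codes, and normalization rules are illustrative assumptions rather than the actual NCI Thesaurus lookup used for TMAD.

      import re

      # Tiny illustrative slice of a terminology; codes are placeholders, and the real
      # NCI Thesaurus provides far richer synonymy and hierarchy.
      THESAURUS = {
          "adenocarcinoma of the prostate": "NCIt:C0001",
          "invasive ductal carcinoma": "NCIt:C0002",
          "hepatocellular carcinoma": "NCIt:C0003",
      }

      def normalize(text: str) -> str:
          return re.sub(r"[^a-z ]", " ", text.lower())

      def map_annotation(free_text: str) -> list[tuple[str, str]]:
          """Return (term, code) pairs whose normalized term occurs in the free-text diagnosis."""
          normalized = normalize(free_text)
          return [(term, code) for term, code in THESAURUS.items() if term in normalized]

      print(map_annotation("Prostate, biopsy: Adenocarcinoma of the prostate, Gleason 3+4"))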

    View details for DOI 10.1186/1471-2105-8-296

    View details for Web of Science ID 000249734300001

    View details for PubMedID 17686183

    View details for PubMedCentralID PMC1988837

  • Knowledge Zone: A Public Repository of Peer-Reviewed Biomedical Ontologies 12th World Congress on Health (Medical) Informatics Supekar, K., Rubin, D., Noy, N., Musen, M. I O S PRESS. 2007: 812–816

    Abstract

    Reuse of ontologies is important for achieving better interoperability among health systems and relieving knowledge engineers from the burden of developing ontologies from scratch. Most of the work that aims to facilitate ontology reuse has focused on building ontology libraries that are simple repositories of ontologies or has led to keyword-based search tools that search among ontologies. To our knowledge, there are no operational methodologies that allow users to evaluate ontologies and to compare them in order to choose the most appropriate ontology for their task. In this paper, we present Knowledge Zone, a Web-based portal that allows users to submit their ontologies, to associate metadata with their ontologies, to search for existing ontologies, to find ontology rankings based on user reviews, to post their own reviews, and to rate reviews.

    View details for Web of Science ID 000272064000163

    View details for PubMedID 17911829

  • LesionViewer: a tool for tracking cancer lesions over time. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Levy, M. A., Garg, A., Tam, A., Garten, Y., Rubin, D. L. 2007: 443-447

    Abstract

    Oncologists managing cancer patients use radiology imaging studies to evaluate changes in measurable cancer lesions. Currently, the textual radiology report summarizes the findings, but is disconnected from the primary image data. This makes it difficult for the physician to obtain a visual overview of the location and behavior of the disease. LesionViewer is a prototype software system designed to assist clinicians in comprehending and reviewing radiology imaging studies. The interface provides an Anatomical Summary View of the location of lesions identified in a series of studies, and direct navigation to the relevant primary image data. LesionViewer's Disease Summary View provides a temporal abstraction of the disease behavior between studies utilizing methods of the RECIST guideline. In a usability study, nine physicians used the system to accurately perform clinical tasks appropriate to the analysis of radiology reports and image data. All users reported they would use the system if available.

    View details for PubMedID 18693875

  • An ontology for PACS integration JOURNAL OF DIGITAL IMAGING Kahn, C. E., Channin, D. S., Rubin, D. L. 2006; 19 (4): 316-327

    Abstract

    An ontology describes a set of classes and the relationships among them. We explored the use of an ontology to integrate picture archiving and communication systems (PACS) with other information systems in the clinical enterprise. We created an ontological model of thoracic radiology that contained knowledge of anatomy, imaging procedures, and performed procedure steps. We explored the use of the model in two use cases: (1) to determine examination completeness and (2) to identify reference (comparison) images obtained in the same imaging projection. The model incorporated a total of 138 classes, including radiology orderables, procedures, procedure steps, imaging modalities, patient positions, and imaging planes. Radiological knowledge was encoded as relationships among these classes. The ontology successfully met the information requirements of the two use-case scenarios. Ontologies can represent radiological and clinical knowledge to integrate PACS with the clinical enterprise and to support the radiology interpretation process.

    View details for DOI 10.1007/s10278-006-0627-3

    View details for Web of Science ID 000242824200004

    View details for PubMedID 16763933

    View details for PubMedCentralID PMC3045159

  • Using ontologies linked with geometric models to reason about penetrating injuries ARTIFICIAL INTELLIGENCE IN MEDICINE Rubin, D. L., Dameron, O., Bashir, Y., Grossman, D., Dev, P., Musen, M. A. 2006; 37 (3): 167-176

    Abstract

    Medical assessment of penetrating injuries is a difficult and knowledge-intensive task, and rapid determination of the extent of internal injuries is vital for triage and for determining the appropriate treatment. Physical examination and computed tomographic (CT) imaging data must be combined with detailed anatomic, physiologic, and biomechanical knowledge to assess the injured subject. We are developing a methodology to automate reasoning about penetrating injuries using canonical knowledge combined with specific subject image data. In our approach, we build a three-dimensional geometric model of a subject from segmented images. We link regions in this model to entities in two knowledge sources: (1) a comprehensive ontology of anatomy containing organ identities, adjacencies, and other information useful for anatomic reasoning and (2) an ontology of regional perfusion containing formal definitions of arterial anatomy and corresponding regions of perfusion. We created computer reasoning services ("problem solvers") that use the ontologies to evaluate the geometric model of the subject and deduce the consequences of penetrating injuries. We developed and tested our methods using data from the Visible Human. Our problem solvers can determine the organs that are injured given particular trajectories of projectiles, whether vital structures--such as a coronary artery--are injured, and they can predict the propagation of injury ensuing after vital structures are injured. We have demonstrated the capability of using ontologies with medical images to support computer reasoning about injury based on those images. Our methodology demonstrates an approach to creating intelligent computer applications that reason with image data, and it may have value in helping practitioners in the assessment of penetrating injury.
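
    The snippet below gives a deliberately simplified geometric flavor of the trajectory reasoning described here, testing which organs (approximated as bounding spheres) a straight projectile path passes near; the organ coordinates, radii, and damage tolerance are invented, and the actual system reasons over segmented image-derived models linked to anatomic ontologies.

      import numpy as np

      # Toy organ models: center (x, y, z) in mm and an approximating bounding-sphere radius.
      ORGANS = {
          "heart": (np.array([0.0, 0.0, 0.0]), 60.0),
          "left lung": (np.array([90.0, 10.0, 20.0]), 110.0),
          "liver": (np.array([-40.0, -160.0, 0.0]), 90.0),
      }

      def point_segment_distance(point, a, b):
          """Shortest distance from a point to the line segment a-b."""
          ab, ap = b - a, point - a
          t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
          return float(np.linalg.norm(point - (a + t * ab)))

      def injured_organs(entry, exit_, damage_radius=10.0):
          """Organs whose bounding sphere lies within damage_radius of the trajectory."""
          entry, exit_ = np.asarray(entry, float), np.asarray(exit_, float)
          return [name for name, (center, radius) in ORGANS.items()
                  if point_segment_distance(center, entry, exit_) <= radius + damage_radius]

      print(injured_organs(entry=(200.0, 30.0, 10.0), exit_=(-200.0, -20.0, 5.0)))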

    View details for DOI 10.1016/j.artmed.2006.03.006

    View details for Web of Science ID 000238992500002

    View details for PubMedID 16730959

  • National Center for Biomedical Ontology: Advancing biomedicine through structured organization of scientific knowledge OMICS-A JOURNAL OF INTEGRATIVE BIOLOGY Rubin, D. L., Lewis, S. E., Mungall, C. J., Misra, S., Westerfield, M., Ashburner, M., Sim, I., Chute, C. G., Solbrig, H., Storey, M., Smith, B., Day-Richter, J., Noy, N. F., Musen, M. A. 2006; 10 (2): 185-198

    Abstract

    The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists to work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and understanding of human disease.

    View details for Web of Science ID 000240210900015

    View details for PubMedID 16901225

  • Coverage of emergency after-hours ultrasound cases: Survey of practices at US teaching hospitals ACADEMIC RADIOLOGY Desser, T. S., Rubin, D. L., Schraedley-Desmond, P. 2006; 13 (2): 249-253

    Abstract

    Diagnostic ultrasound examinations may be performed after-hours by physicians if technologists are not available or cases are complex. Our experience suggested there is wide variability in how ultrasound coverage is provided after-hours, which motivated us to conduct a formal survey of teaching programs around the country. Four hundred five members of the Association of Program Directors in Radiology were contacted by e-mail and sent a link to a five-part questionnaire posted on the Web. Respondents were asked whether ultrasound cases after-hours are performed in their institutions by radiology residents, technologists on the premises after-hours, technologists on-call, or some combination. Data on the type of program, number of beds in the primary hospital, number of residents in the program, and geographic location of the program were recorded. Responses were automatically written to a data file stored on a Web server and then imported into an Excel spreadsheet for data analysis. A chi-square analysis was performed to assess associations among the variables and statistical significance. A total of 79 programs responded to the survey. Of those, 32% provided coverage with ultrasound technologists on call, 24% by ultrasound technologists on the premises, 13% provided combination coverage, and 10% provided coverage solely with residents on call. There was no association among number of residents in the program, location of the program, or type of program (university, community, or affiliated) and type of coverage provided. There is wide variability in methods for providing coverage of after-hours ultrasound cases. However, on-site or on-call coverage of emergency cases by technologists did not appear to depend significantly on program location, program type, or program size.
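
    As a small worked illustration of the chi-square association test mentioned here, the snippet below applies SciPy's test to a contingency table of program type versus coverage model; the counts are fabricated for illustration and are not the survey data.

      import numpy as np
      from scipy.stats import chi2_contingency

      # Hypothetical counts: rows = program type (university, community/affiliated);
      # columns = coverage model (technologist on call, technologist on premises, other).
      table = np.array([
          [14, 10, 9],
          [11, 9, 8],
      ])

      chi2, p_value, dof, expected = chi2_contingency(table)
      print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")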

    View details for DOI 10.1016/j.acra.2005.09.091

    View details for PubMedID 16428062

  • Ontology-based representation of simulation models of physiology. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Rubin, D. L., Grossman, D., Neal, M., Cook, D. L., Bassingthwaighte, J. B., Musen, M. A. 2006: 664-668

    Abstract

    Dynamic simulation models of physiology are often represented as a set of mathematical equations. Such models are very useful for studying and understanding the dynamic behavior of physiological variables. However, the sheer number of equations and variables can make these models unwieldy, difficult to understand, and challenging to maintain. We describe a symbolic, ontologically-guided methodology for representing a physiological model of the circulation. We created an ontology describing the types of equations in the model as well as the anatomic components and how they are connected to form a circulatory loop. The ontology provided an explicit representation of the model, both its mathematical and anatomic content, abstracting and hiding much of the mathematical complexity. The ontology also provided a framework to construct a graphical representation of the model, providing a simpler visualization than the large set of mathematical equations. Our approach may help model builders to maintain, debug, and extend simulation models.

    View details for PubMedID 17238424

  • Ontology-based annotation and query of tissue microarray data. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Shah, N. H., Rubin, D. L., Supekar, K. S., Musen, M. A. 2006: 709-713

    Abstract

    The Stanford Tissue Microarray Database (TMAD) is a repository of data amassed by a consortium of pathologists and biomedical researchers. The TMAD data are annotated with multiple free-text fields, specifying the pathological diagnoses for each tissue sample. These annotations are spread out over multiple text fields and are not structured according to any ontology, making it difficult to integrate this resource with other biological and clinical data. We developed methods to map these annotations to the NCI thesaurus and the SNOMED-CT ontologies. Using these two ontologies we can effectively represent about 80% of the annotations in a structured manner. This mapping offers the ability to perform ontology driven querying of the TMAD data. We also found that 40% of annotations can be mapped to terms from both ontologies, providing the potential to align the two ontologies based on experimental data. Our approach provides the basis for a data-driven ontology alignment by mapping annotations of experimental data.

    View details for PubMedID 17238433

  • A statistical approach to scanning the biomedical literature for pharmacogenetics knowledge JOURNAL OF THE AMERICAN MEDICAL INFORMATICS ASSOCIATION Rubin, D. L., Thorn, C. F., Klein, T. E., Altman, R. B. 2005; 12 (2): 121-129

    Abstract

    Biomedical databases summarize current scientific knowledge, but they generally require years of laborious curation effort to build, focusing on identifying pertinent literature and data in the voluminous biomedical literature. It is difficult to manually extract useful information embedded in the large volumes of literature, and automated intelligent text analysis tools are becoming increasingly essential to assist in these curation activities. The goal of the authors was to develop an automated method to identify articles in Medline citations that contain pharmacogenetics data pertaining to gene-drug relationships. The authors built and evaluated several candidate statistical models that characterize pharmacogenetics articles in terms of word usage and the profile of Medical Subject Headings (MeSH) used in those articles. The best-performing model was used to scan the entire Medline article database (11 million articles) to identify candidate pharmacogenetics articles. A sampling of the articles identified from scanning Medline was reviewed by a pharmacologist to assess the precision of the method. The authors' approach identified 4,892 pharmacogenetics articles in the literature with 92% precision. Their automated method took a fraction of the time to acquire these articles compared with the time expected to be taken to accumulate them manually. The authors have built a Web resource (http://pharmdemo.stanford.edu/pharmdb/main.spy) to provide access to their results. A statistical classification approach can screen the primary literature for pharmacogenetics articles with high precision. Such methods may assist curators in acquiring pertinent literature in building biomedical databases.
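
    To show the general shape of such a statistical text classifier, the sketch below trains a naive Bayes model over bag-of-words features; the training snippets and labels are fabricated stand-ins for curated Medline citations, and the study's models also used MeSH-term profiles in addition to word usage.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.pipeline import make_pipeline

      # Fabricated snippets standing in for Medline titles/abstracts.
      texts = [
          "CYP2D6 polymorphism alters drug metabolism and clinical response",
          "Genetic variant in VKORC1 predicts warfarin dose requirement",
          "Randomized trial of surgical technique for hernia repair",
          "Imaging findings in acute appendicitis on CT",
      ]
      labels = [1, 1, 0, 0]  # 1 = pharmacogenetics-relevant, 0 = not

      classifier = make_pipeline(CountVectorizer(stop_words="english"), MultinomialNB())
      classifier.fit(texts, labels)

      probability = classifier.predict_proba(
          ["TPMT genotype and thiopurine drug toxicity in patients"]
      )[0, 1]
      print(f"estimated probability of being pharmacogenetics-related: {probability:.2f}")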

    View details for DOI 10.1197/jamia.M1640

    View details for Web of Science ID 000227842000003

    View details for PubMedID 15561790

    View details for PubMedCentralID PMC551544

  • Challenges in converting frame-based ontology into OWL: the Foundational Model of Anatomy case-study. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Dameron, O., Rubin, D. L., Musen, M. A. 2005: 181-185

    Abstract

    A description logics representation of the Foundational Model of Anatomy (FMA) in the Web Ontology Language (OWL-DL) would allow developers to combine it with other OWL ontologies, and would provide the benefit of being able to access generic reasoning tools. However, the FMA is currently represented in a frame language. The differences between description logics and frames are not only syntactic, but also semantic. We analyze some theoretical and computational limitations of converting the FMA into OWL-DL. Namely, some of the constructs used in the FMA do not have a direct equivalent in description logics, and a complete conversion of the FMA in description logics is too large to support reasoning. Therefore, an OWL-DL representation of the FMA would have to be optimized for each application. We propose a solution based on OWL-Full, a superlanguage of OWL-DL, that meets the expressiveness requirements and remains application-independent. Specific simplified OWL-DL representations can then be generated from the OWL-Full model by applications. We argue that this solution is easier to implement and closer to the application needs than an integral translation, and that the latter approach would only make the FMA maintenance more difficult.

    View details for PubMedID 16779026

  • Use of description logic classification to reason about consequences of penetrating injuries. AMIA ... Annual Symposium proceedings / AMIA Symposium. AMIA Symposium Rubin, D. L., Dameron, O., Musen, M. A. 2005: 649-653

    Abstract

    The consequences of penetrating injuries can be complex, including abnormal blood flow through the injury channel and functional impairment of organs if arteries supplying them have been severed. Determining the consequences of such injuries can be posed as a classification problem, requiring a priori symbolic knowledge of anatomy. We hypothesize that such symbolic knowledge can be modeled using ontologies, and that the reasoning task can be accomplished using knowledge representation in description logics (DL) and automatic classification. We demonstrate the capabilities of automated classification using the Web Ontology Language (OWL) to reason about the consequences of penetrating injuries. We created in OWL a knowledge model of chest and heart anatomy describing the heart structure and the surrounding anatomic compartments, as well as the perfusion of regions of the heart by branches of the coronary arteries. We then used a domain-independent classifier to infer ischemic regions of the heart as well as anatomic spaces containing ectopic blood secondary to the injuries. Our results highlight the advantages of posing reasoning problems as a classification task, and leveraging the automatic classification capabilities of DL to create intelligent applications.

    View details for PubMedID 16779120

  • Using an Ontology of Human Anatomy to Inform Reasoning with Geometric Models 13th Conference on Medicine Meets Virtual Reality Rubin, D. L., Bashir, Y., Grossman, D., Dev, P., Musen, M. A. I O S PRESS. 2005: 429–435

    Abstract

    The Virtual Soldier project is a large effort on the part of the U.S. Defense Advanced Research Projects Agency to explore using both general anatomical knowledge and specific computed tomographic (CT) images of individual soldiers to aid the rapid diagnosis and treatment of penetrating injuries. Our goal is to develop intelligent computer applications that use this knowledge to reason about the anatomic structures that are directly injured and to predict propagation of injuries secondary to primary organ damage. To accomplish this, we needed to develop an architecture to combine geometric data with anatomic knowledge and reasoning services that use this information to predict the consequences of injuries.

    View details for Web of Science ID 000273828700086

    View details for PubMedID 15718773

  • A resource to acquire and summarize pharmacogenetics knowledge in the literature 11th World Congress on Medical Informatics Rubin, D. L., Carrillo, M., Woon, M., Conroy, J., Klein, T. E., Altman, R. B. I O S PRESS. 2004: 793–797

    Abstract

    To determine how genetic variations contribute to the variations in drug response, we need to know the genes that are related to drugs of interest. But there are no publicly available databases of known gene-drug relationships, and it is time-consuming to search the literature for this information. We have developed a resource to support the storage, summarization, and dissemination of key gene-drug interactions of relevance to pharmacogenetics. Extracting all gene-drug relationships from the literature is a daunting task, so we distributed a tool to acquire this knowledge from the scientific community. We also developed a categorization scheme to classify gene-drug relationships according to the type of pharmacogenetic evidence that supports them. Our resource (http://www.pharmgkb.org/home/project-community.jsp) can be queried by gene or drug, and it summarizes gene-drug relationships, categories of evidence, and supporting literature. This resource is growing, containing entries for 138 genes and 215 drugs of pharmacogenetics significance, and is a core component of PharmGKB, a pharmacogenetics knowledge base (http://www.pharmgkb.org).

    View details for Web of Science ID 000226723300159

    View details for PubMedID 15360921

  • Improving a Bayesian network's ability to predict the probability of malignancy of microcalcifications on mammography 18th International Congress and Exhibition on Computer Assisted Radiology and Surgery (CARS 2004) Burnside, E. S., Rubin, D. L., Shachter, R. D. ELSEVIER SCIENCE BV. 2004: 1021–1026
  • Using a Bayesian network to predict the probability and type of breast cancer represented by microcalcifications on mammography 11th World Congress on Medical Informatics Burnside, E. S., Rubin, D. L., Shachter, R. D. I O S PRESS. 2004: 13–17

    Abstract

    Since the widespread adoption of mammographic screening in the 1980s, there has been a significant increase in the detection and biopsy of both benign and malignant microcalcifications. Though current practice standards recommend that the positive predictive value (PPV) of breast biopsy should be in the range of 25-40%, there exists significant variability in practice. Microcalcifications, if malignant, can represent either a non-invasive or an invasive form of breast cancer. The distinction is critical because distinct surgical therapies are indicated. Unfortunately, this information is not always available at the time of surgery due to limited sampling at image-guided biopsy. For these reasons we conducted an experiment to determine whether a previously created Bayesian network for mammography could predict the significance of microcalcifications. In this experiment we aim to test whether the system is able to perform two related tasks in this domain: 1) to predict the likelihood that microcalcifications are malignant and 2) to predict the likelihood that a malignancy is invasive to help guide the choice of appropriate surgical therapy.
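
    A toy Bayesian network in the spirit of the one described here is sketched below with the pgmpy library (API as of recent 0.x releases); the structure, variable names, and probabilities are invented for illustration and do not reflect the mammography network in the paper.

      from pgmpy.models import BayesianNetwork
      from pgmpy.factors.discrete import TabularCPD
      from pgmpy.inference import VariableElimination

      # Invented toy structure: malignancy influences invasiveness and calcification morphology.
      model = BayesianNetwork([("Malignant", "Invasive"), ("Malignant", "FineLinearCalcs")])
      model.add_cpds(
          TabularCPD("Malignant", 2, [[0.7], [0.3]]),  # P(benign), P(malignant)
          TabularCPD("Invasive", 2, [[1.0, 0.5], [0.0, 0.5]],
                     evidence=["Malignant"], evidence_card=[2]),
          TabularCPD("FineLinearCalcs", 2, [[0.9, 0.4], [0.1, 0.6]],
                     evidence=["Malignant"], evidence_card=[2]),
      )
      assert model.check_model()

      inference = VariableElimination(model)
      print(inference.query(["Malignant"], evidence={"FineLinearCalcs": 1}))
      print(inference.query(["Invasive"], evidence={"FineLinearCalcs": 1}))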

    View details for Web of Science ID 000226723300003

    View details for PubMedID 15360765
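
    Both tasks listed in the abstract above are posterior-probability computations. A toy version, with invented probabilities and a naive independence assumption rather than the published network structure, might look like this:

      # Toy illustration, with invented numbers, of the two prediction tasks above;
      # this uses a naive independence assumption, not the published Bayesian network.
      def posterior(prior, likelihoods, evidence):
          # Posterior over hypotheses, with findings assumed independent given the hypothesis.
          scores = {}
          for hypothesis, p in prior.items():
              for finding in evidence:
                  p *= likelihoods[hypothesis][finding]
              scores[hypothesis] = p
          total = sum(scores.values())
          return {h: s / total for h, s in scores.items()}

      findings = ["fine_linear_branching", "segmental_distribution"]

      # Task 1: how likely is it that the microcalcifications are malignant?
      print(posterior(
          prior={"malignant": 0.30, "benign": 0.70},
          likelihoods={"malignant": {"fine_linear_branching": 0.40, "segmental_distribution": 0.35},
                       "benign":    {"fine_linear_branching": 0.05, "segmental_distribution": 0.10}},
          evidence=findings))

      # Task 2: given malignancy, how likely is it to be invasive rather than in situ?
      print(posterior(
          prior={"invasive": 0.40, "in_situ": 0.60},
          likelihoods={"invasive": {"fine_linear_branching": 0.30, "segmental_distribution": 0.30},
                       "in_situ":  {"fine_linear_branching": 0.45, "segmental_distribution": 0.40}},
          evidence=findings))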

  • Linking ontologies with three-dimensional models of anatomy to predict the effects of penetrating injuries 26th Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society Rubin, D. L., Bashir, Y., Grossman, D., Dev, P., Musen, M. A. IEEE. 2004: 3128–31

    Abstract

    Rapid diagnosis of penetrating injuries is essential to an increased chance of survival. Geometric models representing anatomic structures could be useful, but such models generally contain only information about the relationships of points in space as well as display properties. We describe an approach to predicting the anatomic consequences of penetrating injury by creating a geometric model of anatomy that integrates biomechanical and anatomic knowledge. We created a geometric model of the heart from the Visible Human image data set. We linked this geometric model of anatomy with an ontology of descriptive anatomic knowledge. A hierarchy of abstract geometric objects was created that represents organs and organ parts. These geometric objects contain information about organ identity, composition, adjacency, and tissue biomechanical properties. This integrated model can support anatomic reasoning. Given a bullet trajectory and a parametric representation of a cone of tissue damage, we can use our model to predict the organs and organ parts that are injured. Our model is extensible, being able to incorporate future information, such as the physiological implications of organ injuries.

    View details for Web of Science ID 000225461800809

    View details for PubMedID 17270942
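
    A stripped-down version of the reasoning pipeline described in the entry above can be sketched as follows: geometric primitives stand in for the Visible Human-derived organ meshes, each carrying ontology-style annotations (identity, adjacency), and a trajectory with a damage radius stands in for the parametric cone of tissue damage. All organ positions, sizes, and adjacencies below are invented for illustration and are not taken from the published model.

      # Minimal sketch, under simplifying assumptions, of linking geometry with
      # anatomic knowledge to predict primary and secondary injuries.
      import numpy as np

      # Hypothetical organ parts: center (cm), radius (cm), and ontology-style annotations.
      ORGANS = {
          "left_ventricle":  {"center": np.array([0.0, 0.0, 0.0]),   "radius": 3.0,
                              "adjacent_to": ["right_ventricle"]},
          "right_ventricle": {"center": np.array([4.0, 0.0, 0.0]),   "radius": 2.5,
                              "adjacent_to": ["left_ventricle"]},
          "liver":           {"center": np.array([0.0, -12.0, 0.0]), "radius": 7.0,
                              "adjacent_to": []},
      }

      def hit_by_trajectory(origin, direction, damage_radius, organ):
          # True if the organ sphere lies within damage_radius of the trajectory line.
          d = direction / np.linalg.norm(direction)
          to_center = organ["center"] - origin
          dist = np.linalg.norm(to_center - np.dot(to_center, d) * d)
          return dist <= damage_radius + organ["radius"]

      def predict_injuries(origin, direction, damage_radius):
          primary = {name for name, o in ORGANS.items()
                     if hit_by_trajectory(origin, direction, damage_radius, o)}
          # Secondary propagation via the adjacency relation from the knowledge layer.
          secondary = {adj for name in primary for adj in ORGANS[name]["adjacent_to"]} - primary
          return primary, secondary

      print(predict_injuries(np.array([-10.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]), 0.5))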

  • Indexing pharmacogenetic knowledge on the World Wide Web PHARMACOGENETICS Altman, R. B., Flockhart, D. A., Sherry, S. T., Oliver, D. E., Rubin, D. L., Klein, T. E. 2003; 13 (1): 3-5

    View details for Web of Science ID 000180584000002

    View details for PubMedID 12544507

  • PharmGKB: The Pharmacogenetics Knowledge Base NUCLEIC ACIDS RESEARCH Hewett, M., Oliver, D. E., Rubin, D. L., Easton, K. L., Stuart, J. M., Altman, R. B., Klein, T. E. 2002; 30 (1): 163-165

    Abstract

    The Pharmacogenetics Knowledge Base (PharmGKB; http://www.pharmgkb.org/) contains genomic, phenotype and clinical information collected from ongoing pharmacogenetic studies. Tools to browse, query, download, submit, edit and process the information are available to registered research network members. A subset of the tools is publicly available. PharmGKB currently contains over 150 genes under study, 14 Coriell populations and a large ontology of pharmacogenetics concepts. The pharmacogenetic concepts and the experimental data are interconnected by a set of relations to form a knowledge base of information for pharmacogenetic researchers. The information in PharmGKB, and its associated tools for processing that information, are tailored for leading-edge pharmacogenetics research. The PharmGKB project was initiated in April 2000 and the first version of the knowledge base went online in February 2001.

    View details for Web of Science ID 000173077100041

    View details for PubMedID 11752281

    View details for PubMedCentralID PMC99138

  • Automating data acquisition into ontologies from pharmacogenetics relational data sources using declarative object definitions and XML. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing Rubin, D. L., Hewett, M., Oliver, D. E., Klein, T. E., Altman, R. B. 2002: 88-99

    Abstract

    Ontologies are useful for organizing large numbers of concepts having complex relationships, such as the breadth of genetic and clinical knowledge in pharmacogenomics. But because ontologies change and knowledge evolves, it is time-consuming to maintain stable mappings to external data sources that are in relational format. We propose a method for interfacing ontology models with data acquisition from external relational data sources. This method uses a declarative interface between the ontology and the data source, and this interface is modeled in the ontology and implemented using XML schema. Data is imported from the relational source into the ontology using XML, and data integrity is checked by validating the XML submission with an XML schema. We have implemented this approach in PharmGKB (http://www.pharmgkb.org/), a pharmacogenetics knowledge base. Our goals were to (1) import genetic sequence data, collected in relational format, into the pharmacogenetics ontology, and (2) automate the process of updating the links between the ontology and data acquisition when the ontology changes. We tested our approach by linking PharmGKB with data acquisition from a relational model of genetic sequence information. The ontology subsequently evolved, and we were able to rapidly update our interface with the external data and continue acquiring the data. Similar approaches may be helpful for integrating other heterogeneous information sources in order to make the diversity of pharmacogenetics data amenable to computational analysis.

    View details for PubMedID 11928521
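
    The core mechanism described above, validating an XML rendering of relational data against an XML Schema before importing it into the ontology, can be illustrated with a small self-contained sketch. It assumes the lxml library, and the schema and element names are invented rather than taken from the PharmGKB interface.

      # Minimal sketch, assuming lxml: check a data submission against an XML Schema
      # and only map validated records toward ontology instances.
      from lxml import etree

      XSD = b"""<?xml version="1.0"?>
      <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <xs:element name="submission">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="variant" maxOccurs="unbounded">
                <xs:complexType>
                  <xs:attribute name="gene" type="xs:string" use="required"/>
                  <xs:attribute name="position" type="xs:integer" use="required"/>
                  <xs:attribute name="allele" type="xs:string" use="required"/>
                </xs:complexType>
              </xs:element>
            </xs:sequence>
          </xs:complexType>
        </xs:element>
      </xs:schema>"""

      SUBMISSION = b"""<?xml version="1.0"?>
      <submission>
        <variant gene="CYP2C9" position="3608" allele="T"/>
        <variant gene="CYP2C9" position="42614" allele="C"/>
      </submission>"""

      schema = etree.XMLSchema(etree.fromstring(XSD))
      doc = etree.fromstring(SUBMISSION)

      if schema.validate(doc):
          # Only validated data would be imported into the knowledge base.
          records = [dict(v.attrib) for v in doc.findall("variant")]
          print(records)
      else:
          print(schema.error_log)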

  • Representing genetic sequence data for pharmacogenomics: an evolutionary approach using ontological and relational models. Bioinformatics Rubin, D. L., Shafa, F., Oliver, D. E., Hewett, M., Altman, R. B. 2002; 18: S207-15

    Abstract

    The information model chosen to store biological data affects the types of queries possible, database performance, and difficulty in updating that information model. Genetic sequence data for pharmacogenetics studies can be complex, and the best information model to use may change over time. As experimental and analytical methods change, and as biological knowledge advances, the data storage requirements and types of queries needed may also change. We developed a model for genetic sequence and polymorphism data, and used XML Schema to specify the elements and attributes required for this model. We implemented this model as an ontology in a frame-based representation and as a relational model in a database system. We collected genetic data from two pharmacogenetics resequencing studies, and formulated queries useful for analysing these data. We compared the ontology and relational models in terms of query complexity, performance, and difficulty in changing the information model. Our results demonstrate benefits of evolving the schema for storing pharmacogenetics data: ontologies perform well in early design stages as the information model changes rapidly and simplify query formulation, while relational models offer improved query speed once the information model and types of queries needed stabilize.

    View details for PubMedID 12169549
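
    The trade-off reported above can be made concrete by posing the same question to the two styles of information model. The schemas below are invented toy versions, not the ones used in the study: a normalized relational model queried with SQL, and a nested, frame-like structure that is easier to reshape while the model is still evolving.

      # Illustrative contrast between a relational model and a frame-like model
      # for the same toy question: which positions in a gene vary, and to what allele?
      import sqlite3

      # Relational style: explicit schema, queries via joins; fast once the model is stable.
      conn = sqlite3.connect(":memory:")
      conn.executescript("""
      CREATE TABLE gene(id INTEGER PRIMARY KEY, symbol TEXT);
      CREATE TABLE variant(id INTEGER PRIMARY KEY, gene_id INTEGER REFERENCES gene(id),
                           position INTEGER, allele TEXT);
      INSERT INTO gene VALUES (1, 'CYP2C9');
      INSERT INTO variant VALUES (1, 1, 3608, 'T'), (2, 1, 42614, 'C');
      """)
      rows = conn.execute("""
          SELECT v.position, v.allele FROM variant v JOIN gene g ON v.gene_id = g.id
          WHERE g.symbol = ?""", ("CYP2C9",)).fetchall()
      print(rows)

      # Frame-like style: nested slots, easy to reshape while the model is in flux.
      frames = {
          "CYP2C9": {
              "variants": [
                  {"position": 3608, "allele": "T"},
                  {"position": 42614, "allele": "C"},
              ]
          }
      }
      print([(v["position"], v["allele"]) for v in frames["CYP2C9"]["variants"]])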

  • Ontology development for a pharmacogenetics knowledge base. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing Oliver, D. E., Rubin, D. L., Stuart, J. M., Hewett, M., Klein, T. E., Altman, R. B. 2002: 65-76

    Abstract

    Research directed toward discovering how genetic factors influence a patient's response to drugs requires coordination of data produced from laboratory experiments, computational methods, and clinical studies. A public repository of pharmacogenetic data should accelerate progress in the field of pharmacogenetics by organizing and disseminating public datasets. We are developing a pharmacogenetics knowledge base (PharmGKB) to support the storage and retrieval of both experimental data and conceptual knowledge. PharmGKB is an Internet-based resource that integrates complex biological, pharmacological, and clinical data in such a way that researchers can submit their data and users can retrieve information to investigate genotype-phenotype correlations. Successful management of the names, meaning, and organization of concepts used within the system is crucial. We have selected a frame-based knowledge-representation system for development of an ontology of concepts and relationships that represent the domain and that permit storage of experimental data. Preliminary experience shows that the ontology we have developed for gene-sequence data allows us to accept, store, and query data submissions.

    View details for PubMedID 11928517

  • Integrating genotype and phenotype information: an overview of the PharmGKB project. Pharmacogenetics Research Network and Knowledge Base. pharmacogenomics journal Klein, T. E., Chang, J. T., Cho, M. K., Easton, K. L., FERGERSON, R., Hewett, M., Lin, Z., Liu, Y., Liu, S., Oliver, D. E., Rubin, D. L., SHAFA, F., Stuart, J. M., Altman, R. B. 2001; 1 (3): 167-170

    View details for PubMedID 11908751

  • A Bayesian network for mammography Annual Symposium of the American-Medical-Informatics-Association Burnside, E., Rubin, D., Shachter, R. HANLEY & BELFUS INC. 2000: 106–110

    Abstract

    The interpretation of a mammogram and decisions based on it involve reasoning and management of uncertainty. The wide variation of training and practice among radiologists results in significant variability in screening performance with attendant cost and efficacy consequences. We have created a Bayesian belief network to integrate the findings on a mammogram, based on the standardized lexicon developed for mammography, the Breast Imaging Reporting And Data System (BI-RADS). Our goal in creating this network is to explore the probabilistic underpinnings of this lexicon as well as standardize mammographic decision-making to the level of expert knowledge.

    View details for Web of Science ID 000170207500023

    View details for PubMedID 11079854
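
    Structurally, a network like the one described above attaches BI-RADS-style descriptor nodes to a disease node, with each node holding a conditional probability table. The sketch below shows only that wiring; the node names and probabilities are invented and do not come from the published network.

      # Sketch of how BI-RADS-style descriptors might be wired into a belief network:
      # a disease node with descriptor nodes as children, each holding a CPT.
      network = {
          "Disease": {"parents": [], "states": ["benign", "malignant"],
                      "cpt": {(): [0.98, 0.02]}},          # illustrative screening prior
          "CalcificationMorphology": {
              "parents": ["Disease"],
              "states": ["punctate", "amorphous", "fine_linear"],
              "cpt": {("benign",):    [0.70, 0.25, 0.05],
                      ("malignant",): [0.20, 0.40, 0.40]},
          },
          "CalcificationDistribution": {
              "parents": ["Disease"],
              "states": ["diffuse", "clustered", "segmental"],
              "cpt": {("benign",):    [0.50, 0.45, 0.05],
                      ("malignant",): [0.10, 0.55, 0.35]},
          },
      }

      # Sanity check: every conditional probability row should sum to 1.
      for name, node in network.items():
          for row, probs in node["cpt"].items():
              assert abs(sum(probs) - 1.0) < 1e-9, (name, row)
      print("network structure OK")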

  • Blood pool and liver enhancement in CT with liposomal iodixanol: Comparison with iohexol ACADEMIC RADIOLOGY Desser, T. S., Rubin, D. L., Muller, H., McIntire, G. L., Bacon, E. R., Toner, J. L. 1999; 6 (3): 176-183

    Abstract

    The authors compared the time course of blood pool and hepatic enhancement of three different doses of liposomal iodixanol with that of iohexol. A liposomal iodixanol formulation was prepared with 200 mg of iodine per milliliter total and 80 mg of iodine per milliliter encapsulated. Twelve normal New Zealand white rabbits divided into four groups received 75-, 100-, or 150-mg encapsulated iodine per kilogram doses of liposomal iodixanol or 2 mL/kg iohexol with 300 mg of iodine per milliliter. A liver section was scanned with serial computed tomography (CT) before the injection, immediately afterward, and at 1-minute intervals for 10 minutes. Region-of-interest measurements of the aorta and liver were plotted at each time point, and contrast enhancement was plotted as a function of time and iodine dose. All liposomal iodixanol doses produced greater liver enhancement than iohexol. Results were significant (P < .05) for the 100-mg and 150-mg iodine per kilogram dose groups at time points beyond 2 minutes. Peak hepatic enhancement (change in attenuation) was 54.9 HU +/- 7.6 with iohexol, compared with 59.6 HU +/- 6.1, 73.3 HU +/- 3.6, and 104.1 HU +/- 8.8 for the 75-, 100-, and 150-mg encapsulated iodine per kilogram doses, respectively. Hepatic enhancement increased rapidly after injection of liposomal iodixanol, plateauing 2-3 minutes later. Blood pool enhancement decreased rapidly. Steady-state liver enhancement with liposomal iodixanol increased linearly with dose. Aortic enhancement was greater with iohexol. Liposomal iodixanol yielded greater hepatic enhancement at lower total iodine doses than iohexol. Although liver enhancement occurred rapidly after injection, blood pool enhancement was brief.

    View details for Web of Science ID 000086025000006

    View details for PubMedID 10898037

  • INFLUENCE OF VISCOSITY ON WIN-39996 AS A CONTRAST AGENT FOR GASTROINTESTINAL MAGNETIC-RESONANCE-IMAGING INVESTIGATIVE RADIOLOGY Rubin, D. L., Muller, H. H., Young, S. W., Hunke, W. A., GORMAN, W. G., Lee, K. C. 1995; 30 (4): 226-231

    Abstract

    The authors discuss the influence of viscosity on the imaging properties of WIN 39996 suspension. WIN 39996 suspension is a magnetically susceptible iron ferrite that provides negative (darkening) contrast enhancement in magnetic resonance imaging of the gastrointestinal tract. The viscosity of WIN 39996 suspension was altered by various stress conditions (1 week to 4.5 months of storage at temperatures of 5 degrees to 70 degrees C) or by various amounts of xanthan gum. Magnetic resonance imaging was performed in vitro on phantoms and in vivo on the gastrointestinal tract of anesthetized dogs. The results indicated that the in vitro and in vivo imaging efficacies of WIN 39996 suspension depended on the viscosity, irrespective of the means by which the viscosity was altered. Specifically, the imaging quality was suitable at viscosities > or = 36.6 cp for in vitro imaging, and > 25 cp for in vivo imaging. The lower in vivo viscosity limit for magnetic resonance imaging compared with the in vitro limit may be due to gastrointestinal peristaltic activities continuously mixing the WIN 39996 suspension to prevent gravitational settling, and the enhancement of signal blackening by intraluminal WIN 39996 that was above and below the plane of the image. It is speculated that the imaging quality of WIN 39996 suspension depends on the degree of dispersion of the magnetically susceptible iron ferrite in the WIN 39996 suspension, and that a minimum viscosity is needed to ensure such dispersion.

    View details for Web of Science ID A1995RA15600005

    View details for PubMedID 7635672

  • NANOPARTICULATE CONTRAST-MEDIA - BLOOD-POOL AND LIVER-SPLEEN IMAGING 1993 Meeting of Contrast Media Research (CMR 93) Rubin, D. L., Desser, T. S., Qing, F., Muller, H. H., Young, S. W., McIntire, G. L., Bacon, E., Cooper, E., Toner, J. LIPPINCOTT WILLIAMS & WILKINS. 1994: S280–S283

    View details for Web of Science ID A1994NX79500096

    View details for PubMedID 7928256

  • QUANTITATION OF SATURATION EFFECTS VERSUS DOSE IN 3-DIMENSIONAL TIME-OF-FLIGHT MAGNETIC-RESONANCE ANGIOGRAPHY WITH BLOOD-POOL CONTRAST AGENTS 1993 Meeting of Contrast Media Research (CMR 93) Desser, T. S., Rubin, D. L., Fan, Q., Muller, H. H., Young, S. W., Kellar, K. E., WELLONS, J. A., Ladd, D. L., Toner, J. L., Snow, R. A. LIPPINCOTT WILLIAMS & WILKINS. 1994: S65–S68

    View details for Web of Science ID A1994NX79500022

    View details for PubMedID 7928274

  • DYNAMICS OF TUMOR IMAGING WITH GD-DTPA POLYETHYLENE-GLYCOL POLYMERS - DEPENDENCE ON MOLECULAR-WEIGHT JOURNAL OF MAGNETIC RESONANCE IMAGING Desser, T. S., Rubin, D. L., Muller, H. H., Qing, F., KHODOR, S., Zanazzi, G., Young, S. W., Ladd, D. L., WELLONS, J. A., Kellar, K. E., Toner, J. L., Snow, R. A. 1994; 4 (3): 467-472

    Abstract

    Macromolecular contrast media offer potential advantages over freely diffusible agents in magnetic resonance (MR) imaging outside the central nervous system. To identify an optimum molecular weight for macromolecular contrast media, the authors studied a novel macromolecular contrast agent, gadolinium diethylenetriaminepentaacetic acid polyethylene glycol (DTPA-PEG), synthesized in seven polymer (average) molecular weights ranging from 10 to 83 kd. Twenty-eight rabbits bearing V2 carcinoma in thighs underwent T1-weighted spin-echo imaging before injection and 5-60 minutes and 24 hours after injection of the Gd-DTPA-PEG polymers or Gd-DTPA at a gadolinium dose of 0.1 mmol/kg. Tumor region-of-interest measurements were obtained at each time point to determine contrast enhancement dynamics. Blood-pool enhancement dynamics were observed for the Gd-DTPA-PEG polymers larger than 20 kd. Polymers smaller than 20 kd displayed dynamics similar to those of the freely diffusible agent Gd-DTPA. Above the 20 kd threshold, tumor enhancement was more rapid for smaller polymers. The authors conclude that the 21.9-kd Gd-DTPA-PEG polymer is best suited for clinical MR imaging.

    View details for Web of Science ID A1994NP29200033

    View details for PubMedID 8061449

  • OPTIMIZATION OF AN ORAL MAGNETIC PARTICLE FORMULATION AS A GASTROINTESTINAL CONTRAST AGENT FOR MAGNETIC-RESONANCE-IMAGING INVESTIGATIVE RADIOLOGY Rubin, D. L., Muller, H. H., Young, S. W., Hunke, W. A., GORMAN, W. G. 1994; 29 (1): 81-86

    Abstract

    Magnetically susceptible iron oxide (MSIO) contrast agents for magnetic resonance imaging (MRI) of the gastrointestinal (GI) tract are limited because they produce magnetic susceptibility artifacts. To determine whether oral magnetic particles (WIN 39996) can be an effective MRI contrast agent without producing induced image artifacts, we optimized a liquid formulation of WIN 39996. A range of concentrations (25-250 micrograms iron/mL) and viscosities (1-600 cP) was imaged in a phantom at 1.5 T using conventional spin-echo and gradient-recalled echo pulse sequences. Some formulations also contained titanium. All concentrations of WIN 39996 at 1 cP produced susceptibility artifacts. For formulations in the 150 to 600 cP range, the 125 to 150 micrograms/mL concentrations produced signal blackening and magnetic susceptibility image distortion comparable to an air control. Concentrations greater than 150 micrograms/mL were unacceptable because they produced significant susceptibility artifacts, while concentrations less than 125 micrograms/mL were undesirable because they produced insufficient signal blackening. These preliminary in-vitro studies suggest that an optimized liquid formulation of WIN 39996 can be produced that yields excellent negative contrast without producing image artifacts.

    View details for Web of Science ID A1994NA65700013

    View details for PubMedID 8144343

  • LIQUID ORAL MAGNETIC PARTICLES AS A GASTROINTESTINAL CONTRAST AGENT FOR MR IMAGING - EFFICACY INVIVO JMRI-JOURNAL OF MAGNETIC RESONANCE IMAGING Rubin, D. L., Muller, H. H., Sidhu, M. K., Young, S. W., Hunke, W. A., GORMAN, W. G. 1993; 3 (1): 113-118

    Abstract

    Recent in vitro studies suggested there is an optimal range of concentration and viscosity for a liquid formulation of oral magnetic particles (WIN 39996) for magnetic resonance (MR) imaging of the gastrointestinal (GI) tract. To determine whether this formulation is also effective in vivo and whether differing viscosity and administration regimen affect GI distribution of the contrast agent, a range of concentrations of iron (75, 150, and 200 micrograms/mL) and viscosities (1, 150, and 600 cp) were imaged in dogs at 1.5 T with conventional spin-echo and fat-saturation pulse sequences. The effects of dose regimen (single vs divided dose) and subject position (supine vs right lateral decubitus) were also studied. The 75 and 200 micrograms/mL concentrations were unacceptable for MR imaging, while 150 micrograms/mL was effective. The GI distribution of the contrast agent was affected jointly by viscosity, subject position, and dose regimen. The 150 micrograms/mL formulation produced excellent GI contrast enhancement in vivo for both 150- and 600-cp viscosities. The choice of optimal viscosity may depend on the preferred administration regimen.

    View details for Web of Science ID A1993KJ72500016

    View details for PubMedID 8428076

  • FORMULATION OF RADIOGRAPHICALLY DETECTABLE GASTROINTESTINAL CONTRAST AGENTS FOR MAGNETIC-RESONANCE-IMAGING - EFFECTS OF A BARIUM-SULFATE ADDITIVE ON MR CONTRAST AGENT EFFECTIVENESS MAGNETIC RESONANCE IN MEDICINE Rubin, D. L., Muller, H. H., Young, S. W. 1992; 23 (1): 154-165

    Abstract

    Complete and homogeneous distribution of gastrointestinal (GI) contrast media is an important factor for their effective use in computed tomography as well as in magnetic resonance (MR) imaging. A radiographic method (using fluoroscopy or spot films) could be effective for monitoring intestinal filling with GI contrast agents for MR imaging (GICMR), but it would require the addition of a radiopaque agent to most GICMR. This study was conducted to determine the minimum amount of barium additive necessary to be radiographically visible and to evaluate whether this additive influences the signal characteristics of the GICMR. A variety of barium sulfate preparations (3-12% wt/vol) were tested in dogs to determine the minimum quantity needed to make the administered agent visible during fluoroscopy and on abdominal radiographs. Solutions of 10 different potential GI contrast agents (Gd-DTPA, ferric ammonium citrate, Mn-DPDP, chromium-EDTA, gadolinium-oxalate, ferrite particles, water, mineral oil, lipid emulsion, and methylcellulose) were prepared without ("nondoped") and with ("doped") the barium sulfate additive. MR images of the solutions in tubes were obtained at 0.38 T using 10 different spin-echo pulse sequences. Region of interest (ROI) measurements of contrast agent signal intensity (SI) were made. In addition, for the paramagnetic contrast media, the longitudinal and transverse relaxivities (R1 and R2) were measured. A 6% wt/vol suspension of barium was the smallest concentration yielding adequate radiopacity in the GI tract. Except for gadolinium-oxalate, there was no statistically significant difference in SI between doped and nondoped solutions with most pulse sequences used. In addition, the doped and nondoped solutions yielded comparable R1 and R2 values. We conclude that barium sulfate 6% wt/vol added to MR contrast agents produces a suspension with sufficient radiodensity to be viewed radiographically, and it does not cause significant alteration in the MR signal appearance of most GICMR. These formulations can be useful for achieving optimal filling of the gastrointestinal tract prior to MRI.

    View details for Web of Science ID A1992HA59900015

    View details for PubMedID 1734177

  • INTRALUMINAL CONTRAST ENHANCEMENT AND MR VISUALIZATION OF THE BOWEL WALL - EFFICACY OF PFOB JMRI-JOURNAL OF MAGNETIC RESONANCE IMAGING Rubin, D. L., Muller, H. H., NINOMURCIA, M., Sidhu, M., CHRISTY, V., Young, S. W. 1991; 1 (3): 371-380

    Abstract

    Efforts to develop satisfactory intraluminal gastrointestinal contrast agents for magnetic resonance (MR) imaging have focused on depicting only the bowel lumen to exclude possible involvement by a pathologic process. To determine whether the bowel wall can be adequately imaged with use of the contrast agent and whether bowel wall visualization is a better index of the utility of the contrast agent for MR imaging, perfluorooctylbromide (PFOB) was studied in human subjects. Twenty consecutive patients referred for abdominal or pelvic MR imaging were selected. All patients were given 400-1,000 mL of PFOB orally. MR imaging was performed at 0.38 and 1.5 T with T1- and T2-weighted spin-echo pulse sequences before and after administration of PFOB. The images were graded independently by three blinded readers. All readers reported significantly superior conspicuity of the bowel lumen and wall after PFOB than before PFOB administration (P less than .002). Among the post-PFOB studies, those with superior bowel wall visualization demonstrated superior overall image quality. In three patients, lesions were optimally demonstrated because the relationship of the process to the bowel wall, rather than just to the lumen, was identified. In two patients, masses arising within the bowel wall could be identified prospectively only when the bowel wall was adequately imaged. The authors conclude that while lumen identification is improved with PFOB, its greatest clinical utility may be in facilitating intestinal wall visualization.

    View details for Web of Science ID A1991HA76500013

    View details for PubMedID 1802151

  • METHODS FOR THE SYSTEMATIC INVESTIGATION OF GASTROINTESTINAL CONTRAST-MEDIA FOR MRI - EVALUATION OF INTESTINAL DISTRIBUTION BY RADIOGRAPHIC MONITORING MAGNETIC RESONANCE IMAGING Rubin, D. L., Muller, H. H., Young, S. W. 1991; 9 (3): 285-293

    Abstract

    Comparison of the effectiveness of various gastrointestinal (GI) contrast agents for magnetic resonance (MR) imaging is often complicated by varying amounts of intraluminal filling with the orally administered agents. To achieve more uniform and reproducible imaging results with GI contrast agents for MR imaging (GICMR), we evaluated a radiographic method for monitoring intraluminal filling and distribution. Solutions of Mn-DPDP (2 mM), to which a small amount of barium sulfate (6 wt/vol%) was added, were administered orally to dogs. Gastric emptying and small bowel transit were monitored fluoroscopically. MR imaging was performed either 1) at a fixed time after administration of the contrast agent or 2) at a variable interval when the contrast agent was observed fluoroscopically to be in the terminal ileum. When initiation of MR imaging was guided by fluoroscopic monitoring of intestinal contrast distribution, uniform and reproducible intestinal contrast enhancement by GICMR was achieved. However, when MR imaging was performed at a fixed time interval after oral administration, non-uniform and variable GI visualization was obtained, and this corresponded to the variable intestinal distribution observed fluoroscopically. We conclude that reproducible intestinal filling with orally administered contrast agents can be accomplished with a radiographic monitoring technique, and this promotes more consistent GI visualization on MR images. Such standardized and reproducible methods are necessary for studies in which the effectiveness of GI contrast media for MR imaging is evaluated and compared.

    View details for Web of Science ID A1991FW09600006

    View details for PubMedID 1908931

  • MAGNETIC-SUSCEPTIBILITY EFFECTS AND THEIR APPLICATION IN THE DEVELOPMENT OF NEW FERROMAGNETIC CATHETERS FOR MAGNETIC-RESONANCE-IMAGING INVESTIGATIVE RADIOLOGY Rubin, D. L., RATNER, A. V., Young, S. W. 1990; 25 (12): 1325-1332

    Abstract

    Newly developed ferromagnetic catheters (Fe-Caths) are more conspicuous than conventional radiographic catheters (Rad-Caths) on magnetic resonance (MR) images because they produce recognizable ferromagnetic signal patterns (FSPs). To determine how MRI parameters influence these patterns, the imaging characteristics of nine Fe-Caths (ferromagnetic concentration 0.01 to 1.0 weight/weight %) were studied systematically and compared with three Rad-Caths. All catheters were studied in stationary and moving phantoms at mid-field (0.38 T) and high-field (1.5 T) strength using spin-echo and gradient-echo pulse sequences. Rad-Caths always produced a signal void. Fe-Caths produced FSPs, the size of which depended on the orientation of the catheter with respect to the main magnetic field, the concentration of ferromagnetic agent in the catheter, and the direction and strength of the frequency encoding gradient. When Fe-Caths were positioned perpendicular to the main magnetic field, they produced FSPs; however, when they were parallel to the main magnetic field, Fe-Caths produced no FSP, thus having a similar appearance to the Rad-Caths. Ferromagnetic catheters produce conspicuous patterns on MR images that depend on catheter orientation in the main magnetic field and vary predictably with the MRI parameters.

    View details for Web of Science ID A1990EM20500007

    View details for PubMedID 2279913

  • DETECTION OF HEPATIC MALIGNANCIES USING MN-DPDP (MANGANESE DIPYRIDOXAL DIPHOSPHATE) HEPATOBILIARY MRI CONTRAST AGENT MAGNETIC RESONANCE IMAGING Young, S. W., Bradley, B., Muller, H. H., Rubin, D. L. 1990; 8 (3): 267-276

    Abstract

    A new hepatobiliary contrast agent (Mn-DPDP) was used in the detection of liver metastases in six rabbits with seven hepatic V2 carcinomas. This contrast agent is derived from pyridoxyl-5-phosphate which is biomimetically designed to be secreted by the hepatocyte. After Mn-DPDP administration, a 105% increase in liver signal to noise was obtained using a 200/20 (TR/TE) pulsing sequence, and a 62% decrease in intensity was observed using a 1200/60 pulsing sequence. Liver V2 carcinoma contrast enhancement increased 427% using the 200/20 pulsing sequence and 176% using the 1200/60 pulsing sequence. Four of seven V2 carcinomas were not detectable prior to the administration of Mn-DPDP (50 mumol/kg). Two neoplasms were only detectable in retrospect (after Mn-DPDP) on the 1200/60 sequence. The smallest neoplasms detected in this study were 1-4 mm. Mn-DPDP appears to be a promising MRI contrast agent.

    View details for Web of Science ID A1990DL77400011

    View details for PubMedID 2114511

  • INFECTIOUS ROTAVIRUS ENTERS CELLS BY DIRECT CELL-MEMBRANE PENETRATION, NOT BY ENDOCYTOSIS JOURNAL OF VIROLOGY KALJOT, K. T., Shaw, R. D., Rubin, D. H., Greenberg, H. B. 1988; 62 (4): 1136-1144

    Abstract

    Rotaviruses are icosahedral viruses with a segmented, double-stranded RNA genome. They are the major cause of severe infantile infectious diarrhea. Rotavirus growth in tissue culture is markedly enhanced by pretreatment of virus with trypsin. Trypsin activation is associated with cleavage of the viral hemagglutinin (viral protein 3 [VP3]; 88 kilodaltons) into two fragments (60 and 28 kilodaltons). The mechanism by which proteolytic cleavage leads to enhanced growth is unknown. Cleavage of VP3 does not alter viral binding to cell monolayers. In previous electron microscopic studies of infected cell cultures, it has been demonstrated that rotavirus particles enter cells by both endocytosis and direct cell membrane penetration. To determine whether trypsin treatment affected rotavirus internalization, we studied the kinetics of entry of infectious rhesus rotavirus (RRV) into MA104 cells. Trypsin-activated RRV was internalized with a half-time of 3 to 5 min, while nonactivated virus disappeared from the cell surface with a half-time of 30 to 50 min. In contrast to trypsin-activated RRV, loss of nonactivated RRV from the cell surface did not result in the appearance of infection, as measured by plaque formation. Endocytosis inhibitors (sodium azide, dinitrophenol) and lysosomotropic agents (ammonium chloride, chloroquine) had a limited effect on the entry of infectious virus into cells. Purified trypsin-activated RRV added to cell monolayers at pH 7.4 mediated 51Cr, [14C]choline, and [3H]inositol release from prelabeled MA104 cells. This release could be specifically blocked by neutralizing antibodies to VP3. These results suggest that MA104 cell infection follows the rapid entry of trypsin-activated RRV by direct cell membrane penetration. Cell membrane penetration of infectious RRV is initiated by trypsin cleavage of VP3. Neutralizing antibodies can inhibit this direct membrane penetration.

    View details for Web of Science ID A1988M444000007

    View details for PubMedID 2831376

  • PULMONARY-FUNCTION IN ADVANCED PULMONARY-HYPERTENSION THORAX Burke, C. M., Glanville, A. R., MORRIS, A. J., Rubin, D., Harvey, J. A., Theodore, J., Robin, E. D. 1987; 42 (2): 131-135

    Abstract

    Pulmonary mechanical function and gas exchange were studied in 33 patients with advanced pulmonary vascular disease, resulting from primary pulmonary hypertension in 18 cases and from Eisenmenger physiology in 15 cases. Evidence of airway obstruction was found in most patients. In addition, mean total lung capacity (TLC) was only 81.5% of predicted and 27% of our subjects had values of TLC less than one standard deviation below the mean predicted value. The mean value for transfer factor (TLCO) was 71.8% of predicted and appreciable arterial hypoxaemia was present, which was disproportionate to the mild derangements in pulmonary mechanics. Patients with Eisenmenger physiology had significantly lower values of arterial oxygen tension (PaO2) (p less than 0.05) and of maximum mid expiratory flow (p less than 0.05) and significantly higher pulmonary arterial pressure (p less than 0.05) than those with primary pulmonary hypertension, but no other variables were significantly different between the two subpopulations. It is concluded that advanced pulmonary vascular disease in patients with primary pulmonary hypertension and Eisenmenger physiology is associated not only with severe hypoxaemia but also with altered pulmonary mechanical function.

    View details for Web of Science ID A1987F940800010

    View details for PubMedID 3433237