Bio


Shahira Abousamra is a postdoctoral scholar in the Department of Biomedical Data Science at Stanford University, working with Dr. Sylvia Plevritis in the Plevritis Lab. She earned her PhD in Computer Science from Stony Brook University in 2024 under the supervision of Dr. Chao Chen and Dr. Dimitris Samaras.

In her research, she integrates mathematical modeling with computer vision to create more robust solutions, particularly in the context of advancing cancer research and enhancing our understanding of the tumor microenvironment. She leverages computational topology and spatial statistics to provide spatial semantic grounding that complements machine learning models. She publishes in top computer vision, artificial intelligence, and medical image analysis conferences, including CVPR, ECCV, ICCV, AAAI, and MICCAI.

Honors & Awards


  • Rising Stars in Data Science Workshop, Stanford University (2025)
  • Rising Stars in EECS Workshop, MIT and Boston University (2025)
  • Doctoral Consortium Award, The IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR (2023)
  • Outstanding Reviewer, International Conference on Computer Vision, ICCV (2023)

Professional Education


  • Doctor of Philosophy, Computer Science, Stony Brook University (2024)
  • Master of Science, Alexandria University (2011)
  • Bachelor of Science, Alexandria University (2005)

Stanford Advisors

  • Sylvia Plevritis

All Publications


  • TopoCellGen: Generating Histopathology Cell Topology with a Diffusion Model. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Xu, M., Gupta, S., Hu, X., Li, C., Abousamra, S., Samaras, D., Prasanna, P., Chen, C. 2025

    Abstract

    Accurately modeling multi-class cell topology is crucial in digital pathology, as it provides critical insights into tissue structure and pathology. The synthetic generation of cell topology enables realistic simulations of complex tissue environments, enhances downstream tasks by augmenting training data, aligns more closely with pathologists' domain knowledge, and offers new opportunities for controlling and generalizing the tumor microenvironment. In this paper, we propose a novel approach that integrates topological constraints into a diffusion model to improve the generation of realistic, contextually accurate cell topologies. Our method refines the simulation of cell distributions and interactions, increasing the precision and interpretability of results in downstream tasks such as cell detection and classification. To assess the topological fidelity of generated layouts, we introduce a new metric, Topological Fréchet Distance (TopoFD), which overcomes the limitations of traditional metrics like FID in evaluating topological structure. Experimental results demonstrate the effectiveness of our approach in generating multi-class cell layouts that capture intricate topological relationships. Code is available at https://github.com/Melon-Xu/TopoCellGen.

    View details for DOI 10.1109/CVPR52734.2025.01954

    View details for PubMedCentralID PMC12380007
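    As a rough, hypothetical illustration of the idea behind a Fréchet-style distance between feature distributions (the paper's TopoFD is defined over topological features and is not reproduced here), a sketch assuming diagonal Gaussian summaries of real and generated layout features:

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Squared 2-Wasserstein (Frechet) distance between two Gaussians with
    diagonal covariances: |mu1 - mu2|^2 + sum_i (sqrt(v1_i) - sqrt(v2_i))^2."""
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(var1, var2))
    return mean_term + cov_term

def summarize(features):
    """Per-dimension mean and variance of a list of feature vectors
    (e.g., vectorized topological statistics of cell layouts)."""
    n, d = len(features), len(features[0])
    mu = [sum(f[i] for f in features) / n for i in range(d)]
    var = [sum((f[i] - mu[i]) ** 2 for f in features) / n for i in range(d)]
    return mu, var

# Toy feature vectors for real vs. generated layouts (invented numbers).
real = [[1.0, 2.0], [1.2, 1.8], [0.8, 2.2]]
gen = [[1.1, 2.1], [1.3, 1.9], [0.9, 2.3]]
mu_r, var_r = summarize(real)
mu_g, var_g = summarize(gen)
print(round(frechet_distance_diag(mu_r, var_r, mu_g, var_g), 4))  # → 0.02
```

    A full FID-style computation would use full covariance matrices and a matrix square root; the diagonal case above is only the simplest instance of the same formula.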

  • MATCH: Multi-faceted Adaptive Topo-Consistency for Semi-Supervised Histopathology Segmentation. The Thirty-Ninth Annual Conference on Neural Information Processing Systems, NeurIPS Xu, M., Hu, X., Abousamra, S., Li, C., Chen, C. 2025
  • Label-Efficient Deep Color Deconvolution of Brightfield Multiplex IHC Images. IEEE Transactions on Medical Imaging Abousamra, S., Fassler, D., Gupta, R., Kurc, T., Escobar-Hoyos, L. F., Samaras, D., Shroyer, K. R., Saltz, J., Chen, C. 2025

    View details for DOI 10.1109/TMI.2025.3609245

  • New Spatial Phenotypes from Imaging Uncover Survival Differences for Breast Cancer Patients. ACM-BCB, ACM Conference on Bioinformatics, Computational Biology and Biomedicine Hasan, M., Kim Silva, A., Abousamra, S., Tang, S. J., Prasanna, P., Saltz, J., Gardner, K., Chen, C., Yurovsky, A. 2024

    Abstract

    Imaging technologies have revolutionized the study of the tumor microenvironment (TME) by leveraging spatial analysis, which enables the exploration of tissue organization and cellular communication, as well as aiding cancer diagnosis and prognosis. However, while many advanced spatial analysis methods have been recently published, they are enmeshed with specific imaging technology. An opportunity exists to develop a technology-agnostic methodology that captures complex spatial patterns in the TME as phenotypes to use in downstream tasks. In this paper, we present a novel variation of spatial g-function and a comprehensive imaging-technology-agnostic framework that identifies rich spatial phenotypes that can be used in survival analysis and classification tasks. Applying our methodology to breast cancer, we uncover spatial phenotypes with significance to survival across racial groups and molecular subtypes of breast cancer. We find other phenotypes that are significant to the survival of specific patient categories (such as African American). We also demonstrate that our phenotypes reflect specific biological contexts. These results highlight the relevance of our proposed spatial analysis and phenotype discovery pipeline and demonstrate the benefits of the systematic exploration of spatial phenotypes for more personalized diagnosis and treatments.

    View details for DOI 10.1145/3698587.3701333

    View details for PubMedID 40620540

    View details for PubMedCentralID PMC12228512
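    A minimal sketch of a classical nearest-neighbor G-function, one of the simpler spatial statistics in this family (the paper proposes its own variant, which is not reproduced here; the cell coordinates below are invented):

```python
import math

def g_function(points, radii):
    """Empirical nearest-neighbor G-function: for each radius r, the
    fraction of points whose nearest neighbor lies within distance r."""
    nn = []
    for i, p in enumerate(points):
        d = min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        nn.append(d)
    n = len(nn)
    return [sum(1 for d in nn if d <= r) / n for r in radii]

# Toy cell positions: a tight pair plus a distant outlier; at r = 0.2
# only the pair has a nearest neighbor within range.
cells = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
print(g_function(cells, [0.2, 1.0, 10.0]))
```

    Comparing such a curve against the G-function of a random (Poisson) layout is the usual way to quantify clustering versus dispersion of a cell population.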

  • Semi-supervised Segmentation of Histopathology Images with Noise-Aware Topological Consistency. Computer Vision - ECCV, European Conference on Computer Vision Xu, M., Hu, X., Gupta, S., Abousamra, S., Chen, C. 2024; 15136: 271-289

    Abstract

    In digital pathology, segmenting densely distributed objects like glands and nuclei is crucial for downstream analysis. Since detailed pixel-wise annotations are very time-consuming, we need semi-supervised segmentation methods that can learn from unlabeled images. Existing semi-supervised methods are often prone to topological errors, e.g., missing or incorrectly merged/separated glands or nuclei. To address this issue, we propose TopoSemiSeg, the first semi-supervised method that learns the topological representation from unlabeled histopathology images. The major challenge is that for unlabeled images, we only have predictions carrying noisy topology. To this end, we introduce a noise-aware topological consistency loss to align the representations of a teacher and a student model. By decomposing the topology of the prediction into signal topology and noisy topology, we ensure that the models learn the true topological signals and become robust to noise. Extensive experiments on public histopathology image datasets show the superiority of our method, especially on topology-aware evaluation metrics. Code is available at https://github.com/Melon-Xu/TopoSemiSeg.

    View details for DOI 10.1007/978-3-031-73229-4_16

    View details for PubMedID 40557360

    View details for PubMedCentralID PMC12185923

  • Hard Negative Sample Mining for Whole Slide Image Classification. Medical Image Computing and Computer-Assisted Intervention, MICCAI Huang, W., Hu, X., Abousamra, S., Prasanna, P., Chen, C. 2024; 15004: 144-154

    Abstract

    Weakly supervised whole slide image (WSI) classification is challenging due to the lack of patch-level labels and high computational costs. State-of-the-art methods use self-supervised patch-wise feature representations for multiple instance learning (MIL). Recently, methods have been proposed to fine-tune the feature representation on the downstream task using pseudo labeling, but mostly focusing on selecting high-quality positive patches. In this paper, we propose to mine hard negative samples during fine-tuning. This allows us to obtain better feature representations and reduce the training cost. Furthermore, we propose a novel patch-wise ranking loss in MIL to better exploit these hard negative samples. Experiments on two public datasets demonstrate the efficacy of these proposed ideas. Our codes are available at https://github.com/winston52/HNM-WSI.

    View details for DOI 10.1007/978-3-031-72083-3_14

    View details for PubMedID 40556770

    View details for PubMedCentralID PMC12185924
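    A minimal sketch of the hard-negative selection idea: among patches from negative slides, keep those the current model scores most positively (the patch ids, scores, and labels below are invented; the paper's full method additionally uses a ranking loss in MIL):

```python
def mine_hard_negatives(patches, scores, labels, k):
    """From patches known to be negative (label 0), pick the k with the
    highest predicted positive score -- the 'hardest' negatives."""
    negs = [(s, p) for p, s, y in zip(patches, scores, labels) if y == 0]
    negs.sort(key=lambda t: t[0], reverse=True)
    return [p for _, p in negs[:k]]

# Toy example: patch ids with model scores and patch-level labels.
patches = ["p1", "p2", "p3", "p4"]
scores = [0.9, 0.2, 0.7, 0.4]
labels = [0, 0, 1, 0]
print(mine_hard_negatives(patches, scores, labels, 2))  # → ['p1', 'p4']
```

    Fine-tuning the feature extractor on such confidently wrong negatives is what sharpens the representation at the decision boundary.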

  • Spatial Diffusion for Cell Layout Generation. Medical Image Computing and Computer-Assisted Intervention, MICCAI Li, C., Hu, X., Abousamra, S., Xu, M., Chen, C. 2024; 15004: 481-491

    Abstract

    Generative models, such as GANs and diffusion models, have been used to augment training sets and boost performance in different tasks. We focus instead on generative models for cell detection, i.e., locating and classifying cells in given pathology images. One important source of information that has been largely overlooked is the spatial pattern of the cells. In this paper, we propose a spatial-pattern-guided generative model for cell layout generation. Specifically, we propose a novel diffusion model that is guided by spatial features and generates realistic cell layouts. We explore different density models as spatial features for the diffusion model. In downstream tasks, we show that the generated cell layouts can be used to guide the generation of high-quality pathology images. Augmenting with these images can significantly boost the performance of SOTA cell detection methods. The code is available at https://github.com/superlc1995/Diffusion-cell.

    View details for DOI 10.1007/978-3-031-72083-3_45

    View details for PubMedID 40586094

    View details for PubMedCentralID PMC12206494

  • Uncertainty Estimation for Tumor Prediction with Unlabeled Data. IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW Yun, J., Abousamra, S., Li, C., Gupta, R., Kurc, T., Samaras, D., Van Dyke, A., Saltz, J., Chen, C. 2024; 2024: 6946-6954

    Abstract

    Estimating uncertainty of a neural network is crucial in providing transparency and trustworthiness. In this paper, we focus on uncertainty estimation for digital pathology prediction models. To exploit the large amount of unlabeled data in digital pathology, we propose to adopt a novel learning method that can fully exploit unlabeled data. The proposed method achieves superior performance compared with different baselines, including the celebrated Monte-Carlo Dropout. Close-up inspection of uncertain regions reveals insight into the model and improves the trustworthiness of the models.

    View details for DOI 10.1109/cvprw63382.2024.00688

    View details for PubMedID 39553831

    View details for PubMedCentralID PMC11567674
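    For illustration only, a minimal sketch of the Monte-Carlo Dropout baseline mentioned above: run several stochastic forward passes with random dropout masks, average the predicted probabilities, and report predictive entropy as uncertainty (the tiny linear scorer, weights, and input below are hypothetical):

```python
import math
import random

def mc_dropout_uncertainty(logit_fn, x, passes=100, p_drop=0.5, seed=0):
    """Monte-Carlo Dropout sketch: average class probabilities over
    stochastic forward passes, then report predictive entropy."""
    rng = random.Random(seed)
    probs = []
    for _ in range(passes):
        mask = [rng.random() > p_drop for _ in x]  # random dropout mask
        z = logit_fn([xi if keep else 0.0 for xi, keep in zip(x, mask)])
        probs.append(1.0 / (1.0 + math.exp(-z)))  # sigmoid
    p = sum(probs) / len(probs)
    entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return p, entropy

# Hypothetical linear 'model' over a 3-feature patch descriptor.
weights = [2.0, -1.0, 0.5]
logit = lambda v: sum(w * vi for w, vi in zip(weights, v))
p, h = mc_dropout_uncertainty(logit, [1.0, 0.2, 0.4])
print(round(p, 3), round(h, 3))
```

    In a real pipeline the dropout masks would be applied inside the trained network's layers; the averaging and entropy computation are the same.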

  • Keratin 17 modulates the immune topography of pancreatic cancer. Journal of translational medicine Delgado-Coka, L., Horowitz, M., Torrente-Goncalves, M., Roa-Peña, L., Leiton, C. V., Hasan, M., Babu, S., Fassler, D., Oentoro, J., Bai, J. K., Petricoin, E. F., Matrisian, L. M., Blais, E. M., Marchenko, N., Allard, F. D., Jiang, W., Larson, B., Hendifar, A., Chen, C., Abousamra, S., Samaras, D., Kurc, T., Saltz, J., Escobar-Hoyos, L. F., Shroyer, K. R. 2024; 22 (1): 443

    Abstract

    The immune microenvironment impacts tumor growth, invasion, metastasis, and patient survival and may provide opportunities for therapeutic intervention in pancreatic ductal adenocarcinoma (PDAC). Although never studied as a potential modulator of the immune response in most cancers, Keratin 17 (K17), a biomarker of the most aggressive (basal) molecular subtype of PDAC, is intimately involved in the histogenesis of the immune response in psoriasis, basal cell carcinoma, and cervical squamous cell carcinoma. Thus, we hypothesized that K17 expression could also impact the immune cell response in PDAC, and that uncovering this relationship could provide insight to guide the development of immunotherapeutic opportunities to extend patient survival. Multiplex immunohistochemistry (mIHC) and automated image analysis based on novel computational imaging technology were used to decipher the abundance and spatial distribution of T cells, macrophages, and tumor cells, relative to K17 expression in 235 PDACs. K17 expression had profound effects on the exclusion of intratumoral CD8+ T cells and was also associated with decreased numbers of peritumoral CD8+ T cells, CD16+ macrophages, and CD163+ macrophages (p < 0.0001). The differences in the intratumor and peritumoral CD8+ T cell abundance were not impacted by neoadjuvant therapy, tumor stage, grade, lymph node status, histologic subtype, nor KRAS, p53, SMAD4, or CDKN2A mutations. Thus, K17 expression correlates with major differences in the immune microenvironment that are independent of any tested clinicopathologic or tumor intrinsic variables, suggesting that targeting K17-mediated immune effects on the immune system could restore the innate immunologic response to PDAC and might provide novel opportunities to restore immunotherapeutic approaches for this most deadly form of cancer.

    View details for DOI 10.1186/s12967-024-05252-1

    View details for PubMedID 38730319

    View details for PubMedCentralID PMC11087249

  • ChampKit: A framework for rapid evaluation of deep neural networks for patch-based histopathology classification. Computer methods and programs in biomedicine Kaczmarzyk, J. R., Gupta, R., Kurc, T. M., Abousamra, S., Saltz, J. H., Koo, P. K. 2023; 239: 107631

    Abstract

    Histopathology is the gold standard for diagnosis of many cancers. Recent advances in computer vision, specifically deep learning, have facilitated the analysis of histopathology images for many tasks, including the detection of immune cells and microsatellite instability. However, it remains difficult to identify optimal models and training configurations for different histopathology classification tasks due to the abundance of available architectures and the lack of systematic evaluations. Our objective in this work is to present a software tool that addresses this need and enables robust, systematic evaluation of neural network models for patch classification in histology in a light-weight, easy-to-use package for both algorithm developers and biomedical researchers. Here we present ChampKit (Comprehensive Histopathology Assessment of Model Predictions toolKit): an extensible, fully reproducible evaluation toolkit that is a one-stop-shop to train and evaluate deep neural networks for patch classification. ChampKit curates a broad range of public datasets. It enables training and evaluation of models supported by timm directly from the command line, without the need for users to write any code. External models are enabled through a straightforward API and minimal coding. As a result, ChampKit facilitates the evaluation of existing and new models and deep learning architectures on pathology datasets, making it more accessible to the broader scientific community. To demonstrate the utility of ChampKit, we establish baseline performance for a subset of possible models that could be employed with ChampKit, focusing on several popular deep learning models, namely ResNet18, ResNet50, and R26-ViT, a hybrid vision transformer. In addition, we compare each model trained either from random weight initialization or with transfer learning from ImageNet pretrained models. For ResNet18, we also consider transfer learning from a self-supervised pretrained model. The main result of this paper is the ChampKit software. Using ChampKit, we were able to systematically evaluate multiple neural networks across six datasets. We observed mixed results when evaluating the benefits of pretraining versus random initialization, with no clear benefit except in the low data regime, where transfer learning was found to be beneficial. Surprisingly, we found that transfer learning from self-supervised weights rarely improved performance, which is counter to other areas of computer vision. Choosing the right model for a given digital pathology dataset is nontrivial. ChampKit provides a valuable tool to fill this gap by enabling the evaluation of hundreds of existing (or user-defined) deep learning models across a variety of pathology tasks. Source code and data for the tool are freely accessible at https://github.com/SBU-BMI/champkit.

    View details for DOI 10.1016/j.cmpb.2023.107631

    View details for PubMedID 37271050

    View details for PubMedCentralID PMC11093625

  • Unsupervised Stain Decomposition via Inversion Regulation for Multiplex Immunohistochemistry Images. Proceedings of machine learning research Abousamra, S., Fassler, D., Yao, J., Gupta, R., Kurc, T., Escobar-Hoyos, L., Samaras, D., Shroyer, K., Saltz, J., Chen, C. 2023; 227: 74-94

    Abstract

    Multiplex Immunohistochemistry (mIHC) is a cost-effective and accessible method for in situ labeling of multiple protein biomarkers in a tissue sample. By assigning a different stain to each biomarker, it allows the visualization of different types of cells within the tumor vicinity for downstream analysis. However, detecting different types of stains in a given mIHC image is challenging, especially when the number of stains is high. Previous deep-learning-based methods mostly assume full supervision, yet the annotation can be costly. In this paper, we propose a novel unsupervised stain decomposition method to detect different stains simultaneously. Our method does not require any supervision, except for color samples of different stains. A main technical challenge is that the problem is underdetermined and can have multiple solutions. To overcome this issue, we propose a novel inversion regulation technique, which eliminates most undesirable solutions. On a 7-plex IHC image dataset, the proposed method achieves high-quality stain decomposition results without human annotation.

    View details for DOI 10.1109/ISBI45749.2020.9098652

    View details for PubMedID 38817539

    View details for PubMedCentralID PMC11138139
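    As background, classical Ruifrok-Johnston-style color deconvolution with known stain vectors can be sketched as below; the paper's method is unsupervised and quite different, and the unit-norm stain vectors here are illustrative only:

```python
import math

def stain_concentrations(rgb, stains, i0=255.0):
    """Beer-Lambert color deconvolution sketch: convert a pixel to optical
    density (OD = -log10(I / I0)) and solve least squares OD ~ S @ c for
    the concentrations c of two known stain OD vectors."""
    od = [-math.log10(max(ch, 1.0) / i0) for ch in rgb]
    s1, s2 = stains
    # Normal equations for the 3x2 least-squares system.
    a11 = sum(a * a for a in s1)
    a12 = sum(a * b for a, b in zip(s1, s2))
    a22 = sum(b * b for b in s2)
    b1 = sum(a * o for a, o in zip(s1, od))
    b2 = sum(b * o for b, o in zip(s2, od))
    det = a11 * a22 - a12 * a12
    c1 = (b1 * a22 - b2 * a12) / det
    c2 = (b2 * a11 - b1 * a12) / det
    return c1, c2

# Hypothetical stain OD vectors (hematoxylin-like and DAB-like).
hema = (0.65, 0.70, 0.29)
dab = (0.27, 0.57, 0.78)
print(stain_concentrations((100, 80, 120), (hema, dab)))
```

    With more stains than color channels this linear system becomes underdetermined, which is exactly the regime the paper's inversion regulation is designed to handle.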

  • Topology-Guided Multi-Class Cell Context Generation for Digital Pathology. The IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR Abousamra, S., Gupta, R., Kurc, T., Samaras, D., Saltz, J., Chen, C. 2023; 2023: 3323-3333

    Abstract

    In digital pathology, the spatial context of cells is important for cell classification, cancer diagnosis and prognosis. To model such complex cell context, however, is challenging. Cells form different mixtures, lineages, clusters and holes. To model such structural patterns in a learnable fashion, we introduce several mathematical tools from spatial statistics and topological data analysis. We incorporate such structural descriptors into a deep generative model as both conditional inputs and a differentiable loss. This way, we are able to generate high quality multi-class cell layouts for the first time. We show that the topology-rich cell layouts can be used for data augmentation and improve the performance of downstream tasks such as cell classification.

    View details for DOI 10.1109/cvpr52729.2023.00324

    View details for PubMedID 38741683

    View details for PubMedCentralID PMC11090253
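    For flavor, a bare-bones cross-type Ripley's K-style statistic of the kind used as a spatial descriptor for multi-class cell layouts (edge correction and the paper's exact normalization are omitted; the toy coordinates are made up):

```python
import math

def cross_k(points_a, points_b, r, area):
    """Cross-type Ripley's K sketch: average number of type-B cells within
    radius r of a type-A cell, divided by the intensity of type B."""
    lam_b = len(points_b) / area
    total = sum(
        sum(1 for q in points_b if math.dist(p, q) <= r) for p in points_a
    )
    return total / (len(points_a) * lam_b)

# Toy layout: lymphocytes clustered near one tumor cell, far from another.
tumor = [(0.0, 0.0), (8.0, 8.0)]
lymph = [(0.5, 0.0), (0.0, 0.5), (9.0, 9.0)]
print(round(cross_k(tumor, lymph, 1.0, 100.0), 2))  # → 33.33
```

    Values well above pi * r^2 indicate attraction between the two cell types at scale r; values below indicate avoidance, which is the kind of structural pattern such descriptors feed into a generative model.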

  • GaNDLF: the generally nuanced deep learning framework for scalable end-to-end clinical workflows. Communications Engineering Pati, S., Thakur, S. P., Hamamci, I., Baid, U., Baheti, B., Bhalerao, M., Gueley, O., Mouchtaris, S., Lang, D., Thermos, S., Gotkowski, K., Gonzalez, C., Grenko, C., Getka, A., Edwards, B., Sheller, M., Wu, J., Karkada, D., Panchumarthy, R., Ahluwalia, V., Zou, C., Bashyam, V., Li, Y., Haghighi, B., Chitalia, R., Abousamra, S., Kurc, T. M., Gastounioti, A., Er, S., Bergman, M., Saltz, J. H., Fan, Y., Shah, P., Mukhopadhyay, A., Tsaftaris, S. A., Menze, B., Davatzikos, C., Kontos, D., Karargyris, A., Umeton, R., Mattson, P., Bakas, S. 2023; 2 (1)
  • Calibrating Uncertainty for Semi-Supervised Crowd Counting. Li, C., Hu, X., Abousamra, S., Chen, C. IEEE Computer Society. 2023: 16685-16695
  • Multi-Class Cell Detection Using Spatial Context Representation. IEEE/CVF International Conference on Computer Vision, ICCV Abousamra, S., Belinsky, D., Van Arnam, J., Allard, F., Yee, E., Gupta, R., Kurc, T., Samaras, D., Saltz, J., Chen, C. 2021; 2021: 3985-3994

    Abstract

    In digital pathology, both detection and classification of cells are important for automatic diagnostic and prognostic tasks. Classifying cells into subtypes, such as tumor cells, lymphocytes or stromal cells, is particularly challenging. Existing methods focus on morphological appearance of individual cells, whereas in practice pathologists often infer cell classes through their spatial context. In this paper, we propose a novel method for both detection and classification that explicitly incorporates spatial contextual information. We use the spatial statistical function to describe local density in both a multi-class and a multi-scale manner. Through representation learning and deep clustering techniques, we learn advanced cell representations with both appearance and spatial context. On various benchmarks, our method achieves better performance than state-of-the-art methods, especially on the classification task. We also create a new dataset for multi-class cell detection and classification in breast cancer and we make both our code and data publicly available.

    View details for DOI 10.1109/iccv48922.2021.00397

    View details for PubMedID 38783989

    View details for PubMedCentralID PMC11114143

  • Deep Learning-Based Mapping of Tumor Infiltrating Lymphocytes in Whole Slide Images of 23 Types of Cancer. Frontiers in oncology Abousamra, S., Gupta, R., Hou, L., Batiste, R., Zhao, T., Shankar, A., Rao, A., Chen, C., Samaras, D., Kurc, T., Saltz, J. 2021; 11: 806603

    Abstract

    The role of tumor infiltrating lymphocytes (TILs) as a biomarker to predict disease progression and clinical outcomes has generated tremendous interest in translational cancer research. We present an updated and enhanced deep learning workflow to classify 50x50 um tiled image patches (100x100 pixels at 20x magnification) as TIL positive or negative based on the presence of 2 or more TILs in gigapixel whole slide images (WSIs) from the Cancer Genome Atlas (TCGA). This workflow generates TIL maps to study the abundance and spatial distribution of TILs in 23 different types of cancer. We trained three state-of-the-art, popular convolutional neural network (CNN) architectures (namely VGG16, Inception-V4, and ResNet-34) with a large volume of training data, which combined manual annotations from pathologists (strong annotations) and computer-generated labels from our previously reported first-generation TIL model for 13 cancer types (model-generated annotations). Specifically, this training dataset contains TIL positive and negative patches from cancers in additional organ sites and curated data to help improve algorithmic performance by decreasing known false positives and false negatives. Our new TIL workflow also incorporates automated thresholding to convert model predictions into binary classifications to generate TIL maps. The new TIL models all achieve better performance with improvements of up to 13% in accuracy and 15% in F-score. We report these new TIL models and a curated dataset of TIL maps, referred to as TIL-Maps-23, for 7983 WSIs spanning 23 types of cancer with complex and diverse visual appearances, which will be publicly available along with the code to evaluate performance. Code Available at: https://github.com/ShahiraAbousamra/til_classification.

    View details for DOI 10.3389/fonc.2021.806603

    View details for PubMedID 35251953

    View details for PubMedCentralID PMC8889499
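    A toy sketch of the automated thresholding step described above: pick the probability cutoff that maximizes F1 on a small labeled validation set, then use it to binarize patch predictions into a TIL map (the scores, labels, and candidate cutoffs below are invented; the workflow's actual thresholding procedure may differ):

```python
def best_threshold(scores, labels, candidates):
    """Automated thresholding sketch: pick the cutoff on predicted TIL
    probabilities that maximizes F1 on a labeled validation set."""
    def f1(thr):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        fn = sum(1 for s, y in zip(scores, labels) if s < thr and y == 1)
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
    return max(candidates, key=f1)

# Toy validation patches: predicted TIL probabilities and true labels.
scores = [0.95, 0.8, 0.6, 0.4, 0.2]
labels = [1, 1, 1, 0, 0]
print(best_threshold(scores, labels, [0.3, 0.5, 0.7]))  # → 0.5
```

    Applying the chosen cutoff to every patch in a whole slide image yields the binary TIL-positive/negative map used for spatial analysis.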

  • Localization in the Crowd with Topological Constraints. Abousamra, S., Hoai, M., Samaras, D., Chen, C. Association for the Advancement of Artificial Intelligence. 2021: 872-881
  • Deep learning-based image analysis methods for brightfield-acquired multiplex immunohistochemistry images. Diagnostic pathology Fassler, D. J., Abousamra, S., Gupta, R., Chen, C., Zhao, M., Paredes, D., Batool, S. A., Knudsen, B. S., Escobar-Hoyos, L., Shroyer, K. R., Samaras, D., Kurc, T., Saltz, J. 2020; 15 (1): 100

    Abstract

    Multiplex immunohistochemistry (mIHC) permits the labeling of six or more distinct cell types within a single histologic tissue section. The classification of each cell type requires detection of the unique colored chromogens localized to cells expressing biomarkers of interest. The most comprehensive and reproducible method to evaluate such slides is to employ digital pathology and image analysis pipelines to whole-slide images (WSIs). Our suite of deep learning tools quantitatively evaluates the expression of six biomarkers in mIHC WSIs. These methods address the current lack of readily available methods to evaluate more than four biomarkers and circumvent the need for specialized instrumentation to spectrally separate different colors. The use case application for our methods is a study that investigates tumor immune interactions in pancreatic ductal adenocarcinoma (PDAC) with a customized mIHC panel. Six different colored chromogens were utilized to label T-cells (CD3, CD4, CD8), B-cells (CD20), macrophages (CD16), and tumor cells (K17) in formalin-fixed paraffin-embedded (FFPE) PDAC tissue sections. We leveraged pathologist annotations to develop complementary deep learning-based methods: (1) ColorAE is a deep autoencoder which segments stained objects based on color; (2) U-Net is a convolutional neural network (CNN) trained to segment cells based on color, texture and shape; and ensemble methods that employ both ColorAE and U-Net, collectively referred to as (3) ColorAE:U-Net. We assessed the performance of our methods using: structural similarity and DICE score to evaluate segmentation results of ColorAE against traditional color deconvolution; F1 score, sensitivity, positive predictive value, and DICE score to evaluate the predictions from ColorAE, U-Net, and ColorAE:U-Net ensemble methods against pathologist-generated ground truth. We then used prediction results for spatial analysis (nearest neighbor). We observed that (1) the performance of ColorAE is comparable to traditional color deconvolution for single-stain IHC images (note: traditional color deconvolution cannot be used for mIHC); (2) ColorAE and U-Net are complementary methods that detect 6 different classes of cells with comparable performance; (3) combinations of ColorAE and U-Net into ensemble methods outperform using either ColorAE or U-Net alone; and (4) ColorAE:U-Net ensemble methods can be employed for detailed analysis of the tumor microenvironment (TME). We developed a suite of scalable deep learning methods to analyze 6 distinctly labeled cell populations in mIHC WSIs. We evaluated our methods and found that they reliably detected and classified cells in the PDAC tumor microenvironment. We also present a use case, wherein we apply the ColorAE:U-Net ensemble method across 3 mIHC WSIs and use the predictions to quantify all stained cell populations and perform nearest neighbor spatial analysis. Thus, we provide proof of concept that these methods can be employed to quantitatively describe the spatial distribution of immune cells within the tumor microenvironment. These complementary deep learning methods are readily deployable for use in clinical research studies.

    View details for DOI 10.1186/s13000-020-01003-0

    View details for PubMedID 32723384

    View details for PubMedCentralID PMC7385962

  • Utilizing Automated Breast Cancer Detection to Identify Spatial Distributions of Tumor-Infiltrating Lymphocytes in Invasive Breast Cancer. The American journal of pathology Le, H., Gupta, R., Hou, L., Abousamra, S., Fassler, D., Torre-Healy, L., Moffitt, R. A., Kurc, T., Samaras, D., Batiste, R., Zhao, T., Rao, A., Van Dyke, A. L., Sharma, A., Bremer, E., Almeida, J. S., Saltz, J. 2020; 190 (7): 1491-1504

    Abstract

    Quantitative assessment of spatial relations between tumor and tumor-infiltrating lymphocytes (TIL) is increasingly important in both basic science and clinical aspects of breast cancer research. We have developed and evaluated convolutional neural network analysis pipelines to generate combined maps of cancer regions and TILs in routine diagnostic breast cancer whole slide tissue images. The combined maps provide insight about the structural patterns and spatial distribution of lymphocytic infiltrates and facilitate improved quantification of TILs. Both tumor and TIL analyses were evaluated by using three convolutional neural networks (34-layer ResNet, 16-layer VGG, and Inception v4); the results compared favorably with those obtained by using the best published methods. We have produced open-source tools and a public data set consisting of tumor/TIL maps for 1090 invasive breast cancer images from The Cancer Genome Atlas. The maps can be downloaded for further downstream analyses.

    View details for DOI 10.1016/j.ajpath.2020.03.012

    View details for PubMedID 32277893

    View details for PubMedCentralID PMC7369575

  • Weakly-Supervised Deep Stain Decomposition for Multiplex IHC Images. Abousamra, S., Fassler, D., Hou, L., Zhang, Y., Gupta, R., Kurc, T., Escobar-Hoyos, L. F., Samaras, D., Knudson, B., Shroyer, K., Saltz, J., Chen, C. IEEE. 2020: 481-485