Bio


Indrani Bhattacharya, Ph.D. holds an Academic Staff (Research) appointment in the Department of Radiology at Stanford University School of Medicine. Her current research focuses on developing accurate and generalizable machine learning methods for cancer detection and aggressiveness characterization using radiology images.

Dr. Bhattacharya received her Ph.D. and M.S. in Electrical Engineering from Rensselaer Polytechnic Institute (RPI), NY. Her research interests are in machine learning, computer vision, and multimodal data fusion applied to interdisciplinary real-world problems in precision medicine and human-centered computing. Both her doctoral and postdoctoral research have been highly interdisciplinary, sitting at the intersection of machine learning with social science and medicine, respectively. Her doctoral research focused on developing multi-sensor fusion and multimodal machine learning algorithms for estimating and analyzing human behavior in group interactions. Her postdoctoral research focused on developing multimodal machine learning algorithms that leverage complementary radiology, pathology, and clinical data for prostate cancer detection.

Dr. Bhattacharya completed her Bachelor's degree in Electrical Engineering at Jadavpur University, India, and worked as a Project Engineer at Indian Oil Corporation Limited before beginning her graduate education.

Current Role at Stanford


Academic Staff (Research), developing artificial intelligence methods that integrate complementary multimodal data for disease detection and aggressiveness characterization.

Honors & Awards


  • Rising Stars in EECS, UC Berkeley (Nov. 2020)
  • MICCAI NIH Award, MICCAI (Oct. 2020)
  • Founders' Award of Excellence, Rensselaer Polytechnic Institute (Oct. 2018)

Education & Certifications


  • Bachelor of Engineering, Jadavpur University, India, Electrical Engineering (2011)
  • Master of Science, Rensselaer Polytechnic Institute, Electrical Engineering (2016)
  • Doctor of Philosophy, Rensselaer Polytechnic Institute, Electrical Engineering (2019)

All Publications


  • Detection of clinically significant prostate cancer on MRI: a comparison of an artificial intelligence model versus radiologists. Soerensen, S., Fan, R. E., Bhattacharya, I., Lim, D. S., Ahmadi, S., Li, X., Vesal, S., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2023: E103
  • Improving prostate cancer detection on MRI with deep learning, clinical variables, and radiomics. Saunders, S., Li, X., Vesal, S., Bhattacharya, I., Soerensen, S. C., Fan, R. E., Rusu, M., Sonn, G. A. LIPPINCOTT WILLIAMS & WILKINS. 2023: E665
  • A review of artificial intelligence in prostate cancer detection on imaging. Therapeutic advances in urology Bhattacharya, I., Khandwala, Y. S., Vesal, S., Shao, W., Yang, Q., Soerensen, S. J., Fan, R. E., Ghanouni, P., Kunder, C. A., Brooks, J. D., Hu, Y., Rusu, M., Sonn, G. A. 2022; 14: 17562872221128791

    Abstract

    A multitude of studies have explored the role of artificial intelligence (AI) in providing diagnostic support to radiologists, pathologists, and urologists in prostate cancer detection, risk-stratification, and management. This review provides a comprehensive overview of relevant literature regarding the use of AI models in (1) detecting prostate cancer on radiology images (magnetic resonance and ultrasound imaging), (2) detecting prostate cancer on histopathology images of prostate biopsy tissue, and (3) assisting in supporting tasks for prostate cancer detection (prostate gland segmentation, MRI-histopathology registration, MRI-ultrasound registration). We discuss both the potential of these AI models to assist in the clinical workflow of prostate cancer diagnosis, as well as the current limitations including variability in training data sets, algorithms, and evaluation criteria. We also discuss ongoing challenges and what is needed to bridge the gap between academic research on AI for prostate cancer and commercial solutions that improve routine clinical care.

    View details for DOI 10.1177/17562872221128791

    View details for PubMedID 36249889

    View details for PubMedCentralID PMC9554123

  • Domain generalization for prostate segmentation in transrectal ultrasound images: A multi-center study. Medical image analysis Vesal, S., Gayo, I., Bhattacharya, I., Natarajan, S., Marks, L. S., Barratt, D. C., Fan, R. E., Hu, Y., Sonn, G. A., Rusu, M. 2022; 82: 102620

    Abstract

    Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance images (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet, the reduced signal-to-noise ratio and artifacts (e.g., speckle and shadowing) in ultrasound images limit the performance of automated prostate segmentation techniques and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses the limitations of transfer learning and finetuning methods (i.e., drop in performance on the original training data when the model weights are updated) by combining a supervised domain adaptation technique and a knowledge distillation loss. The knowledge distillation loss allows the preservation of previously learned knowledge and reduces the performance drop after model finetuning on new datasets. Furthermore, our approach relies on an attention module that considers model feature positioning information to improve the segmentation accuracy. We trained our model on 764 subjects from one institution and finetuned our model using only ten subjects from subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and Hausdorff Distance (HD95) of 2.28mm in an independent set of subjects from the first institution. Moreover, our model generalized well in the studies from the other two institutions (Dice: 91.0±0.03; HD95: 3.7mm and Dice: 82.0±0.03; HD95: 7.1mm). We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate the accurate fusion of ultrasound and MRI images to drive biopsy and image-guided treatments.
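
    A minimal sketch, assuming a PyTorch setup, of the fine-tuning objective described above: a Dice segmentation loss on the new-domain labels plus a knowledge distillation term that keeps the fine-tuned model close to the frozen source model. This illustrates the idea only, not the authors' code; the weighting lam and the MSE-based distillation term are assumptions.

        import torch
        import torch.nn.functional as F

        def dice_loss(probs, target, eps=1e-6):
            # probs, target: (N, 1, H, W); target is a binary prostate mask
            inter = (probs * target).sum(dim=(2, 3))
            denom = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
            return 1.0 - ((2 * inter + eps) / (denom + eps)).mean()

        def finetune_loss(student_logits, teacher_logits, target, lam=0.5):
            seg = dice_loss(torch.sigmoid(student_logits), target)
            # distillation: stay close to the frozen source-domain model so that
            # performance on the original training data does not degrade
            kd = F.mse_loss(torch.sigmoid(student_logits),
                            torch.sigmoid(teacher_logits).detach())
            return seg + lam * kd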

    View details for DOI 10.1016/j.media.2022.102620

    View details for PubMedID 36148705

  • Computational Detection of Extraprostatic Extension of Prostate Cancer on Multiparametric MRI Using Deep Learning. Cancers Moroianu, S. L., Bhattacharya, I., Seetharaman, A., Shao, W., Kunder, C. A., Sharma, A., Ghanouni, P., Fan, R. E., Sonn, G. A., Rusu, M. 2022; 14 (12)

    Abstract

    The localization of extraprostatic extension (EPE), i.e., local spread of prostate cancer beyond the prostate capsular boundary, is important for risk stratification and surgical planning. However, the sensitivity of EPE detection by radiologists on MRI is low (57% on average). In this paper, we propose a method for computational detection of EPE on multiparametric MRI using deep learning. Ground truth labels of cancers and EPE were obtained in 123 patients (38 with EPE) by registering pre-surgical MRI with whole-mount digital histopathology images from radical prostatectomy. Our approach has two stages. First, we trained deep learning models using the MRI as input to generate cancer probability maps both inside and outside the prostate. Second, we built an image post-processing pipeline that generates predictions for EPE location based on the cancer probability maps and clinical knowledge. We used five-fold cross-validation to train our approach using data from 74 patients and tested it using data from an independent set of 49 patients. We compared two deep learning models for cancer detection: (i) UNet and (ii) the Correlated Signature Network for Indolent and Aggressive prostate cancer detection (CorrSigNIA). The best end-to-end model for EPE detection, which we call EPENet, was based on the CorrSigNIA cancer detection model. EPENet was successful at detecting cancers with extraprostatic extension, achieving a mean area under the receiver operator characteristic curve of 0.72 at the patient-level. On the test set, EPENet had 80.0% sensitivity and 28.2% specificity at the patient-level compared to 50.0% sensitivity and 76.9% specificity for the radiologists. To account for spatial location of predictions during evaluation, we also computed results at the sextant-level, where the prostate was divided into sextants according to standard systematic 12-core biopsy procedure. At the sextant-level, EPENet achieved mean sensitivity 61.1% and mean specificity 58.3%. Our approach has the potential to provide the location of extraprostatic extension using MRI alone, thus serving as an independent diagnostic aid to radiologists and facilitating treatment planning.
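
    One plausible reading of the second stage, sketched below with hypothetical inputs (the names cancer_prob and prostate_mask are assumptions): candidate EPE voxels are those where the predicted cancer probability is high in a thin band just outside the prostate capsule. This is an illustration of the idea, not the published EPENet pipeline.

        import numpy as np
        from scipy import ndimage

        def epe_candidates(cancer_prob, prostate_mask, prob_thr=0.5, margin_vox=3):
            # band of voxels immediately outside the capsular boundary
            dilated = ndimage.binary_dilation(prostate_mask.astype(bool),
                                              iterations=margin_vox)
            outside_band = dilated & ~prostate_mask.astype(bool)
            # high predicted cancer probability outside the gland suggests EPE
            return (cancer_prob > prob_thr) & outside_band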

    View details for DOI 10.3390/cancers14122821

    View details for PubMedID 35740487

  • Bridging the gap between prostate radiology and pathology through machine learning. Medical physics Bhattacharya, I., Lim, D. S., Aung, H. L., Liu, X., Seetharaman, A., Kunder, C. A., Shao, W., Soerensen, S. J., Fan, R. E., Ghanouni, P., To'o, K. J., Brooks, J. D., Sonn, G. A., Rusu, M. 2022

    Abstract

    Prostate cancer remains the second deadliest cancer for American men despite clinical advancements. Currently, Magnetic Resonance Imaging (MRI) is considered the most sensitive non-invasive imaging modality that enables visualization, detection and localization of prostate cancer, and is increasingly used to guide targeted biopsies for prostate cancer diagnosis. However, its utility remains limited due to high rates of false positives and false negatives as well as low inter-reader agreement. Machine learning methods to detect and localize cancer on prostate MRI can help standardize radiologist interpretations. However, existing machine learning methods vary not only in model architecture, but also in the ground truth labeling strategies used for model training. We compare different labeling strategies and the effects they have on the performance of different machine learning models for prostate cancer detection on MRI. Four different deep learning models (SPCNet, U-Net, branched U-Net, and DeepLabv3+) were trained to detect prostate cancer on MRI using 75 patients with radical prostatectomy, and evaluated using 40 patients with radical prostatectomy and 275 patients with targeted biopsy. Each deep learning model was trained with four different label types: pathology-confirmed radiologist labels, pathologist labels on whole-mount histopathology images, and lesion-level and pixel-level digital pathologist labels (a previously validated deep learning algorithm on histopathology images to predict pixel-level Gleason patterns) on whole-mount histopathology images. The pathologist and digital pathologist labels (collectively referred to as pathology labels) were mapped onto pre-operative MRI using an automated MRI-histopathology registration platform. Radiologist labels missed cancers (ROC-AUC: 0.75 - 0.84), had lower lesion volumes (~68% of pathology lesions), and lower Dice overlaps (0.24 - 0.28) when compared with pathology labels. Consequently, machine learning models trained with radiologist labels also showed inferior performance compared to models trained with pathology labels. Digital pathologist labels showed high concordance with pathologist labels of cancer (lesion ROC-AUC: 0.97 - 1, lesion Dice: 0.75 - 0.93). Machine learning models trained with digital pathologist labels had the highest lesion detection rates in the radical prostatectomy cohort (aggressive lesion ROC-AUC: 0.91 - 0.94), and had generalizable and comparable performance to pathologist label-trained models in the targeted biopsy cohort (aggressive lesion ROC-AUC: 0.87 - 0.88), irrespective of the deep learning architecture. Moreover, machine learning models trained with pixel-level digital pathologist labels were able to selectively identify aggressive and indolent cancer components in mixed lesions on MRI, which is not possible with any human-annotated label type. Machine learning models for prostate MRI interpretation that are trained with digital pathologist labels showed higher or comparable performance with pathologist label-trained models in both radical prostatectomy and targeted biopsy cohorts. Digital pathologist labels can reduce challenges associated with human annotations, including labor, time, and inter- and intra-reader variability, and can help bridge the gap between prostate radiology and pathology by enabling the training of reliable machine learning models to detect and localize prostate cancer on MRI.
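
    The label comparison above rests on a simple overlap measure; a minimal sketch, with hypothetical binary masks on a shared MRI grid (the argument names are assumptions):

        import numpy as np

        def dice_overlap(a, b):
            # a, b: binary lesion masks (e.g., radiologist vs. pathology labels)
            a, b = a.astype(bool), b.astype(bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0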

    View details for DOI 10.1002/mp.15777

    View details for PubMedID 35633505

  • Integrating zonal priors and pathomic MRI biomarkers for improved aggressive prostate cancer detection on MRI Bhattacharya, I., Shao, W., Soerensen, S. C., Fan, R. E., Wang, J. B., Kunder, C., Ghanouni, P., Sonn, G. A., Rusu, M., Drukker, K., Iftekharuddin, K. M. SPIE-INT SOC OPTICAL ENGINEERING. 2022

    View details for DOI 10.1117/12.2612433

    View details for Web of Science ID 000838048600024

  • Detailed analysis of MRI concordance with prostatectomy histopathology using deep learning-based digital pathology. Hockman, L., Fan, R., Schmidt, B., Bhattacharya, I., Rusu, M., Sonn, G. LIPPINCOTT WILLIAMS & WILKINS. 2021: E813-E814
  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on Magnetic Resonance Imaging for Targeted Biopsy. Journal of Urology Soerensen, S., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B., To'o, K. J., Rusu, M., Sonn, G. A. 2021; 206 (3): 605-612
  • Deep Learning Improves Speed and Accuracy of Prostate Gland Segmentations on MRI for Targeted Biopsy. The Journal of urology Soerensen, S. J., Fan, R. E., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Kim, Y., Sood, R., Borre, M., Chung, B. I., To'o, K. J., Rusu, M., Sonn, G. A. 2021: 10.1097/JU.0000000000001783

    Abstract

    PURPOSE: Targeted biopsy improves prostate cancer diagnosis. Accurate prostate segmentation on MRI is critical for accurate biopsy. Manual gland segmentation is tedious and time-consuming. We sought to develop a deep learning model to rapidly and accurately segment the prostate on MRI and to implement it as part of routine MR-US fusion biopsy in the clinic. MATERIALS AND METHODS: 905 subjects underwent multiparametric MRI at 29 institutions, followed by MR-US fusion biopsy at one institution. A urologic oncology expert segmented the prostate on axial T2-weighted MRI scans. We trained a deep learning model, ProGNet, on 805 cases. We retrospectively tested ProGNet on 100 independent internal and 56 external cases. We prospectively implemented ProGNet as part of the fusion biopsy procedure for 11 patients. We compared ProGNet performance to two deep learning networks (U-Net and HED) and radiology technicians. The Dice similarity coefficient (DSC) was used to measure overlap with expert segmentations. DSCs were compared using paired t-tests. RESULTS: ProGNet (DSC=0.92) outperformed U-Net (DSC=0.85, p < 0.0001), HED (DSC=0.80, p < 0.0001), and radiology technicians (DSC=0.89, p < 0.0001) in the retrospective internal test set. In the prospective cohort, ProGNet (DSC=0.93) outperformed radiology technicians (DSC=0.90, p < 0.0001). ProGNet took just 35 seconds per case (vs. 10 minutes for radiology technicians) to yield a clinically utilizable segmentation file. CONCLUSIONS: This is the first study to employ a deep learning model for prostate gland segmentation for targeted biopsy in routine urologic clinical practice, while reporting results and releasing the code online. Prospective and retrospective evaluations revealed increased speed and accuracy.
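
    A minimal sketch of the statistical comparison reported above, assuming per-case Dice scores are available as NumPy arrays (the argument names are placeholders, not study data):

        import numpy as np
        from scipy import stats

        def compare_segmentations(dsc_method_a, dsc_method_b):
            # paired t-test: both methods are scored on the same cases
            t_stat, p_value = stats.ttest_rel(dsc_method_a, dsc_method_b)
            return t_stat, p_value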

    View details for DOI 10.1097/JU.0000000000001783

    View details for PubMedID 33878887

  • Automated Detection of Aggressive and Indolent Prostate Cancer on Magnetic Resonance Imaging. Medical physics Seetharaman, A., Bhattacharya, I., Chen, L. C., Kunder, C. A., Shao, W., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., To'o, K. J., Sonn, G. A., Rusu, M. 2021

    Abstract

    PURPOSE: While multi-parametric Magnetic Resonance Imaging (MRI) shows great promise in assisting with prostate cancer diagnosis and localization, subtle differences in appearance between cancer and normal tissue lead to many false positive and false negative interpretations by radiologists. We sought to automatically detect aggressive cancer (Gleason pattern ≥ 4) and indolent cancer (Gleason pattern 3) on a per-pixel basis on MRI to facilitate the targeting of aggressive cancer during biopsy. METHODS: We created the Stanford Prostate Cancer Network (SPCNet), a convolutional neural network model, trained to distinguish between aggressive cancer, indolent cancer, and normal tissue on MRI. Ground truth cancer labels were obtained by registering MRI with whole-mount digital histopathology images from patients that underwent radical prostatectomy. Before registration, these histopathology images were automatically annotated to show Gleason patterns on a per-pixel basis. The model was trained on data from 78 patients that underwent radical prostatectomy and 24 patients without prostate cancer. The model was evaluated on a pixel and lesion level in 322 patients, including: 6 patients with normal MRI and no cancer, 23 patients that underwent radical prostatectomy, and 293 patients that underwent biopsy. Moreover, we assessed the ability of our model to detect clinically significant cancer (lesions with an aggressive component) and compared it to the performance of radiologists. RESULTS: Our model detected clinically significant lesions with an Area Under the Receiver Operator Characteristics Curve of 0.75 for radical prostatectomy patients and 0.80 for biopsy patients. Moreover, the model detected up to 18% of lesions missed by radiologists, and overall had a sensitivity and specificity that approached that of radiologists in detecting clinically significant cancer. CONCLUSIONS: Our SPCNet model accurately detected aggressive prostate cancer. Its performance approached that of radiologists, and it helped identify lesions otherwise missed by radiologists. Our model has the potential to assist physicians in specifically targeting the aggressive component of prostate cancers during biopsy or focal treatment.
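
    For readers unfamiliar with per-pixel evaluation, a hedged sketch (an assumption of how such a metric can be computed, not the SPCNet evaluation code) of a pixel-level ROC-AUC for the aggressive-cancer class:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def pixel_level_auc(prob_map, label_map):
            # prob_map: predicted probability of aggressive cancer per pixel
            # label_map: 1 where registered histopathology shows Gleason pattern >= 4
            return roc_auc_score(label_map.ravel(), prob_map.ravel())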

    View details for DOI 10.1002/mp.14855

    View details for PubMedID 33760269

  • Classifying the emotional speech content of participants in group meetings using convolutional long short-term memory network. The Journal of the Acoustical Society of America Morgan, M. M., Bhattacharya, I., Radke, R. J., Braasch, J. 2021; 149 (2): 885-894

    Abstract

    Emotion is a central component of verbal communication between humans. Due to advances in machine learning and the development of affective computing, automatic emotion recognition is increasingly possible and sought after. To examine the connection between emotional speech and significant group dynamics perceptions, such as leadership and contribution, a new dataset (14 group meetings, 45 participants) is collected for analyzing collaborative group work based on the lunar survival task. To establish a training database, each participant's audio is manually annotated both categorically and along a three-dimensional scale with axes of activation, dominance, and valence and then converted to spectrograms. The performance of several neural network architectures for predicting speech emotion are compared for two tasks: categorical emotion classification and 3D emotion regression using multitask learning. Pretraining each neural network architecture on the well-known IEMOCAP (Interactive Emotional Dyadic Motion Capture) corpus improves the performance on this new group dynamics dataset. For both tasks, the two-dimensional convolutional long short-term memory network achieves the highest overall performance. By regressing the annotated emotions against post-task questionnaire variables for each participant, it is shown that the emotional speech content of a meeting can predict 71% of perceived group leaders and 86% of major contributors.
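
    A small sketch of the preprocessing step described above (converting a participant's audio into a spectrogram image for the neural networks); the parameter values are illustrative, not those used in the paper:

        import librosa

        def audio_to_logmel(wav_path, sr=16000, n_mels=64):
            y, sr = librosa.load(wav_path, sr=sr)
            mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
            return librosa.power_to_db(mel)  # log-scaled spectrogram for the network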

    View details for DOI 10.1121/10.0003433

    View details for Web of Science ID 000630061500002

    View details for PubMedID 33639830

  • Clinically significant prostate cancer detection on MRI with self-supervised learning using image context restoration Bolous, A., Seetharaman, A., Bhattacharya, I., Fan, R. E., Soerensen, S., Chen, L., Ghanouni, P., Sonn, G. A., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2581557

    View details for Web of Science ID 000672800100052

  • ProGNet: Prostate Gland Segmentation on MRI with Deep Learning Soerensen, S., Fan, R., Seetharaman, A., Chen, L., Shao, W., Bhattacharya, I., Borre, M., Chung, B., To'o, K., Sonn, G., Rusu, M., Isgum, I., Landman, B. A. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2580448

    View details for Web of Science ID 000672800200091

  • Selective identification and localization of indolent and aggressive prostate cancers via CorrSigNIA: an MRI-pathology correlation and deep learning framework. Medical image analysis Bhattacharya, I., Seetharaman, A., Kunder, C., Shao, W., Chen, L. C., Soerensen, S. J., Wang, J. B., Teslovich, N. C., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M. 2021; 75: 102288

    Abstract

    Automated methods for detecting prostate cancer and distinguishing indolent from aggressive disease on Magnetic Resonance Imaging (MRI) could assist in early diagnosis and treatment planning. Existing automated methods of prostate cancer detection mostly rely on ground truth labels with limited accuracy, ignore disease pathology characteristics observed on resected tissue, and cannot selectively identify aggressive (Gleason Pattern≥4) and indolent (Gleason Pattern=3) cancers when they co-exist in mixed lesions. In this paper, we present a radiology-pathology fusion approach, CorrSigNIA, for the selective identification and localization of indolent and aggressive prostate cancer on MRI. CorrSigNIA uses registered MRI and whole-mount histopathology images from radical prostatectomy patients to derive accurate ground truth labels and learn correlated features between radiology and pathology images. These correlated features are then used in a convolutional neural network architecture to detect and localize normal tissue, indolent cancer, and aggressive cancer on prostate MRI. CorrSigNIA was trained and validated on a dataset of 98 men, including 74 men that underwent radical prostatectomy and 24 men with normal prostate MRI. CorrSigNIA was tested on three independent test sets including 55 men that underwent radical prostatectomy, 275 men that underwent targeted biopsies, and 15 men with normal prostate MRI. CorrSigNIA achieved an accuracy of 80% in distinguishing between men with and without cancer, a lesion-level ROC-AUC of 0.81±0.31 in detecting cancers in both radical prostatectomy and biopsy cohort patients, and lesion-level ROC-AUCs of 0.82±0.31 and 0.86±0.26 in detecting clinically significant cancers in radical prostatectomy and biopsy cohort patients, respectively. CorrSigNIA consistently outperformed other methods across different evaluation metrics and cohorts. In clinical settings, CorrSigNIA may be used in prostate cancer detection as well as in selective identification of indolent and aggressive components of prostate cancer, thereby improving prostate cancer care by helping guide targeted biopsies, reducing unnecessary biopsies, and selecting and planning treatment.
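
    A loose illustration of the correlated-feature idea, with plain canonical correlation analysis standing in for CorrSigNIA's learned features; all names and dimensions are hypothetical:

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def correlated_features(mri_feats, hist_feats, n_components=8):
            # mri_feats: (n_samples, d_mri); hist_feats: (n_samples, d_hist),
            # sampled at corresponding locations after MRI-histopathology registration
            cca = CCA(n_components=n_components)
            mri_proj, hist_proj = cca.fit_transform(mri_feats, hist_feats)
            return mri_proj, hist_proj  # usable as extra input channels on MRI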

    View details for DOI 10.1016/j.media.2021.102288

    View details for PubMedID 34784540

  • Weakly Supervised Registration of Prostate MRI and Histopathology Images Shao, W., Bhattacharya, I., Soerensen, S. C., Kunder, C. A., Wang, J. B., Fan, R. E., Ghanouni, P., Brooks, J. D., Sonn, G. A., Rusu, M., DeBruijne, M., Cattin, P. C., Cotin, S., Padoy, N., Speidel, S., Zheng, Y., Essert, C. SPRINGER INTERNATIONAL PUBLISHING AG. 2021: 98-107
  • Intensity Normalization of Prostate MRIs using Conditional Generative Adversarial Networks for Cancer Detection DeSilvio, T., Moroianu, S., Bhattacharya, I., Seetharaman, A., Sonn, G., Rusu, M., Mazurowski, M. A., Drukker, K. SPIE-INT SOC OPTICAL ENGINEERING. 2021

    View details for DOI 10.1117/12.2582297

    View details for Web of Science ID 000672800100016

  • CorrSigNet: Learning CORRelated Prostate Cancer SIGnatures from Radiology and Pathology Images for Improved Computer Aided Diagnosis Medical Image Computing and Computer Assisted Intervention Bhattacharya, I., et al. 2020
  • Multiparty Visual Co-Occurrences for Estimating Personality Traits in Group Meetings Zhang, L., Bhattacharya, I., Morgan, M., Foley, M., Riedl, C., Welles, B., Radke, R. J. IEEE COMPUTER SOC. 2020: 2074-2083
  • Improved Visual Focus of Attention Estimation and Prosodic Features for Analyzing Group Interactions Zhang, L., Morgan, M., Bhattacharya, I., Foley, M., Braasch, J., Riedl, C., Welles, B., Radke, R. J. ASSOC COMPUTING MACHINERY. 2019: 385-394
  • Multimodal Dialog for Browsing Large Visual Catalogs using Exploration-Exploitation Paradigm in a Joint Embedding Space Bhattacharya, I., Chowdhury, A., Raykar, V. C. ASSOC COMPUTING MACHINERY. 2019: 187-191
  • The Unobtrusive Group Interaction (UGI) Corpus Bhattacharya, I., Foley, M., Ku, C., Zhang, N., Zhang, T., Mine, C., Li, M., Ji, H., Riedl, C., Welles, B., Radke, R. J. ASSOC COMPUTING MACHINERY. 2019: 249-254
  • Privacy-Preserving Understanding of Human Body Orientation for Smart Meetings Bhattacharya, I., Eshed, N., Radke, R. J. IEEE. 2017: 284-292
  • Arrays of single pixel time-of-flight sensors for privacy preserving tracking and coarse pose estimation Bhattacharya, I., Radke, R. J. IEEE. 2016
  • A palmprint based biometric authentication system using dual tree complex wavelet transform Measurement Chakraborty, S., Bhattacharya, I., Chatterjee, A. 2013; 46 (10): 4179-4188