Bio


Priyanka Raina is an Assistant Professor of Electrical Engineering at Stanford University. She received her B.Tech. in Electrical Engineering from IIT Delhi in 2011, and her S.M. and Ph.D. in Electrical Engineering and Computer Science from MIT in 2013 and 2018, respectively. Priyanka's research focuses on creating high-performance, energy-efficient architectures for domain-specific hardware accelerators in existing and emerging technologies, and on agile hardware-software co-design. Her research has won best paper awards at the VLSI, ESSCIRC and MICRO conferences and in the JSSC journal. Priyanka teaches several VLSI design classes at Stanford. She has also received the Intel Rising Star Faculty Award and the Hellman Faculty Scholar Award, and is a Terman Faculty Fellow.

Honors & Awards


  • VLSI Best Demo Paper Award, Stanford University (2022)
  • Intel Rising Star Faculty Award, Stanford University (2021)
  • VLSI Best Student Paper Award, Stanford University (2021)
  • JSSC Best Paper Award, Stanford University (2020)
  • Hellman Fellow, Stanford University (2019)
  • MICRO Best Paper Award, Stanford University (2019)
  • Terman Faculty Fellow, Stanford University (2018)
  • Terman Faculty Fellow, MIT (2017)
  • ESSCIRC Best Young Scientist Paper Award, MIT (2016)
  • ISSCC Student Research Preview Award, MIT (2016)
  • Bimla Jain Medal, IIT Delhi (2011)
  • Institute Silver Medal, IIT Delhi (2011)
  • Gold Medal at Indian National Chemistry Olympiad, InChO (2007)

Professional Education


  • Ph.D., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2018)
  • S.M., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2013)
  • B.Tech., Indian Institute of Technology (IIT) Delhi, Electrical Engineering (2011)

Current Research and Scholarly Interests


For Priyanka's research, please visit her group's research page at https://stanfordaccelerate.github.io

All Publications


  • A compute-in-memory chip based on resistive random-access memory. Nature Wan, W., Kubendran, R., Schaefer, C., Eryilmaz, S. B., Zhang, W., Wu, D., Deiss, S., Raina, P., Qian, H., Gao, B., Joshi, S., Wu, H., Wong, H. P., Cauwenberghs, G. 2022; 608 (7923): 504-512

    Abstract

    Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) [1] promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory [2-5]. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware [6-17], it remains a goal for an RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design, from algorithms and architecture to circuits and devices, we present NeuRRAM, an RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including 99.0% accuracy on MNIST [18] and 85.7% on CIFAR-10 [19] image classification, 84.7% accuracy on Google speech command recognition [20], and a 70% reduction in image-reconstruction error on a Bayesian image-recovery task.

    View details for DOI 10.1038/s41586-022-04992-8

    View details for PubMedID 35978128

  • CHIMERA: A 0.92-TOPS, 2.2-TOPS/W Edge AI Accelerator With 2-MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference IEEE JOURNAL OF SOLID-STATE CIRCUITS Prabhu, K., Gural, A., Khan, Z. F., Radway, R. M., Giordano, M., Koul, K., Doshi, R., Kustin, J. W., Liu, T., Lopes, G. B., Turbiner, V., Khwa, W., Chih, Y., Chang, M., Lallement, G., Murmann, B., Mitra, S., Raina, P. 2022
  • Efficient Routing for Coarse-Grained Reconfigurable Arrays using Multi-Pole NEM Relays IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC) Levy, A., Oduoza, M., Balasingam, A., Howe, R., Raina, P. 2022
  • Enabling Reusable Physical Design Flows with Modular Flow Generators Design Automation Conference (DAC) Carsello, A., Thomas, J., Nayak, A., Chen, P., Horowitz, M., Raina, P., Torng, C. 2022
  • An Agile Approach to the Design of Hardware Accelerators and Adaptable Compilers GOMACTech Daly, R., Melchert, J., Koul, K., Raina, P., et al 2022
  • SAPIENS: A 64-kb RRAM-Based Non-Volatile Associative Memory for One-Shot Learning and Inference at the Edge IEEE TRANSACTIONS ON ELECTRON DEVICES Li, H., Chen, W., Levy, A., Wang, C., Wang, H., Chen, P., Wan, W., Khwa, W., Chuang, H., Chih, Y., Chang, M., Wong, H., Raina, P. 2021; 68 (12): 6637-6643
  • RADAR: A Fast and Energy-Efficient Programming Technique for Multiple Bits-Per-Cell RRAM Arrays IEEE TRANSACTIONS ON ELECTRON DEVICES Le, B. Q., Levy, A., Wu, T. F., Radway, R. M., Hsieh, E., Zheng, X., Nelson, M., Raina, P., Wong, H., Wong, S., Mitra, S. 2021; 68 (9): 4397-4403
  • Simba: Scaling Deep-Learning Inference with Chiplet-Based Architecture COMMUNICATIONS OF THE ACM Shao, Y., Clemons, J., Venkatesan, R., Zimmer, B., Fojtik, M., Jiang, N., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Tell, S. G., Zhang, Y., Dally, W. J., Emer, J., Gray, C., Khailany, B., Keckler, S. W. 2021; 64 (6): 107-116

    View details for DOI 10.1145/3460227

    View details for Web of Science ID 000656072300024

  • Best Papers From Hot Chips 32 IEEE MICRO Raina, P., Young, C. 2021; 41 (2): 6
  • Automated Codesign of Domain-Specific Hardware Accelerators and Compilers Raina, P., Kjolstad, F. B., Horowitz, M., Barrett, C., Fatahalian, K. ASCR Workshop on Reimagining Codesign. 2021
  • CHIMERA: A 0.92 TOPS, 2.2 TOPS/W Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference Symposium on VLSI Circuits (VLSI) Giordano, M., Prabhu, K., Koul, K., Radway, R. M., Gural, A., Doshi, R., Khan, Z. F., Kustin, J. W., Liu, T., Lopes, G. B., Turbiner, V., Khwa, W., Chih, Y., Chang, M., Lallement, G., Murmann, B., Mitra, S., Raina, P. 2021
  • One-Shot Learning with Memory-Augmented Neural Networks Using a 64-kbit, 118 GOPS/W RRAM-Based Non-Volatile Associative Memory Symposium on VLSI Technology (VLSI) Li, H., Chen, W., Levy, A., Wang, C., Wang, H., Chen, P., Wan, W., Wong, H., Raina, P. 2021
  • A 0.32-128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm IEEE JOURNAL OF SOLID-STATE CIRCUITS Zimmer, B., Venkatesan, R., Shao, Y., Clemons, J., Fojtik, M., Jiang, N., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Tell, S. G., Zhang, Y., Dally, W. J., Emer, J. S., Gray, C., Keckler, S. W., Khailany, B. 2020; 55 (4): 920–32
  • Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks Low, M., Raina, P. IEEE International Symposium on Biomedical Imaging (ISBI). 2020
  • A Voltage-Mode Sensing Scheme with Differential-Row Weight Mapping For Energy-Efficient RRAM-Based In-Memory Computing Wan, W., Kubendran, R., Gao, B., Joshi, S., Raina, P., Wu, H., Cauwenberghs, G., Wong, H. IEEE. 2020
  • Monte Carlo Simulation of a Three-Terminal RRAM with Applications to Neuromorphic Computing Balasingam, A., Levy, A., Li, H., Raina, P. IEEE. 2020: 197–99
  • Creating an Agile Hardware Design Flow Bahr, R., Barrett, C., Bhagdikar, N., Carsello, A., Daly, R., Donovick, C., Durst, D., Fatahalian, K., Feng, K., Hanrahan, P., Hofstee, T., Horowitz, M., Huff, D., Kjolstad, F., Kong, T., Liu, Q., Mann, M., Melchert, J., Nayak, A., Niemetz, A., Nyengele, G., Raina, P., Richardson, S., Setaluri, R., Setter, J., Sreedhar, K., Strange, M., Thomas, J., Torng, C., Truong, L., Tsiskaridze, N., Zhang, K. IEEE. 2020
  • A-QED Verification of Hardware Accelerators Singh, E., Lonsing, F., Chattopadhyay, S., Strange, M., Wei, P., Zhang, X., Zhou, Y., Chen, D., Cong, J., Raina, P., Zhang, Z., Barrett, C., Mitra, S. IEEE. 2020
  • A 74TMACS/W CMOS-ReRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-Situ Transposable Weights for Probabilistic Graphical Models Wan, W., Kubendran, R., Eryilmaz, S., Zhang, W., Liao, Y., Wu, D., Deiss, S., Gao, B., Raina, P., Joshi, S., Wu, H., Cauwenberghs, G., Wong, H. International Solid-State Circuits Conference (ISSCC). 2020
  • A Framework for Adding Low-Overhead, Fine-Grained Power Domains to CGRAs Nayak, A., Zhang, K., Setaluri, R., Carsello, A., Mann, M., Richardson, S., Bahr, R., Hanrahan, P., Horowitz, M., Raina, P. Design, Automation and Test in Europe Conference (DATE). 2020
  • Using Halide’s Scheduling Language to Analyze DNN Accelerators Yang, X., Gao, M., Liu, Q., Pu, J., Nayak, A., Setter, J., Bell, S., Cao, K., Ha, H., Raina, P., Kozyrakis, C., Horowitz, M. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS). 2020
  • A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator with Ground-Reference Signaling in 16nm Zimmer, B., Venkatesan, R., Shao, Y., Clemons, J., Fojtik, M., Jiang, N., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Tell, S. G., Zhang, Y., Dally, W. J., Emer, J. S., Gray, C., Keckler, S. W., Khailany, B. IEEE. 2019: C300–C301
  • MAGNet: A Modular Accelerator Generator for Neural Networks Venkatesan, R., Shao, Y., Wang, M., Clemons, J., Dai, S., Fojtik, M., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Zhang, Y., Zimmer, B., Dally, W. J., Emer, J., Keckler, S. W., Khailany, B. IEEE. 2019
  • Neuro-inspired computing with emerging memories: where device physics meets learning algorithms Li, H., Raina, P., Wong, H., Drouhin, H. J., Wegrowe, J. E., Razeghi, M., Jaffres, H. SPIE. 2019

    View details for DOI 10.1117/12.2529916

    View details for Web of Science ID 000511161100014

  • A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator Designed with a High-Productivity VLSI Methodology Khailany, B., Venkatesan, R., Shao, Y., Zimmer, B., Clemons, J., Fojtik, M., Jiang, N., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Tell, S., Zhang, Y., Dally, W., Emer, J., Gray, C., Keckler, S. Hot Chips: A Symposium on High Performance Chips (HotChips). 2019
  • Creating An Agile Hardware Flow Bahr, R., Barrett, C., Bhagdikar, N., Carsello, A., Chizgi, N., Daly, R., Donovick, C., Durst, D., Fatahalian, K., Hanrahan, P., Hofstee, T., Horowitz, M., Huff, D., Kong, T., Liu, Q., Mann, M., Nayak, A., Niemetz, A., Nyengele, G., Richardson, S., Setaluri, R., Setter, J., Stanley, D., Strange, M., Thomas, J., et al Hot Chips: A Symposium on High Performance Chips (HotChips). 2019
  • Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture Shao, Y., Clemons, J., Venkatesan, R., Zimmer, B., Fojtik, M., Jiang, N., Keller, B., Klinefelter, A., Pinckney, N., Raina, P., Tell, S., Zhang, Y., Dally, W. J., Emer, J., Gray, C., Khailany, B., Keckler, S. International Symposium on Microarchitecture (MICRO). 2019
  • Timeloop: A Systematic Approach to DNN Accelerator Evaluation Parashar, A., Raina, P., Shao, Y., Chen, Y., Ying, V. A., Mukkara, A., Venkatesan, R., Khailany, B., Keckler, S. W., Emer, J. IEEE. 2019: 304–15
  • An Energy-Scalable Accelerator for Blind Image Deblurring Raina, P., Tikekar, M., Chandrakasan, A. P. IEEE. 2017: 1849–62
  • A 0.6V 8mW 3D Vision Processor for a Navigation Device for the Visually Impaired Jeon, D., Ickes, N., Raina, P., Wang, H., Rus, D., Chandrakasan, A. IEEE. 2016: 416–U584
  • An Energy-Scalable Accelerator for Blind Image Deblurring Raina, P., Tikekar, M., Chandrakasan, A. P. IEEE. 2016: 113–16
  • Reconfigurable Processor for Energy-Efficient Computational Photography Rithe, R., Raina, P., Ickes, N., Tenneti, S. V., Chandrakasan, A. P. IEEE. 2013: 2908–19
  • Reconfigurable Processor for Energy-Scalable Computational Photography Rithe, R., Raina, P., Ickes, N., Tenneti, S. V., Chandrakasan, A. P. IEEE. 2013: 164–U972