
Priyanka Raina
Assistant Professor of Electrical Engineering
Bio
Priyanka Raina is an Assistant Professor of Electrical Engineering at Stanford University. She received her B.Tech. in Electrical Engineering from IIT Delhi in 2011, and her S.M. and Ph.D. in Electrical Engineering and Computer Science from MIT in 2013 and 2018, respectively. Priyanka’s research focuses on creating high-performance, energy-efficient architectures for domain-specific hardware accelerators in existing and emerging technologies, and on agile hardware-software co-design. Her research has won best paper awards at the VLSI, ESSCIRC and MICRO conferences and in the JSSC journal. Priyanka teaches several VLSI design classes at Stanford. She has also won the Intel Rising Star Faculty Award and the Hellman Faculty Scholar Award, and is a Terman Faculty Fellow.
Honors & Awards
- VLSI Best Demo Paper Award, Stanford University (2022)
- Intel Rising Star Faculty Award, Stanford University (2021)
- VLSI Best Student Paper Award, Stanford University (2021)
- JSSC Best Paper Award, Stanford University (2020)
- Hellman Fellow, Stanford University (2019)
- MICRO Best Paper Award, Stanford University (2019)
- Terman Faculty Fellow, Stanford University (2018)
- Terman Faculty Fellow, MIT (2017)
- ESSCIRC Best Young Scientist Paper Award, MIT (2016)
- ISSCC Student Research Preview Award, MIT (2016)
- Bimla Jain Medal, IIT Delhi (2011)
- Institute Silver Medal, IIT Delhi (2011)
- Gold Medal at Indian National Chemistry Olympiad, InChO (2007)
Professional Education
- Ph.D., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2018)
- S.M., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2013)
- B.Tech., Indian Institute of Technology (IIT) Delhi, Electrical Engineering (2011)
Current Research and Scholarly Interests
For Priyanka's research, please visit her group's research page at https://stanfordaccelerate.github.io
2023-24 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Semiconductor Memory Devices and Circuit Design, EE 309A (Aut)
Independent Studies (6)
- Advanced Reading and Research, CS 499 (Aut, Win, Spr)
- Advanced Reading and Research, CS 499P (Aut, Win, Spr)
- Master's Thesis and Thesis Research, EE 300 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering, EE 191 (Win)
- Special Studies and Reports in Electrical Engineering, EE 391 (Aut, Win, Spr, Sum)
- Special Studies or Projects in Electrical Engineering, EE 390 (Aut, Win, Spr, Sum)
Prior Year Courses
2022-23 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Design Projects in VLSI Systems II, EE 372 (Spr)
- Introduction to VLSI Systems, EE 271 (Aut)
2021-22 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Design Projects in VLSI Systems II, EE 372 (Spr)
- Emerging Non-Volatile Memory Devices and Circuit Design, EE 309B (Win)
- Introduction to VLSI Systems, EE 271 (Aut)
- Semiconductor Memory Devices and Circuit Design, EE 309A (Aut)
2020-21 Courses
- Design Projects in VLSI Systems I, EE 272A (Win)
- Design Projects in VLSI Systems II, EE 272B (Spr)
- Emerging Non-Volatile Memory Devices and Circuit Design, EE 309B (Win)
- Semiconductor Memory Devices and Circuit Design, EE 309A (Aut)
Stanford Advisees
- Bo Wun Cheng, Pu Deng, Yuchen Mei, Jeffrey Yu
- Doctoral Dissertation Reader (AC): Nikhil Bhagdikar, Alex Carsello, Massimo Giordano, Sneha Goenka, William Hwang, Taeyoung Kong, Qiaoyi (Joey) Liu, Shuhan Liu, Zachary Myers, Gedeon Nyengele, Kavya Sreedhar, Daniel Stanley, Maxwell Strange
- Orals Chair: Ross Daly, Caleb Donovick
- Doctoral Dissertation Advisor (AC): Po-Han Chen, Kathleen Feng, Kalhan Koul, Akash Levy, Jackson Melchert, Kartik Prabhu
- Master's Program Advisor: Pranavi Boyalakuntla, Sai Saketika Chekuri, John Espera, Eric Han, Samidh Vimish Mehta, William Salcedo, Caleb Terrill, Jacqueline Tran, Suguna Velury
- Doctoral (Program): Po-Han Chen, Nikhil Poole, Kartik Prabhu, Ritvik Sharma
All Publications
- AHA: An Agile Approach to the Design of Coarse-Grained Reconfigurable Accelerators and Compilers. ACM Transactions on Embedded Computing Systems, 2023; 22 (2). DOI: 10.1145/3534933
- 3-D coarse-grained reconfigurable array using multi-pole NEM relays for programmable routing. Integration, the VLSI Journal, 2023; 88: 249-261. DOI: 10.1016/j.vlsi.2022.10.001. Web of Science ID: 000882493600001
- Canal: A Flexible Interconnect Generator for Coarse-Grained Reconfigurable Arrays. IEEE Computer Architecture Letters, 2023. DOI: 10.1109/LCA.2023.3268126
- A compute-in-memory chip based on resistive random-access memory. Nature, 2022; 608 (7923): 504-512
Abstract: Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) [1] promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory [2-5]. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware [6-17], it remains a goal for an RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM, an RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0% on MNIST [18] and 85.7% on CIFAR-10 [19] image classification, 84.7% accuracy on Google speech command recognition [20], and a 70% reduction in image-reconstruction error on a Bayesian image-recovery task.
DOI: 10.1038/s41586-022-04992-8. PubMedID: 35978128
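As a rough illustration of the compute-in-memory idea described in the abstract above, the sketch below simulates an analog matrix-vector multiply in which signed 4-bit weights are mapped onto differential pairs of RRAM conductances and the summed column currents produce the dot products in place. The array size, conductance unit and input voltage swing are illustrative assumptions, not the NeuRRAM chip's actual parameters.

# Illustrative sketch only (not code from the paper): compute-in-memory
# matrix-vector multiplication with weights mapped to RRAM conductances.
# Array size, conductance unit and voltage swing are assumptions.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.integers(-8, 8, size=(64, 128)).astype(float)  # 4-bit signed weights [out, in]
inputs = rng.random(128)                                      # input activations

# Signed weights are stored as a differential pair of conductances (G+, G-).
g_unit = 1e-6                                # assumed 1 uS per weight LSB
g_pos = np.where(weights > 0, weights, 0.0) * g_unit
g_neg = np.where(weights < 0, -weights, 0.0) * g_unit

# Activations drive the rows as voltages; each column current is a dot
# product computed in place by Ohm's law and Kirchhoff's current law.
v_in = inputs * 0.1                          # assumed 0-0.1 V input swing
i_col = g_pos @ v_in - g_neg @ v_in          # differential column currents

# Rescaling the analog readout recovers the digital matrix-vector product,
# without ever moving the weights out of the memory array.
digital = weights @ inputs
analog = i_col / (g_unit * 0.1)
print(np.allclose(digital, analog))          # True

On the actual chip the column readout is digitized by on-chip converters and the CIM cores can be reconfigured for different model architectures, as the abstract describes; this sketch only shows why keeping the weights inside the array removes the weight-movement cost.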
- CHIMERA: A 0.92-TOPS, 2.2-TOPS/W Edge AI Accelerator With 2-MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference. IEEE Journal of Solid-State Circuits, 2022. DOI: 10.1109/JSSC.2022.3140753. Web of Science ID: 000750226200001
- An Agile Approach to the Design of Hardware Accelerators and Adaptable Compilers. GOMACTech, 2022
- Efficient Routing for Coarse-Grained Reconfigurable Arrays using Multi-Pole NEM Relays. IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC), 2022
- mflowgen: a modular flow generator and ecosystem for community-driven physical design. DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference, 2022: 1339–1342. DOI: 10.1145/3489517.3530633
- Improving Energy Efficiency of CGRAs with Low-Overhead Fine-Grained Power Domains. ACM Transactions on Reconfigurable Technology and Systems, 2022. DOI: 10.1145/3558394
- Unified Buffer: Compiling Image Processing and Machine Learning Applications to Push-Memory Accelerators. ACM Transactions on Architecture and Code Optimization, 2022: 26. DOI: 10.1145/3572908
- Enabling Reusable Physical Design Flows with Modular Flow Generators. Design Automation Conference (DAC), 2022
- SAPIENS: A 64-kb RRAM-Based Non-Volatile Associative Memory for One-Shot Learning and Inference at the Edge. IEEE Transactions on Electron Devices, 2021; 68 (12): 6637-6643. DOI: 10.1109/TED.2021.3110464. Web of Science ID: 000724501000107
- RADAR: A Fast and Energy-Efficient Programming Technique for Multiple Bits-Per-Cell RRAM Arrays. IEEE Transactions on Electron Devices, 2021; 68 (9): 4397-4403. DOI: 10.1109/TED.2021.3097975. Web of Science ID: 000686761500038
- Simba: Scaling Deep-Learning Inference with Chiplet-Based Architecture. Communications of the ACM, 2021; 64 (6): 107-116. DOI: 10.1145/3460227. Web of Science ID: 000656072300024
- Best Papers From Hot Chips 32. IEEE Micro, 2021; 41 (2): 6. DOI: 10.1109/MM.2021.3060294. Web of Science ID: 000639559200002
- Automated Codesign of Domain-Specific Hardware Accelerators and Compilers. ASCR Workshop on Reimagining Codesign, 2021
- CHIMERA: A 0.92 TOPS, 2.2 TOPS/W Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference. Symposium on VLSI Circuits (VLSI), 2021
- One-Shot Learning with Memory-Augmented Neural Networks Using a 64-kbit, 118 GOPS/W RRAM-Based Non-Volatile Associative Memory. Symposium on VLSI Technology (VLSI), 2021
- A 0.32-128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm. IEEE Journal of Solid-State Circuits, 2020; 55 (4): 920–932. DOI: 10.1109/JSSC.2019.2960488. Web of Science ID: 000522446300009
- Creating an Agile Hardware Design Flow. IEEE, 2020. Web of Science ID: 000628528400063
- A Voltage-Mode Sensing Scheme with Differential-Row Weight Mapping For Energy-Efficient RRAM-Based In-Memory Computing. IEEE, 2020. Web of Science ID: 000668063000053
- A-QED Verification of Hardware Accelerators. IEEE, 2020. Web of Science ID: 000628528400220
- A 74 TMACS/W CMOS-ReRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-Situ Transposable Weights for Probabilistic Graphical Models. International Solid-State Circuits Conference (ISSCC), 2020
- A Framework for Adding Low-Overhead, Fine-Grained Power Domains to CGRAs. Design, Automation and Test in Europe Conference (DATE), 2020
- Using Halide's Scheduling Language to Analyze DNN Accelerators. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020
- Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks. IEEE International Symposium on Biomedical Imaging (ISBI), 2020
- Monte Carlo Simulation of a Three-Terminal RRAM with Applications to Neuromorphic Computing. IEEE, 2020: 197–199. Web of Science ID: 000636981000050
- Neuro-inspired computing with emerging memories: where device physics meets learning algorithms. SPIE, 2019. DOI: 10.1117/12.2529916. Web of Science ID: 000511161100014
- MAGNet: A Modular Accelerator Generator for Neural Networks. IEEE, 2019. Web of Science ID: 000524676400085
- A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator Designed with a High-Productivity VLSI Methodology. Hot Chips: A Symposium on High Performance Chips (HotChips), 2019
- Creating an Agile Hardware Flow. Hot Chips: A Symposium on High Performance Chips (HotChips), 2019
- Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture. International Symposium on Microarchitecture (MICRO), 2019
- Timeloop: A Systematic Approach to DNN Accelerator Evaluation. IEEE, 2019: 304–315. DOI: 10.1109/ISPASS.2019.00042. Web of Science ID: 000470201600034
- A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator with Ground-Reference Signaling in 16nm. IEEE, 2019: C300–C301. Web of Science ID: 000531736500102
- An Energy-Scalable Accelerator for Blind Image Deblurring. IEEE Journal of Solid-State Circuits, 2017: 1849–1862. DOI: 10.1109/JSSC.2017.2682842. Web of Science ID: 000404301300012
- A 0.6V 8mW 3D Vision Processor for a Navigation Device for the Visually Impaired. IEEE, 2016: 416–U584. Web of Science ID: 000382151400172
- An Energy-Scalable Accelerator for Blind Image Deblurring. IEEE, 2016: 113–116. Web of Science ID: 000386656300026
- Reconfigurable Processor for Energy-Efficient Computational Photography. IEEE Journal of Solid-State Circuits, 2013: 2908–2919. DOI: 10.1109/JSSC.2013.2282614. Web of Science ID: 000326265100030
- Reconfigurable Processor for Energy-Scalable Computational Photography. IEEE, 2013: 164–U972. Web of Science ID: 000366612300065