Priyanka Raina
Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science
Bio
Priyanka Raina received the B.Tech. degree in electrical engineering from the Indian Institute of Technology Delhi, New Delhi, India, in 2011, and the M.S. and Ph.D. degrees in electrical engineering and computer science from the Massachusetts Institute of Technology, Cambridge, MA, USA, in 2013 and 2018, respectively. She was a Visiting Research Scientist with NVIDIA Corporation, Santa Clara, CA, USA, in 2018. She is currently an Assistant Professor of electrical engineering with Stanford University, Stanford, CA, USA, where she works on domain-specific hardware architectures and agile hardware–software codesign methodology.
Dr. Raina is a 2018 Terman Faculty Fellow. She was a co-recipient of the Best Demo Paper Award at VLSI 2022, the Best Student Paper Award at VLSI 2021, the IEEE Journal of Solid-State Circuits (JSSC) Best Paper Award in 2020, the Best Paper Award at MICRO 2019, and the Best Young Scientist Paper Award at ESSCIRC 2016. She received the Sloan Research Fellowship in 2024, the National Science Foundation (NSF) CAREER Award in 2023, the Intel Rising Star Faculty Award in 2021, and the Hellman Faculty Scholar Award in 2019. She was the Program Chair of the IEEE Hot Chips Symposium in 2020. She serves as an Associate Editor for the IEEE Journal of Solid-State Circuits and IEEE Solid-State Circuits Letters.
Academic Appointments
- Assistant Professor, Electrical Engineering
- Assistant Professor (by courtesy), Computer Science
Honors & Awards
- Sloan Research Fellowship, Stanford University (2024)
- NSF CAREER Award, Stanford University (2023)
- ISSCC Student Research Preview Award, Stanford University (2022)
- VLSI Best Demo Paper Award, Stanford University (2022)
- Intel Rising Star Faculty Award, Stanford University (2021)
- VLSI Best Student Paper Award, Stanford University (2021)
- JSSC Best Paper Award, Stanford University (2020)
- Hellman Fellow, Stanford University (2019)
- MICRO Best Paper Award, Stanford University (2019)
- Terman Faculty Fellow, Stanford University (2018)
- Terman Faculty Fellow, MIT (2017)
- ESSCIRC Best Young Scientist Paper Award, MIT (2016)
- ISSCC Student Research Preview Award, MIT (2016)
- Bimla Jain Medal, IIT Delhi (2011)
- Institute Silver Medal, IIT Delhi (2011)
- Gold Medal at Indian National Chemistry Olympiad, InChO (2007)
Professional Education
- Ph.D., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2018)
- S.M., Massachusetts Institute of Technology (MIT), Electrical Engineering and Computer Science (2013)
- B.Tech., Indian Institute of Technology (IIT) Delhi, Electrical Engineering (2011)
Current Research and Scholarly Interests
For Priyanka's research, please visit her group's research page at https://stanfordaccelerate.github.io
2024-25 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
Independent Studies (7)
- Advanced Reading and Research, CS 499 (Aut, Win, Spr, Sum)
- Advanced Reading and Research, CS 499P (Aut, Win, Spr, Sum)
- Master's Thesis and Thesis Research, EE 300 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering, EE 191 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering, EE 391 (Aut, Win, Spr, Sum)
- Special Studies or Projects in Electrical Engineering, EE 190 (Aut, Sum)
- Special Studies or Projects in Electrical Engineering, EE 390 (Aut, Win, Spr, Sum)
Prior Year Courses
2023-24 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Emerging Non-Volatile Memory Devices and Circuit Design, EE 309B (Win)
- Semiconductor Memory Devices and Circuit Design, EE 309A (Aut)
2022-23 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Design Projects in VLSI Systems II, EE 372 (Spr)
- Introduction to VLSI Systems, EE 271 (Aut)
2021-22 Courses
- Design Projects in VLSI Systems I, EE 272 (Win)
- Design Projects in VLSI Systems II, EE 372 (Spr)
- Emerging Non-Volatile Memory Devices and Circuit Design, EE 309B (Win)
- Introduction to VLSI Systems, EE 271 (Aut)
- Semiconductor Memory Devices and Circuit Design, EE 309A (Aut)
Stanford Advisees
- Pu Deng
- Doctoral Dissertation Reader (AC): Nikhil Bhagdikar, Alex Carsello, William Hwang, Taeyoung Kong, Shuhan Liu, Zachary Myers, Maxwell Strange
- Doctoral Dissertation Advisor (AC): Po-Han Chen, Kathleen Feng, Kalhan Koul, Kartik Prabhu
- Orals Evaluator: Lenny Truong
- Master's Program Advisor: Sai Saketika Chekuri, Eric Han, Samidh Vimish Mehta, William Salcedo, Jacqueline Tran, Zhouhua Xie
- Doctoral (Program): Po-Han Chen, Bo Wun Cheng, Yuchen Mei, Allen Pan, Nikhil Poole, Kartik Prabhu, Ritvik Sharma, Jeffrey Yu
All Publications
- EMBER: Efficient Multiple-Bits-Per-Cell Embedded RRAM Macro for High-Density Digital Storage. IEEE Journal of Solid-State Circuits, 2024. DOI: 10.1109/JSSC.2024.3387566. Web of Science ID: 001205858500001
- FastPASE: An AI-Driven Fast PPA Speculation Engine for RTL Design Space Optimization. International Symposium on Quality Electronic Design (ISQED), 2024
- Cascade: An Application Pipelining Toolkit for Coarse-Grained Reconfigurable Arrays. IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2024
- MINOTAUR: An Edge Transformer Inference and Training Accelerator with 12 MBytes On-Chip Resistive RAM and Fine-Grained Spatiotemporal Power Gating. IEEE Symposium on VLSI Technology & Circuits (VLSI), 2024
- Onyx: A 12nm 756 GOPS/W Coarse-Grained Reconfigurable Array for Accelerating Dense and Sparse Applications. IEEE Symposium on VLSI Technology & Circuits (VLSI), 2024
- 8-bit Transformer Inference and Fine-tuning for Edge Accelerators. ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024
- Cascade: An Application Pipelining Toolkit for Coarse-Grained Reconfigurable Arrays. Languages, Tools, and Techniques for Accelerator Design (LATTE) Workshop at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2024
- Amber: A 16-nm System-on-Chip With a Coarse-Grained Reconfigurable Array for Flexible Acceleration of Dense Linear Algebra. IEEE Journal of Solid-State Circuits, 2023. DOI: 10.1109/JSSC.2023.3313116. Web of Science ID: 001078350700001
- 3-D coarse-grained reconfigurable array using multi-pole NEM relays for programmable routing. Integration, the VLSI Journal, 2023; 88: 249-261. DOI: 10.1016/j.vlsi.2022.10.001. Web of Science ID: 000882493600001
- Canal: A Flexible Interconnect Generator for Coarse-Grained Reconfigurable Arrays. IEEE Computer Architecture Letters, 2023. DOI: 10.1109/LCA.2023.3268126
- Ultra-Dense 3D Physical Design Unlocks New Architectural Design Points with Large Benefits. IEEE, 2023. Web of Science ID: 001027444200118
- Unified Buffer: Compiling Image Processing and Machine Learning Applications to Push-Memory Accelerators. ACM Transactions on Architecture and Code Optimization, 2023: 26. DOI: 10.1145/3572908
- An Open-Source 4x8 Coarse-Grained Reconfigurable Array Using SkyWater 130 nm Technology and Agile Hardware Design Flow. IEEE, 2023. DOI: 10.1109/ISCAS46773.2023.10182052. Web of Science ID: 001038214602125
- EMBER: A 100 MHz, 0.86 mm², Multiple-Bits-per-Cell RRAM Macro in 40 nm CMOS with Compact Peripherals and 1.0 pJ/bit Read Circuitry. IEEE, 2023: 469-472. DOI: 10.1109/ESSCIRC59616.2023.10268807. Web of Science ID: 001088613100118
- AHA: An Agile Approach to the Design of Coarse-Grained Reconfigurable Accelerators and Compilers. ACM Transactions on Embedded Computing Systems, 2023; 22 (2). DOI: 10.1145/3534933
- PEak: A Single Source of Truth for Hardware Design and Verification. Programming Languages for Architecture (PLARCH) Workshop at PLDI, 2023
- APEX: A Framework for Automated Processing Element Design Space Exploration using Frequent Subgraph Analysis. ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2023
- PBA: Percentile-Based Level Allocation for Multiple-Bits-Per-Cell RRAM. IEEE, 2023. DOI: 10.1109/ICCAD57390.2023.10323967. Web of Science ID: 001116715100189
- High-density analog image storage in an analog-valued non-volatile memory array. Neuromorphic Computing and Engineering, 2022; 2 (4). DOI: 10.1088/2634-4386/aca92c. Web of Science ID: 001062521400001
- A compute-in-memory chip based on resistive random-access memory. Nature, 2022; 608 (7923): 504-512. DOI: 10.1038/s41586-022-04992-8. PubMedID: 35978128
Abstract: Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) promises to meet such demand by storing AI model weights in dense, analogue and non-volatile RRAM devices, and by performing AI computation directly within RRAM, thus eliminating power-hungry data movement between separate compute and memory. Although recent studies have demonstrated in-memory matrix-vector multiplication on fully integrated RRAM-CIM hardware, it remains a goal for an RRAM-CIM chip to simultaneously deliver high energy efficiency, versatility to support diverse models and software-comparable accuracy. Although efficiency, versatility and accuracy are all indispensable for broad adoption of the technology, the inter-related trade-offs among them cannot be addressed by isolated improvements on any single abstraction level of the design. Here, by co-optimizing across all hierarchies of the design from algorithms and architecture to circuits and devices, we present NeuRRAM, an RRAM-based CIM chip that simultaneously delivers versatility in reconfiguring CIM cores for diverse model architectures, energy efficiency that is two times better than previous state-of-the-art RRAM-CIM chips across various computational bit-precisions, and inference accuracy comparable to software models quantized to four-bit weights across various AI tasks, including accuracy of 99.0% on MNIST and 85.7% on CIFAR-10 image classification, 84.7% accuracy on Google speech command recognition, and a 70% reduction in image-reconstruction error on a Bayesian image-recovery task.
- CHIMERA: A 0.92-TOPS, 2.2-TOPS/W Edge AI Accelerator With 2-MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference. IEEE Journal of Solid-State Circuits, 2022. DOI: 10.1109/JSSC.2022.3140753. Web of Science ID: 000750226200001
- Efficient Routing for Coarse-Grained Reconfigurable Arrays using Multi-Pole NEM Relays. IEEE/ACM Asia and South Pacific Design Automation Conference (ASP-DAC), 2022
- mflowgen: a modular flow generator and ecosystem for community-driven physical design. DAC '22: Proceedings of the 59th ACM/IEEE Design Automation Conference, 2022: 1339-1342. DOI: 10.1145/3489517.3530633
- Synthesizing Instruction Selection Rewrite Rules from RTL using SMT. TU Wien Academic Press, 2022: 139-150. DOI: 10.34727/2022/isbn.978-3-85448-053-2_20. Web of Science ID: 001062691400020
- Improving Energy Efficiency of CGRAs with Low-Overhead Fine-Grained Power Domains. ACM Transactions on Reconfigurable Technology and Systems, 2022. DOI: 10.1145/3558394
- Canal: A Flexible Interconnect Generator for Coarse-Grained Reconfigurable Arrays. Workshop on Democratizing Domain-Specific Accelerators (WDDSA) at MICRO, 2022
- Hardware Abstractions and Hardware Mechanisms to Support Multi-Task Execution on Coarse-Grained Reconfigurable Arrays. Workshop on Democratizing Domain-Specific Accelerators (WDDSA) at MICRO, 2022
- Amber: Coarse-Grained Reconfigurable Array-Based SoC for Dense Linear Algebra Acceleration. IEEE Hot Chips Symposium (Hot Chips), 2022
- Amber: A 367 GOPS, 538 GOPS/W 16nm SoC with a Coarse-Grained Reconfigurable Array for Flexible Acceleration of Dense Linear Algebra. IEEE Symposium on VLSI Technology & Circuits (VLSI), 2022
- An Agile Approach to the Design of Hardware Accelerators and Adaptable Compilers. GOMACTech, 2022
- SAPIENS: A 64-kb RRAM-Based Non-Volatile Associative Memory for One-Shot Learning and Inference at the Edge. IEEE Transactions on Electron Devices, 2021; 68 (12): 6637-6643. DOI: 10.1109/TED.2021.3110464. Web of Science ID: 000724501000107
- RADAR: A Fast and Energy-Efficient Programming Technique for Multiple Bits-Per-Cell RRAM Arrays. IEEE Transactions on Electron Devices, 2021; 68 (9): 4397-4403. DOI: 10.1109/TED.2021.3097975. Web of Science ID: 000686761500038
- Simba: Scaling Deep-Learning Inference with Chiplet-Based Architecture. Communications of the ACM, 2021; 64 (6): 107-116. DOI: 10.1145/3460227. Web of Science ID: 000656072300024
- Best Papers From Hot Chips 32. IEEE Micro, 2021; 41 (2): 6. DOI: 10.1109/MM.2021.3060294. Web of Science ID: 000639559200002
- Automated Codesign of Domain-Specific Hardware Accelerators and Compilers. ASCR Workshop on Reimagining Codesign, 2021
- CHIMERA: A 0.92 TOPS, 2.2 TOPS/W Edge AI Accelerator with 2 MByte On-Chip Foundry Resistive RAM for Efficient Training and Inference. Symposium on VLSI Circuits (VLSI), 2021
- One-Shot Learning with Memory-Augmented Neural Networks Using a 64-kbit, 118 GOPS/W RRAM-Based Non-Volatile Associative Memory. Symposium on VLSI Technology (VLSI), 2021
- A 0.32-128 TOPS, Scalable Multi-Chip-Module-Based Deep Neural Network Inference Accelerator With Ground-Referenced Signaling in 16 nm. IEEE Journal of Solid-State Circuits, 2020; 55 (4): 920-932. DOI: 10.1109/JSSC.2019.2960488. Web of Science ID: 000522446300009
- Automating Vitiligo Skin Lesion Segmentation Using Convolutional Neural Networks. IEEE International Symposium on Biomedical Imaging (ISBI), 2020
- A Voltage-Mode Sensing Scheme with Differential-Row Weight Mapping For Energy-Efficient RRAM-Based In-Memory Computing. IEEE, 2020. Web of Science ID: 000668063000053
- Monte Carlo Simulation of a Three-Terminal RRAM with Applications to Neuromorphic Computing. IEEE, 2020: 197-199. Web of Science ID: 000636981000050
- Creating an Agile Hardware Design Flow. IEEE, 2020. Web of Science ID: 000628528400063
- A-QED Verification of Hardware Accelerators. IEEE, 2020. Web of Science ID: 000628528400220
- A 74TMACS/W CMOS-ReRAM Neurosynaptic Core with Dynamically Reconfigurable Dataflow and In-Situ Transposable Weights for Probabilistic Graphical Models. International Solid-State Circuits Conference (ISSCC), 2020
- A Framework for Adding Low-Overhead, Fine-Grained Power Domains to CGRAs. Design, Automation and Test in Europe Conference (DATE), 2020
- Using Halide's Scheduling Language to Analyze DNN Accelerators. International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2020
- A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator with Ground-Reference Signaling in 16nm. IEEE, 2019: C300-C301. Web of Science ID: 000531736500102
- MAGNet: A Modular Accelerator Generator for Neural Networks. IEEE, 2019. Web of Science ID: 000524676400085
- Neuro-inspired computing with emerging memories: where device physics meets learning algorithms. SPIE, 2019. DOI: 10.1117/12.2529916. Web of Science ID: 000511161100014
- A 0.11 pJ/Op, 0.32-128 TOPS, Scalable Multi-Chip-Module-based Deep Neural Network Accelerator Designed with a High-Productivity VLSI Methodology. Hot Chips: A Symposium on High Performance Chips (Hot Chips), 2019
- Creating An Agile Hardware Flow. Hot Chips: A Symposium on High Performance Chips (Hot Chips), 2019
- Simba: Scaling Deep-Learning Inference with Multi-Chip-Module-Based Architecture. International Symposium on Microarchitecture (MICRO), 2019
- Timeloop: A Systematic Approach to DNN Accelerator Evaluation. IEEE, 2019: 304-315. DOI: 10.1109/ISPASS.2019.00042. Web of Science ID: 000470201600034
- An Energy-Scalable Accelerator for Blind Image Deblurring. IEEE, 2017: 1849-1862. DOI: 10.1109/JSSC.2017.2682842. Web of Science ID: 000404301300012
- A 0.6V 8mW 3D Vision Processor for a Navigation Device for the Visually Impaired. IEEE, 2016: 416-U584. Web of Science ID: 000382151400172
- An Energy-Scalable Accelerator for Blind Image Deblurring. IEEE, 2016: 113-116. Web of Science ID: 000386656300026
- Reconfigurable Processor for Energy-Efficient Computational Photography. IEEE, 2013: 2908-2919. DOI: 10.1109/JSSC.2013.2282614. Web of Science ID: 000326265100030
- Reconfigurable Processor for Energy-Scalable Computational Photography. IEEE, 2013: 164-U972. Web of Science ID: 000366612300065