School of Engineering
-
Joonhee Choi
Assistant Professor of Electrical Engineering
Joonhee Choi is an Assistant Professor of Electrical Engineering at Stanford University. Joonhee received his Ph.D. and master's degrees from Harvard University, as well as master's and bachelor's degrees from the Korea Advanced Institute of Science & Technology. Prior to joining Stanford, he was a postdoctoral fellow at the Institute for Quantum Information and Matter (IQIM) at Caltech. Joonhee's research focuses on engineering the dynamics of quantum many-body systems, both to explore fundamental science and to demonstrate practical quantum applications. Throughout his career, he has worked in a wide variety of fields, including nonlinear nano-optics, ultrafast phenomena, solid-state and atomic physics, and quantum many-body physics. His expertise extends to practical applications in quantum metrology, communication, and information processing.
-
Yejin Choi
Dieter Schwarz Foundation HAI Professor and Senior Fellow at the Stanford Institute for Human-Centered Artificial Intelligence
Yejin Choi is the Dieter Schwarz Foundation HAI Professor in the Department of Computer Science at Stanford University and a Senior Fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI). Choi is a MacArthur Fellow (class of 2022) and an AI2050 Senior Fellow (class of 2024), and was named among the Time100 Most Influential People in AI in 2023. In addition, Choi is a co-recipient of two Test-of-Time Awards and eight Best and Outstanding Paper Awards at top AI conferences, including ACL, ICML, NeurIPS, ICCV, CVPR, and AAAI, as well as a recipient of the Borg Early Career Award (BECA) in 2018, the inaugural Alexa Prize Challenge in 2017, and IEEE AI's 10 to Watch in 2016. Choi was a main-stage speaker at TED 2023 and a keynote speaker at a dozen conferences across several AI disciplines, including ACL, CVPR, ICLR, MLSys, VLDB, WebConf, and AAAI. Her current research interests include the fundamental limits and capabilities of large language models, alternative training recipes for language models, symbolic methods for neural networks, reasoning and knowledge discovery, moral norms and values, pluralistic alignment, and AI safety.