Bio
Max is a Research Fellow at the Hoover Institution’s Technology Policy Accelerator and a member of the Stanford Intelligence Systems Laboratory and the Stanford Center for AI Safety.
Through his research, he works to make AI systems inherently more secure and safe, to provide critical insights that inform effective AI policy, and to shape public discourse. He specializes in the interpretability and robustness of AI systems, ethical decision-making of language models, and uncertainty quantification. His work aims to promote the safe and responsible use of AI in society, with a particular emphasis on language models for automated decision-making, and has been recognized through publications in leading technical and socio-technical conferences such as NeurIPS, CoLM, FAccT, and AIES, as well as in policy-oriented outlets like Foreign Affairs. His research has also garnered international media attention, with coverage in the MIT Technology Review, The Washington Times, The Japan Times, LaPress, Axios, Deutschlandfunk, and New Scientist.
Prior to his current appointment, he was a postdoctoral fellow at the Stanford Center for AI Safety, the Center for International Security and Cooperation, and the Stanford Existential Risks Initiative at Stanford University, advised by Prof. Clark Barrett, Prof. Steve Luby, and Prof. Paul Edwards. Max received his Ph.D. in August 2023 from the School of Natural Sciences at the Technical University of Munich and holds a B.Sc. and M.Sc. in Physics from the Ruprecht Karl University of Heidelberg.
Academic Appointments
- Hoover Research Fellow, Hoover Institution
2025-26 Courses
- AI as Technology Accelerator, CS 132 (Spr)
- Introduction to AI Safety, CS 120 (Aut)
Independent Studies (1)
- Directed Reading, INTLPOL 299 (Win, Spr)
Prior Year Courses
2024-25 Courses
- Introduction to AI Governance, CS 134, STS 14 (Win)
- Introduction to AI Safety, CS 120 (Aut)
2023-24 Courses
- Introduction to AI Safety, CS 120, STS 10 (Spr)
- Introduction to AI Governance
All Publications
- Escalation Risks from Language Models in Military and Diplomatic Decision-Making. Association for Computing Machinery. 2024: 836-898. DOI: 10.1145/3630106.3658942. Web of Science ID: 001253359300057
- Analyzing And Editing Inner Mechanisms of Backdoored Language Models. Association for Computing Machinery. 2024: 2362-2373. DOI: 10.1145/3630106.3659042. Web of Science ID: 001253359300156