Bio
Max is a Research Fellow at the Hoover Institution’s Technology Policy Accelerator and a member of the Stanford Intelligent Systems Laboratory and the Stanford Center for AI Safety at Stanford University.
His research works toward making AI systems inherently more secure and safe, providing critical insights to inform effective AI policies and to shape public discourse. He specializes in the interpretability and robustness of AI systems, ethical decision-making of language models, and uncertainty quantification. His work aims to promote the safe and responsible use of AI in society, with a particular emphasis on language models for automated decision-making, and has been recognized through publications in leading technical and socio-technical conferences such as NeurIPS, CoLM, FAccT, and AIES, as well as policy-oriented outlets like Foreign Affairs. His research has also garnered international media attention, with coverage in MIT Technology Review, The Washington Times, The Japan Times, La Presse, Axios, Deutschlandfunk, and New Scientist.
Prior to his current appointment, he was a postdoctoral fellow at the Stanford Center for AI Safety, the Center for International Security and Cooperation, and the Stanford Existential Risks Initiative at Stanford University, advised by Prof. Clark Barrett, Prof. Steve Luby, and Prof. Paul Edwards. Max received his Ph.D. in August 2023 from the School of Natural Sciences at the Technical University of Munich and holds a B.Sc. and M.Sc. in Physics from the Ruprecht Karl University of Heidelberg.
Academic Appointments
- Hoover Research Fellow, Hoover Research
2025-26 Courses
- AI as Technology Accelerator, CS 132 (Spr)
- Introduction to AI Safety, CS 120 (Aut)
Independent Studies (1)
- Directed Reading, INTLPOL 299 (Win, Spr)
Prior Year Courses
2024-25 Courses
- Introduction to AI Governance, CS 134, STS 14 (Win)
- Introduction to AI Safety, CS 120 (Aut)
2023-24 Courses
- Introduction to AI Safety, CS 120, STS 10 (Spr)
- Introduction to AI Governance
All Publications
- A benchmark of expert-level academic questions to assess AI capabilities
Nature. 2026; 649 (8099): 1139-1146
Abstract
Benchmarks are important tools for tracking the rapid advancements in large language model (LLM) capabilities. However, benchmarks are not keeping pace in difficulty: LLMs now achieve more than 90% accuracy on popular benchmarks such as Measuring Massive Multitask Language Understanding, limiting informed measurement of state-of-the-art LLM capabilities. Here, in response, we introduce Humanity’s Last Exam (HLE), a multi-modal benchmark at the frontier of human knowledge, designed to be an expert-level closed-ended academic benchmark with broad subject coverage. HLE consists of 2,500 questions across dozens of subjects, including mathematics, humanities and the natural sciences. HLE is developed globally by subject-matter experts and consists of multiple-choice and short-answer questions suitable for automated grading. Each question has a known solution that is unambiguous and easily verifiable but cannot be quickly answered by internet retrieval. State-of-the-art LLMs demonstrate low accuracy and calibration on HLE, highlighting a marked gap between current LLM capabilities and the expert human frontier on closed-ended academic questions. To inform research and policymaking upon a clear understanding of model capabilities, we publicly release HLE at https://lastexam.ai.
DOI: 10.1038/s41586-025-09962-4
PubMed ID: 41606155
- Escalation Risks from Language Models in Military and Diplomatic Decision-Making
Association for Computing Machinery. 2024: 836-898
DOI: 10.1145/3630106.3658942
Web of Science ID: 001253359300057
- Analyzing and Editing Inner Mechanisms of Backdoored Language Models
Association for Computing Machinery. 2024: 2362-2373
DOI: 10.1145/3630106.3659042
Web of Science ID: 001253359300156
- Human vs. Machine: Behavioral Differences between Expert Humans and Language Models in Wargame Simulations
Association for Computing Machinery. 2024: 807-817
Web of Science ID: 001447715300070
- Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation (Extended Abstract)
Association for Computing Machinery. 2024: 519
Web of Science ID: 001447715300043