Bio


Since May 2025, I have been a postdoctoral researcher at Stanford University, working with Prof. Mert Pilanci. My research focuses on reasoning in large language models (LLMs), post-training and fine-tuning, autonomous agents, and diffusion models. Broadly, I am interested in understanding how modern foundation models can be made more reliable, interpretable, and effective for complex decision-making tasks.

My work spans both the theoretical and practical aspects of LLMs, including techniques for improving reasoning capabilities through post-training, aligning models with human objectives, and designing autonomous agentic systems that operate robustly in real-world environments. I am also interested in generative modeling with diffusion models and their applications to multimodal learning.

I received my PhD in 2024 and my MASc in 2018, both in Electrical and Computer Engineering from the University of Waterloo.

Stanford Advisors

Prof. Mert Pilanci

Current Research and Scholarly Interests


Reasoning in large language models (LLMs) and improving their systematic generalization

Post-training and fine-tuning methods for alignment, reliability, and efficiency

Autonomous agent architectures built on top of foundation models

Generative modeling with diffusion models and their multimodal applications

Theory and optimization methods for modern deep learning systems