I'm a third-year PhD student in the Institute for Computational and Mathematical Engineering at Stanford University. My interests lie broadly in the realm of data science and computational mathematics, spanning machine learning, numerical linear algebra, theoretical computer science, and computational physics. My most recent research focuses on efficient methods for improving accuracy when solving linear systems with unstructured noise. My other research focuses on model order reduction, leveraging machine learning and linear algebra techniques to deliver massive performance boosts in many-query physics problems, e.g., Bayesian inference and uncertainty quantification, while simultaneously guaranteeing accurate results. I presented these techniques in talks at SIAM CSE ’19 and at ICIAM ’19, and published in CMAME. In the past, I've also worked as a data science research intern at Sandia National Laboratories, a software engineering intern at Google, and a research contractor at Bell Labs.
I received my undergraduate degree from Princeton, where I studied mathematics, computer science, and physics, graduating summa cum laude. While I was there, I wrote my undergraduate thesis on numerical methods for solitonic boson star evolution and ground-state searching. Before that, I did research in theoretical optics, and earlier still in graph algorithms. While I have a broad background in mathematics and related fields, I'm particularly excited by finding ways of using data to accelerate computation, build fast approximation techniques, and make predictions about the future (and inferences about the present).
Going forward, I want to continue to develop better and faster algorithms by bringing the power of data science to bear on interesting computational and statistical challenges.
My other assorted interests include quantum physics, general relativity, computer graphics, and music.
I prefer tabs to spaces, and vim to emacs.
Education & Certifications
B.A., Princeton University, Mathematics (2017)
Certificate, Princeton University, Computer Science (2017)
Certificate, Princeton University, Applied Mathematics (2017)
I'm currently quite interested in music, particularly in composing and arranging for film. My favorite music comes mostly from the year range 1770 to 1900.
Coarse-Proxy Reduced Basis Methods for Integral Equations
In this paper, we introduce a new reduced basis methodology for accelerating the computation of large parameterized systems of high-fidelity integral equations. Core to our methodology is the use of coarse-proxy models (i.e., lower-resolution variants of the underlying high-fidelity equations) to identify important samples in the parameter space, from which a high-quality reduced basis is then constructed. Unlike the more traditional POD or greedy methods for reduced basis construction, our methodology has the benefit of being both easy to implement and embarrassingly parallel. We apply our methodology to the underserved area of integral equations, where the density of the underlying integral operators has traditionally made reduced basis methods difficult to apply. To handle this difficulty, we introduce an operator interpolation technique, based on random sub-sampling, that is aimed specifically at integral operators. To demonstrate the effectiveness of our techniques, we present two numerical case studies, based on the Radiative Transport Equation and a boundary integral formulation of the Laplace Equation respectively, where our methodology provides a significant improvement in performance over the underlying high-fidelity models for a wide range of error tolerances. Moreover, we demonstrate that for these problems, as the coarse-proxy selection threshold is made more aggressive, the approximation error of our method decreases at an approximately linear rate.
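To give a rough feel for the coarse-proxy idea, here is a minimal numpy sketch on a toy parameterized linear system. Everything here (the model, the greedy column-selection rule, the sample count) is illustrative and stands in for the paper's actual criteria and integral-equation setting:

```python
import numpy as np

# Toy parameterized system A(mu) x = b at two resolutions; purely illustrative.
n_fine, n_coarse, n_params = 200, 20, 50
mus = np.linspace(0.1, 1.0, n_params)

def solve_fine(mu):
    A = np.eye(n_fine) + mu * np.diag(np.linspace(0.0, 1.0, n_fine))
    return np.linalg.solve(A, np.ones(n_fine))

def solve_coarse(mu):
    A = np.eye(n_coarse) + mu * np.diag(np.linspace(0.0, 1.0, n_coarse))
    return np.linalg.solve(A, np.ones(n_coarse))

# 1. Sweep the cheap coarse proxy over the whole parameter set
#    (each solve is independent, hence embarrassingly parallel).
coarse_snapshots = np.column_stack([solve_coarse(mu) for mu in mus])

# 2. Pick "important" parameters by greedy column selection on the coarse
#    snapshots (pivoted Gram-Schmidt; a stand-in for the paper's criterion).
selected, R = [], coarse_snapshots.copy()
for _ in range(5):
    j = int(np.argmax(np.linalg.norm(R, axis=0)))
    selected.append(j)
    q = R[:, j] / np.linalg.norm(R[:, j])
    R -= np.outer(q, q @ R)          # deflate the chosen direction

# 3. Solve the expensive fine model only at the selected parameters,
#    and orthonormalize the snapshots into a reduced basis V.
fine_snapshots = np.column_stack([solve_fine(mus[i]) for i in selected])
V, _ = np.linalg.qr(fine_snapshots)

# 4. Galerkin projection at a new parameter: solve the small 5x5 system.
mu_new = 0.37
A = np.eye(n_fine) + mu_new * np.diag(np.linspace(0.0, 1.0, n_fine))
b = np.ones(n_fine)
x_rom = V @ np.linalg.solve(V.T @ A @ V, V.T @ b)
x_ref = np.linalg.solve(A, b)
err = np.linalg.norm(x_rom - x_ref) / np.linalg.norm(x_ref)
```

The expensive step 3 runs only at the handful of parameters the coarse sweep flagged, which is where the speedup over a full high-fidelity sweep comes from.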
Online adaptive basis refinement and compression for reduced-order models
In many applications, projection-based reduced-order models (ROMs) have demonstrated the ability to provide rapid approximate solutions to high-fidelity full-order models (FOMs). However, there is no a priori assurance that these approximate solutions are accurate; their accuracy depends on the ability of the low-dimensional trial basis to represent the FOM solution. As a result, ROMs can generate inaccurate approximate solutions, e.g., when the FOM solution at the online prediction point is not well represented by training data used to construct the trial basis. To address this fundamental deficiency of standard model-reduction approaches, this work proposes a novel online-adaptive mechanism for efficiently enriching the trial basis in a manner that ensures convergence of the ROM to the FOM, yet does not incur any FOM solves. The mechanism is based on the previously proposed adaptive h-refinement method for ROMs [Carlberg, 2015], but improves upon this work in two crucial ways. First, the proposed method enables basis refinement with respect to any orthogonal basis (not just the Kronecker basis), thereby generalizing the refinement mechanism and enabling it to be tailored to the physics characterizing the problem at hand. Second, the proposed method provides a fast online algorithm for periodically compressing the enriched basis via an efficient proper orthogonal decomposition (POD) method, which does not incur any operations that scale with the FOM dimension. These two features allow the proposed method to serve as (1) a failsafe mechanism for ROMs, as the method enables the ROM to satisfy any prescribed error tolerance online (even in the case of inadequate training), and (2) an efficient online basis-adaptation mechanism, as the combination of basis enrichment and compression enables the basis to adapt online while controlling its dimension.
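For intuition on the cheap basis-compression step, here is a numpy sketch of method-of-snapshots POD, which replaces the SVD of a tall N-by-k basis matrix with an eigendecomposition of the small k-by-k Gram matrix. This is only the textbook trick, not the paper's online algorithm (which additionally avoids all operations scaling with the FOM dimension N); the sizes and tolerance are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
N, k = 5000, 40                     # FOM dimension, enriched-basis size (toy values)
Z = rng.standard_normal((N, 10)) @ rng.standard_normal((10, k))  # rank-10 enriched basis

# Method of snapshots: eigendecompose the small k x k Gram matrix
# instead of factoring the N x k matrix directly.
G = Z.T @ Z
w, W = np.linalg.eigh(G)            # ascending eigenvalues
order = np.argsort(w)[::-1]
w, W = w[order], W[:, order]

# Truncate to the modes capturing 99.99% of the snapshot energy.
energy = np.cumsum(w) / np.sum(w)
r = int(np.searchsorted(energy, 0.9999)) + 1

# Recover the compressed orthonormal basis (columns are POD modes).
V = Z @ W[:, :r] / np.sqrt(w[:r])
```

Since the eigenvectors of `Z.T @ Z` are the right singular vectors of `Z` and its eigenvalues are the squared singular values, `V` coincides (up to sign) with the leading left singular vectors, so the dense eigen-work is confined to the small k-dimensional problem.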