Mert Pilanci
Assistant Professor of Electrical Engineering
Bio
Mert Pilanci is an assistant professor of Electrical Engineering at Stanford University. He received his Ph.D. in Electrical Engineering and Computer Science from UC Berkeley in 2016. Prior to joining Stanford, he was an assistant professor of Electrical Engineering and Computer Science at the University of Michigan. In 2017, he was a Math+X postdoctoral fellow working with Emmanuel Candès at Stanford University. His research interests are in large scale machine learning, optimization, and information theory.
Honors & Awards
- CAREER Award, National Science Foundation (2023)
- Early Career Award, U.S. Army Research Office (2021)
- International Conference on Acoustics, Speech, & Signal Processing (ICASSP) Best Paper Award, IEEE (2021)
- Best Poster Award, Conference on the Mathematical Theory of Deep Neural Networks (2020)
- Faculty Research Award, Facebook (2020)
- Faculty Research Award, Adobe (2019)
- Terman Faculty Fellow, Stanford University (2018)
- Math+X Postdoctoral Fellowship, Simons Foundation (2016)
- PhD Fellowship, Microsoft Research (2013)
- Signal Processing and Communications Applications Conference Best Paper Award, IEEE (2010)
Program Affiliations
- Stanford SystemX Alliance
Professional Education
- Postdoctoral Fellow, Stanford University (2017)
- PhD, University of California, Berkeley, Electrical Engineering and Computer Science (2016)
Current Research and Scholarly Interests
Dr. Pilanci's research interests include neural networks, machine learning, mathematical optimization, information theory and signal processing.
2024-25 Courses
- Convex Optimization II, CME 364B, EE 364B (Spr)
- Introductory Research Seminar in Electrical Engineering, EE 301 (Aut)
Independent Studies (6)
- Curricular Practical Training, CME 390 (Aut, Win, Spr, Sum)
- Directed Studies in Applied Physics, APPPHYS 290 (Aut, Win, Spr, Sum)
- Master's Thesis and Thesis Research, EE 300 (Aut, Win, Spr, Sum)
- Ph.D. Research, CME 400 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering, EE 391 (Aut, Win, Spr, Sum)
- Special Studies or Projects in Electrical Engineering, EE 390 (Aut, Win, Spr, Sum)
Prior Year Courses
2023-24 Courses
- Convex Optimization II, CME 364B, EE 364B (Spr)
- Introductory Research Seminar in Electrical Engineering, EE 301 (Aut)
- Signal Processing for Machine Learning, EE 269 (Aut)
2022-23 Courses
- Convex Optimization II, CME 364B, EE 364B (Spr)
- Introductory Research Seminar in Electrical Engineering, EE 301 (Aut)
- Signal Processing for Machine Learning, EE 269 (Win)
2021-22 Courses
- Convex Optimization II, CME 364B, EE 364B (Spr)
- Introductory Research Seminar in Electrical Engineering, EE 301 (Aut)
- Signal Processing for Machine Learning, EE 269 (Aut)
Stanford Advisees
- Doctoral Dissertation Reader (AC): Amirhossein Afsharrad, Felipe Areces, Gary Cheng, Ibrahim Gulluk, Parth Nobel, Giray Ogut, Ryan Po, Pratik Rathore, Sunil Sudhakaran, Itamar Terem, Alan Yang, Emi Zeger, Fangzhao Zhang, Orr Zohar
- Orals Chair: James Yang
- Doctoral Dissertation Advisor (AC): Rajat Dwaraknath, Aaron Mishkin, Yifei Wang
- Master's Program Advisor: Abdulaziz Alharbi, Connor Ding, Christian Femrite, Abhiram Gorle, Emily Kuo, Marc Moussa Nasser, Yifan Pan, Ryan Udell
- Doctoral (Program): Amirhossein Afsharrad, Gary Cheng, Dorsa Fathollahi, Ibrahim Gulluk, Kasper Johansson, Sungyoon Kim, Erin Kunz, Aaron Mishkin, Yifei Wang
- Postdoctoral Research Mentor: Sara Fridovich-Keil
All Publications
- Boost AI Power: Data Augmentation Strategies with Unlabeled Data and Conformal Prediction, a Case in Alternative Herbal Medicine Discrimination with Electronic Nose. IEEE Sensors Journal. 2021: 1-11. DOI 10.1109/JSEN.2021.3102488
- Revealing the Structure of Deep Neural Networks via Convex Duality. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021. Web of Science ID 000683104603002
- Global Optimality Beyond Two Layers: Training Deep ReLU Networks via Convex Programs. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021. Web of Science ID 000683104603001
- Adaptive Newton Sketch: Linear-time Optimization with Quadratic Convergence and Effective Hessian Dimensionality. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2021. Web of Science ID 000683104605087
- Convex Geometry and Duality of Over-parameterized Neural Networks. 2021
- Weighted Gradient Coding with Leverage Score Sampling. IEEE. 2020: 5215–19. Web of Science ID 000615970405095
- Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory. IEEE. 2020: 1214-1221. DOI 10.1109/IEEECONF51394.2020.9443275. Web of Science ID 000681731800234
- Optimal Randomized First-Order Methods for Least-Squares Problems. JMLR-JOURNAL MACHINE LEARNING RESEARCH. 2020. Web of Science ID 000683178505066
- Global Multiclass Classification from Heterogeneous Local Models. IEEE Journal on Selected Areas in Information Theory. 2020
- Lower Bounds and a Near-Optimal Shrinkage Estimator for Least Squares using Random Projections. IEEE Journal on Selected Areas in Information Theory. 2020
- Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory. arXiv:2002.10674. 2020
Abstract
Batch Normalization (BatchNorm) is commonly used in Convolutional Neural Networks (CNNs) to improve training speed and stability. However, there is still limited consensus on why this technique is effective. This paper uses concepts from the traditional adaptive filter domain to provide insight into the dynamics and inner workings of BatchNorm. First, we show that the convolution weight updates have natural modes whose stability and convergence speed are tied to the eigenvalues of the input autocorrelation matrices, which are controlled by BatchNorm through the convolution layers' channel-wise structure. Furthermore, our experiments demonstrate that the speed and stability benefits are distinct effects. At low learning rates, it is BatchNorm's amplification of the smallest eigenvalues that improves convergence speed, while at high learning rates, it is BatchNorm's suppression of the largest eigenvalues that ensures stability. Lastly, we prove that in the first training step, when normalization is needed most, BatchNorm satisfies the same optimization as Normalized Least Mean Square (NLMS), while it continues to approximate this condition in subsequent steps. The analyses provided in this paper lay the groundwork for gaining further insight into the operation of modern neural network structures using adaptive filter theory.
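As an illustration of the adaptive-filter viewpoint in this abstract, the following minimal NumPy sketch implements the Normalized LMS (NLMS) update that the paper relates to BatchNorm's first-step behavior. The toy system-identification setup, function name, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One Normalized LMS update: the step is scaled by the inverse input
    energy, which is the normalization the paper relates to BatchNorm's
    behavior in the first training step."""
    e = d - w @ x                                   # prediction error
    return w + mu * e * x / (x @ x + eps), e

# Toy system identification with poorly scaled inputs (illustrative only).
rng = np.random.default_rng(0)
w_true = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(2000):
    x = 10.0 * rng.standard_normal(8)               # large input energy
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, _ = nlms_step(w, x, d)
print("weight estimation error:", np.linalg.norm(w - w_true))
```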
- Distributed Sketching Methods for Privacy Preserving Regression. arXiv:2002.06538. 2020
Abstract
In this work, we study distributed sketching methods for large scale regression problems. We leverage multiple randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems. We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches. We consider random matrices including Gaussian, randomized Hadamard, uniform sampling and leverage score sampling in the distributed setting. Moreover, we propose a hybrid approach combining sampling and fast random projections for better computational efficiency. We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
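A minimal sketch of the parameter-averaging idea described above: each simulated worker solves an independently sketched least-squares problem and the solutions are averaged. The restriction to Gaussian sketches, the function name, and the problem sizes are illustrative assumptions rather than the paper's full method.

```python
import numpy as np

def averaged_sketched_lsq(A, b, num_workers=10, sketch_rows=200, seed=0):
    """Approximate argmin ||Ax - b||^2 by averaging the solutions of
    independently Gaussian-sketched subproblems, one per (simulated) worker."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    solutions = []
    for _ in range(num_workers):
        S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
        x_k, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        solutions.append(x_k)
    return np.mean(solutions, axis=0)

# Toy comparison against the exact least-squares solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((5000, 50))
b = A @ rng.standard_normal(50) + 0.1 * rng.standard_normal(5000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_avg = averaged_sketched_lsq(A, b)
print("relative error:", np.linalg.norm(x_avg - x_exact) / np.linalg.norm(x_exact))
```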
- Distributed Averaging Methods for Randomized Second Order Optimization. arXiv:2002.06540. 2020
Abstract
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a significant bottleneck. We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian. Existing works do not take the bias of the estimators into consideration, which limits their application to massively parallel computation. We provide closed-form formulas for regularization parameters and step sizes that provably minimize the bias for sketched Newton directions. We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems with varying worker resources. Additionally, we demonstrate the implications of our theoretical findings via large scale experiments performed on a serverless computing platform.
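A rough sketch, for a regularized least-squares objective only, of averaging Newton directions computed from independently sketched Hessians. The bias-correcting regularization parameters and step sizes that the paper derives in closed form are omitted here, and all names and defaults are illustrative.

```python
import numpy as np

def averaged_sketched_newton(A, b, lam=1e-3, num_workers=8, sketch_rows=300,
                             iters=10, seed=0):
    """Minimize 0.5*||Ax - b||^2 + 0.5*lam*||x||^2 with Newton-type steps whose
    Hessian is estimated by averaging independently sketched Hessians."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        g = A.T @ (A @ x - b) + lam * x               # exact gradient
        directions = []
        for _ in range(num_workers):
            S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
            SA = S @ A
            H_k = SA.T @ SA + lam * np.eye(d)         # worker's sketched Hessian
            directions.append(np.linalg.solve(H_k, g))
        x = x - np.mean(directions, axis=0)           # averaged Newton direction
    return x
```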
- Limiting Spectrum of Randomized Hadamard Transform and Optimal Iterative Sketching Methods. International Conference on Machine Learning (ICML). 2020
Abstract
We provide an exact analysis of the limiting spectrum of matrices randomly projected either with the subsampled randomized Hadamard transform, or truncated Haar matrices. We characterize this limiting distribution through its Stieltjes transform, a classical object in random matrix theory, and compute the first and second inverse moments. We leverage the limiting spectrum and asymptotic freeness of random matrices to obtain an exact analysis of iterative sketching methods for solving least squares problems. Our results also yield optimal step-sizes and convergence rates in terms of simple closed-form expressions. Moreover, we show that the convergence rate for Haar and randomized Hadamard matrices are identical, and uniformly improve upon Gaussian random projections. The developed techniques and formulas can be applied to a plethora of randomized algorithms that employ fast randomized Hadamard dimension reduction.
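A simplified NumPy sketch of the iterative sketching scheme analyzed above, with a Gaussian embedding standing in for the subsampled randomized Hadamard or Haar embeddings studied in the paper; the unit step size is a placeholder for the optimal closed-form step sizes derived there.

```python
import numpy as np

def iterative_sketching_lsq(A, b, sketch_rows=400, iters=20, step=1.0, seed=0):
    """Iterative sketching for min ||Ax - b||^2: a single sketched Hessian
    (S A)^T (S A) is formed once and then pre-conditions exact-gradient steps."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    SA = S @ A
    H_s = SA.T @ SA                                   # sketched Hessian, computed once
    x = np.zeros(d)
    for _ in range(iters):
        g = A.T @ (A @ x - b)                         # exact gradient
        x = x - step * np.linalg.solve(H_s, g)        # sketch-pre-conditioned step
    return x
```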
- Weighted Gradient Coding with Leverage Score Sampling. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 2020
- Convex Geometry of Two-Layer ReLU Networks: Implicit Autoencoding and Interpretable Models. 23rd International Conference on Artificial Intelligence and Statistics (AISTATS). 2020
- Optimal Randomized First-Order Methods for Least-Squares Problems. arXiv:2002.09488. 2020
Abstract
We provide an exact analysis of a class of randomized algorithms for solving overdetermined least-squares problems. We consider first-order methods, where the gradients are pre-conditioned by an approximation of the Hessian, based on a subspace embedding of the data matrix. This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems. We focus on two classical embeddings, namely, Gaussian projections and subsampled randomized Hadamard transforms (SRHT). Our key technical innovation is the derivation of the limiting spectral density of SRHT embeddings. Leveraging this novel result, we derive the family of normalized orthogonal polynomials of the SRHT density and we find the optimal pre-conditioned first-order method along with its rate of convergence. Our analysis of Gaussian embeddings proceeds similarly, and leverages classical random matrix theory results. In particular, we show that for a given sketch size, SRHT embeddings exhibits a faster rate of convergence than Gaussian embeddings. Then, we propose a new algorithm by optimizing the computational complexity over the choice of the sketching dimension. To our knowledge, our resulting algorithm yields the best known complexity for solving least-squares problems with no condition number dependence.
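A sketch of the kind of pre-conditioned first-order method the abstract refers to: gradient descent with heavy-ball momentum on ||Ax - b||^2, pre-conditioned by the R factor of a QR decomposition of the sketched matrix. A Gaussian embedding is used instead of SRHT for brevity, and the step and momentum values are placeholders rather than the optimal ones derived in the paper.

```python
import numpy as np

def preconditioned_heavy_ball(A, b, sketch_rows=400, iters=30,
                              step=1.0, momentum=0.2, seed=0):
    """Gradient descent with heavy-ball momentum on ||Ax - b||^2 where the
    gradient is pre-conditioned by (R^T R)^{-1}, R being the triangular factor
    of a QR decomposition of the sketched matrix S A."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
    _, R = np.linalg.qr(S @ A)                            # pre-conditioner factor
    x = np.zeros(d)
    x_prev = x.copy()
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        p = np.linalg.solve(R, np.linalg.solve(R.T, g))   # apply (R^T R)^{-1}
        x, x_prev = x - step * p + momentum * (x - x_prev), x
    return x
```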
- Convex Duality of Deep Neural Networks. arXiv:2002.09773. 2020
Abstract
We study regularized deep neural networks and introduce an analytic framework to characterize the structure of the hidden layers. We show that a set of optimal hidden layer weight matrices for a norm regularized deep neural network training problem can be explicitly found as the extreme points of a convex set. For two-layer linear networks, we first formulate a convex dual program and prove that strong duality holds. We then extend our derivations to prove that strong duality also holds for certain deep networks. In particular, for linear deep networks, we show that each optimal layer weight matrix is rank-one and aligns with the previous layers when the network output is scalar. We also extend our analysis to the vector outputs and other convex loss functions. More importantly, we show that the same characterization can also be applied to deep ReLU networks with rank-one inputs, where we prove that strong duality still holds and optimal layer weight matrices are rank-one for scalar output networks. As a corollary, we prove that norm regularized deep ReLU networks yield spline interpolation for one-dimensional datasets which was previously known only for two-layer networks. We then verify our theoretical results via several numerical experiments.
- Convex Geometry and Duality of Over-parameterized Neural Networks. arXiv:2002.11219. 2020
Abstract
We develop a convex analytic framework for ReLU neural networks which elucidates the inner workings of hidden neurons and their function space characteristics. We show that neural networks with rectified linear units act as convex regularizers, where simple solutions are encouraged via extreme points of a certain convex set. For one dimensional regression and classification, as well as rank-one data matrices, we prove that finite two-layer ReLU networks with norm regularization yield linear spline interpolation. We characterize the classification decision regions in terms of a closed form kernel matrix and minimum L1 norm solutions. This is in contrast to Neural Tangent Kernel which is unable to explain neural network predictions with finitely many neurons. Our convex geometric description also provides intuitive explanations of hidden neurons as auto-encoders. In higher dimensions, we show that the training problem for two-layer networks can be cast as a convex optimization problem with infinitely many constraints. We then provide a family of convex relaxations to approximate the solution, and a cutting-plane algorithm to improve the relaxations. We derive conditions for the exactness of the relaxations and provide simple closed form formulas for the optimal neural network weights in certain cases. We also establish a connection to L0-L1 equivalence for neural networks analogous to the minimal cardinality solutions in compressed sensing. Extensive experimental results show that the proposed approach yields interpretable and accurate models.
- Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks. International Conference on Machine Learning (ICML). 2020
Abstract
We develop exact representations of two layer neural networks with rectified linear units in terms of a single convex program with number of variables polynomial in the number of training samples and number of hidden neurons. Our theory utilizes semi-infinite duality and minimum norm regularization. Moreover, we show that certain standard multi-layer convolutional neural networks are equivalent to L1 regularized linear models in a polynomial sized discrete Fourier feature space. We also introduce exact semi-definite programming representations of convolutional and fully connected linear multi-layer networks which are polynomial size in both the sample size and dimension.
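The convex formulation described above can be prototyped with CVXPY by sampling hyperplane-arrangement (ReLU activation) patterns rather than enumerating them all; the sketch below is such a sampled, scaled-down illustration with a squared loss and hypothetical names and defaults, not the paper's exact polynomial-time construction.

```python
import numpy as np
import cvxpy as cp

def two_layer_relu_convex(X, y, beta=1e-3, num_samples=50, seed=0):
    """Sampled version of a convex program for two-layer ReLU training: sample
    ReLU activation patterns D_i from random hyperplanes, then solve a
    group-norm-regularized convex problem over per-pattern weight pairs."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    # Activation patterns 1[X g >= 0] for random directions g (a sampled subset
    # of the full hyperplane arrangement).
    G = rng.standard_normal((d, num_samples))
    D = np.unique((X @ G >= 0).astype(float), axis=1)      # n x P pattern matrix
    P = D.shape[1]

    v = [cp.Variable(d) for _ in range(P)]
    w = [cp.Variable(d) for _ in range(P)]
    yhat = sum(cp.multiply(D[:, i], X @ (v[i] - w[i])) for i in range(P))
    reg = sum(cp.norm(v[i], 2) + cp.norm(w[i], 2) for i in range(P))
    constraints = []
    for i in range(P):
        s = 2.0 * D[:, i] - 1.0                             # +/-1 sign pattern
        constraints += [cp.multiply(s, X @ v[i]) >= 0,
                        cp.multiply(s, X @ w[i]) >= 0]
    problem = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(yhat - y) + beta * reg),
                         constraints)
    problem.solve()
    return problem.value

# Tiny illustration: data generated by a single ReLU neuron.
rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3))
y = np.maximum(X @ rng.standard_normal(3), 0.0)
print("optimal objective value:", two_layer_relu_convex(X, y))
```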
- Convex Geometry of Two-Layer ReLU Networks: Implicit Autoencoding and Interpretable Models. ADDISON-WESLEY PUBL CO. 2020: 4024–32. Web of Science ID 000559931300084
- High-Dimensional Optimization in Adaptive Random Subspaces. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019. Web of Science ID 000535866902047
- Convex Optimization for Shallow Neural Networks. IEEE. 2019: 79–83. Web of Science ID 000535355700013
- Distributed Black-Box Optimization via Error Correcting Codes. IEEE. 2019: 246–52. Web of Science ID 000535355700035
- Straggler Resilient Serverless Computing Based on Polar Codes. IEEE. 2019: 276–83. Web of Science ID 000535355700039
- Faster Least Squares Optimization. arXiv:1911.02675. 2019
Abstract
We investigate randomized methods for solving overdetermined linear least-squares problems, where the Hessian is approximated based on a random projection of the data matrix. We consider a random subspace embedding which is either drawn at the beginning of the algorithm and fixed throughout, or, refreshed at each iteration. For a broad class of random matrices, we provide an exact finite-time analysis of the refreshed embeddings method, an exact asymptotic analysis of the fixed embedding method, as well as a non-asymptotic analysis, with and without momentum acceleration. Surprisingly, we show that, for Gaussian matrices, the refreshed sketching method with no momentum yields the same convergence rate as the fixed embedding method with momentum. Furthermore, we prove that momentum does not accelerate the refreshed embeddings method. Thus, picking the accelerated, fixed embedding method as the algorithm of choice among the methods we consider, we propose a new algorithm by optimizing the computational complexity over the choice of the sketching dimension. Our resulting algorithm yields a smaller complexity compared to current state-of-the-art randomized pre-conditioning methods. In particular, as the sample size grows, the resulting complexity becomes sub-linear in the problem dimensions. We validate numerically our guarantees on large sample datasets, both for Gaussian and SRHT embeddings.
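A minimal sketch of the "refreshed embedding" variant discussed above: a new Gaussian sketch is drawn at every iteration and its sketched Hessian pre-conditions the exact gradient. The fixed-embedding and momentum-accelerated variants compared in the paper are omitted, and the names and defaults are illustrative.

```python
import numpy as np

def refreshed_sketch_lsq(A, b, sketch_rows=400, iters=20, seed=0):
    """Refreshed-embedding iteration for min ||Ax - b||^2: a fresh random sketch
    is drawn at every step and its sketched Hessian pre-conditions the gradient."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(iters):
        S = rng.standard_normal((sketch_rows, n)) / np.sqrt(sketch_rows)
        SA = S @ A
        g = A.T @ (A @ x - b)                     # exact gradient
        x = x - np.linalg.solve(SA.T @ SA, g)     # step with refreshed sketched Hessian
    return x
```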
- Straggler Resilient Serverless Computing Based on Polar Codes. 57th Annual Allerton Conference on Communication, Control, and Computing. 2019
- Distributed Black-Box Optimization via Error Correcting Codes. 57th Annual Allerton Conference on Communication, Control, and Computing. 2019
- High-Dimensional Optimization in Adaptive Random Subspaces. Neural Information Processing Systems (NeurIPS). 2019
- Fast and Robust Solution Techniques for Large Scale Linear System of Equations. IEEE. 2019. Web of Science ID 000518994300160
- Convex Relaxations of Convolutional Neural Nets. IEEE. 2019: 4928–32. Web of Science ID 000482554005033
- Iterative Hessian Sketch with Momentum. IEEE. 2019: 7470–74. Web of Science ID 000482554007141
- Newton Sketch: A Near Linear-Time Optimization Algorithm with Linear-Quadratic Convergence. SIAM JOURNAL ON OPTIMIZATION. 2017; 27 (1): 205–45. DOI 10.1137/15M1021106. Web of Science ID 000404178500010
- Randomized sketches for kernels: Fast and optimal non-parametric regression. Annals of Statistics. 2017
- Iterative Hessian sketch: Fast and accurate solution approximation for constrained least-squares. Journal of Machine Learning Research (JMLR). 2016
- Sparse learning via Boolean relaxations. Mathematical Programming. 2015
- Randomized Sketches of Convex Programs With Sharp Guarantees. IEEE Transactions on Information Theory. 2015
- Structured least squares problems and robust estimators. IEEE Transactions on Signal Processing. 2010