Mert Pilanci
Assistant Professor of Electrical Engineering
Bio
Mert Pilanci is an assistant professor of Electrical Engineering at Stanford University. He received his Ph.D. in Electrical Engineering and Computer Science from UC Berkeley in 2016. Prior to joining Stanford, he was an assistant professor of Electrical Engineering and Computer Science at the University of Michigan. In 2017, he was a Math+X postdoctoral fellow working with Emmanuel Candès at Stanford University. His research interests are in large scale machine learning, optimization, and information theory.
Program Affiliations

Stanford SystemX Alliance
Professional Education

PhD, University of California, Berkeley, Electrical Engineering and Computer Science (2016)
2020-21 Courses

Convex Optimization II - CME 364B, EE 364B (Spr)
Large Scale Matrix Computation, Optimization and Learning - EE 270 (Win)
Signal Processing for Machine Learning - EE 269 (Aut)

Independent Studies (4)

Master's Thesis and Thesis Research - EE 300 (Win)
Special Studies and Reports in Electrical Engineering - EE 391 (Aut, Win, Spr, Sum)
Special Studies or Projects in Electrical Engineering - EE 190 (Spr)
Special Studies or Projects in Electrical Engineering - EE 390 (Win, Spr)

Prior Year Courses
2019-20 Courses

Convex Optimization II - CME 364B, EE 364B (Spr)
Large Scale Matrix Computation, Optimization and Learning - EE 270 (Win)
Signal Processing for Machine Learning - EE 269 (Aut)

2018-19 Courses

Convex Optimization II - CME 364B, EE 364B (Spr)
Signal Processing for Machine Learning - EE 269 (Win)
Stanford Advisees

Doctoral Dissertation Reader (AC)
Akshay Agrawal, Surin Ahn, Hilal Asi, Tavor Baharav, Vikranth Dwaracherla, Albert Gural, Soheil Hor, Qijia Jiang, Jongho Kim, Qianyun Lu, Bennet Meyers, Batu Ozturkler, Khaled Saab, Arda Sahiner, Ruiyang Song
Doctoral Dissertation Advisor (AC)
Burak Bartan, Tolga Ergen, Jonathan Lacotte, Rajarshi Saha 
Master's Program Advisor
Jorge Cordero Enriquez, Mingyi Lu, Matthew Pauly, Varun Srivastava, Damir Vrabac 
Doctoral (Program)
Wei-Ning Chen, Gary Cheng, Tolga Ergen, Erin Kunz, Arda Sahiner, Siddharth Tanwar, Yifei Wang
All Publications

Convex Geometry of Two-Layer ReLU Networks: Implicit Autoencoding and Interpretable Models
ADDISON-WESLEY PUBL CO. 2020: 4024–32
View details for Web of Science ID 000559931300084

Separating the Effects of Batch Normalization on CNN Training Speed and Stability Using Classical Adaptive Filter Theory
arXiv:2002.10674.
2020
Abstract
Batch Normalization (BatchNorm) is commonly used in Convolutional Neural Networks (CNNs) to improve training speed and stability. However, there is still limited consensus on why this technique is effective. This paper uses concepts from the traditional adaptive filter domain to provide insight into the dynamics and inner workings of BatchNorm. First, we show that the convolution weight updates have natural modes whose stability and convergence speed are tied to the eigenvalues of the input autocorrelation matrices, which are controlled by BatchNorm through the convolution layers' channel-wise structure. Furthermore, our experiments demonstrate that the speed and stability benefits are distinct effects. At low learning rates, it is BatchNorm's amplification of the smallest eigenvalues that improves convergence speed, while at high learning rates, it is BatchNorm's suppression of the largest eigenvalues that ensures stability. Lastly, we prove that in the first training step, when normalization is needed most, BatchNorm satisfies the same optimization as Normalized Least Mean Square (NLMS), while it continues to approximate this condition in subsequent steps. The analyses provided in this paper lay the groundwork for gaining further insight into the operation of modern neural network structures using adaptive filter theory.
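As a rough illustration of the NLMS connection described above (my own sketch, not code from the paper), the NLMS update divides each correction by the instantaneous input energy, which equalizes the convergence of the modes tied to the input autocorrelation eigenvalues:

```python
import numpy as np

def nlms_step(w, x, d, mu=0.5, eps=1e-8):
    """One Normalized LMS update: the step is divided by the input
    energy ||x||^2, equalizing convergence across the eigenvalues of
    the input autocorrelation matrix."""
    e = d - w @ x                          # a-priori estimation error
    return w + mu * e * x / (x @ x + eps)  # energy-normalized correction

rng = np.random.default_rng(0)
w_true = rng.standard_normal(4)            # unknown filter to identify
w = np.zeros(4)
for _ in range(2000):
    x = 10.0 * rng.standard_normal(4)      # badly scaled input signal
    w = nlms_step(w, x, w_true @ x)        # adapt toward w_true
err = np.linalg.norm(w - w_true)
```

Despite the large input scale, the normalized step converges at a rate independent of that scale, which is the stabilizing behavior the abstract attributes to BatchNorm at high learning rates.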

Distributed Sketching Methods for Privacy Preserving Regression
arXiv:2002.06538.
2020
Abstract
In this work, we study distributed sketching methods for large scale regression problems. We leverage multiple randomized sketches for reducing the problem dimensions as well as preserving privacy and improving straggler resilience in asynchronous distributed systems. We derive novel approximation guarantees for classical sketching methods and analyze the accuracy of parameter averaging for distributed sketches. We consider random matrices including Gaussian, randomized Hadamard, uniform sampling and leverage score sampling in the distributed setting. Moreover, we propose a hybrid approach combining sampling and fast random projections for better computational efficiency. We illustrate the performance of distributed sketches in a serverless computing platform with large scale experiments.
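A minimal sketch of the sketch-and-average idea described above, with assumed details (Gaussian sketches, a plain server-side average; not the paper's code): each worker sees only a randomly compressed version of the data, solves a small regression, and the server averages the estimates.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, q, m = 2000, 10, 20, 200             # samples, dim, workers, sketch size
A = rng.standard_normal((n, d))
x_star = rng.standard_normal(d)
b = A @ x_star + 0.5 * rng.standard_normal(n)   # noisy observations

def worker_solve(A, b, m, rng):
    # Gaussian sketch: compresses the n rows to m random mixtures,
    # so no worker observes individual data samples directly
    S = rng.standard_normal((m, A.shape[0])) / np.sqrt(m)
    return np.linalg.lstsq(S @ A, S @ b, rcond=None)[0]

# the server averages the q independent sketched estimates
x_avg = np.mean([worker_solve(A, b, m, rng) for _ in range(q)], axis=0)
rel_err = np.linalg.norm(x_avg - x_star) / np.linalg.norm(x_star)
```

Averaging over workers reduces the variance introduced by each random sketch, which is the accuracy effect the abstract analyzes.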

Distributed Averaging Methods for Randomized Second Order Optimization
arXiv:2002.06540.
2020
Abstract
We consider distributed optimization problems where forming the Hessian is computationally challenging and communication is a significant bottleneck. We develop unbiased parameter averaging methods for randomized second order optimization that employ sampling and sketching of the Hessian. Existing works do not take the bias of the estimators into consideration, which limits their application to massively parallel computation. We provide closed-form formulas for regularization parameters and step sizes that provably minimize the bias for sketched Newton directions. We also extend the framework of second order averaging methods to introduce an unbiased distributed optimization framework for heterogeneous computing systems with varying worker resources. Additionally, we demonstrate the implications of our theoretical findings via large scale experiments performed on a serverless computing platform.

Limiting Spectrum of Randomized Hadamard Transform and Optimal Iterative Sketching Methods
International Conference on Machine Learning (ICML), 2020.
2020
Abstract
We provide an exact analysis of the limiting spectrum of matrices randomly projected either with the subsampled randomized Hadamard transform, or truncated Haar matrices. We characterize this limiting distribution through its Stieltjes transform, a classical object in random matrix theory, and compute the first and second inverse moments. We leverage the limiting spectrum and asymptotic freeness of random matrices to obtain an exact analysis of iterative sketching methods for solving least squares problems. Our results also yield optimal step sizes and convergence rates in terms of simple closed-form expressions. Moreover, we show that the convergence rates for Haar and randomized Hadamard matrices are identical, and uniformly improve upon Gaussian random projections. The developed techniques and formulas can be applied to a plethora of randomized algorithms that employ fast randomized Hadamard dimension reduction.
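The iterative sketching scheme analyzed above can be sketched as follows (a simplified illustration: Gaussian embeddings stand in for the SRHT/Haar embeddings, and a unit step size replaces the optimal step sizes derived in the paper). Each iteration draws a fresh embedding, approximates the Hessian A^T A from the sketched matrix, and preconditions the exact gradient with it.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m = 4000, 20, 400                    # tall system, sketch size m >> d
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]   # reference least-squares solution

x = np.zeros(d)
for _ in range(40):
    S = rng.standard_normal((m, n)) / np.sqrt(m)  # refreshed random embedding
    SA = S @ A                                    # sketched data matrix
    g = A.T @ (A @ x - b)                         # exact gradient of 0.5||Ax-b||^2
    x = x - np.linalg.solve(SA.T @ SA, g)         # step preconditioned by sketched Hessian
err = np.linalg.norm(x - x_ls)
```

Each iteration factors only an m-by-d sketched system rather than the full n-by-d one, which is where the computational savings come from.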
Weighted Gradient Coding with Leverage Score Sampling
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020.

Convex Geometry of Two-Layer ReLU Networks: Implicit Autoencoding and Interpretable Models
23rd International Conference on Artificial Intelligence and Statistics (AISTATS), 2020.

Optimal Randomized First-Order Methods for Least-Squares Problems
arXiv:2002.09488.
2020
Abstract
We provide an exact analysis of a class of randomized algorithms for solving overdetermined least-squares problems. We consider first-order methods, where the gradients are preconditioned by an approximation of the Hessian, based on a subspace embedding of the data matrix. This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems. We focus on two classical embeddings, namely, Gaussian projections and subsampled randomized Hadamard transforms (SRHT). Our key technical innovation is the derivation of the limiting spectral density of SRHT embeddings. Leveraging this novel result, we derive the family of normalized orthogonal polynomials of the SRHT density and we find the optimal preconditioned first-order method along with its rate of convergence. Our analysis of Gaussian embeddings proceeds similarly, and leverages classical random matrix theory results. In particular, we show that for a given sketch size, SRHT embeddings exhibit a faster rate of convergence than Gaussian embeddings. Then, we propose a new algorithm by optimizing the computational complexity over the choice of the sketching dimension. To our knowledge, our resulting algorithm yields the best known complexity for solving least-squares problems with no condition number dependence.
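A hedged illustration of the fixed-embedding setting above (my own construction: a Gaussian embedding in place of the SRHT, and plain heavy-ball momentum with hand-picked parameters in place of the paper's optimal polynomial-based method). One sketch of A defines a preconditioner that removes the condition number from the iteration.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 4000, 20, 400
A = rng.standard_normal((n, d)) * np.linspace(1.0, 10.0, d)  # ill-conditioned columns
b = rng.standard_normal(n)
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

S = rng.standard_normal((m, n)) / np.sqrt(m)   # one fixed subspace embedding
P = np.linalg.inv((S @ A).T @ (S @ A))         # preconditioner approximating (A^T A)^{-1}

x, x_prev = np.zeros(d), np.zeros(d)
alpha, beta = 0.6, 0.1                         # hand-picked step size and momentum
for _ in range(60):
    g = A.T @ (A @ x - b)                      # exact gradient
    x, x_prev = x - alpha * (P @ g) + beta * (x - x_prev), x
err = np.linalg.norm(x - x_ls)
```

Because the preconditioned Hessian has eigenvalues clustered near one, the convergence rate depends on the sketch size rather than on the conditioning of A.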
Convex Duality of Deep Neural Networks
arXiv:2002.09773.
2020
Abstract
We study regularized deep neural networks and introduce an analytic framework to characterize the structure of the hidden layers. We show that a set of optimal hidden layer weight matrices for a norm regularized deep neural network training problem can be explicitly found as the extreme points of a convex set. For two-layer linear networks, we first formulate a convex dual program and prove that strong duality holds. We then extend our derivations to prove that strong duality also holds for certain deep networks. In particular, for linear deep networks, we show that each optimal layer weight matrix is rank-one and aligns with the previous layers when the network output is scalar. We also extend our analysis to the vector outputs and other convex loss functions. More importantly, we show that the same characterization can also be applied to deep ReLU networks with rank-one inputs, where we prove that strong duality still holds and optimal layer weight matrices are rank-one for scalar output networks. As a corollary, we prove that norm regularized deep ReLU networks yield spline interpolation for one-dimensional datasets which was previously known only for two-layer networks. We then verify our theoretical results via several numerical experiments.
Convex Geometry and Duality of Overparameterized Neural Networks
arXiv:2002.11219.
2020
Abstract
We develop a convex analytic framework for ReLU neural networks which elucidates the inner workings of hidden neurons and their function space characteristics. We show that neural networks with rectified linear units act as convex regularizers, where simple solutions are encouraged via extreme points of a certain convex set. For one dimensional regression and classification, as well as rank-one data matrices, we prove that finite two-layer ReLU networks with norm regularization yield linear spline interpolation. We characterize the classification decision regions in terms of a closed-form kernel matrix and minimum L1 norm solutions. This is in contrast to Neural Tangent Kernel which is unable to explain neural network predictions with finitely many neurons. Our convex geometric description also provides intuitive explanations of hidden neurons as autoencoders. In higher dimensions, we show that the training problem for two-layer networks can be cast as a convex optimization problem with infinitely many constraints. We then provide a family of convex relaxations to approximate the solution, and a cutting-plane algorithm to improve the relaxations. We derive conditions for the exactness of the relaxations and provide simple closed-form formulas for the optimal neural network weights in certain cases. We also establish a connection to ℓ0-ℓ1 equivalence for neural networks analogous to the minimal cardinality solutions in compressed sensing. Extensive experimental results show that the proposed approach yields interpretable and accurate models.
Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-Layer Networks
International Conference on Machine Learning (ICML), 2020.
2020
Abstract
We develop exact representations of two-layer neural networks with rectified linear units in terms of a single convex program with number of variables polynomial in the number of training samples and number of hidden neurons. Our theory utilizes semi-infinite duality and minimum norm regularization. Moreover, we show that certain standard multilayer convolutional neural networks are equivalent to L1 regularized linear models in a polynomial sized discrete Fourier feature space. We also introduce exact semidefinite programming representations of convolutional and fully connected linear multilayer networks which are polynomial size in both the sample size and dimension.
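The polynomial size of the convex program rests on a counting fact: the data matrix admits only polynomially many distinct ReLU activation patterns. A small illustrative sketch of that enumeration step (my construction following the abstract's idea; the convex program itself is not solved here):

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
n, d = 10, 2
X = rng.standard_normal((n, d))            # data matrix: n points in R^d

# Each hidden ReLU neuron with weights u induces an activation pattern
# 1[X u >= 0]; sample many random neurons and collect the distinct patterns.
patterns = {tuple((X @ rng.standard_normal(d) >= 0).astype(int))
            for _ in range(20000)}

# The number of such patterns is bounded by the number of regions of a
# central hyperplane arrangement: 2 * sum_{k<d} C(n-1, k), polynomial in n
# for fixed d, which keeps the convex program polynomial-sized.
bound = 2 * sum(comb(n - 1, k) for k in range(d))
```

For these parameters (n = 10, d = 2) the bound is 20 regions, so the convex reformulation needs one variable group per pattern rather than a search over continuous hidden weights.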
High-Dimensional Optimization in Adaptive Random Subspaces
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000535866902047

Convex Optimization for Shallow Neural Networks
IEEE. 2019: 79–83
View details for Web of Science ID 000535355700013

Distributed Black-Box Optimization via Error Correcting Codes
IEEE. 2019: 246–52
View details for Web of Science ID 000535355700035

Straggler Resilient Serverless Computing Based on Polar Codes
IEEE. 2019: 276–83
View details for Web of Science ID 000535355700039

Faster Least Squares Optimization
arXiv:1911.02675.
2019
Abstract
We investigate randomized methods for solving overdetermined linear least-squares problems, where the Hessian is approximated based on a random projection of the data matrix. We consider a random subspace embedding which is either drawn at the beginning of the algorithm and fixed throughout, or, refreshed at each iteration. For a broad class of random matrices, we provide an exact finite-time analysis of the refreshed embeddings method, an exact asymptotic analysis of the fixed embedding method, as well as a non-asymptotic analysis, with and without momentum acceleration. Surprisingly, we show that, for Gaussian matrices, the refreshed sketching method with no momentum yields the same convergence rate as the fixed embedding method with momentum. Furthermore, we prove that momentum does not accelerate the refreshed embeddings method. Thus, picking the accelerated, fixed embedding method as the algorithm of choice among the methods we consider, we propose a new algorithm by optimizing the computational complexity over the choice of the sketching dimension. Our resulting algorithm yields a smaller complexity compared to current state-of-the-art randomized preconditioning methods. In particular, as the sample size grows, the resulting complexity becomes sublinear in the problem dimensions. We validate numerically our guarantees on large sample datasets, both for Gaussian and SRHT embeddings.
Straggler Resilient Serverless Computing Based on Polar Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019.

Distributed Black-Box Optimization via Error Correcting Codes
57th Annual Allerton Conference on Communication, Control, and Computing, 2019.

High-Dimensional Optimization in Adaptive Random Subspaces
Neural Information Processing Systems (NeurIPS), 2019.

Fast and Robust Solution Techniques for Large Scale Linear System of Equations
IEEE. 2019
View details for Web of Science ID 000518994300160

CONVEX RELAXATIONS OF CONVOLUTIONAL NEURAL NETS
IEEE. 2019: 4928–32
View details for Web of Science ID 000482554005033

ITERATIVE HESSIAN SKETCH WITH MOMENTUM
IEEE. 2019: 7470–74
View details for Web of Science ID 000482554007141

NEWTON SKETCH: A NEAR LINEAR-TIME OPTIMIZATION ALGORITHM WITH LINEAR-QUADRATIC CONVERGENCE
SIAM JOURNAL ON OPTIMIZATION
2017; 27 (1): 205–45
View details for DOI 10.1137/15M1021106
View details for Web of Science ID 000404178500010