Emmanuel Candès is the Barnum-Simons Chair in Mathematics and Statistics, a professor of electrical engineering (by courtesy) and a member of the Institute for Computational and Mathematical Engineering at Stanford University. Earlier, Candès was the Ronald and Maxine Linde Professor of Applied and Computational Mathematics at the California Institute of Technology. His research interests are in computational harmonic analysis, statistics, information theory, signal processing and mathematical optimization, with applications to the imaging sciences, scientific computing and inverse problems. He received his Ph.D. in statistics from Stanford University in 1998.

Candès has received numerous awards, including the Alan T. Waterman Award, the highest honor bestowed by the National Science Foundation, which recognizes the achievements of early-career scientists. He has given over 60 plenary lectures at major international conferences, not only in mathematics and statistics but also in fields such as biomedical imaging and solid-state physics. He was elected to the National Academy of Sciences and the American Academy of Arts and Sciences in 2014.

Honors & Awards

  • George David Birkhoff Prize, American Mathematical Society (AMS) & Society for Industrial and Applied Mathematics (SIAM) (2015)
  • Fellow, American Academy of Arts and Sciences (2014)
  • Invited Plenary Address at ICM 2014, International Mathematical Union (2014)
  • Member, National Academy of Sciences (2014)
  • Prix Jean Kuntzmann, Laboratoire Jean Kuntzmann and PERSYVAL-lab (2014)
  • Dannie Heineman Prize, Academy of Sciences at Göttingen (2013)
  • Lagrange Prize in Continuous Optimization, Mathematical Optimization Society (MOS) and Society for Industrial and Applied Mathematics (SIAM) (2012)
  • Collatz Prize, International Council for Industrial and Applied Mathematics (ICIAM) (2011)
  • Simons Chair, Math + X, Simons Foundation (2011)
  • George Pólya Prize, Society for Industrial and Applied Mathematics (SIAM) (2010)
  • Information Theory Society Paper Award, Information Theory Society (2008)
  • Alan T. Waterman Award, National Science Foundation (2006)
  • James H. Wilkinson Prize in Numerical Analysis and Scientific Computing, Society for Industrial and Applied Mathematics (SIAM) (2005)
  • Best Paper Award, European Association for Signal, Speech and Image Processing (2003)
  • Young Investigator Award, Department of Energy (2002)
  • Sloan Research Fellow, Alfred P. Sloan Foundation (2001-2003)
  • Third Popov Prize in Approximation Theory, Popov Foundation (2001)
  • DRET Fellowship, École Polytechnique (1993-1997)
  • National Scholarship, École Polytechnique (1990)

Professional Education

  • PhD, Stanford University, Statistics (1998)
  • Diplôme d'Ingénieur, École Polytechnique (1993)

All Publications

  • Phase retrieval from coded diffraction patterns APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS Candes, E. J., Li, X., Soltanolkotabi, M. 2015; 39 (2): 277-299
  • Adaptive Restart for Accelerated Gradient Schemes FOUNDATIONS OF COMPUTATIONAL MATHEMATICS O'Donoghue, B., Candes, E. 2015; 15 (3): 715-732
  • Randomized Algorithms for Low-Rank Matrix Factorizations: Sharp Performance Bounds ALGORITHMICA Witten, R., Candes, E. 2015; 72 (1): 264-281
  • Phase Retrieval via Wirtinger Flow: Theory and Algorithms IEEE TRANSACTIONS ON INFORMATION THEORY Candes, E. J., Li, X., Soltanolkotabi, M. 2015; 61 (4): 1985-2007
  • Low-Rank Plus Sparse Matrix Decomposition for Accelerated Dynamic MRI with Separation of Background and Dynamic Components MAGNETIC RESONANCE IN MEDICINE Otazo, R., Candes, E., Sodickson, D. K. 2015; 73 (3): 1125-1136


    To apply the low-rank plus sparse (L+S) matrix decomposition model to reconstruct undersampled dynamic MRI as a superposition of background and dynamic components in various problems of clinical interest. The L+S model is natural to represent dynamic MRI data. Incoherence between k-t space (acquisition) and the singular vectors of L and the sparse domain of S is required to reconstruct undersampled data. Incoherence between L and S is required for robust separation of background and dynamic components. Multicoil L+S reconstruction is formulated using a convex optimization approach, where the nuclear norm is used to enforce low rank in L and the l1 norm is used to enforce sparsity in S. Feasibility of the L+S reconstruction was tested in several dynamic MRI experiments with true acceleration, including cardiac perfusion, cardiac cine, time-resolved angiography, and abdominal and breast perfusion using Cartesian and radial sampling. The L+S model increased compressibility of dynamic MRI data and thus enabled high acceleration factors. The inherent background separation improved background suppression performance compared to conventional data subtraction, which is sensitive to motion. The high acceleration and background separation enabled by L+S promise to enhance spatial and temporal resolution and to enable background suppression without the need for subtraction or modeling.

    View details for DOI 10.1002/mrm.25240

    View details for Web of Science ID 000350279900025

    View details for PubMedID 24760724
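
    The alternating shrinkage at the heart of the L+S model (singular value thresholding for the low-rank part, entrywise soft thresholding for the sparse part) can be sketched for a fully sampled matrix as follows. This is a minimal illustration under our own parameter choices, not the multicoil reconstruction used in the paper:

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    """Entrywise soft thresholding: proximal operator of the l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def lps_decompose(M, tau=1.0, lam=0.5, iters=200):
    """Alternate the two shrinkage steps to split M into L + S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)   # low-rank update against the sparse residual
        S = soft(M - L, lam)  # sparse update against the low-rank residual
    return L, S
```

    By construction of the final soft-thresholding step, the residual M - L - S is clipped entrywise to [-lam, lam], so the split accounts for the data up to the shrinkage level.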

  • Phase Retrieval via Matrix Completion SIAM REVIEW Candes, E. J., Eldar, Y. C., Strohmer, T., Voroninski, V. 2015; 57 (2): 225-251

    View details for DOI 10.1137/151005099

    View details for Web of Science ID 000354985600003

  • Solving Quadratic Equations via PhaseLift When There Are About as Many Equations as Unknowns FOUNDATIONS OF COMPUTATIONAL MATHEMATICS Candes, E. J., Li, X. 2014; 14 (5): 1017-1026
  • Towards a Mathematical Theory of Super-resolution COMMUNICATIONS ON PURE AND APPLIED MATHEMATICS Candes, E. J., Fernandez-Granda, C. 2014; 67 (6): 906-956

    View details for DOI 10.1002/cpa.21455

    View details for Web of Science ID 000333662800002

  • Robust Subspace Clustering ANNALS OF STATISTICS Soltanolkotabi, M., Elhamifar, E., Candes, E. J. 2014; 42 (2): 669-699

    View details for DOI 10.1214/13-AOS1199

    View details for Web of Science ID 000336888400014

  • Super-Resolution from Noisy Data JOURNAL OF FOURIER ANALYSIS AND APPLICATIONS Candes, E. J., Fernandez-Granda, C. 2013; 19 (6): 1229-1254
  • Unbiased Risk Estimates for Singular Value Thresholding and Spectral Estimators IEEE TRANSACTIONS ON SIGNAL PROCESSING Candes, E. J., Sing-Long, C. A., Trzasko, J. D. 2013; 61 (19): 4643-4657
  • Simple bounds for recovering low-complexity models MATHEMATICAL PROGRAMMING Candes, E., Recht, B. 2013; 141 (1-2): 577-589
  • PhaseLift: Exact and Stable Signal Recovery from Magnitude Measurements via Convex Programming COMMUNICATIONS ON PURE AND APPLIED MATHEMATICS Candes, E. J., Strohmer, T., Voroninski, V. 2013; 66 (8): 1241-1274

    View details for DOI 10.1002/cpa.21432

    View details for Web of Science ID 000319617000003

  • Single-photon sampling architecture for solid-state imaging sensors PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA Van Den Berg, E., Candes, E., Chinn, G., Levin, C., Olcott, P. D., Sing-Long, C. 2013; 110 (30): E2752-E2761


    Advances in solid-state technology have enabled the development of silicon photomultiplier sensor arrays capable of sensing individual photons. Combined with high-frequency time-to-digital converters (TDCs), this technology opens up the prospect of sensors capable of recording with high accuracy both the time and location of each detected photon. Such a capability could lead to significant improvements in imaging accuracy, especially for applications operating with low photon fluxes such as light detection and ranging and positron-emission tomography. The demands placed on on-chip readout circuitry impose stringent trade-offs between fill factor and spatiotemporal resolution, causing many contemporary designs to severely underuse the technology's full potential. Concentrating on the low photon flux setting, this paper leverages results from group testing and proposes an architecture for a highly efficient readout of pixels using only a small number of TDCs. We provide optimized design instances for various sensor parameters and compute explicit upper and lower bounds on the number of TDCs required to uniquely decode a given maximum number of simultaneous photon arrivals. To illustrate the strength of the proposed architecture, we note a typical digitization of a 60 × 60 photodiode sensor using only 142 TDCs. The design guarantees registration and unique recovery of up to four simultaneous photon arrivals using a fast decoding algorithm. By contrast, a cross-strip design requires 120 TDCs and cannot uniquely decode any simultaneous photon arrivals. Among other realistic simulations of scintillation events in clinical positron-emission tomography, the above design is shown to recover the spatiotemporal location of 99.98% of all detected photons.

    View details for DOI 10.1073/pnas.1216318110

    View details for Web of Science ID 000322112300005

    View details for PubMedID 23836643
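
    The group-testing principle behind the readout (identify which pixel fired from far fewer measurement lines than pixels) can be illustrated with the simplest binary pooling scheme. This toy sketch decodes a single photon arrival; the optimized designs in the paper guarantee unique decoding of up to four simultaneous arrivals:

```python
import numpy as np

def pooling_matrix(n):
    """Pool j collects pixel i whenever bit j of i is set."""
    bits = max(1, int(np.ceil(np.log2(n))))
    return np.array([[(i >> j) & 1 for i in range(n)] for j in range(bits)])

def decode_single(pools):
    """Read a single firing pixel's index straight off the pool outputs."""
    return int(sum(int(b) << j for j, b in enumerate(pools)))
```

    A 60-pixel line needs only 6 pools instead of 60 readout channels under this scheme; the paper's full 60 × 60 design spends 142 TDCs to additionally guarantee recovery of four simultaneous arrivals.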

  • Improving IMRT delivery efficiency with reweighted L1-minimization for inverse planning MEDICAL PHYSICS Kim, H., Becker, S., Lee, R., Lee, S., Shin, S., Candes, E., Xing, L., Li, R. 2013; 40 (7)


    Purpose: This study presents an improved technique to further simplify the fluence-map in intensity modulated radiation therapy (IMRT) inverse planning, thereby reducing plan complexity and improving delivery efficiency while maintaining plan quality. Methods: First-order total-variation (TV) minimization (min.) based on the L1-norm has been proposed to reduce the complexity of the fluence-map in IMRT by generating sparse fluence-map variations. However, with stronger dose sparing to the critical structures, the inevitable increase in fluence-map complexity can lead to inefficient dose delivery. Theoretically, L0-min. is the ideal solution to the sparse signal recovery problem, yet it is practically intractable due to the nonconvexity of its objective function. As an alternative, the authors use the iteratively reweighted L1-min. technique to incorporate the benefits of the L0-norm into the tractability of L1-min. The weight applied to each element is inversely related to the magnitude of the corresponding element and is iteratively updated by the reweighting process. The proposed penalizing process combined with TV min. further improves sparsity in the fluence-map variations, ultimately enhancing delivery efficiency. To validate the proposed method, this work compares three treatment plans obtained from quadratic min. (generally used in clinical IMRT), conventional TV min., and the proposed reweighted TV min. techniques, implemented with a large-scale L1-solver (template for first-order conic solver, TFOCS), on clinical data from five patients. Criteria such as conformation number (CN), modulation index (MI), and estimated treatment time are employed to assess the relationship between plan quality and delivery efficiency. Results: The proposed method yields simpler fluence-maps than the quadratic and conventional TV-based techniques. To attain a given CN and dose sparing to the critical organs for the five clinical cases, the proposed method reduces the number of segments by 10-15 and 30-35 relative to the TV min. and quadratic min. based plans, while MIs decrease by about 20%-30% and 40%-60% over the plans from the two existing techniques, respectively. Under these conditions, the total treatment time of plans obtained from the proposed method can be reduced by 12-30 s and 30-80 s, mainly due to much shorter multileaf collimator (MLC) traveling time in IMRT step-and-shoot delivery. Conclusions: The reweighted L1-minimization technique provides a promising solution for simplifying fluence-map variations in IMRT inverse planning. It improves delivery efficiency by reducing the number of segments and the treatment time, while maintaining plan quality in terms of target conformity and critical structure sparing.

    View details for DOI 10.1118/1.4811100

    View details for Web of Science ID 000321272200023

    View details for PubMedID 23822423
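
    The reweighting scheme described above can be sketched in its simplest setting, l1 denoising, where each weighted subproblem reduces to soft thresholding. This is an illustrative toy with parameter values of our choosing, not the TFOCS-based planner used in the paper:

```python
import numpy as np

def soft(x, t):
    """Soft thresholding with (possibly elementwise) threshold t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def reweighted_l1_denoise(y, lam=0.5, eps=0.1, rounds=5):
    """Each round solves a weighted l1 problem; weights ~ 1/|x| mimic the l0 penalty."""
    x = soft(y, lam)                  # plain l1 solution as the starting point
    for _ in range(rounds):
        w = 1.0 / (np.abs(x) + eps)   # small entries get large weights
        x = soft(y, lam * w)          # weighted soft thresholding
    return x
```

    Because large entries receive small weights, the reweighted estimate is less biased toward zero than the plain l1 estimate, while small (noise-like) entries stay suppressed.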

  • How well can we estimate a sparse vector? APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS Candes, E. J., Davenport, M. A. 2013; 34 (2): 317-323
  • On the Fundamental Limits of Adaptive Sensing IEEE TRANSACTIONS ON INFORMATION THEORY Arias-Castro, E., Candes, E. J., Davenport, M. A. 2013; 59 (1): 472-481
  • Super-resolution via Transform-invariant Group-sparse Regularization 2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) Fernandez-Granda, C., Candes, E. J. 2013: 3336-3343
  • Phase Retrieval via Matrix Completion SIAM JOURNAL ON IMAGING SCIENCES Candes, E. J., Eldar, Y. C., Strohmer, T., Voroninski, V. 2013; 6 (1): 199-225

    View details for DOI 10.1137/110848074

    View details for Web of Science ID 000326032900008


  • Dose optimization with first-order total-variation minimization for dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) MEDICAL PHYSICS Kim, H., Li, R., Lee, R., Goldstein, T., Boyd, S., Candes, E., Xing, L. 2012; 39 (7): 4316-4327


    A new treatment scheme coined dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical application of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1-solver, template for first-order conic solver (TFOCS), was adopted in this work. TFOCS achieves faster convergence with less memory usage than conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, quantified by conformation number (CN), the total number of segments, and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained at about the same level in both cases. For the prostate patient, the conformation number to the target was 0.7509, 0.7565, and 0.7611 with 80 segments for IMRT with 7 beams and DASSIM-RT with 15 and 30 beams, respectively. For the head and neck (HN) patient with a complicated target shape, conformation numbers of the three treatment plans were 0.7554, 0.7758, and 0.7819 with 75 segments for all beam configurations. With respect to dose sparing to the critical structures, organs such as the femoral heads in the prostate case and the brainstem and spinal cord in the HN case were better protected with DASSIM-RT. For both cases, delivery efficiency improved greatly as the beam angular sampling increased, with similar or better conformal dose distributions. Compared with conventional quadratic programming approaches, first-order TFOCS-based optimization achieves far faster convergence and smaller memory requirements in DASSIM-RT. The new optimization algorithm TFOCS provides a practical and timely solution to DASSIM-RT and other inverse planning problems requiring large memory space. The new treatment scheme is shown to outperform conventional IMRT in terms of dose conformity to both the target and the critical structures, while maintaining high delivery efficiency.

    View details for DOI 10.1118/1.4729717

    View details for Web of Science ID 000306893000029

    View details for PubMedID 22830765
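
    The total-variation objective at the core of DASSIM-RT planning can be illustrated in one dimension by gradient descent on a smoothed TV term. This is a didactic stand-in for TFOCS; the smoothing parameter and step size are our choices:

```python
import numpy as np

def tv_denoise(y, lam=0.5, eps=0.01, step=0.05, iters=1000):
    """Gradient descent on 0.5*||x - y||^2 + lam * sum sqrt(diff(x)^2 + eps)."""
    x = y.copy()
    for _ in range(iters):
        d = np.diff(x)
        g = d / np.sqrt(d * d + eps)   # derivative of the smoothed |d| term
        grad = x - y
        grad[:-1] -= lam * g           # adjoint of the forward difference
        grad[1:] += lam * g
        x -= step * grad
    return x
```

    On a piecewise-constant signal the penalty drives small oscillations to zero while keeping the large jumps, which is exactly the sparse-variation behavior the planning formulation exploits.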

  • Compressive fluorescence microscopy for biological and hyperspectral imaging PROCEEDINGS OF THE NATIONAL ACADEMY OF SCIENCES OF THE UNITED STATES OF AMERICA Studer, V., Bobin, J., Chahid, M., Mousavi, H. S., Candes, E., Dahan, M. 2012; 109 (26): E1679-E1687


    The mathematical theory of compressed sensing (CS) asserts that one can acquire signals from measurements whose rate is much lower than the total bandwidth. Whereas the CS theory is now well developed, challenges concerning hardware implementations of CS-based acquisition devices, especially in optics, have only begun to be addressed. This paper presents an implementation of compressive sensing in fluorescence microscopy and its applications to biomedical imaging. Our CS microscope combines dynamic structured wide-field illumination with fast and sensitive single-point fluorescence detection to enable reconstructions of images of fluorescent beads, cells, and tissues with undersampling ratios (between the number of pixels and the number of measurements) of up to 32. We further demonstrate a hyperspectral mode and record images with 128 spectral channels and undersampling ratios up to 64, illustrating the potential benefits of CS acquisition for higher-dimensional signals, which typically exhibit extreme redundancy. Altogether, our results highlight the appeal of CS schemes for acquisition at a significantly reduced rate and point to some remaining challenges for CS fluorescence microscopy.

    View details for DOI 10.1073/pnas.1119511109

    View details for Web of Science ID 000306291400004

    View details for PubMedID 22689950
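
    The reconstruction step in compressive acquisition, recovering a sparse signal from far fewer linear measurements than unknowns, can be sketched with plain iterative soft thresholding (ISTA). This is a generic solver sketch, not the reconstruction code used for the microscope:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=1000):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by proximal gradient steps."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - A.T @ (A @ x - y) / L        # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # l1 proximal step
    return x
```

    With a random Gaussian sensing matrix and a few nonzeros, 30 measurements typically suffice to recover a 100-dimensional sparse vector, which is the undersampling effect the microscope exploits optically.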

  • Exact Matrix Completion via Convex Optimization COMMUNICATIONS OF THE ACM Candes, E., Recht, B. 2012; 55 (6): 111-119
  • A Probabilistic and RIPless Theory of Compressed Sensing IEEE TRANSACTIONS ON INFORMATION THEORY Candes, E. J., Plan, Y. 2011; 57 (11): 7235-7254

    View details for DOI 10.1214/11-AOS910

    View details for Web of Science ID 000299186500013

  • Compressed sensing with coherent and redundant dictionaries APPLIED AND COMPUTATIONAL HARMONIC ANALYSIS Candes, E. J., Eldar, Y. C., Needell, D., Randall, P. 2011; 31 (1): 59-73
  • Robust Principal Component Analysis? JOURNAL OF THE ACM Candes, E. J., Li, X., Ma, Y., Wright, J. 2011; 58 (3)
  • Tight Oracle Inequalities for Low-Rank Matrix Recovery From a Minimal Number of Noisy Random Measurements IEEE TRANSACTIONS ON INFORMATION THEORY Candes, E. J., Plan, Y. 2011; 57 (4): 2342-2359
  • Detection of an Anomalous Cluster in a Network ANNALS OF STATISTICS Arias-Castro, E., Candes, E. J., Durand, A. 2011; 39 (1): 278-304

    View details for DOI 10.1214/10-AOS839

    View details for Web of Science ID 000288183800009

  • Compressed Sensing With Quantized Measurements IEEE SIGNAL PROCESSING LETTERS Zymnis, A., Boyd, S., Candes, E. 2010; 17 (2): 149-152
  • The power of convex relaxation: the surprising stories of matrix completion and compressed sensing PROCEEDINGS OF THE TWENTY-FIRST ANNUAL ACM-SIAM SYMPOSIUM ON DISCRETE ALGORITHMS Candes, E. J. 2010; 135: 1321-1321
  • Fast computation of Fourier integral operators SIAM JOURNAL ON SCIENTIFIC COMPUTING Candes, E., Demanet, L., Ying, L. 2007; 29 (6): 2464-2493
  • Fast discrete curvelet transforms MULTISCALE MODELING & SIMULATION Candes, E., Demanet, L., Donoho, D., Ying, L. 2006; 5 (3): 861-899

    View details for DOI 10.1137/05064182X

    View details for Web of Science ID 000242572200007

  • Ridgelets: Estimating with ridge functions ANNALS OF STATISTICS Candes, E. J. 2003; 31 (5): 1561-1599
  • Gray and color image contrast enhancement by the curvelet transform IEEE TRANSACTIONS ON IMAGE PROCESSING Starck, J. L., Murtagh, F., Candes, E. J., Donoho, D. L. 2003; 12 (6): 706-717


    We present in this paper a new method for contrast enhancement based on the curvelet transform. The curvelet transform represents edges better than wavelets, and is therefore well suited for multiscale edge enhancement. We compare this approach with enhancement based on the wavelet transform and the Multiscale Retinex. In a range of examples, we use edge detection and segmentation, among other processing applications, to provide a quantitative comparative evaluation. Our findings are that curvelet-based enhancement outperforms other enhancement methods on noisy images, but on noiseless or near-noiseless images it is not markedly better than wavelet-based enhancement.

    View details for DOI 10.1109/TIP.2003.813140

    View details for Web of Science ID 000183824600011

    View details for PubMedID 18237946

  • The curvelet transform for image denoising IEEE TRANSACTIONS ON IMAGE PROCESSING Starck, J. L., Candes, E. J., Donoho, D. L. 2002; 11 (6): 670-684


    We describe approximate digital implementations of two new mathematical transforms, namely, the ridgelet transform and the curvelet transform. Our implementations offer exact reconstruction, stability against perturbations, ease of implementation, and low computational complexity. A central tool is Fourier-domain computation of an approximate digital Radon transform. We introduce a very simple interpolation in the Fourier space which takes Cartesian samples and yields samples on a rectopolar grid, which is a pseudo-polar sampling set based on a concentric squares geometry. Despite the crudeness of our interpolation, the visual performance is surprisingly good. Our ridgelet transform applies to the Radon transform a special overcomplete wavelet pyramid whose wavelets have compact support in the frequency domain. Our curvelet transform uses our ridgelet transform as a component step, and implements curvelet subbands using a filter bank of à trous wavelet filters. Our philosophy throughout is that transforms should be overcomplete, rather than critically sampled. We apply these digital transforms to the denoising of some standard images embedded in white noise. In the tests reported here, simple thresholding of the curvelet coefficients is very competitive with "state of the art" techniques based on wavelets, including thresholding of decimated or undecimated wavelet transforms and also including tree-based Bayesian posterior mean methods. Moreover, the curvelet reconstructions exhibit higher perceptual quality than wavelet-based reconstructions, offering visually sharper images and, in particular, higher quality recovery of edges and of faint linear and curvilinear features. Existing theory for curvelet and ridgelet transforms suggests that these new approaches can outperform wavelet methods in certain image reconstruction problems. The empirical results reported here are in encouraging agreement.

    View details for Web of Science ID 000176533400009

    View details for PubMedID 18244665
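
    The threshold-in-transform-domain recipe the paper applies to curvelet coefficients can be demonstrated with the simplest multiscale transform, a single Haar level. This sketch only conveys the idea; it is not a curvelet or ridgelet implementation:

```python
import numpy as np

def haar(x):
    """One level of the orthonormal Haar transform (even-length input)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # coarse averages
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # fine details
    return a, d

def ihaar(a, d):
    """Exact inverse of the one-level Haar transform."""
    x = np.empty(2 * a.size)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def denoise(x, t):
    """Hard-threshold the detail coefficients, keep the coarse ones."""
    a, d = haar(x)
    return ihaar(a, np.where(np.abs(d) > t, d, 0.0))
```

    Because a piecewise-constant signal has nearly all its energy in the coarse coefficients, zeroing the small detail coefficients removes mostly noise; the curvelet transform plays the analogous role for edges in images.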