Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. His research lies at the intersection of computer graphics, machine vision, optics, scientific computing, and applied vision science, with a wide range of applications in next-generation imaging, display, wearable computing, and microscopy systems. Before joining Stanford in 2014, Prof. Wetzstein was a Research Scientist in the Camera Culture Group at MIT. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with honors from Bauhaus-Universität Weimar, Germany. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, an Alain Fournier Ph.D. Dissertation Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.

Administrative Appointments

  • Faculty Co-director, Stanford Center for Image Systems Engineering (SCIEN) (2017 - Present)

Honors & Awards

  • Presidential Early Career Award for Scientists and Engineers (PECASE), The White House Office of Science and Technology Policy (2019)
  • Best Student Paper (Emil Wolf Student Paper Prize), OSA Frontiers in Optics Conference (2018)
  • Qualcomm Faculty Award, Qualcomm (2018)
  • SIGGRAPH Significant New Researcher Award, ACM (2018)
  • Sloan Fellowship, Alfred P. Sloan Foundation (2018)
  • Scientist of the Year Award, IS&T Electronic Imaging (2017)
  • Best Paper (Honorable Mention), Eurographics (2016)
  • CAREER Award, National Science Foundation (2016)
  • Conference Best Paper for Industry Award, IEEE International Conference on Image Processing (ICIP) (2016)
  • Okawa Research Grant, Okawa Foundation (2016)
  • Google Faculty Research Award, Google (2015)
  • Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2014)
  • Terman Faculty Fellowship, Stanford University (2014)
  • Postdoctoral Fellowship (PDF), Natural Sciences and Engineering Research Council of Canada (NSERC) (2012)
  • Alain Fournier Ph.D. Dissertation Award, Vancouver Foundation (2011)
  • Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2011)
  • Laval Virtual Award, Laval Virtual (2005)

Program Affiliations

  • Stanford SystemX Alliance

Professional Education

  • Research Scientist, Massachusetts Institute of Technology, Media Lab, Media Arts and Sciences (2014)
  • Ph.D., University of British Columbia, Computer Science (2011)
  • Dipl., Bauhaus University, Media Systems Science (2006)

All Publications

  • A Light-Field Metasurface for High-Resolution Single-Particle Tracking NANO LETTERS Holsteen, A. L., Lin, D., Kauvar, I., Wetzstein, G., Brongersma, M. L. 2019; 19 (4): 2267–71


    Three-dimensional (3D) single-particle tracking (SPT) is a key tool for studying dynamic processes in the life sciences. However, conventional optical elements utilizing light fields impose an inherent trade-off between lateral and axial resolution, preventing SPT with high spatiotemporal resolution across an extended volume. We overcome the typical loss in spatial resolution that accompanies light-field-based approaches to obtain 3D information by placing a standard microscope coverslip patterned with a multifunctional, light-field metasurface on a specimen. This approach enables an otherwise unmodified microscope to gather 3D information at an enhanced spatial resolution. We demonstrate simultaneous tracking of multiple fluorescent particles within a large 0.5 × 0.5 × 0.3 mm³ volume using a standard epi-fluorescent microscope with submicron lateral and micron-level axial resolution.

    View details for PubMedID 30897902

  • Sub-picosecond photon-efficient 3D imaging using single-photon sensors. Scientific reports Heide, F., Diamond, S., Lindell, D. B., Wetzstein, G. 2018; 8 (1): 17726


    Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.

    View details for PubMedID 30531961
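
    The pileup distortion described in the abstract can be illustrated with the standard first-photon detection model for SPADs (a sketch for intuition only, not the paper's probabilistic image formation model; the function name and rate values are hypothetical):

    ```python
    import numpy as np

    # A SPAD in this regime records only the first photon per laser period.
    # Given per-bin photon arrival rates r[i], the probability that the first
    # detection lands in bin i is
    #   P(i) = (1 - exp(-r[i])) * exp(-sum_{j<i} r[j])
    # so early bins are systematically over-represented ("pileup").

    def first_photon_histogram(rates):
        rates = np.asarray(rates, dtype=float)
        # Probability of surviving (no detection) through all earlier bins:
        survive = np.exp(-np.concatenate(([0.0], np.cumsum(rates[:-1]))))
        return (1.0 - np.exp(-rates)) * survive

    # A flat true arrival rate yields a skewed measured histogram:
    p = first_photon_histogram([0.5, 0.5, 0.5])
    assert p[0] > p[1] > p[2]
    ```

    Inverting a model of this kind, rather than ignoring the skew, is what lets the reported method keep its timing accuracy at high photon counts.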

  • Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Scientific reports Chang, J., Sitzmann, V., Dun, X., Heidrich, W., Wetzstein, G. 2018; 8 (1): 12324


    Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.

    View details for PubMedID 30120316

  • End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging ACM TRANSACTIONS ON GRAPHICS Sitzmann, V., Diamond, S., Peng, Y., Dun, X., Boyd, S., Heidrich, W., Heide, F., Wetzstein, G. 2018; 37 (4)
  • Single-Photon 3D Imaging with Deep Sensor Fusion ACM TRANSACTIONS ON GRAPHICS Lindell, D. B., O'Toole, M., Wetzstein, G. 2018; 37 (4)
  • A convex 3D deconvolution algorithm for low photon count fluorescence imaging. Scientific reports Ikoma, H., Broxton, M., Kudo, T., Wetzstein, G. 2018; 8 (1): 11489


    Deconvolution is widely used to improve the contrast and clarity of a 3D focal stack collected using a fluorescence microscope. But despite being extensively studied, deconvolution algorithms can introduce reconstruction artifacts when their underlying noise models or priors are violated, such as when imaging biological specimens at extremely low light levels. In this paper we propose a deconvolution method specifically designed for 3D fluorescence imaging of biological samples in the low-light regime. Our method utilizes a mixed Poisson-Gaussian model of photon shot noise and camera read noise, which are both present in low light imaging. We formulate a convex loss function and solve the resulting optimization problem using the alternating direction method of multipliers algorithm. Among several possible regularization strategies, we show that a Hessian-based regularizer is most effective for describing locally smooth features present in biological specimens. Our algorithm also estimates noise parameters on-the-fly, thereby eliminating a manual calibration step required by most deconvolution software. We demonstrate our algorithm on simulated images and experimentally-captured images with peak intensities of tens of photoelectrons per voxel. We also demonstrate its performance for live cell imaging, showing its applicability as a tool for biological research.

    View details for PubMedID 30065270
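
    The mixed Poisson-Gaussian data term mentioned in the abstract can be sketched with a common weighted least-squares surrogate (an illustration of the noise model only, not the paper's convex formulation or its ADMM solver; parameter names are hypothetical):

    ```python
    import numpy as np

    # For low-light imaging, the variance of a measurement y with mean mu is
    # approximately signal-dependent:
    #   var(y) ~ gain * mu + read_noise**2   (shot noise + camera read noise)
    # A simple surrogate data-fidelity term weights residuals by this variance.

    def mixed_pg_loss(mu, y, gain=1.0, read_noise=1.0):
        """Variance-weighted least-squares surrogate for Poisson-Gaussian noise."""
        var = gain * np.maximum(mu, 0.0) + read_noise**2
        return float(np.sum((y - mu) ** 2 / var))

    # The loss is lower when the estimate matches the measurement:
    y = np.array([5.0, 10.0, 3.0])
    good = mixed_pg_loss(y.copy(), y)
    bad = mixed_pg_loss(y + 4.0, y)
    assert good < bad
    ```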

  • Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos Padmanaban, N., Ruban, T., Sitzmann, V., Norcia, A. M., Wetzstein, G. IEEE Computer Society. 2018: 1594–1603


    Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness - which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.

    View details for DOI 10.1109/TVCG.2018.2793560

    View details for Web of Science ID 000427682500022

    View details for PubMedID 29553929

  • Convolutional Sparse Coding for RGB+NIR Imaging IEEE TRANSACTIONS ON IMAGE PROCESSING Hu, X., Heide, F., Dai, Q., Wetzstein, G. 2018; 27 (4): 1611–25


    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising, it has applications in computer vision, such as facial recognition and tracking, and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results which demonstrate that this work achieves unprecedented reconstruction quality.

    View details for DOI 10.1109/TIP.2017.2781303

    View details for Web of Science ID 000429463800005

    View details for PubMedID 29324415

  • Saliency in VR: How do people explore virtual environments? Sitzmann, V., Serrano, A., Pavel, A., Agrawala, M., Gutierrez, D., Masia, B., Wetzstein, G. IEEE Computer Society. 2018: 1633–42


    Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.

    View details for DOI 10.1109/TVCG.2018.2793599

    View details for Web of Science ID 000427682500026

    View details for PubMedID 29553930

  • Confocal non-line-of-sight imaging based on the light-cone transform NATURE O'Toole, M., Lindell, D. B., Wetzstein, G. 2018; 555 (7696): 338–41


    How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.

    View details for PubMedID 29513650

  • Single-shot speckle correlation fluorescence microscopy in thick scattering tissue with image reconstruction priors. Journal of biophotonics Chang, J., Wetzstein, G. 2018; 11 (3)


    Deep tissue imaging in the multiple scattering regime remains at the frontier of fluorescence microscopy. Speckle correlation imaging (SCI) can computationally uncover objects hidden behind a scattering layer, but has only been demonstrated with scattered laser illumination and in geometries where the scatterer is in the far field of the target object. Here, SCI is extended to imaging a planar fluorescent signal at the back surface of a 500-µm-thick slice of mouse brain. The object is reconstructed from a single snapshot through phase retrieval using a proximal algorithm that easily incorporates image priors. Simulations and experiments demonstrate improved image recovery with this approach compared to the conventional SCI algorithm.

    View details for PubMedID 29219256

  • Time-multiplexed light field synthesis via factored Wigner distribution function OPTICS LETTERS Hamann, S., Shi, L., Solgaard, O., Wetzstein, G. 2018; 43 (3): 599–602


    An optimization algorithm for preparing display-ready holographic elements (hogels) to synthesize a light field is outlined, and proof of concept is experimentally demonstrated. This method allows for higher-rank factorization, which can be used for time-multiplexing multiple frames for improved image quality, using phase-only and fully complex modulation with a single spatial light modulator.

    View details for DOI 10.1364/OL.43.000599

    View details for Web of Science ID 000423776600064

    View details for PubMedID 29400850

  • Towards Transient Imaging at Interactive Rates with Single-Photon Detectors Lindell, D. B., O'Toole, M., Wetzstein, G. IEEE. 2018
  • Deep End-to-End Time-of-Flight Imaging Su, S., Heide, F., Wetzstein, G., Heidrich, W. IEEE. 2018: 6383–92
  • Real-time Non-line-of-sight Imaging O'Toole, M., Lindell, D. B., Wetzstein, G. ACM. 2018
  • Confocal Non-line-of-sight Imaging O'Toole, M., Lindell, D. B., Wetzstein, G. ACM. 2018
  • Autofocals: Gaze-Contingent Eyeglasses for Presbyopes Padmanaban, N., Konrad, R., Wetzstein, G. ACM. 2018
  • SpinVR: Towards Live-Streaming 3D Virtual Reality Video Konrad, R., Dansereau, D. G., Masood, A., Wetzstein, G. ACM. 2017
  • Snapshot Difference Imaging using Correlation Time-of-Flight Sensors Callenberg, C., Heide, F., Wetzstein, G., Hullin, M. B. ACM. 2017
  • Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. Proceedings of the National Academy of Sciences of the United States of America Padmanaban, N., Konrad, R., Stramer, T., Cooper, E. A., Wetzstein, G. 2017; 114 (9): 2183-2188


    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

    View details for DOI 10.1073/pnas.1617251114

    View details for PubMedID 28193871

  • A Wide-Field-of-View Monocentric Light Field Camera Dansereau, D. G., Schuster, G., Ford, J., Wetzstein, G. IEEE. 2017: 3757–66
  • Optimizing VR for All Users Through Adaptive Focus Displays Padmanaban, N., Konrad, R., Cooper, E. A., Wetzstein, G. ACM. 2017
  • Reconstructing Transient Images from Single-Photon Sensors O'Toole, M., Heide, F., Lindell, D. B., Zang, K., Diamond, S., Wetzstein, G. IEEE. 2017: 2289–97
  • Consensus Convolutional Sparse Coding Choudhury, B., Swanson, R., Heide, F., Wetzstein, G., Heidrich, W. IEEE. 2017: 4290–98
  • Computational Near-Eye Displays: Engineering the Interface to the Digital World Wetzstein, G. National Academies Press. 2017: 7–12
  • Photonic Multitasking Interleaved Si Nanoantenna Phased Array NANO LETTERS Lin, D., Holsteen, A. L., Maguid, E., Wetzstein, G., Kik, P. G., Hasman, E., Brongersma, M. L. 2016; 16 (12): 7671-7676


    Metasurfaces provide unprecedented control over light propagation by imparting local, space-variant phase changes on an incident electromagnetic wave. They can improve the performance of conventional optical elements and facilitate the creation of optical components with new functionalities and form factors. Here, we build on knowledge from shared aperture phased array antennas and Si-based gradient metasurfaces to realize various multifunctional metasurfaces capable of achieving multiple distinct functions within a single surface region. As a key point, we demonstrate that interleaving multiple optical elements can be accomplished without reducing the aperture of each subelement. Multifunctional optical elements constructed from Si-based gradient metasurface are realized, including axial and lateral multifocus geometric phase metasurface lenses. We further demonstrate multiwavelength color imaging with a high spatial resolution. Finally, optical imaging functionality with simultaneous color separation has been obtained by using multifunctional metasurfaces, which opens up new opportunities for the field of advanced imaging and display.

    View details for DOI 10.1021/acs.nanolett.6b03505

    View details for PubMedID 27960478

  • Factored Displays: Improving resolution, dynamic range, color reproduction, and light field characteristics with advanced signal processing IEEE SIGNAL PROCESSING MAGAZINE Wetzstein, G., Lanman, D. 2016; 33 (5): 119-129
  • Computational Imaging with Multi-Camera Time-of-Flight Systems ACM TRANSACTIONS ON GRAPHICS Shrestha, S., Heide, F., Heidrich, W., Wetzstein, G. 2016; 35 (4)
  • ProxImaL: Efficient Image Optimization using Proximal Algorithms ACM TRANSACTIONS ON GRAPHICS Heide, F., Diamond, S., Niessner, M., Ragan-Kelley, J., Heidrich, W., Wetzstein, G. 2016; 35 (4)
  • Convolutional Sparse Coding for High Dynamic Range Imaging COMPUTER GRAPHICS FORUM Serrano, A., Heide, F., Gutierrez, D., Wetzstein, G., Masia, B. 2016; 35 (2): 153-163

    View details for DOI 10.1111/cgf.12819

    View details for Web of Science ID 000377222200015

  • Tensor low-rank and sparse light field photography COMPUTER VISION AND IMAGE UNDERSTANDING Kamal, M. H., Heshmat, B., Raskar, R., Vandergheynst, P., Wetzstein, G. 2016; 145: 172-181
  • 3D Displays ANNUAL REVIEW OF VISION SCIENCE, VOL 2 Banks, M. S., Hoffman, D. M., Kim, J., Wetzstein, G. 2016; 2: 397-435


    Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes progress and what remains to be solved. A very active area of technical development has been to build displays that create the correct relationship between viewing parameters and triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences and highlight areas in which further research and development is needed.

    View details for DOI 10.1146/annurev-vision-082114-035800

    View details for Web of Science ID 000389589000018

  • Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing OPTICS EXPRESS Yang, S. J., Allen, W. E., Kauvar, I., Andalman, A. S., Young, N. P., Kim, C. K., Marshel, J. H., Wetzstein, G., Deisseroth, K. 2015; 23 (25): 32573-32581


    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly--requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging.

    View details for DOI 10.1364/OE.23.032573

    View details for Web of Science ID 000366687200093

    View details for PubMedID 26699047

    View details for PubMedCentralID PMC4775739

  • Adaptive Color Display via Perceptually-driven Factored Spectral Projection ACM TRANSACTIONS ON GRAPHICS Kauvar, I., Yang, S. J., Shi, L., McDowall, I., Wetzstein, G. 2015; 34 (6)
  • Doppler Time-of-Flight Imaging ACM TRANSACTIONS ON GRAPHICS Heide, F., Heidrich, W., Hullin, M., Wetzstein, G. 2015; 34 (4)

    View details for DOI 10.1145/2766953

    View details for Web of Science ID 000358786600002

  • The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues ACM TRANSACTIONS ON GRAPHICS Huang, F., Chen, K., Wetzstein, G. 2015; 34 (4)

    View details for DOI 10.1145/2766922

    View details for Web of Science ID 000358786600026

  • Toward BxDF Display using Multilayer Diffraction ACM TRANSACTIONS ON GRAPHICS Ye, G., Jolly, S., Bove, V. M., Dai, Q., Raskar, R., Wetzstein, G. 2014; 33 (6)
  • Wide field of view compressive light field display using a multilayer architecture and tracked viewers JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY Chen, R., Maimone, A., Fuchs, H., Raskar, R., Wetzstein, G. 2014; 22 (10): 525-534

    View details for DOI 10.1002/jsid.285

    View details for Web of Science ID 000354201900005

  • Attenuation-corrected fluorescence spectra unmixing for spectroscopy and microscopy OPTICS EXPRESS Ikoma, H., Heshmat, B., Wetzstein, G., Raskar, R. 2014; 22 (16)


    In fluorescence measurements, light is often absorbed and scattered by a sample both for excitation and emission, distorting the measured spectra. Conventional linear unmixing methods computationally separate overlapping spectra but do not account for these effects. We propose a new algorithm for fluorescence unmixing that accounts for the attenuation-related distortion of fluorescence spectra. Using a matrix representation, we derive a forward measurement model and a corresponding inverse method; the unmixing algorithm is based on nonnegative matrix factorization. We also demonstrate how this method can be extended to a higher-dimensional tensor form, which is useful for unmixing overlapping spectra observed under attenuation in spectral imaging microscopy. We evaluate the proposed methods in simulation and experiments and show that they outperform a conventional linear unmixing method when absorption and scattering contribute to the measured signals, as in deep tissue imaging.

    View details for DOI 10.1364/OE.22.019469

    View details for Web of Science ID 000340714100058

    View details for PubMedID 25321030

  • A Compressive Light Field Projection System ACM TRANSACTIONS ON GRAPHICS Hirsch, M., Wetzstein, G., Raskar, R. 2014; 33 (4)
  • Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy NATURE METHODS Prevedel, R., Yoon, Y., Hoffmann, M., Pak, N., Wetzstein, G., Kato, S., Schroedel, T., Raskar, R., Zimmer, M., Boyden, E. S., Vaziri, A. 2014; 11 (7): 727-U161


    High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging.

    View details for DOI 10.1038/NMETH.2964

    View details for PubMedID 24836920

  • Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays ACM TRANSACTIONS ON GRAPHICS Huang, F., Wetzstein, G., Barsky, B. A., Raskar, R. 2014; 33 (4)
  • Compressive multi-mode superresolution display OPTICS EXPRESS Heide, F., Gregson, J., Wetzstein, G., Raskar, R., Heidrich, W. 2014; 22 (12): 14981-14992


    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image.

    View details for DOI 10.1364/OE.22.014981

    View details for Web of Science ID 000338044300090

    View details for PubMedID 24977592

  • Dual-coded compressive hyperspectral imaging OPTICS LETTERS Lin, X., Wetzstein, G., Liu, Y., Dai, Q. 2014; 39 (7): 2044-2047


    This Letter presents a new snapshot approach to hyperspectral imaging via dual-optical coding and compressive computational reconstruction. We demonstrate that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing. We show various applications by designing different optical modulation functions, including programmable spatially varying color filtering, multiplexed hyperspectral imaging, and high-resolution compressive hyperspectral imaging.

    View details for DOI 10.1364/OL.39.002044

    View details for Web of Science ID 000333887800086

    View details for PubMedID 24686670

  • Display adaptive 3D content remapping COMPUTERS & GRAPHICS-UK Masia, B., Wetzstein, G., Aliaga, C., Raskar, R., Gutierrez, D. 2013; 37 (8): 983-996
  • A survey on computational displays: Pushing the boundaries of optics, computation, and perception COMPUTERS & GRAPHICS-UK Masia, B., Wetzstein, G., Didyk, P., Gutierrez, D. 2013; 37 (8): 1012-1038
  • Focus 3D: Compressive Accommodation Display ACM TRANSACTIONS ON GRAPHICS Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., Fuchs, H. 2013; 32 (5)

    View details for DOI 10.1145/2503144

    View details for Web of Science ID 000326922900007

  • Adaptive Image Synthesis for Compressive Displays ACM TRANSACTIONS ON GRAPHICS Heide, F., Wetzstein, G., Raskar, R., Heidrich, W. 2013; 32 (4)
  • Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections ACM TRANSACTIONS ON GRAPHICS Marwah, K., Wetzstein, G., Bando, Y., Raskar, R. 2013; 32 (4)
  • Real-time Image Generation for Compressive Light Field Displays 9th International Symposium on Display Holography (ISDH) Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. IOP Publishing. 2013
  • Depth of Field Analysis for Multilayer Automultiscopic Displays 9th International Symposium on Display Holography (ISDH) Lanman, D., Wetzstein, G., Hirsch, M., Raskar, R. IOP Publishing. 2013
  • Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays 9th International Symposium on Display Holography (ISDH) Hirsch, M., Lanman, D., Wetzstein, G., Raskar, R. IOP Publishing. 2013
  • On Plenoptic Multiplexing and Reconstruction INTERNATIONAL JOURNAL OF COMPUTER VISION Wetzstein, G., Ihrke, I., Heidrich, W. 2013; 101 (2): 384-400
  • Compressive Light Field Displays IEEE COMPUTER GRAPHICS AND APPLICATIONS Wetzstein, G., Lanman, D., Hirsch, M., Heidrich, W., Raskar, R. 2012; 32 (5): 6-11


    Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct "optical" solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.

    View details for Web of Science ID 000307910800003

    View details for PubMedID 24806982
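
    The compressed factorization idea described in the abstract can be sketched with a toy nonnegative matrix factorization, where a target light field matrix is approximated by a product of low-rank nonnegative factors that stand in for physical layer and backlight patterns (an illustration only, not the papers' solver; the dimensions, rank, and multiplicative update rule are chosen for simplicity):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    L = rng.random((8, 16))       # toy target light field (views x pixels)
    rank = 3                      # e.g. number of time-multiplexed frames

    # Nonnegative factors, initialized randomly:
    A = rng.random((8, rank)) + 0.1
    B = rng.random((rank, 16)) + 0.1

    err0 = np.linalg.norm(L - A @ B) / np.linalg.norm(L)

    # Standard multiplicative NMF updates (monotonically non-increasing
    # Frobenius error); each iteration refines both factors:
    for _ in range(200):
        A *= (L @ B.T) / (A @ B @ B.T + 1e-12)
        B *= (A.T @ L) / (A.T @ A @ B + 1e-12)

    err = np.linalg.norm(L - A @ B) / np.linalg.norm(L)
    assert err < err0  # the factored approximation improves over the init
    ```

    In the display setting, fewer physical degrees of freedom (the factors) synthesize a perceptually faithful approximation of the full light field, which is exactly the compression the article describes.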

  • Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting ACM TRANSACTIONS ON GRAPHICS Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. 2012; 31 (4)
  • Compressive Light Field Photography ACM SIGGRAPH Conference Marwah, K., Wetzstein, G., Veeraraghavan, A., Raskar, R. ACM. 2012
  • Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs ACM TRANSACTIONS ON GRAPHICS Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R. 2011; 30 (6)
  • Computational Plenoptic Imaging COMPUTER GRAPHICS FORUM Wetzstein, G., Ihrke, I., Lanman, D., Heidrich, W. 2011; 30 (8): 2397-2426
  • Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays ACM TRANSACTIONS ON GRAPHICS Wetzstein, G., Lanman, D., Heidrich, W., Raskar, R. 2011; 30 (4)