Bio


Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. His research lies at the intersection of computer graphics and vision, artificial intelligence, computational optics, and applied vision science, with a wide range of applications in next-generation imaging, wearable computing, and neural rendering systems. Prof. Wetzstein is a Fellow of Optica and the recipient of numerous awards, including an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, an Electronic Imaging Scientist of the Year Award, and the Alain Fournier Ph.D. Dissertation Award, as well as many Best Paper and Demo Awards.

Administrative Appointments


  • Faculty Co-director, Stanford Center for Image Systems Engineering (SCIEN) (2017 - Present)

Honors & Awards


  • Best Paper (Honorable Mention), ACM SIGGRAPH (2023)
  • Distinguished Lecturer, IEEE Signal Processing Society (2023)
  • Fellow, Optica (formerly OSA) (2023)
  • Raymond C. Bowman Award, Society for Imaging Science and Technology (IS&T) (2023)
  • Best Journal Paper, IEEE Virtual Reality Conference (2022)
  • Early Career Achievement Award, International Society for Optics and Photonics (SPIE) (2020)
  • Presidential Early Career Award for Scientists and Engineers (PECASE), The White House Office of Science and Technology Policy (2019)
  • Best Student Paper (Emil Wolf Student Paper Prize), OSA Frontiers in Optics Conference (2018)
  • SIGGRAPH Significant New Researcher Award, ACM (2018)
  • Sloan Fellowship, Alfred P. Sloan Foundation (2018)
  • Scientist of the Year Award, IS&T Electronic Imaging (2017)
  • Best Paper (Honorable Mention), Eurographics (2016)
  • CAREER Award, National Science Foundation (2016)
  • Conference Best Paper for Industry Award, IEEE International Conference on Image Processing (ICIP) (2016)
  • Okawa Research Grant, Okawa Foundation (2016)
  • Google Faculty Research Award, Google (2015)
  • Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2014)
  • Terman Faculty Fellowship, Stanford University (2014)
  • Postdoctoral Fellowship (PDF), Natural Sciences and Engineering Research Council of Canada (NSERC) (2012)
  • Alain Fournier Ph.D. Dissertation Annual Award, Vancouver Foundation (2011)
  • Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2011)

Program Affiliations


  • Stanford SystemX Alliance

Professional Education


  • Research Scientist, Massachusetts Institute of Technology, Media Lab, Media Arts and Sciences (2014)
  • Ph.D., University of British Columbia, Computer Science (2011)
  • Dipl., Bauhaus University, Media Systems Science (2006)

All Publications


  • Inference in artificial intelligence with deep optics and photonics. Nature Wetzstein, G., Ozcan, A., Gigan, S., Fan, S., Englund, D., Soljacic, M., Denz, C., Miller, D. A., Psaltis, D. 2020; 588 (7836): 39–47

    Abstract

    Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.

    View details for DOI 10.1038/s41586-020-2973-6

    View details for PubMedID 33268862

  • Neural Holography with Camera-in-the-loop Training ACM TRANSACTIONS ON GRAPHICS Peng, Y., Choi, S., Padmanaban, N., Wetzstein, G. 2020; 39 (6)
  • Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes. Science advances Padmanaban, N., Konrad, R., Wetzstein, G. 2019; 5 (6): eaav6187

    Abstract

    As humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of 23 of 37 users ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.

    View details for DOI 10.1126/sciadv.aav6187

    View details for PubMedID 31259239
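
    Code sketch

    The control rule behind autofocals fits in a few lines: triangulate the fixation distance from the binocular vergence angle, then drive the focus-tunable lens to supply the accommodation a presbyopic eye cannot. A minimal sketch under these assumptions (the actual system also fuses eye-tracker and depth-sensor data; all names and parameters here are illustrative):

      import numpy as np

      def autofocal_power(gaze_dir_left, gaze_dir_right, ipd_m=0.063, max_power_dpt=3.0):
          """Estimate the fixation distance from the binocular vergence angle and
          return the focus-tunable lens power (diopters) needed to refocus it."""
          l = np.asarray(gaze_dir_left, float); l /= np.linalg.norm(l)
          r = np.asarray(gaze_dir_right, float); r /= np.linalg.norm(r)
          theta = np.arccos(np.clip(np.dot(l, r), -1.0, 1.0))  # vergence angle
          if theta < 1e-6:
              return 0.0                     # looking at infinity: no added power
          d = (ipd_m / 2.0) / np.tan(theta / 2.0)  # fixation distance [m]
          return float(np.clip(1.0 / d, 0.0, max_power_dpt))  # diopters

      # example: eyes converged on a point 0.5 m straight ahead
      p = np.array([0.0, 0.0, 0.5])
      left_eye, right_eye = np.array([-0.0315, 0.0, 0.0]), np.array([0.0315, 0.0, 0.0])
      print(autofocal_power(p - left_eye, p - right_eye))  # ~2.0 diopters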

  • Confocal non-line-of-sight imaging based on the light-cone transform NATURE O'Toole, M., Lindell, D. B., Wetzstein, G. 2018; 555 (7696): 338–41

    Abstract

    How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.

    View details for PubMedID 29513650
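
    Code sketch

    The paper's key computational insight is that a change of variables turns the spherical wavefronts in confocal measurements into a shift-invariant cone, reducing reconstruction to a single 3D deconvolution. A simplified NumPy sketch of that pipeline (the paper's radiometric scaling factors and exact resampling operators are omitted; the grid parameters and snr are illustrative):

      import numpy as np

      def lct_sketch(tau, dt, dx, c=3e8, snr=1e2):
          """Toy light-cone-transform reconstruction for confocal NLOS data.
          tau: (nt, nx, ny) transient histograms; dt: bin width [s]; dx: wall
          sampling pitch [m]. Returns a volume sampled on the u = z^2 axis."""
          nt, nx, ny = tau.shape
          z = c * dt * np.arange(nt) / 2.0           # one-way depth per time bin
          u_grid = np.linspace(0.0, z[-1] ** 2, nt)  # uniform u = z^2 axis
          # resample each transient from t onto the u grid
          vol = np.apply_along_axis(lambda a: np.interp(u_grid, z ** 2, a), 0, tau)
          # after the change of variables the blur is a shift-invariant cone:
          # a delta at u = x^2 + y^2 for every lateral offset (x, y)
          xs = dx * (np.arange(nx) - nx // 2)
          ys = dx * (np.arange(ny) - ny // 2)
          du = u_grid[1] - u_grid[0]
          ui = np.rint((xs[:, None] ** 2 + ys[None, :] ** 2) / du).astype(int)
          psf = np.zeros_like(vol)
          ii, jj = np.nonzero(ui < nt)
          psf[ui[ii, jj], ii, jj] = 1.0
          # a single 3D Wiener deconvolution inverts the cone blur
          H = np.fft.fftn(np.fft.ifftshift(psf, axes=(1, 2)))
          F = np.fft.fftn(vol)
          return np.fft.ifftn(F * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)).real

      tau = np.zeros((64, 32, 32)); tau[40, 16, 16] = 1.0  # synthetic transient
      print(lct_sketch(tau, dt=1e-10, dx=0.01).shape)      # (64, 32, 32)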

  • Acorn: Adaptive Coordinate Networks for Neural Scene Representation ACM TRANSACTIONS ON GRAPHICS Martel, J. P., Lindell, D. B., Lin, C. Z., Chan, E. R., Monteiro, M., Wetzstein, G. 2021; 40 (4)
  • A Perceptual Model for Eccentricity-dependent Spatio-temporal Flicker Fusion and its Applications to Foveated Graphics ACM TRANSACTIONS ON GRAPHICS Krajancich, B., Kellnhofer, P., Wetzstein, G. 2021; 40 (4)
  • Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz Angelopoulos, A. N., Martel, J. P., Kohli, A. P., Conradt, J., Wetzstein, G. IEEE COMPUTER SOC. 2021: 2577-2586

    Abstract

    The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, constraining data acquisition speed to 300 Hz realistically. This obstructs the use of mobile eye trackers to perform, e.g., low latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated in the same conditions. Our system, previewed in Figure 1, builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45°-1.75° for fields of view from 45° to 98°. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.

    View details for DOI 10.1109/TVCG.2021.3067784

    View details for Web of Science ID 000641972200008

    View details for PubMedID 33780340
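
    Code sketch

    The point-of-gaze step is a polynomial regression from pupil-model parameters to gaze coordinates. A minimal least-squares sketch using only the pupil center (the paper's regressor operates on its event-updated parametric pupil model; the calibration data here are synthetic):

      import numpy as np

      def poly_features(xy):
          """Second-order polynomial features of pupil centers, shape (n, 2)."""
          x, y = xy[:, 0], xy[:, 1]
          return np.stack([np.ones_like(x), x, y, x * y, x ** 2, y ** 2], axis=1)

      def fit_gaze_regressor(pupil_xy, gaze_xy):
          """Least-squares calibration mapping pupil centers to gaze points."""
          W, *_ = np.linalg.lstsq(poly_features(pupil_xy), gaze_xy, rcond=None)
          return W  # (6, 2) coefficient matrix

      def predict_gaze(W, pupil_xy):
          return poly_features(pupil_xy) @ W

      # toy 9-point calibration with a synthetic, slightly nonlinear eye model
      rng = np.random.default_rng(0)
      gaze = np.stack(np.meshgrid(np.linspace(-1, 1, 3),
                                  np.linspace(-1, 1, 3)), -1).reshape(-1, 2)
      pupil = 0.5 * gaze + 0.05 * gaze ** 2 + 0.001 * rng.standard_normal(gaze.shape)
      W = fit_gaze_regressor(pupil, gaze)
      print(np.abs(predict_gaze(W, pupil) - gaze).max())  # small residual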

  • Optimizing image quality for holographic near-eye displays with Michelson Holography OPTICA Choi, S., Kim, J., Peng, Y., Wetzstein, G. 2021; 8 (2): 143–46
  • Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING Metzler, C. A., Lindell, D. B., Wetzstein, G. 2021; 7: 1–12
  • Neural Light Field 3D Printing ACM TRANSACTIONS ON GRAPHICS Zheng, Q., Babaei, V., Wetzstein, G., Seidel, H., Zwicker, M., Singh, G. 2020; 39 (6)
  • Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering ACM TRANSACTIONS ON GRAPHICS Krajancich, B., Kellnhofer, P., Wetzstein, G. 2020; 39 (6)
  • Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective. Optica Chang, C., Bang, K., Wetzstein, G., Lee, B., Gao, L. 2020; 7 (11): 1563-1578

    Abstract

    Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human-computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.

    View details for DOI 10.1364/OPTICA.406004

    View details for PubMedID 34141829

    View details for PubMedCentralID PMC8208705

  • Roadmap on 3D integral imaging: sensing, processing, and display OPTICS EXPRESS Javidi, B., Carnicer, A., Arai, J., Fujii, T., Hua, H., Liao, H., Martinez-Corral, M., Pla, F., Stern, A., Waller, L., Wang, Q., Wetzstein, G., Yamaguchi, M., Yamamoto, H. 2020; 28 (22): 32266–93

    Abstract

    This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.

    View details for DOI 10.1364/OE.402193

    View details for Web of Science ID 000582499400003

    View details for PubMedID 33114917

  • Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging OPTICA Dun, X., Ikoma, H., Wetzstein, G., Wang, Z., Cheng, X., Peng, Y. 2020; 7 (8): 913–22
  • Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing With Programmable Sensors. IEEE transactions on pattern analysis and machine intelligence Martel, J. N., Muller, L. K., Carey, S. J., Dudek, P., Wetzstein, G. 2020; 42 (7): 1642–53

    Abstract

    Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors' ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we evaluate its performance for snapshot HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.

    View details for DOI 10.1109/TPAMI.2020.2986944

    View details for PubMedID 32305899
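
    Code sketch

    Computationally, the optical encoding is a per-pixel inner product between the incident video and a shutter function; in the end-to-end pipeline this forward model is the differentiable layer through which the exposure code is optimized. A minimal sketch of the measurement formation (the decoder network and sensor-processor constraints are omitted; shapes are illustrative):

      import numpy as np

      def coded_exposure(video, shutter):
          """Spatially varying pixel exposures: each pixel integrates the
          incident video weighted by its own shutter function.
          video, shutter: (T, H, W); returns one coded snapshot (H, W)."""
          return (video * shutter).sum(axis=0)

      T, H, W = 8, 64, 64
      rng = np.random.default_rng(1)
      video = rng.random((T, H, W))                          # scene over the exposure
      shutter = (rng.random((T, H, W)) > 0.5).astype(float)  # random binary code
      snapshot = coded_exposure(video, shutter)
      print(snapshot.shape)  # (64, 64)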

  • Optically sensing neural activity without imaging NATURE PHOTONICS Wetzstein, G., Kauvar, I. 2020; 14 (6): 340–41
  • Non-line-of-sight imaging NATURE REVIEWS PHYSICS Faccio, D., Velten, A., Wetzstein, G. 2020
  • SPADnet: deep RGB-SPAD sensor fusion assisted by monocular depth estimation OPTICS EXPRESS Sun, Z., Lindell, D. B., Solgaard, O., Wetzstein, G. 2020; 28 (10): 14948–62

    Abstract

    Single-photon light detection and ranging (LiDAR) techniques use emerging single-photon avalanche diodes (SPADs) to push 3D imaging capabilities to unprecedented ranges. However, it remains challenging to robustly estimate scene depth from the noisy and otherwise corrupted measurements recorded by a SPAD. Here, we propose a deep sensor fusion strategy that combines corrupted SPAD data and a conventional 2D image to estimate the depth of a scene. Our primary contribution is a neural network architecture, SPADnet, that uses a monocular depth estimation algorithm together with a SPAD denoising and sensor fusion strategy. This architecture, together with several techniques in network training, achieves state-of-the-art results for RGB-SPAD fusion with simulated and captured data. Moreover, SPADnet is more computationally efficient than previous RGB-SPAD fusion networks.

    View details for DOI 10.1364/OE.392386

    View details for Web of Science ID 000538870000067

    View details for PubMedID 32403527

  • Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display Krajancich, B., Padmanaban, N., Wetzstein, G. IEEE COMPUTER SOC. 2020: 1871–79

    Abstract

    Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.

    View details for DOI 10.1109/TVCG.2020.2973443

    View details for Web of Science ID 000523746000006

    View details for PubMedID 32070978
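
    Code sketch

    The mixed binary/continuous factorization can be illustrated with a toy coordinate-descent loop for a grayscale target: each time slot contributes a binary DMD mask scaled by a nonnegative LED intensity, and masks and intensities are updated in alternation. A simplified sketch (the paper's algorithm additionally handles color and the multiplicative see-through light path):

      import numpy as np

      def factor_binary_frames(target, K=8, iters=20, seed=0):
          """Approximate a grayscale target as the time average of K binary DMD
          masks, each scaled by a nonnegative LED intensity:
              target ~ (1/K) * sum_k c[k] * B[k]."""
          rng = np.random.default_rng(seed)
          B = rng.random((K, *target.shape)) > 0.5   # binary DMD patterns
          c = np.full(K, target.mean() + 1e-3)       # LED intensities
          for _ in range(iters):
              for k in range(K):
                  others = (c[:, None, None] * B).sum(0) - c[k] * B[k]
                  R = target - others / K            # residual without slot k
                  B[k] = R > c[k] / (2 * K)          # best binary mask given c[k]
                  on = B[k].sum()
                  if on > 0:                         # least-squares LED intensity
                      c[k] = max(K * (R * B[k]).sum() / on, 0.0)
          return B, c, (c[:, None, None] * B).sum(0) / K

      target = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # toy AR target
      B, c, approx = factor_binary_frames(target)
      print(np.abs(approx - target).mean())                 # small error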

  • Gaze-Contingent Ocular Parallax Rendering for Virtual Reality ACM TRANSACTIONS ON GRAPHICS Konrad, R., Angelopoulos, A., Wetzstein, G. 2020; 39 (2)

    View details for DOI 10.1145/3361330

    View details for Web of Science ID 000583691000002

  • Three-dimensional imaging through scattering media based on confocal diffuse tomography. Nature communications Lindell, D. B., Wetzstein, G. n. 2020; 11 (1): 4517

    Abstract

    Optical imaging techniques, such as light detection and ranging (LiDAR), are essential tools in remote sensing, robotic vision, and autonomous driving. However, the presence of scattering places fundamental limits on our ability to image through fog, rain, dust, or the atmosphere. Conventional approaches for imaging through scattering media operate at microscopic scales or require a priori knowledge of the target location for 3D imaging. We introduce a technique that co-designs single-photon avalanche diodes, ultra-fast pulsed lasers, and a new inverse method to capture 3D shape through scattering media. We demonstrate acquisition of shape and position for objects hidden behind a thick diffuser (≈6 transport mean free paths) at macroscopic scales. Our technique, confocal diffuse tomography, may be of considerable value to the aforementioned applications.

    View details for DOI 10.1038/s41467-020-18346-3

    View details for PubMedID 32908155

  • Deep Optics: Learning Cameras and Optical Computing Systems Wetzstein, G., Ikoma, H., Metzler, C., Peng, Y., Matthews, M. B. IEEE. 2020: 1313-1315
  • Comparison of head pose tracking methods for mixed-reality neuronavigation for transcranial magnetic stimulation SPIE Medical Imaging Sathyanarayana, S., Leuze, C., Hargreaves, B., Daniel, B. L., Wetzstein, G., Etkin, A., Bhati, M. T., McNab, J. A. 2020

    View details for DOI 10.1117/12.2547917

  • Semantic Implicit Neural Scene Representations With Semi-Supervised Training Kohli, A., Sitzmann, V., Wetzstein, G., IEEE IEEE. 2020: 423-433
  • Neural Holography Peng, Y., Choi, S., Padmanaban, N., Kim, J., Wetzstein, G., ACM ASSOC COMPUTING MACHINERY. 2020
  • Cortical Observation by Synchronous Multifocal Optical Sampling Reveals Widespread Population Encoding of Actions. Neuron Kauvar, I. V., Machado, T. A., Yuen, E. n., Kochalka, J. n., Choi, M. n., Allen, W. E., Wetzstein, G. n., Deisseroth, K. n. 2020

    Abstract

    To advance the measurement of distributed neuronal population representations of targeted motor actions on single trials, we developed an optical method (COSMOS) for tracking neural activity in a largely uncharacterized spatiotemporal regime. COSMOS allowed simultaneous recording of neural dynamics at ∼30 Hz from over a thousand near-cellular resolution neuronal sources spread across the entire dorsal neocortex of awake, behaving mice during a three-option lick-to-target task. We identified spatially distributed neuronal population representations spanning the dorsal cortex that precisely encoded ongoing motor actions on single trials. Neuronal correlations measured at video rate using unaveraged, whole-session data had localized spatial structure, whereas trial-averaged data exhibited widespread correlations. Separable modes of neural activity encoded history-guided motor plans, with similar population dynamics in individual areas throughout cortex. These initial experiments illustrate how COSMOS enables investigation of large-scale cortical dynamics and that information about motor actions is widely shared between areas, potentially underlying distributed computations.

    View details for DOI 10.1016/j.neuron.2020.04.023

    View details for PubMedID 32433908

  • Deep Optics for Single-shot High-dynamic-range Imaging Metzler, C. A., Ikoma, H., Peng, Y., Wetzstein, G., IEEE IEEE. 2020: 1372–82
  • Deep Adaptive LiDAR: End-to-end Optimization of Sampling and Depth Completion at Low Sampling Rates Bergman, A. W., Lindell, D. B., Wetzstein, G., IEEE IEEE. 2020
  • Non-line-of-sight Surface Reconstruction Using the Directional Light-cone Transform Young, S., Lindell, D. B., Girod, B., Taubman, D., Wetzstein, G., IEEE IEEE. 2020: 1404–13
  • Panoramic single-aperture multi-sensor light field camera OPTICS EXPRESS Schuster, G. M., Dansereau, D. G., Wetzstein, G., Ford, J. E. 2019; 27 (26): 37257–73

    Abstract

    We describe a panoramic camera using one monocentric lens and an array of light field (LF) sensors to capture overlapping contiguous regions of the spherical image surface. Refractive sub-field consolidators divide the light before the image surface and concentrate the sub-images onto the optically active areas of adjacent CMOS sensors. We show the design of a 160° × 24° field-of-view (FOV) LF camera, and experimental test of a three sensor F/2.5 96° × 24° and five sensor (25 MPixel) F/4 140° × 24° camera. We demonstrate computational field curvature correction, refocusing, resolution enhancement, and depth mapping of a laboratory scene. We also present a 155° full circular field camera design compatible with LF or direct 164 MPixel sensing of 13 spherical sub-images, fitting within a one inch diameter sphere.

    View details for DOI 10.1364/OE.27.037257

    View details for Web of Science ID 000507254300014

    View details for PubMedID 31878509

  • Learned Large Field-of-View Imaging With Thin-Plate Optics ACM TRANSACTIONS ON GRAPHICS Peng, Y., Sun, Q., Dun, X., Wetzstein, G., Heidrich, W., Heide, F. 2019; 38 (6)
  • Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics Rathinavel, K., Wetzstein, G., Fuchs, H. IEEE COMPUTER SOC. 2019: 3125–34

    Abstract

    Optical see-through augmented reality (AR) systems are a next-generation computing platform that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly improved over the last few years, but AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported. When mutual occlusion is supported, it is only supported for a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.

    View details for DOI 10.1109/TVCG.2019.2933120

    View details for Web of Science ID 000489833000010

    View details for PubMedID 31502977

  • Holographic Near-Eye Displays Based on Overlap-Add Stereograms ACM TRANSACTIONS ON GRAPHICS Padmanaban, N., Peng, Y., Wetzstein, G. 2019; 38 (6)
  • Preface COMPUTER GRAPHICS FORUM Lee, J., Theobalt, C., Wetzstein, G. 2019; 38 (7)
  • Wave-Based Non-Line-of-Sight Imaging using Fast f-k Migration ACM TRANSACTIONS ON GRAPHICS Lindell, D. B., Wetzstein, G., O'Toole, M. 2019; 38 (4)
  • Non-line-of-sight Imaging with Partial Occluders and Surface Normals ACM TRANSACTIONS ON GRAPHICS Heide, F., O'Toole, M., Zang, K., Lindell, D., Diamond, S., Wetzstein, G. 2019; 38 (3)

    View details for DOI 10.1145/3269977

    View details for Web of Science ID 000495415600004

  • A Light-Field Metasurface for High-Resolution Single-Particle Tracking NANO LETTERS Holsteen, A. L., Lin, D., Kauvar, I., Wetzstein, G., Brongersma, M. L. 2019; 19 (4): 2267–71

    Abstract

    Three-dimensional (3D) single-particle tracking (SPT) is a key tool for studying dynamic processes in the life sciences. However, conventional optical elements utilizing light fields impose an inherent trade-off between lateral and axial resolution, preventing SPT with high spatiotemporal resolution across an extended volume. We overcome the typical loss in spatial resolution that accompanies light-field-based approaches to obtain 3D information by placing a standard microscope coverslip patterned with a multifunctional, light-field metasurface on a specimen. This approach enables an otherwise unmodified microscope to gather 3D information at an enhanced spatial resolution. We demonstrate simultaneous tracking of multiple fluorescent particles within a large 0.5 × 0.5 × 0.3 mm³ volume using a standard epi-fluorescent microscope with submicron lateral and micron-level axial resolution.

    View details for PubMedID 30897902

  • Acoustic Non-Line-of-Sight Imaging Lindell, D. B., Wetzstein, G., Koltun, V., IEEE Comp Soc IEEE. 2019: 3773–6782
  • Deep Optics for Monocular Depth Estimation and 3D Object Detection Chang, J., Wetzstein, G., IEEE IEEE. 2019: 10192–201
  • Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations Sitzmann, V., Zollhofer, M., Wetzstein, G., Wallach, H., Larochelle, H., Beygelzimer, A., d'Alche-Buc, F., Fox, E., Garnett, R. NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
  • DeepVoxels: Learning Persistent 3D Feature Embeddings Sitzmann, V., Thies, J., Heide, F., Niessner, M., Wetzstein, G., Zollhofer, M., IEEE Comp Soc IEEE COMPUTER SOC. 2019: 2432–41
  • LiFF: Light Field Features in Scale and Depth Dansereau, D. G., Girod, B., Wetzstein, G., IEEE Comp Soc IEEE. 2019: 8034–43
  • Sub-picosecond photon-efficient 3D imaging using single-photon sensors. Scientific reports Heide, F., Diamond, S., Lindell, D. B., Wetzstein, G. 2018; 8 (1): 17726

    Abstract

    Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.

    View details for PubMedID 30531961
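
    Code sketch

    Pileup arises because the detector records at most the first photon per laser cycle, so early bins shadow later ones. The classical Coates-style histogram correction, a simpler relative of the probabilistic model developed in the paper, undoes this shadowing in closed form:

      import numpy as np

      def coates_correction(hist, n_cycles):
          """Classical pileup correction for a first-photon-only SPAD histogram.
          hist: counts per time bin; n_cycles: number of laser cycles.
          Returns the estimated true Poisson arrival rate per bin."""
          hist = np.asarray(hist, float)
          armed = n_cycles - np.concatenate(([0.0], np.cumsum(hist)[:-1]))
          p = np.clip(hist / np.maximum(armed, 1e-12), 0.0, 1.0 - 1e-12)
          return -np.log1p(-p)  # undo the shadowing by earlier bins

      # a strong early return shadows a weaker late one
      rates = np.zeros(100); rates[20] = 0.5; rates[60] = 0.1
      rng, n = np.random.default_rng(2), 20000
      counts = np.zeros(100)
      for _ in range(n):                      # first-photon-only detection
          arrivals = np.nonzero(rng.poisson(rates))[0]
          if arrivals.size:
              counts[arrivals[0]] += 1
      print(coates_correction(counts, n)[[20, 60]])  # ~ [0.5, 0.1]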

  • Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification. Scientific reports Chang, J., Sitzmann, V., Dun, X., Heidrich, W., Wetzstein, G. 2018; 8 (1): 12324

    Abstract

    Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.

    View details for PubMedID 30120316
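
    Code sketch

    Computationally, the optical convolutional layer is an ordinary convolution whose kernel is the nonnegative point spread function realized by the diffractive element; only the small electronic layers run on a processor. A minimal sketch of the hybrid forward pass (the PSF and classifier weights here are random stand-ins for the optimized ones):

      import numpy as np

      def optical_conv_layer(image, psf):
          """Simulate the 4f optical stage: incoherent convolution of the input
          with a nonnegative PSF, computed here with FFTs (circular boundary)."""
          return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(np.fft.ifftshift(psf))).real

      def hybrid_forward(image, psf, W, b):
          """Optical convolution followed by a tiny electronic linear classifier."""
          return optical_conv_layer(image, psf).ravel() @ W + b

      rng = np.random.default_rng(3)
      img = rng.random((32, 32))
      psf = rng.random((32, 32)); psf /= psf.sum()   # intensity PSFs are nonnegative
      W, b = 0.01 * rng.standard_normal((32 * 32, 10)), np.zeros(10)
      print(hybrid_forward(img, psf, W, b).shape)    # (10,) class scores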

  • End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging ACM TRANSACTIONS ON GRAPHICS Sitzmann, V., Diamond, S., Peng, Y., Dun, X., Boyd, S., Heidrich, W., Heide, F., Wetzstein, G. 2018; 37 (4)
  • Single-Photon 3D Imaging with Deep Sensor Fusion ACM TRANSACTIONS ON GRAPHICS Lindell, D. B., O'Toole, M., Wetzstein, G. 2018; 37 (4)
  • A convex 3D deconvolution algorithm for low photon count fluorescence imaging. Scientific reports Ikoma, H., Broxton, M., Kudo, T., Wetzstein, G. 2018; 8 (1): 11489

    Abstract

    Deconvolution is widely used to improve the contrast and clarity of a 3D focal stack collected using a fluorescence microscope. But despite being extensively studied, deconvolution algorithms can introduce reconstruction artifacts when their underlying noise models or priors are violated, such as when imaging biological specimens at extremely low light levels. In this paper we propose a deconvolution method specifically designed for 3D fluorescence imaging of biological samples in the low-light regime. Our method utilizes a mixed Poisson-Gaussian model of photon shot noise and camera read noise, which are both present in low light imaging. We formulate a convex loss function and solve the resulting optimization problem using the alternating direction method of multipliers algorithm. Among several possible regularization strategies, we show that a Hessian-based regularizer is most effective for describing locally smooth features present in biological specimens. Our algorithm also estimates noise parameters on-the-fly, thereby eliminating a manual calibration step required by most deconvolution software. We demonstrate our algorithm on simulated images and experimentally-captured images with peak intensities of tens of photoelectrons per voxel. We also demonstrate its performance for live cell imaging, showing its applicability as a tool for biological research.

    View details for PubMedID 30065270
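
    Code sketch

    The convex loss is built on a simple image-formation model: photon shot noise is Poisson in the blurred signal, and read noise is additive Gaussian. A minimal simulation of that measurement model, with an isotropic Gaussian blur standing in for the microscope's 3D PSF (all parameters illustrative):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def simulate_low_light_stack(volume, gain=1.0, read_sigma=2.0, psf_sigma=1.5, seed=4):
          """Mixed Poisson-Gaussian image formation for a fluorescence focal
          stack: blur by a stand-in PSF, apply photon shot noise, add read noise."""
          rng = np.random.default_rng(seed)
          blurred = gaussian_filter(volume, psf_sigma)       # A @ x
          shot = rng.poisson(np.maximum(blurred, 0.0))       # photon counts
          return gain * shot + rng.normal(0.0, read_sigma, volume.shape)

      vol = np.zeros((16, 64, 64)); vol[8, 28:36, 28:36] = 30.0  # tens of photons
      y = simulate_low_light_stack(vol)
      print(y.shape, y.max())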

  • Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos Padmanaban, N., Ruban, T., Sitzmann, V., Norcia, A. M., Wetzstein, G. IEEE COMPUTER SOC. 2018: 1594–1603

    Abstract

    Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness - which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.

    View details for DOI 10.1109/TVCG.2018.2793560

    View details for Web of Science ID 000427682500022

    View details for PubMedID 29553929
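
    Code sketch

    The learning setup is conventional: per-video features quantifying speed, direction, and depth over time, regressed against sickness ratings. A minimal sketch with ridge regression (the feature set here is an illustrative placeholder for the paper's hand-crafted descriptors, and the data are synthetic):

      import numpy as np

      def extract_features(speed, depth):
          """Toy per-video descriptors from speed and depth time series."""
          return np.array([speed.mean(), speed.std(), speed.max(),
                           depth.mean(), depth.std(),
                           np.abs(np.diff(speed)).mean()])

      def fit_ridge(X, y, lam=1.0):
          return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

      rng = np.random.default_rng(5)
      X = np.stack([extract_features(rng.random(100), rng.random(100))
                    for _ in range(40)])
      y = X @ rng.standard_normal(6) + 0.1 * rng.standard_normal(40)  # ratings
      w = fit_ridge(X, y)
      print(((X @ w - y) ** 2).mean())  # training error of the predictor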

  • Convolutional Sparse Coding for RGB plus NIR Imaging IEEE TRANSACTIONS ON IMAGE PROCESSING Hu, X., Heide, F., Dai, Q., Wetzstein, G. 2018; 27 (4): 1611–25

    Abstract

    Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising, it has applications in computer vision, such as facial recognition and tracking, and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results which demonstrate that this work achieves unprecedented reconstruction quality.

    View details for DOI 10.1109/TIP.2017.2781303

    View details for Web of Science ID 000429463800005

    View details for PubMedID 29324415

  • Saliency in VR: How do people explore virtual environments? Sitzmann, V., Serrano, A., Pavel, A., Agrawala, M., Gutierrez, D., Masia, B., Wetzstein, G. IEEE COMPUTER SOC. 2018: 1633–42

    Abstract

    Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.

    View details for DOI 10.1109/TVCG.2018.2793599

    View details for Web of Science ID 000427682500026

    View details for PubMedID 29553930

  • An Easy-to-Use Pipeline for an RGBD Camera and an AR Headset PRESENCE-VIRTUAL AND AUGMENTED REALITY Jun, H., Bailenson, J. N., Fuchs, H., Wetzstein, G. 2018; 27 (2): 202-205
  • Single-shot speckle correlation fluorescence microscopy in thick scattering tissue with image reconstruction priors. Journal of biophotonics Chang, J., Wetzstein, G. 2018; 11 (3)

    Abstract

    Deep tissue imaging in the multiple scattering regime remains at the frontier of fluorescence microscopy. Speckle correlation imaging (SCI) can computationally uncover objects hidden behind a scattering layer, but has only been demonstrated with scattered laser illumination and in geometries where the scatterer is in the far field of the target object. Here, SCI is extended to imaging a planar fluorescent signal at the back surface of a 500-μm-thick slice of mouse brain. The object is reconstructed from a single snapshot through phase retrieval using a proximal algorithm that easily incorporates image priors. Simulations and experiments demonstrate improved image recovery with this approach compared to the conventional SCI algorithm.

    View details for PubMedID 29219256
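
    Code sketch

    The computational core of speckle correlation imaging: within the memory effect, the autocorrelation of the captured speckle approximates that of the hidden object, so the object's Fourier magnitude is known and the image is recovered by phase retrieval. A compact sketch using Fienup's hybrid input-output iteration, a stand-in for the proximal algorithm with priors developed in the paper:

      import numpy as np

      def sci_phase_retrieval(speckle_img, iters=200, beta=0.9, seed=6):
          """Recover a hidden object from a speckle image via Fienup HIO.
          Within the memory effect, |FFT(speckle image)| approximates the
          object's Fourier magnitude (up to a smooth envelope)."""
          mag = np.abs(np.fft.fft2(speckle_img - speckle_img.mean()))
          rng = np.random.default_rng(seed)
          g = rng.random(speckle_img.shape)           # random initial guess
          for _ in range(iters):
              G = np.fft.fft2(g)
              gp = np.fft.ifft2(mag * np.exp(1j * np.angle(G))).real
              bad = gp < 0                            # nonnegativity constraint
              g = np.where(bad, g - beta * gp, gp)    # HIO feedback update
          return g

      rng = np.random.default_rng(7)
      obj = np.zeros((64, 64)); obj[24:32, 20:40] = 1.0  # hidden fluorescent shape
      img = np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(rng.random((64, 64)))).real
      rec = sci_phase_retrieval(img)  # recovered up to translation/flip ambiguity
      print(rec.shape)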

  • Time-multiplexed light field synthesis via factored Wigner distribution function OPTICS LETTERS Hamann, S., Shi, L., Solgaard, O., Wetzstein, G. 2018; 43 (3): 599–602

    Abstract

    An optimization algorithm for preparing display-ready holographic elements (hogels) to synthesize a light field is outlined, and proof of concept is experimentally demonstrated. This method allows for higher-rank factorization, which can be used for time-multiplexing multiple frames for improved image quality, using phase-only and fully complex modulation with a single spatial light modulator.

    View details for DOI 10.1364/OL.43.000599

    View details for Web of Science ID 000423776600064

    View details for PubMedID 29400850

  • Towards Transient Imaging at Interactive Rates with Single-Photon Detectors Lindell, D. B., O'Toole, M., Wetzstein, G., IEEE IEEE. 2018
  • Deep End-to-End Time-of-Flight Imaging Su, S., Heide, F., Wetzstein, G., Heidrich, W., IEEE IEEE. 2018: 6383–92
  • Real-time Non-line-of-sight Imaging O'Toole, M., Lindell, D. B., Wetzstein, G., Assoc Comp Machinery ASSOC COMPUTING MACHINERY. 2018
  • Confocal Non-line-of-sight Imaging O'Toole, M., Lindell, D. B., Wetzstein, G., Assoc Comp Machinery ASSOC COMPUTING MACHINERY. 2018
  • Autofocals: Gaze-Contingent Eyeglasses for Presbyopes Padmanaban, N., Konrad, R., Wetzstein, G., Assoc Comp Machinery ASSOC COMPUTING MACHINERY. 2018
  • SpinVR: Towards Live-Streaming 3D Virtual Reality Video Konrad, R., Dansereau, D. G., Masood, A., Wetzstein, G. ASSOC COMPUTING MACHINERY. 2017
  • Snapshot Difference Imaging using Correlation Time-of-Flight Sensors Callenberg, C., Heide, F., Wetzstein, G., Hullin, M. B. ASSOC COMPUTING MACHINERY. 2017
  • Accommodation-invariant Computational Near-eye Displays ACM TRANSACTIONS ON GRAPHICS Konrad, R., Padmanaban, N., Molner, K., Cooper, E. A., Wetzstein, G. 2017; 36 (4)
  • Movie Editing and Cognitive Event Segmentation in Virtual Reality Video ACM TRANSACTIONS ON GRAPHICS Serrano, A., Sitzmann, V., Ruiz-Borau, J., Wetzstein, G., Gutierrez, D., Masia, B. 2017; 36 (4)
  • Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays. Proceedings of the National Academy of Sciences of the United States of America Padmanaban, N., Konrad, R., Stramer, T., Cooper, E. A., Wetzstein, G. 2017; 114 (9): 2183-2188

    Abstract

    From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.

    View details for DOI 10.1073/pnas.1617251114

    View details for PubMedID 28193871

  • Aperture interference and the volumetric resolution of light field fluorescence microscopy Kauvar, I., Chang, J., Wetzstein, G., IEEE IEEE. 2017: 83–94
  • Optimizing VR for All Users Through Adaptive Focus Displays Padmanaban, N., Konrad, R., Cooper, E. A., Wetzstein, G., Assoc Comp Machinery ASSOC COMPUTING MACHINERY. 2017
  • Reconstructing Transient Images from Single-Photon Sensors O'Toole, M., Heide, F., Lindell, D. B., Zang, K., Diamond, S., Wetzstein, G., IEEE IEEE. 2017: 2289–97
  • Consensus Convolutional Sparse Coding Choudhury, B., Swanson, R., Heide, F., Wetzstein, G., Heidrich, W., IEEE IEEE. 2017: 4290–98
  • Computational Near-Eye Displays: Engineering the Interface to the Digital World Wetzstein, G., Natl Acad Engn NATL ACADEMIES PRESS. 2017: 7–12
  • A Wide-Field-of-View Monocentric Light Field Camera Dansereau, D. G., Schuster, G., Ford, J., Wetzstein, G., IEEE IEEE. 2017: 3757–66
  • Photonic Multitasking Interleaved Si Nanoantenna Phased Array NANO LETTERS Lin, D., Holsteen, A. L., Maguid, E., Wetzstein, G., Kik, P. G., Hasman, E., Brongersma, M. L. 2016; 16 (12): 7671-7676

    Abstract

    Metasurfaces provide unprecedented control over light propagation by imparting local, space-variant phase changes on an incident electromagnetic wave. They can improve the performance of conventional optical elements and facilitate the creation of optical components with new functionalities and form factors. Here, we build on knowledge from shared aperture phased array antennas and Si-based gradient metasurfaces to realize various multifunctional metasurfaces capable of achieving multiple distinct functions within a single surface region. As a key point, we demonstrate that interleaving multiple optical elements can be accomplished without reducing the aperture of each subelement. Multifunctional optical elements constructed from Si-based gradient metasurface are realized, including axial and lateral multifocus geometric phase metasurface lenses. We further demonstrate multiwavelength color imaging with a high spatial resolution. Finally, optical imaging functionality with simultaneous color separation has been obtained by using multifunctional metasurfaces, which opens up new opportunities for the field of advanced imaging and display.

    View details for DOI 10.1021/acs.nanolett.6b03505

    View details for PubMedID 27960478

  • 3D Displays. Annual review of vision science Banks, M. S., Hoffman, D. M., Kim, J., Wetzstein, G. 2016; 2: 397-435

    Abstract

    Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes progress and what remains to be solved. A very active area of technical development has been to build displays that create the correct relationship between viewing parameters and triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences. And we highlight areas in which further research and development is needed.

    View details for DOI 10.1146/annurev-vision-082114-035800

    View details for PubMedID 28532351

  • Factored Displays: Improving resolution, dynamic range, color reproduction, and light field characteristics with advanced signal processing IEEE SIGNAL PROCESSING MAGAZINE Wetzstein, G., Lanman, D. 2016; 33 (5): 119-129
  • Computational Imaging with Multi-Camera Time-of-Flight Systems ACM TRANSACTIONS ON GRAPHICS Shrestha, S., Heide, F., Heidrich, W., Wetzstein, G. 2016; 35 (4)
  • ProxImaL: Efficient Image Optimization using Proximal Algorithms ACM TRANSACTIONS ON GRAPHICS Heide, F., Diamond, S., Niessner, M., Ragan-Kelley, J., Heidrich, W., Wetzstein, G. 2016; 35 (4)
  • Convolutional Sparse Coding for High Dynamic Range Imaging COMPUTER GRAPHICS FORUM Serrano, A., Heide, F., Gutierrez, D., Wetzstein, G., Masia, B. 2016; 35 (2): 153-163

    View details for DOI 10.1111/cgf.12819

    View details for Web of Science ID 000377222200015

  • Tensor low-rank and sparse light field photography COMPUTER VISION AND IMAGE UNDERSTANDING Kamal, M. H., Heshmat, B., Raskar, R., Vandergheynst, P., Wetzstein, G. 2016; 145: 172-181
  • Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays Konrad, R., Cooper, E. A., Wetzstein, G., ACM ASSOC COMPUTING MACHINERY. 2016: 1211-1220
  • Depth Augmented Stereo Panorama for Cinematic Virtual Reality with Focus Cues Thatte, J., Boin, J., Lakshman, H., Wetzstein, G., Girod, B., IEEE IEEE. 2016: 1569-1573
  • Variable Aperture Light Field Photography: Overcoming the Diffraction-limited Spatio-angular Resolution Tradeoff Chang, J., Kauvar, I., Hu, X., Wetzstein, G., IEEE IEEE. 2016: 3737–45
  • Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing OPTICS EXPRESS Yang, S. J., Allen, W. E., Kauvar, I., Andalman, A. S., Young, N. P., Kim, C. K., Marshel, J. H., Wetzstein, G., Deisseroth, K. 2015; 23 (25): 32573-32581

    Abstract

    Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly--requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging.

    View details for DOI 10.1364/OE.23.032573

    View details for Web of Science ID 000366687200093

    View details for PubMedID 26699047

    View details for PubMedCentralID PMC4775739

  • Adaptive Color Display via Perceptually-driven Factored Spectral Projection ACM TRANSACTIONS ON GRAPHICS Kauvar, I., Yang, S. J., Shi, L., McDowall, I., Wetzstein, G. 2015; 34 (6)
  • Doppler Time-of-Flight Imaging ACM TRANSACTIONS ON GRAPHICS Heide, F., Heidrich, W., Hullin, M., Wetzstein, G. 2015; 34 (4)

    View details for DOI 10.1145/2766953

    View details for Web of Science ID 000358786600002

  • The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues ACM TRANSACTIONS ON GRAPHICS Huang, F., Chen, K., Wetzstein, G. 2015; 34 (4)

    View details for DOI 10.1145/2766922

    View details for Web of Science ID 000358786600026

  • Fast and Flexible Convolutional Sparse Coding Heide, F., Heidrich, W., Wetzstein, G., IEEE IEEE. 2015: 5135-5143
  • Vision Correcting Displays Based on Inverse Blurring and Aberration Compensation Barsky, B. A., Huang, F., Lanman, D., Wetzstein, G., Raskar, R., Agapito, L., Bronstein, M. M., Rother, C. SPRINGER-VERLAG BERLIN. 2015: 524-538
  • Toward BxDF Display using Multilayer Diffraction ACM TRANSACTIONS ON GRAPHICS Ye, G., Jolly, S., Bove, V. M., Dai, Q., Raskar, R., Wetzstein, G. 2014; 33 (6)
  • Ultra-fast Lensless Computational Imaging through 5D Frequency Analysis of Time-resolved Light Transport INTERNATIONAL JOURNAL OF COMPUTER VISION Wu, D., Wetzstein, G., Barsi, C., Willwacher, T., Dai, Q., Raskar, R. 2014; 110 (2): 128-140
  • Computational Schlieren Photography with Light Field Probes INTERNATIONAL JOURNAL OF COMPUTER VISION Wetzstein, G., Heidrich, W., Raskar, R. 2014; 110 (2): 113-127
  • Light Field Reconstruction Using Sparsity in the Continuous Fourier Domain ACM TRANSACTIONS ON GRAPHICS Shi, L., Hassanieh, H., Davis, A., Katabi, D., Durand, F. 2014; 34 (1)

    View details for DOI 10.1145/2682631

    View details for Web of Science ID 000347029500012

  • Wide field of view compressive light field display using a multilayer architecture and tracked viewers JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY Chen, R., Maimone, A., Fuchs, H., Raskar, R., Wetzstein, G. 2014; 22 (10): 525-534

    View details for DOI 10.1002/jsid.285

    View details for Web of Science ID 000354201900005

  • Attenuation-corrected fluorescence spectra unmixing for spectroscopy and microscopy OPTICS EXPRESS Ikoma, H., Heshmat, B., Wetzstein, G., Raskar, R. 2014; 22 (16)

    Abstract

    In fluorescence measurements, light is often absorbed and scattered by a sample both for excitation and emission, resulting in the measured spectra to be distorted. Conventional linear unmixing methods computationally separate overlapping spectra but do not account for these effects. We propose a new algorithm for fluorescence unmixing that accounts for the attenuation-related distortion effect on fluorescence spectra. Using a matrix representation, we derive forward measurement formation and a corresponding inverse method; the unmixing algorithm is based on nonnegative matrix factorization. We also demonstrate how this method can be extended to a higher-dimensional tensor form, which is useful for unmixing overlapping spectra observed under the attenuation effect in spectral imaging microscopy. We evaluate the proposed methods in simulation and experiments and show that it outperforms a conventional, linear unmixing method when absorption and scattering contributes to the measured signals, as in deep tissue imaging.

    View details for DOI 10.1364/OE.22.019469

    View details for Web of Science ID 000340714100058

    View details for PubMedID 25321030
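
    Code sketch

    The unmixing backbone is nonnegative matrix factorization: measured spectra are factored into nonnegative endmember spectra and abundances. A minimal sketch with classical Lee-Seung multiplicative updates (the paper's attenuation-corrected forward model and tensor extension are omitted):

      import numpy as np

      def nmf_unmix(V, k, iters=300, seed=8):
          """Factor nonnegative spectra V (n_wavelengths, n_pixels) as V ~ W @ H,
          with W the endmember spectra and H the abundances."""
          rng = np.random.default_rng(seed)
          W = rng.random((V.shape[0], k)) + 1e-3
          H = rng.random((k, V.shape[1])) + 1e-3
          for _ in range(iters):   # Lee-Seung multiplicative updates
              H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
              W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
          return W, H

      # two overlapping Gaussian emission spectra, mixed at random abundances
      lam = np.linspace(0.0, 1.0, 128)
      spectra = np.stack([np.exp(-(lam - 0.4) ** 2 / 0.01),
                          np.exp(-(lam - 0.6) ** 2 / 0.01)], axis=1)
      abund = np.random.default_rng(9).random((2, 500))
      W, H = nmf_unmix(spectra @ abund, k=2)
      print(np.abs(W @ H - spectra @ abund).mean())  # small residual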

  • A Compressive Light Field Projection System ACM TRANSACTIONS ON GRAPHICS Hirsch, M., Wetzstein, G., Raskar, R. 2014; 33 (4)
  • Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy NATURE METHODS Prevedel, R., Yoon, Y., Hoffmann, M., Pak, N., Wetzstein, G., Kato, S., Schroedel, T., Raskar, R., Zimmer, M., Boyden, E. S., Vaziri, A. 2014; 11 (7): 727–30

    Abstract

    High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging.

    View details for DOI 10.1038/NMETH.2964

    View details for PubMedID 24836920

  • Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays ACM TRANSACTIONS ON GRAPHICS Huang, F., Wetzstein, G., Barsky, B. A., Raskar, R. 2014; 33 (4)
  • Compressive multi-mode superresolution display OPTICS EXPRESS Heide, F., Gregson, J., Wetzstein, G., Raskar, R., Heidrich, W. 2014; 22 (12): 14981-14992

    Abstract

    Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image.

    View details for DOI 10.1364/OE.22.014981

    View details for Web of Science ID 000338044300090

    View details for PubMedID 24977592

  • Dual-coded compressive hyperspectral imaging OPTICS LETTERS Lin, X., Wetzstein, G., Liu, Y., Dai, Q. 2014; 39 (7): 2044-2047

    Abstract

    This Letter presents a new snapshot approach to hyperspectral imaging via dual-optical coding and compressive computational reconstruction. We demonstrate that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing. We show various applications by designing different optical modulation functions, including programmable spatially varying color filtering, multiplexed hyperspectral imaging, and high-resolution compressive hyperspectral imaging.

    View details for DOI 10.1364/OL.39.002044

    View details for Web of Science ID 000333887800086

    View details for PubMedID 24686670

  • A Switchable Light Field Camera Architecture with Angle Sensitive Pixels and Dictionary-based Sparse Coding Hirsch, M., Sivaramakrishnan, S., Jayasuriya, S., Wang, A., Molnar, A., Raskar, R., Wetzstein, G., IEEE IEEE. 2014
  • Nonlinear Fluorescence Spectra Unmixing Ikoma, H., Heshmat, B., Wetzstein, G., Raskar, R., IEEE IEEE. 2014
  • Display adaptive 3D content remapping COMPUTERS & GRAPHICS-UK Masia, B., Wetzstein, G., Aliaga, C., Raskar, R., Gutierrez, D. 2013; 37 (8): 983-996
  • A survey on computational displays: Pushing the boundaries of optics, computation, and perception COMPUTERS & GRAPHICS-UK Masia, B., Wetzstein, G., Didyk, P., Gutierrez, D. 2013; 37 (8): 1012-1038
  • Focus 3D: Compressive Accommodation Display ACM TRANSACTIONS ON GRAPHICS Maimone, A., Wetzstein, G., Hirsch, M., Lanman, D., Raskar, R., Fuchs, H. 2013; 32 (5)

    View details for DOI 10.1145/2503144

    View details for Web of Science ID 000326922900007

  • Adaptive Image Synthesis for Compressive Displays ACM TRANSACTIONS ON GRAPHICS Heide, F., Wetzstein, G., Raskar, R., Heidrich, W. 2013; 32 (4)
  • Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections ACM TRANSACTIONS ON GRAPHICS Marwah, K., Wetzstein, G., Bando, Y., Raskar, R. 2013; 32 (4)
  • Real-time Image Generation for Compressive Light Field Displays 9th International Symposium on Display Holography (ISDH) Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. IOP PUBLISHING LTD. 2013
  • Depth of Field Analysis for Multilayer Automultiscopic Displays 9th International Symposium on Display Holography (ISDH) Lanman, D., Wetzstein, G., Hirsch, M., Raskar, R. IOP PUBLISHING LTD. 2013
  • Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays 9th International Symposium on Display Holography (ISDH) Hirsch, M., Lanman, D., Wetzstein, G., Raskar, R. IOP PUBLISHING LTD. 2013
  • On Plenoptic Multiplexing and Reconstruction INTERNATIONAL JOURNAL OF COMPUTER VISION Wetzstein, G., Ihrke, I., Heidrich, W. 2013; 101 (2): 384-400
  • Compressive Light Field Displays IEEE COMPUTER GRAPHICS AND APPLICATIONS Wetzstein, G., Lanman, D., Hirsch, M., Heidrich, W., Raskar, R. 2012; 32 (5): 6-11

    Abstract

    Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct "optical" solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.

    View details for Web of Science ID 000307910800003

    View details for PubMedID 24806982
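
    Code sketch

    The "superposition of light rays" idea is most easily seen in the simplest compressive display: two stacked attenuating layers whose transmittances multiply, so a target light field matrix is approximated by a nonnegative rank-1 factorization (low-rank, with time multiplexing). A toy sketch via alternating least squares (real systems factor a 4D light field over layer pixels and frames):

      import numpy as np

      def two_layer_factor(L, iters=100, seed=10):
          """Approximate a target light field matrix L (rows: front-layer pixels,
          cols: rear-layer pixels) by the outer product of two nonnegative layer
          patterns, via alternating least squares."""
          rng = np.random.default_rng(seed)
          f = rng.random(L.shape[0]) + 1e-3
          g = rng.random(L.shape[1]) + 1e-3
          for _ in range(iters):
              f = np.maximum(L @ g, 0.0) / (g @ g + 1e-12)
              g = np.maximum(L.T @ f, 0.0) / (f @ f + 1e-12)
          return f, g

      # a separable target light field is reproduced exactly by a single frame
      a, b = np.linspace(0.2, 1.0, 32), np.linspace(1.0, 0.3, 48)
      f, g = two_layer_factor(np.outer(a, b))
      print(np.abs(np.outer(f, g) - np.outer(a, b)).max())  # ~ 0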

  • Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting ACM TRANSACTIONS ON GRAPHICS Wetzstein, G., Lanman, D., Hirsch, M., Raskar, R. 2012; 31 (4)
  • Compressive Light Field Photography Special-Interest-Group-on-Computer-Graphics-and-Interactive-Techniques Conference (SIGGRAPH) Marwah, K., Wetzstein, G., Veeraraghavan, A., Raskar, R. ASSOC COMPUTING MACHINERY. 2012
  • Beyond Parallax Barriers: Applying Formal Optimization Methods to Multi-Layer Automultiscopic Displays Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R., Woods, A. J., Holliman, N. S., Favalora, G. E. SPIE-INT SOC OPTICAL ENGINEERING. 2012

    View details for DOI 10.1117/12.907146

    View details for Web of Science ID 000302558300008

  • Frequency Analysis of Transient Light Transport with Applications in Bare Sensor Imaging Wu, D., Wetzstein, G., Barsi, C., Willwacher, T., O'Toole, M., Naik, N., Dai, Q., Kutulakos, K., Raskar, R., Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. SPRINGER-VERLAG BERLIN. 2012: 542-555
  • Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs ACM TRANSACTIONS ON GRAPHICS Lanman, D., Wetzstein, G., Hirsch, M., Heidrich, W., Raskar, R. 2011; 30 (6)
  • Computational Plenoptic Imaging COMPUTER GRAPHICS FORUM Wetzstein, G., Ihrke, I., Lanman, D., Heidrich, W. 2011; 30 (8): 2397-2426
  • Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays ACM TRANSACTIONS ON GRAPHICS Wetzstein, G., Lanman, D., Heidrich, W., Raskar, R. 2011; 30 (4)
  • Refractive Shape from Light Field Distortion Wetzstein, G., Roodnick, D., Heidrich, W., Raskar, R., IEEE IEEE. 2011: 1180-1186