
Gordon Wetzstein
Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science
Bio
Gordon Wetzstein is an Assistant Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. Working at the intersection of computer graphics, machine vision, optics, scientific computing, and applied vision science, Prof. Wetzstein's research has a wide range of applications in next-generation imaging, display, wearable computing, and microscopy systems. Prior to joining Stanford in 2014, he was a Research Scientist in the Camera Culture Group at MIT. He received a Ph.D. in Computer Science from the University of British Columbia in 2011 and, before that, graduated with Honors from Bauhaus University in Weimar, Germany. He is the recipient of an NSF CAREER Award, an Alfred P. Sloan Fellowship, an ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), a Terman Fellowship, an Okawa Research Grant, the Electronic Imaging Scientist of the Year 2017 Award, an Alain Fournier Ph.D. Dissertation Award, and a Laval Virtual Award, as well as Best Paper and Demo Awards at ICCP 2011, 2014, and 2016 and at ICIP 2016.
Academic Appointments
- Assistant Professor, Electrical Engineering
- Assistant Professor (By courtesy), Computer Science
- Member, Bio-X
- Member, Wu Tsai Neurosciences Institute
Administrative Appointments
- Faculty Co-director, Stanford Center for Image Systems Engineering (SCIEN) (2017 - Present)
Honors & Awards
- Presidential Early Career Award for Scientists and Engineers (PECASE), The White House Office of Science and Technology Policy (2019)
- Best Student Paper (Emil Wolf Student Paper Prize), OSA Frontiers in Optics Conference (2018)
- Qualcomm Faculty Award, Qualcomm (2018)
- SIGGRAPH Significant New Researcher Award, ACM (2018)
- Sloan Fellowship, Alfred P. Sloan Foundation (2018)
- Scientist of the Year Award, IS&T Electronic Imaging (2017)
- Best Paper (Honorable Mention), Eurographics (2016)
- CAREER Award, National Science Foundation (2016)
- Conference Best Paper for Industry Award, IEEE International Conference on Image Processing (ICIP) (2016)
- Okawa Research Grant, Okawa Foundation (2016)
- Google Faculty Research Award, Google (2015)
- Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2014)
- Terman Faculty Fellowship, Stanford University (2014)
- Postdoctoral Fellowship (PDF), Natural Sciences and Engineering Research Council of Canada (NSERC) (2012)
- Alain Fournier Ph.D. Dissertation Annual Award, Vancouver Foundation (2011)
- Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2011)
- Laval Virtual Award, Laval Virtual (2005)
Program Affiliations
- Stanford SystemX Alliance
Professional Education
- Research Scientist, Massachusetts Institute of Technology, Media Lab, Media Arts and Sciences (2014)
- Ph.D., University of British Columbia, Computer Science (2011)
- Dipl., Bauhaus University, Media Systems Science (2006)
2020-21 Courses
- Computational Imaging and Display: EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
Independent Studies (10)
- Advanced Reading and Research: CS 499 (Aut, Win, Spr, Sum)
- Advanced Reading and Research: CS 499P (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390A (Aut, Win, Sum)
- Independent Project: CS 399 (Aut, Win, Spr)
- Independent Project: CS 399P (Win, Spr)
- Special Studies and Reports in Electrical Engineering: EE 391 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering (WIM): EE 191W (Aut)
- Special Studies or Projects in Electrical Engineering: EE 190 (Aut)
- Special Studies or Projects in Electrical Engineering: EE 390 (Aut, Win, Spr, Sum)
- Writing Intensive Senior Project (WIM): CS 191W (Spr)
Prior Year Courses
2019-20 Courses
- Computational Imaging and Display: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
2018-19 Courses
- Computational Imaging and Display: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
2017-18 Courses
- Computational Imaging and Display: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
Stanford Advisees
- Doctoral Dissertation Reader (AC): Liyue Shen, Zhanghao Sun, Sophia Williams, Feng Xie
- Postdoctoral Faculty Sponsor: David Lindell, Julien Martel, Yifan (Evan) Peng, Joshua Rapp
- Doctoral Dissertation Advisor (AC): Alex Bergman, Hayato Ikoma, Brooke Krajancich, Mark Nishimura
- Master's Program Advisor: Ruoyan Chen, Suyeon Choi, Weiyun Jiang, Yanhao Jiang, Sean Konz, Qingxi Meng, Arjun Soin, Jessica Tawade, Jizhen Wang, Claire Zhang
- Doctoral (Program): Suyeon Choi, Riley Culberg, Manu Gopakumar, Hayato Ikoma, Thomas Teisberg, Kailas Vodrahalli
All Publications
- A Light-Field Metasurface for High-Resolution Single-Particle Tracking
NANO LETTERS
2019; 19 (4): 2267–71
Abstract
Three-dimensional (3D) single-particle tracking (SPT) is a key tool for studying dynamic processes in the life sciences. However, conventional optical elements utilizing light fields impose an inherent trade-off between lateral and axial resolution, preventing SPT with high spatiotemporal resolution across an extended volume. We overcome the typical loss in spatial resolution that accompanies light-field-based approaches to obtain 3D information by placing a standard microscope coverslip patterned with a multifunctional, light-field metasurface on a specimen. This approach enables an otherwise unmodified microscope to gather 3D information at an enhanced spatial resolution. We demonstrate simultaneous tracking of multiple fluorescent particles within a large 0.5 × 0.5 × 0.3 mm³ volume using a standard epi-fluorescent microscope with submicron lateral and micron-level axial resolution.
View details for DOI 10.1021/acs.nanolett.8b04673
View details for Web of Science ID 000464769100010
View details for PubMedID 30897902
- Sub-picosecond photon-efficient 3D imaging using single-photon sensors.
Scientific reports
2018; 8 (1): 17726
Abstract
Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.
View details for PubMedID 30531961
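The pileup distortion and its inversion described in this abstract can be illustrated with a toy model. The sketch below is an assumption-laden stand-in: it uses the classical Coates estimator rather than the paper's prior-based probabilistic inverse, and all names and parameters are illustrative.

```python
import numpy as np

def first_photon_probs(lam):
    """Forward pileup model: probability that the FIRST detected photon
    falls in each time bin, given per-bin mean photon counts `lam`.
    A synchronous SPAD records only the first photon per cycle, which
    distorts the histogram (pileup)."""
    surv = np.exp(-np.cumsum(np.concatenate(([0.0], lam[:-1]))))  # P(no photon before bin i)
    return surv * (1.0 - np.exp(-lam))

def coates_correction(hist, n_cycles):
    """Classical Coates estimator: recover per-bin incident rates from a
    piled-up first-photon histogram."""
    remaining = n_cycles - np.concatenate(([0], np.cumsum(hist[:-1])))
    return -np.log1p(-hist / np.maximum(remaining, 1))

# Simulate a strong return in bin 40 on top of weak background
lam = np.full(100, 0.001)
lam[40] = 0.5
p = first_photon_probs(lam)

rng = np.random.default_rng(0)
n_cycles = 100_000
# Last multinomial category = "no photon detected this cycle"
hist = rng.multinomial(n_cycles, np.concatenate((p, [1.0 - p.sum()])))[:-1]
lam_hat = coates_correction(hist, n_cycles)
```

Despite the first-photon-only measurement, the corrected estimate recovers the incident rate in the signal bin.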
- Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification.
Scientific reports
2018; 8 (1): 12324
Abstract
Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
View details for PubMedID 30120316
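The hybrid architecture in this abstract can be caricatured in a few lines: a fixed "optical" convolution layer followed by an electronic readout. This is a hedged sketch, not the paper's system; the random nonnegative PSFs below stand in for the optimized diffractive element, and the untrained linear readout stands in for the electronic layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Incoherent optics realizes convolutions with NONNEGATIVE kernels (a PSF
# is an intensity pattern), so the optical first layer is constrained to
# psf >= 0. Energy-conserving normalization keeps each kernel physical.
psf_bank = rng.random((8, 5, 5))
psf_bank /= psf_bank.sum(axis=(1, 2), keepdims=True)

def optical_conv(img, psfs):
    """Circular convolution with each PSF, as free-space optics would apply."""
    out = np.empty((len(psfs),) + img.shape)
    F = np.fft.rfft2(img)
    for k, psf in enumerate(psfs):
        pad = np.zeros_like(img)
        pad[:psf.shape[0], :psf.shape[1]] = psf
        out[k] = np.fft.irfft2(F * np.fft.rfft2(pad), s=img.shape)
    return out

# Electronic back end: the sensor records intensities, then a cheap
# digital layer (here an untrained linear map) produces class scores.
img = rng.random((32, 32))
feat = np.maximum(optical_conv(img, psf_bank), 0.0).ravel()
W = rng.standard_normal((10, feat.size)) / np.sqrt(feat.size)
logits = W @ feat
```

The point of the split is that the convolutions happen at the speed of light and for free in power, leaving only the small readout to electronics.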
- End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging
ACM TRANSACTIONS ON GRAPHICS
2018; 37 (4)
View details for DOI 10.1145/3197517.3201333
View details for Web of Science ID 000448185000075
- Single-Photon 3D Imaging with Deep Sensor Fusion
ACM TRANSACTIONS ON GRAPHICS
2018; 37 (4)
View details for DOI 10.1145/3197517.3201316
View details for Web of Science ID 000448185000074
- A convex 3D deconvolution algorithm for low photon count fluorescence imaging.
Scientific reports
2018; 8 (1): 11489
Abstract
Deconvolution is widely used to improve the contrast and clarity of a 3D focal stack collected using a fluorescence microscope. But despite being extensively studied, deconvolution algorithms can introduce reconstruction artifacts when their underlying noise models or priors are violated, such as when imaging biological specimens at extremely low light levels. In this paper we propose a deconvolution method specifically designed for 3D fluorescence imaging of biological samples in the low-light regime. Our method utilizes a mixed Poisson-Gaussian model of photon shot noise and camera read noise, which are both present in low light imaging. We formulate a convex loss function and solve the resulting optimization problem using the alternating direction method of multipliers algorithm. Among several possible regularization strategies, we show that a Hessian-based regularizer is most effective for describing locally smooth features present in biological specimens. Our algorithm also estimates noise parameters on-the-fly, thereby eliminating a manual calibration step required by most deconvolution software. We demonstrate our algorithm on simulated images and experimentally-captured images with peak intensities of tens of photoelectrons per voxel. We also demonstrate its performance for live cell imaging, showing its applicability as a tool for biological research.
View details for PubMedID 30065270
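A one-dimensional sketch of the idea in this abstract, with loud caveats: the shifted-Poisson likelihood and projected gradient descent below are common simplifications standing in for the paper's exact mixed Poisson-Gaussian model, ADMM solver, and Hessian regularizer; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# 1D stand-in for the 3D problem: Gaussian blur, low photon counts,
# Poisson shot noise plus Gaussian read noise.
n, sigma = 64, 1.0
x_true = np.zeros(n)
x_true[20:28] = 20.0                                   # tens of photons, as in the paper's regime
kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
kernel /= kernel.sum()
k = np.zeros(n); k[:11] = kernel; k = np.roll(k, -5)   # kernel centered at index 0
K = np.fft.fft(k)                                      # real (symmetric kernel), so A is self-adjoint
A = lambda v: np.real(np.fft.ifft(np.fft.fft(v) * K))  # circular blur operator

y = rng.poisson(np.clip(A(x_true), 0, None)).astype(float) + sigma * rng.standard_normal(n)

# Convex data term: "shifted Poisson" negative log-likelihood folding the
# Gaussian read noise into sigma^2 extra counts. A squared second
# derivative stands in for the paper's Hessian regularizer.
yshift = np.clip(y, 0, None) + sigma**2
D2 = lambda v: np.roll(v, 1) - 2.0 * v + np.roll(v, -1)
lam, step = 0.5, 0.05
x = np.full(n, yshift.mean())
for _ in range(2000):
    z = A(x) + sigma**2                                # z >= sigma^2 > 0 since x >= 0
    grad = A(1.0 - yshift / z) + lam * D2(D2(x))
    x = np.clip(x - step * grad, 0.0, None)            # nonnegativity projection
```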
- Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos
IEEE COMPUTER SOC. 2018: 1594–1603
Abstract
Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness - which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.
View details for DOI 10.1109/TVCG.2018.2793560
View details for Web of Science ID 000427682500022
View details for PubMedID 29553929
- Saliency in VR: How do people explore virtual environments?
IEEE COMPUTER SOC. 2018: 1633–42
Abstract
Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
View details for DOI 10.1109/TVCG.2018.2793599
View details for Web of Science ID 000427682500026
View details for PubMedID 29553930
- Convolutional Sparse Coding for RGB+NIR Imaging
IEEE TRANSACTIONS ON IMAGE PROCESSING
2018; 27 (4): 1611–25
Abstract
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising, it has applications in computer vision, such as facial recognition and tracking, and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large data set of experimental captures, and simulated benchmark results which demonstrate that this work achieves unprecedented reconstruction quality.
View details for DOI 10.1109/TIP.2017.2781303
View details for Web of Science ID 000429463800005
View details for PubMedID 29324415
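The reconstruction machinery named in this abstract, convolutional sparse coding, can be sketched in 1D. The filter bank below is random rather than learned, ISTA replaces the paper's solver, and all parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, K, beta = 64, 4, 0.05

# Random unit-norm filter bank standing in for *learned* filters
filters = rng.standard_normal((K, n))
filters /= np.linalg.norm(filters, axis=1, keepdims=True)
Fd = np.fft.fft(filters, axis=1)

def synth(z):
    """Dz = sum_k d_k (*) z_k: circular convolutional synthesis via FFT."""
    return np.real(np.fft.ifft((np.fft.fft(z, axis=1) * Fd).sum(axis=0)))

def analyze(r):
    """Adjoint D^T r: circular correlation of r with each filter."""
    return np.real(np.fft.ifft(np.conj(Fd) * np.fft.fft(r)[None, :], axis=1))

# Ground truth: two sparse activations of the filter bank
z_true = np.zeros((K, n))
z_true[0, 10] = 1.0
z_true[2, 40] = -1.5
y = synth(z_true)

# ISTA for min_z 0.5||Dz - y||^2 + beta ||z||_1
L = np.max(np.sum(np.abs(Fd) ** 2, axis=0))   # Lipschitz bound on D^T D
t = 1.0 / L
z = np.zeros((K, n))
for _ in range(400):
    q = z - t * analyze(synth(z) - y)
    z = np.sign(q) * np.maximum(np.abs(q) - t * beta, 0.0)  # soft threshold
x_hat = synth(z)
```

In the paper this prior regularizes the joint RGB+NIR demosaicking/reconstruction problem; here it only fits a toy 1D signal.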
- Confocal non-line-of-sight imaging based on the light-cone transform
NATURE
2018; 555 (7696): 338–41
Abstract
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
View details for PubMedID 29513650
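A toy numerical sketch of the light-cone-transform idea: after the change of variables v = (tc/2)^2, a confocal NLOS measurement becomes a shift-invariant 3D convolution with a paraboloid kernel, which is invertible in closed form with a Wiener-style filter. The grid, kernel discretization, and noise-free setup below are illustrative assumptions; the paper's pipeline also includes a radiometric scaling and resampling of real transients.

```python
import numpy as np

n = 32
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x, indexing="ij")

# Light-cone kernel: in the resampled variable v = (t*c/2)^2, a hidden
# point maps to the paraboloid v = x^2 + y^2, so reconstruction reduces
# to a shift-invariant 3D deconvolution.
psf = np.zeros((n, n, n))
v_idx = np.clip(((X**2 + Y**2) / 2 * (n - 1)).round().astype(int), 0, n - 1)
psf[np.arange(n)[:, None], np.arange(n)[None, :], v_idx] = 1.0

# Hidden scene: a single point; simulate confocal transients by circular
# 3D convolution with the light-cone kernel.
rho = np.zeros((n, n, n))
rho[20, 9, 14] = 1.0
H = np.fft.fftn(psf)
meas = np.real(np.fft.ifftn(np.fft.fftn(rho) * H))

# Closed-form Wiener inverse, as enabled by the light-cone transform
eps = 1e-3
rho_hat = np.real(np.fft.ifftn(np.fft.fftn(meas) * np.conj(H) / (np.abs(H) ** 2 + eps)))
peak = np.unravel_index(np.argmax(rho_hat), rho_hat.shape)
```

The hidden point is recovered at its true voxel; in the paper the same deconvolution structure is what makes the memory and compute requirements tractable.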
- Single-shot speckle correlation fluorescence microscopy in thick scattering tissue with image reconstruction priors.
Journal of biophotonics
2018; 11 (3)
Abstract
Deep tissue imaging in the multiple scattering regime remains at the frontier of fluorescence microscopy. Speckle correlation imaging (SCI) can computationally uncover objects hidden behind a scattering layer, but has only been demonstrated with scattered laser illumination and in geometries where the scatterer is in the far field of the target object. Here, SCI is extended to imaging a planar fluorescent signal at the back surface of a 500-µm-thick slice of mouse brain. The object is reconstructed from a single snapshot through phase retrieval using a proximal algorithm that easily incorporates image priors. Simulations and experiments demonstrate improved image recovery with this approach compared to the conventional SCI algorithm.
View details for PubMedID 29219256
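The phase-retrieval step at the heart of speckle correlation imaging can be sketched with classic error-reduction iterations; the paper uses a proximal algorithm with image priors instead, so treat the following as a simplified baseline with assumed sizes and parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
n, s = 64, 12

# In SCI, the autocorrelation of the speckle pattern estimates the
# object's autocorrelation, i.e. its Fourier MAGNITUDE; the object itself
# is then recovered by phase retrieval. Here: error-reduction iterations
# with nonnegativity and support constraints.
obj = np.zeros(n)
obj[:s] = rng.random(s)               # unknown nonnegative object
mag = np.abs(np.fft.fft(obj))         # "measured" Fourier magnitude

def fourier_err(g):
    return np.linalg.norm(np.abs(np.fft.fft(g)) - mag)

g = rng.random(n)                     # random initialization
err0 = fourier_err(g)
for _ in range(300):
    G = np.fft.fft(g)
    G = mag * np.exp(1j * np.angle(G))  # impose measured magnitude
    g = np.real(np.fft.ifft(G))
    g[s:] = 0.0                         # support constraint
    g = np.clip(g, 0.0, None)           # nonnegativity prior
```

Error reduction is monotone in the Fourier-domain error but prone to stagnation, which is one motivation for the prior-equipped proximal solver in the paper.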
- Time-multiplexed light field synthesis via factored Wigner distribution function
OPTICS LETTERS
2018; 43 (3): 599–602
Abstract
An optimization algorithm for preparing display-ready holographic elements (hogels) to synthesize a light field is outlined, and proof of concept is experimentally demonstrated. This method allows for higher-rank factorization, which can be used for time-multiplexing multiple frames for improved image quality, using phase-only and fully complex modulation with a single spatial light modulator.
View details for DOI 10.1364/OL.43.000599
View details for Web of Science ID 000423776600064
View details for PubMedID 29400850
- Towards Transient Imaging at Interactive Rates with Single-Photon Detectors
IEEE. 2018
View details for Web of Science ID 000435001500006
- Deep End-to-End Time-of-Flight Imaging
IEEE. 2018: 6383–92
View details for DOI 10.1109/CVPR.2018.00668
View details for Web of Science ID 000457843606056
- Real-time Non-line-of-sight Imaging
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214907.3214920
View details for Web of Science ID 000455250500015
- Confocal Non-line-of-sight Imaging
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214745.3214795
View details for Web of Science ID 000455248900001
- Autofocals: Gaze-Contingent Eyeglasses for Presbyopes
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214907.3214918
View details for Web of Science ID 000455250500003
- SpinVR: Towards Live-Streaming 3D Virtual Reality Video
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3130800.3130836
View details for Web of Science ID 000417448700039
- Snapshot Difference Imaging using Correlation Time-of-Flight Sensors
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3130800.3130885
View details for Web of Science ID 000417448700050
- Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Proceedings of the National Academy of Sciences of the United States of America
2017; 114 (9): 2183-2188
Abstract
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
View details for DOI 10.1073/pnas.1617251114
View details for PubMedID 28193871
- Reconstructing Transient Images from Single-Photon Sensors
IEEE. 2017: 2289–97
View details for DOI 10.1109/CVPR.2017.246
View details for Web of Science ID 000418371402038
- Consensus Convolutional Sparse Coding
IEEE. 2017: 4290–98
View details for DOI 10.1109/ICCV.2017.459
View details for Web of Science ID 000425498404038
- Computational Near-Eye Displays: Engineering the Interface to the Digital World
NATL ACADEMIES PRESS. 2017: 7–12
View details for Web of Science ID 000431842000002
- A Wide-Field-of-View Monocentric Light Field Camera
IEEE. 2017: 3757–66
View details for DOI 10.1109/CVPR.2017.400
View details for Web of Science ID 000418371403089
- Optimizing VR for All Users Through Adaptive Focus Displays
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3084363.3085029
View details for Web of Science ID 000441139200076
- Photonic Multitasking Interleaved Si Nanoantenna Phased Array
NANO LETTERS
2016; 16 (12): 7671-7676
Abstract
Metasurfaces provide unprecedented control over light propagation by imparting local, space-variant phase changes on an incident electromagnetic wave. They can improve the performance of conventional optical elements and facilitate the creation of optical components with new functionalities and form factors. Here, we build on knowledge from shared aperture phased array antennas and Si-based gradient metasurfaces to realize various multifunctional metasurfaces capable of achieving multiple distinct functions within a single surface region. As a key point, we demonstrate that interleaving multiple optical elements can be accomplished without reducing the aperture of each subelement. Multifunctional optical elements constructed from Si-based gradient metasurface are realized, including axial and lateral multifocus geometric phase metasurface lenses. We further demonstrate multiwavelength color imaging with a high spatial resolution. Finally, optical imaging functionality with simultaneous color separation has been obtained by using multifunctional metasurfaces, which opens up new opportunities for the field of advanced imaging and display.
View details for DOI 10.1021/acs.nanolett.6b03505
View details for PubMedID 27960478
- Factored Displays: Improving resolution, dynamic range, color reproduction, and light field characteristics with advanced signal processing
IEEE SIGNAL PROCESSING MAGAZINE
2016; 33 (5): 119-129
View details for DOI 10.1109/MSP.2016.2569621
View details for Web of Science ID 000384016400012
- Computational Imaging with Multi-Camera Time-of-Flight Systems
ACM TRANSACTIONS ON GRAPHICS
2016; 35 (4)
View details for DOI 10.1145/2897824.2925928
View details for Web of Science ID 000380112400003
- ProxImaL: Efficient Image Optimization using Proximal Algorithms
ACM TRANSACTIONS ON GRAPHICS
2016; 35 (4)
View details for DOI 10.1145/2897824.2925875
View details for Web of Science ID 000380112400054
- Convolutional Sparse Coding for High Dynamic Range Imaging
COMPUTER GRAPHICS FORUM
2016; 35 (2): 153-163
View details for DOI 10.1111/cgf.12819
View details for Web of Science ID 000377222200015
- Tensor low-rank and sparse light field photography
COMPUTER VISION AND IMAGE UNDERSTANDING
2016; 145: 172-181
View details for DOI 10.1016/j.cviu.2015.11.004
View details for Web of Science ID 000372378200015
- 3D Displays
ANNUAL REVIEW OF VISION SCIENCE, VOL 2
2016; 2: 397-435
Abstract
Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes progress and what remains to be solved. A very active area of technical development has been to build displays that create the correct relationship between viewing parameters and triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences. And we highlight areas in which further research and development is needed.
View details for DOI 10.1146/annurev-vision-082114-035800
View details for Web of Science ID 000389589000018
- Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing
OPTICS EXPRESS
2015; 23 (25): 32573-32581
Abstract
Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly, requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging.
View details for DOI 10.1364/OE.23.032573
View details for Web of Science ID 000366687200093
View details for PubMedID 26699047
View details for PubMedCentralID PMC4775739
- Adaptive Color Display via Perceptually-driven Factored Spectral Projection
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (6)
View details for DOI 10.1145/2816795.2818070
View details for Web of Science ID 000363671200002
- Doppler Time-of-Flight Imaging
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (4)
View details for DOI 10.1145/2766953
View details for Web of Science ID 000358786600002
- The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (4)
View details for DOI 10.1145/2766922
View details for Web of Science ID 000358786600026
- Toward BxDF Display using Multilayer Diffraction
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (6)
View details for DOI 10.1145/2661229.2661246
View details for Web of Science ID 000345855600020
- Wide field of view compressive light field display using a multilayer architecture and tracked viewers
JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY
2014; 22 (10): 525-534
View details for DOI 10.1002/jsid.285
View details for Web of Science ID 000354201900005
- Attenuation-corrected fluorescence spectra unmixing for spectroscopy and microscopy
OPTICS EXPRESS
2014; 22 (16)
Abstract
In fluorescence measurements, light is often absorbed and scattered by a sample both for excitation and emission, resulting in the measured spectra being distorted. Conventional linear unmixing methods computationally separate overlapping spectra but do not account for these effects. We propose a new algorithm for fluorescence unmixing that accounts for the attenuation-related distortion effect on fluorescence spectra. Using a matrix representation, we derive forward measurement formation and a corresponding inverse method; the unmixing algorithm is based on nonnegative matrix factorization. We also demonstrate how this method can be extended to a higher-dimensional tensor form, which is useful for unmixing overlapping spectra observed under the attenuation effect in spectral imaging microscopy. We evaluate the proposed methods in simulation and experiments and show that they outperform a conventional, linear unmixing method when absorption and scattering contribute to the measured signals, as in deep tissue imaging.
View details for DOI 10.1364/OE.22.019469
View details for Web of Science ID 000340714100058
View details for PubMedID 25321030
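The nonnegative-matrix-factorization core of this unmixing method can be sketched as follows; the attenuation correction that is the paper's actual contribution is omitted, and the synthetic spectra, mixing weights, and update rule (plain Lee-Seung multiplicative updates) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
n_wl, n_fluor, n_px = 50, 2, 30

# Two synthetic emission spectra (Gaussian bumps) mixed with random
# nonnegative concentrations: Y = S @ C, measured per pixel.
wl = np.linspace(0, 1, n_wl)
S_true = np.stack(
    [np.exp(-0.5 * ((wl - m) / 0.06) ** 2) for m in (0.3, 0.7)], axis=1
)
C_true = rng.random((n_fluor, n_px))
Y = S_true @ C_true

# Plain NMF with multiplicative updates: recovers spectra S and
# concentrations C (up to scale/permutation) from the mixed data.
S = rng.random((n_wl, n_fluor))
C = rng.random((n_fluor, n_px))
for _ in range(500):
    C *= (S.T @ Y) / np.maximum(S.T @ S @ C, 1e-12)
    S *= (Y @ C.T) / np.maximum(S @ C @ C.T, 1e-12)

rel_err = np.linalg.norm(S @ C - Y) / np.linalg.norm(Y)
```

The multiplicative form keeps both factors nonnegative throughout; the paper wraps a matrix (and tensor) attenuation model around this factorization.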
- A Compressive Light Field Projection System
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (4)
View details for DOI 10.1145/2601097.2601144
View details for Web of Science ID 000340000100025
- Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy
NATURE METHODS
2014; 11 (7): 727-U161
Abstract
High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging.
View details for DOI 10.1038/NMETH.2964
View details for PubMedID 24836920
- Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (4)
View details for DOI 10.1145/2601097.2601122
View details for Web of Science ID 000340000100026
- Compressive multi-mode superresolution display
OPTICS EXPRESS
2014; 22 (12): 14981-14992
Abstract
Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Previously, research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high resolution image.
View details for DOI 10.1364/OE.22.014981
View details for Web of Science ID 000338044300090
View details for PubMedID 24977592
- Dual-coded compressive hyperspectral imaging
OPTICS LETTERS
2014; 39 (7): 2044-2047
Abstract
This Letter presents a new snapshot approach to hyperspectral imaging via dual-optical coding and compressive computational reconstruction. We demonstrate that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing. We show various applications by designing different optical modulation functions, including programmable spatially varying color filtering, multiplexed hyperspectral imaging, and high-resolution compressive hyperspectral imaging.
View details for DOI 10.1364/OL.39.002044
View details for Web of Science ID 000333887800086
View details for PubMedID 24686670
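A rough sketch of the dual-coding forward model may help make the abstract concrete. The mask counts, sizes, and the minimum-norm step below are assumptions for illustration, not the paper's reconstruction, which relies on stronger priors than shown here:

```python
import numpy as np

rng = np.random.default_rng(1)

H, W, L = 4, 4, 4                   # tiny spatial grid, 4 spectral bands
cube = rng.uniform(size=(H, W, L))  # hypothetical hyperspectral datacube

# Dual coding within one exposure: T sub-exposure code pairs, each with a
# binary mask conjugate to the image plane and a weight vector conjugate
# to the spectral plane. The sensor integrates everything into a single
# monochrome snapshot.
T = 8
M = rng.integers(0, 2, size=(T, H, W)).astype(float)  # image-plane codes
w = rng.uniform(size=(T, L))                          # spectral-plane codes

# Effective per-pixel sensing vector c[p] = sum_t M_t[p] * w_t, so the
# sensor reads y[p] = <c[p], cube[p]>.
C = np.einsum('thw,tl->hwl', M, w)
y = np.einsum('hwl,hwl->hw', C, cube)

# Minimum-norm per-pixel estimate: it explains the measurement exactly
# but cannot resolve the datacube on its own; real recovery adds
# sparsity or smoothness priors in postprocessing.
denom = (C ** 2).sum(axis=-1)
x_hat = C * (y / np.maximum(denom, 1e-12))[..., None]
```

The point of the sketch is that a single coded snapshot gives one equation per pixel for L spectral unknowns, which is why the computational reconstruction, not the optics alone, does the heavy lifting.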
-
Display adaptive 3D content remapping
COMPUTERS & GRAPHICS-UK
2013; 37 (8): 983-996
View details for DOI 10.1016/j.cag.2013.06.004
View details for Web of Science ID 000329541800006
-
A survey on computational displays: Pushing the boundaries of optics, computation, and perception
COMPUTERS & GRAPHICS-UK
2013; 37 (8): 1012-1038
View details for DOI 10.1016/j.cag.2013.10.003
View details for Web of Science ID 000329541800008
-
Focus 3D: Compressive Accommodation Display
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (5)
View details for DOI 10.1145/2503144
View details for Web of Science ID 000326922900007
-
Adaptive Image Synthesis for Compressive Displays
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (4)
View details for DOI 10.1145/2461912.2461925
View details for Web of Science ID 000321840100101
-
Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (4)
View details for DOI 10.1145/2461912.2461914
View details for Web of Science ID 000321840100015
-
Real-time Image Generation for Compressive Light Field Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012045
View details for Web of Science ID 000317123700045
-
Depth of Field Analysis for Multilayer Automultiscopic Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012036
View details for Web of Science ID 000317123700036
-
Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012071
View details for Web of Science ID 000317123700071
-
On Plenoptic Multiplexing and Reconstruction
INTERNATIONAL JOURNAL OF COMPUTER VISION
2013; 101 (2): 384-400
View details for DOI 10.1007/s11263-012-0585-9
View details for Web of Science ID 000314291600009
-
Compressive Light Field Displays
IEEE COMPUTER GRAPHICS AND APPLICATIONS
2012; 32 (5): 6-11
Abstract
Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct "optical" solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.
View details for Web of Science ID 000307910800003
View details for PubMedID 24806982
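The ray-superposition idea can be sketched as a low-rank nonnegative factorization of a toy two-plane light field. This assumes multiplicative layers with time multiplexing; the sizes and the classic Lee-Seung multiplicative updates are illustrative stand-ins, not the solvers used in the papers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-plane light field: rows index front-layer pixels, columns index
# rear-layer pixels; entry (a, b) is the radiance of the ray through both.
A, B, R = 12, 12, 3  # R = number of time-multiplexed frames (the rank)
L = rng.uniform(size=(A, 3)) @ rng.uniform(size=(3, B))  # nonneg rank-3 target

F = rng.uniform(0.1, 1.0, size=(A, R))  # front-layer patterns, one per frame
G = rng.uniform(0.1, 1.0, size=(R, B))  # rear-layer patterns, one per frame

# Lee-Seung multiplicative updates for nonnegative matrix factorization.
# The eye averages the R frames, so the perceived light field is F @ G;
# physical constraints (e.g. transmittance <= 1) are omitted here.
for _ in range(500):
    F *= (L @ G.T) / np.maximum(F @ (G @ G.T), 1e-12)
    G *= (F.T @ L) / np.maximum((F.T @ F) @ G, 1e-12)
```

With far fewer addressable patterns than rays, the factored layers still reproduce the target light field, which is the sense in which the display emits a compressed representation.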
-
Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting
ACM TRANSACTIONS ON GRAPHICS
2012; 31 (4)
View details for DOI 10.1145/2185520.2185576
View details for Web of Science ID 000308250300056
-
Compressive Light Field Photography
Special Interest Group on Computer Graphics and Interactive Techniques Conference (SIGGRAPH)
ASSOC COMPUTING MACHINERY. 2012
View details for DOI 10.1145/2343045.2343101
View details for Web of Science ID 000325066900041
-
Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs
ACM TRANSACTIONS ON GRAPHICS
2011; 30 (6)
View details for DOI 10.1145/2024156.2024220
View details for Web of Science ID 000297681100064
-
Computational Plenoptic Imaging
COMPUTER GRAPHICS FORUM
2011; 30 (8): 2397-2426
View details for DOI 10.1111/j.1467-8659.2011.02073.x
View details for Web of Science ID 000297317200020
-
Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays
ACM TRANSACTIONS ON GRAPHICS
2011; 30 (4)
View details for DOI 10.1145/1964921.1964990
View details for Web of Science ID 000297216400069