Gordon Wetzstein
Associate Professor of Electrical Engineering and, by courtesy, of Computer Science
Bio
Gordon Wetzstein is an Associate Professor of Electrical Engineering and, by courtesy, of Computer Science at Stanford University. He leads the Stanford Computational Imaging Lab and is a faculty co-director of the Stanford Center for Image Systems Engineering. His research lies at the intersection of computer graphics, computer vision, artificial intelligence, computational optics, and applied vision science, with applications in next-generation imaging, wearable computing, and neural rendering systems. Prof. Wetzstein is a Fellow of Optica and the recipient of numerous awards, including an NSF CAREER Award, an Alfred P. Sloan Fellowship, the ACM SIGGRAPH Significant New Researcher Award, a Presidential Early Career Award for Scientists and Engineers (PECASE), an SPIE Early Career Achievement Award, the IS&T Electronic Imaging Scientist of the Year Award, and the Alain Fournier Ph.D. Dissertation Award, as well as many best paper and demo awards.
Academic Appointments
- Associate Professor, Electrical Engineering
- Associate Professor (by courtesy), Computer Science
- Member, Bio-X
- Member, Wu Tsai Neurosciences Institute
Administrative Appointments
- Faculty Co-director, Stanford Center for Image Systems Engineering (SCIEN) (2017–present)
Honors & Awards
- Best Paper (Honorable Mention), ACM SIGGRAPH (2023)
- Distinguished Lecturer, IEEE Signal Processing Society (2023)
- Fellow, Optica (formerly OSA) (2023)
- Raymond C. Bowman Award, Society for Imaging Science and Technology (IS&T) (2023)
- Best Journal Paper, IEEE Virtual Reality Conference (2022)
- Early Career Achievement Award, International Society for Optics and Photonics (SPIE) (2020)
- Presidential Early Career Award for Scientists and Engineers (PECASE), The White House Office of Science and Technology Policy (2019)
- Best Student Paper (Emil Wolf Student Paper Prize), OSA Frontiers in Optics Conference (2018)
- SIGGRAPH Significant New Researcher Award, ACM (2018)
- Sloan Fellowship, Alfred P. Sloan Foundation (2018)
- Scientist of the Year Award, IS&T Electronic Imaging (2017)
- Best Paper (Honorable Mention), Eurographics (2016)
- CAREER Award, National Science Foundation (2016)
- Conference Best Paper for Industry Award, IEEE International Conference on Image Processing (ICIP) (2016)
- Okawa Research Grant, Okawa Foundation (2016)
- Google Faculty Research Award, Google (2015)
- Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2014)
- Terman Faculty Fellowship, Stanford University (2014)
- Postdoctoral Fellowship (PDF), Natural Sciences and Engineering Research Council of Canada (NSERC) (2012)
- Alain Fournier Ph.D. Dissertation Annual Award, Vancouver Foundation (2011)
- Best Paper Award, IEEE International Conference on Computational Photography (ICCP) (2011)
Program Affiliations
- Stanford SystemX Alliance
Professional Education
- Research Scientist, Massachusetts Institute of Technology, Media Lab, Media Arts and Sciences (2014)
- Ph.D., University of British Columbia, Computer Science (2011)
- Dipl., Bauhaus University, Media Systems Science (2006)
2024-25 Courses
- Computational Imaging: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
Independent Studies (16)
- Advanced Reading and Research: CS 499 (Aut, Win, Spr, Sum)
- Advanced Reading and Research: CS 499P (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390A (Aut, Win, Spr, Sum)
- Curricular Practical Training: CS 390B (Aut, Win, Spr, Sum)
- Independent Project: CS 399 (Aut, Win, Spr, Sum)
- Independent Project: CS 399P (Aut, Win, Spr, Sum)
- Master's Research: CME 291 (Aut, Win, Spr, Sum)
- Part-time Curricular Practical Training: CS 390D (Aut, Win, Spr, Sum)
- Senior Project: CS 191 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering: EE 191 (Aut, Sum)
- Special Studies and Reports in Electrical Engineering: EE 391 (Aut, Win, Spr, Sum)
- Special Studies and Reports in Electrical Engineering (WIM): EE 191W (Aut, Win, Spr, Sum)
- Special Studies or Projects in Electrical Engineering: EE 190 (Aut, Win, Spr, Sum)
- Special Studies or Projects in Electrical Engineering: EE 390 (Aut, Win, Spr, Sum)
- Supervised Undergraduate Research: CS 195 (Aut, Win, Spr, Sum)
- Writing Intensive Senior Research Project: CS 191W (Aut, Win, Spr)
Prior Year Courses
2023-24 Courses
- Computational Imaging: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
2022-23 Courses
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
2021-22 Courses
- Computational Imaging: CS 448I, EE 367 (Win)
- Seminar Series for Image Systems Engineering: EE 292E (Aut, Win, Spr)
- Virtual Reality: EE 267 (Spr)
- Virtual Reality (WIM): EE 267W (Spr)
Stanford Advisees
- Doctoral Dissertation Reader (AC): Silas Alberti, Honglin Chen, Bella Hofflich, Sreela Kodali, Axel Levy, Connor Lin, William Meng, Mark Nishimura, Colton Stearns, Thomas Teisberg, Qi Zhou, Yueming Zhuo
- Postdoctoral Faculty Sponsor: Suyeon Choi, Sara Fridovich-Keil, Gun-Yeal Lee, Tong Wu, Yinghao Xu
- Doctoral Dissertation Advisor (AC): Brian Chao, Manu Gopakumar, Ryan Po, Haley So, Qingqing Zhao
- Orals Evaluator: Honglin Chen, Connor Lin
- Master's Program Advisor: Pauline Arnoud, Jiayu Chang, Bryan Chiang, Maximilian Drach, Alex Gilbert, Anita Lu, Youjin Song
- Doctoral Dissertation Co-Advisor (AC): Zhengfei Kuang, Yang Zheng
- Doctoral (Program): Shengqu Cai, Eric Chan, Brian Chao, Jasmine Cheng, Boyang Deng, Manu Gopakumar, Nathan Jensen, Ryan Po, Jay Shenoy, Thomas Teisberg, Kailas Vodrahalli
- Postdoctoral Research Mentor: Guandao Yang
All Publications
- Inference in artificial intelligence with deep optics and photonics.
Nature
2020; 588 (7836): 39–47
Abstract
Artificial intelligence tasks across numerous applications require accelerators for fast and low-power execution. Optical computing systems may be able to meet these domain-specific needs but, despite half a century of research, general-purpose optical computing systems have yet to mature into a practical technology. Artificial intelligence inference, however, especially for visual computing applications, may offer opportunities for inference based on optical and photonic systems. In this Perspective, we review recent work on optical computing for artificial intelligence applications and discuss its promise and challenges.
View details for DOI 10.1038/s41586-020-2973-6
View details for PubMedID 33268862
- Neural Holography with Camera-in-the-loop Training
ACM TRANSACTIONS ON GRAPHICS
2020; 39 (6)
View details for DOI 10.1145/3414685.3417802
View details for Web of Science ID 000595589100025
- Autofocals: Evaluating gaze-contingent eyeglasses for presbyopes.
Science advances
2019; 5 (6): eaav6187
Abstract
As humans age, they gradually lose the ability to accommodate, or refocus, to near distances because of the stiffening of the crystalline lens. This condition, known as presbyopia, affects nearly 20% of people worldwide. We design and build a new presbyopia correction, autofocals, to externally mimic the natural accommodation response, combining eye tracker and depth sensor data to automatically drive focus-tunable lenses. We evaluated 19 users on visual acuity, contrast sensitivity, and a refocusing task. Autofocals exhibit better visual acuity when compared to monovision and progressive lenses while maintaining similar contrast sensitivity. On the refocusing task, autofocals are faster and, compared to progressives, also significantly more accurate. In a separate study, a majority of 23 of 37 users ranked autofocals as the best correction in terms of ease of refocusing. Our work demonstrates the superiority of autofocals over current forms of presbyopia correction and could affect the lives of millions.
View details for DOI 10.1126/sciadv.aav6187
View details for PubMedID 31259239
- Confocal non-line-of-sight imaging based on the light-cone transform
NATURE
2018; 555 (7696): 338–41
Abstract
How to image objects that are hidden from a camera's view is a problem of fundamental importance to many fields of research, with applications in robotic vision, defence, remote sensing, medical imaging and autonomous vehicles. Non-line-of-sight (NLOS) imaging at macroscopic scales has been demonstrated by scanning a visible surface with a pulsed laser and a time-resolved detector. Whereas light detection and ranging (LIDAR) systems use such measurements to recover the shape of visible objects from direct reflections, NLOS imaging reconstructs the shape and albedo of hidden objects from multiply scattered light. Despite recent advances, NLOS imaging has remained impractical owing to the prohibitive memory and processing requirements of existing reconstruction algorithms, and the extremely weak signal of multiply scattered light. Here we show that a confocal scanning procedure can address these challenges by facilitating the derivation of the light-cone transform to solve the NLOS reconstruction problem. This method requires much smaller computational and memory resources than previous reconstruction methods do and images hidden objects at unprecedented resolution. Confocal scanning also provides a sizeable increase in signal and range when imaging retroreflective objects. We quantify the resolution bounds of NLOS imaging, demonstrate its potential for real-time tracking and derive efficient algorithms that incorporate image priors and a physically accurate noise model. Additionally, we describe successful outdoor experiments of NLOS imaging under indirect sunlight.
View details for PubMedID 29513650
- A Perceptual Model for Eccentricity-dependent Spatio-temporal Flicker Fusion and its Applications to Foveated Graphics
ACM TRANSACTIONS ON GRAPHICS
2021; 40 (4)
View details for DOI 10.1145/3450626.3459784
View details for Web of Science ID 000674930900014
- Acorn: Adaptive Coordinate Networks for Neural Scene Representation
ACM TRANSACTIONS ON GRAPHICS
2021; 40 (4)
View details for DOI 10.1145/3450626.3459785
View details for Web of Science ID 000674930900025
- Event-Based Near-Eye Gaze Tracking Beyond 10,000 Hz
IEEE COMPUTER SOC. 2021: 2577-2586
Abstract
The cameras in modern gaze-tracking systems suffer from fundamental bandwidth and power limitations, constraining data acquisition speed to 300 Hz realistically. This obstructs the use of mobile eye trackers to perform, e.g., low latency predictive rendering, or to study quick and subtle eye motions like microsaccades using head-mounted devices in the wild. Here, we propose a hybrid frame-event-based near-eye gaze tracking system offering update rates beyond 10,000 Hz with an accuracy that matches that of high-end desktop-mounted commercial trackers when evaluated in the same conditions. Our system, previewed in Figure 1, builds on emerging event cameras that simultaneously acquire regularly sampled frames and adaptively sampled events. We develop an online 2D pupil fitting method that updates a parametric model every one or few events. Moreover, we propose a polynomial regressor for estimating the point of gaze from the parametric pupil model in real time. Using the first event-based gaze dataset, we demonstrate that our system achieves accuracies of 0.45°-1.75° for fields of view from 45° to 98°. With this technology, we hope to enable a new generation of ultra-low-latency gaze-contingent rendering and display techniques for virtual and augmented reality.
View details for DOI 10.1109/TVCG.2021.3067784
View details for Web of Science ID 000641972200008
View details for PubMedID 33780340
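The gaze-tracking abstract above mentions a polynomial regressor that maps a parametric pupil model to the point of gaze. The sketch below illustrates that general idea only, with synthetic data, a hypothetical quadratic feature basis, and a 2D pupil center as the sole input feature; it is not the authors' implementation.

```python
import numpy as np

def poly_features(xy):
    """Quadratic polynomial features of 2D pupil-center estimates (a common
    basis choice for gaze regression; the paper's exact basis may differ)."""
    x, y = xy[:, 0], xy[:, 1]
    return np.stack([np.ones_like(x), x, y, x * y, x**2, y**2], axis=1)

def fit_gaze_regressor(pupil_xy, gaze_xy):
    """Least-squares fit of polynomial coefficients, one column per gaze axis."""
    A = poly_features(pupil_xy)
    coeffs, *_ = np.linalg.lstsq(A, gaze_xy, rcond=None)
    return coeffs  # shape (6, 2)

def predict_gaze(coeffs, pupil_xy):
    return poly_features(pupil_xy) @ coeffs

# Synthetic calibration: gaze is a known quadratic function of pupil position.
rng = np.random.default_rng(0)
pupil = rng.uniform(-1, 1, size=(100, 2))
true_coeffs = rng.normal(size=(6, 2))
gaze = poly_features(pupil) @ true_coeffs

coeffs = fit_gaze_regressor(pupil, gaze)
assert np.allclose(predict_gaze(coeffs, pupil), gaze, atol=1e-8)
```

In a real calibration, `gaze` would come from the user fixating known on-screen targets; the closed-form least-squares fit is what makes such a regressor cheap enough to update online.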
- Optimizing image quality for holographic near-eye displays with Michelson Holography
OPTICA
2021; 8 (2): 143–46
View details for DOI 10.1364/OPTICA.410622
View details for Web of Science ID 000621094900004
- Keyhole Imaging: Non-Line-of-Sight Imaging and Tracking of Moving Objects Along a Single Optical Path
IEEE TRANSACTIONS ON COMPUTATIONAL IMAGING
2021; 7: 1–12
View details for DOI 10.1109/TCI.2020.3046472
View details for Web of Science ID 000607372700001
- Optimizing Depth Perception in Virtual and Augmented Reality through Gaze-contingent Stereo Rendering
ACM TRANSACTIONS ON GRAPHICS
2020; 39 (6)
View details for DOI 10.1145/3414685.3417820
View details for Web of Science ID 000595589100109
- Neural Light Field 3D Printing
ACM TRANSACTIONS ON GRAPHICS
2020; 39 (6)
View details for DOI 10.1145/3414685.3417879
View details for Web of Science ID 000595589100047
-
Toward the next-generation VR/AR optics: a review of holographic near-eye displays from a human-centric perspective
Optica
2020; 7 (11): 1563–78
Abstract
Wearable near-eye displays for virtual and augmented reality (VR/AR) have seen enormous growth in recent years. While researchers are exploiting a plethora of techniques to create life-like three-dimensional (3D) objects, there is a lack of awareness of the role of human perception in guiding the hardware development. An ultimate VR/AR headset must integrate the display, sensors, and processors in a compact enclosure that people can comfortably wear for a long time while allowing a superior immersion experience and user-friendly human-computer interaction. Compared with other 3D displays, the holographic display has unique advantages in providing natural depth cues and correcting eye aberrations. Therefore, it holds great promise to be the enabling technology for next-generation VR/AR devices. In this review, we survey the recent progress in holographic near-eye displays from the human-centric perspective.
View details for DOI 10.1364/OPTICA.406004
View details for Web of Science ID 000593180100001
View details for PubMedID 34141829
View details for PubMedCentralID PMC8208705
- Roadmap on 3D integral imaging: sensing, processing, and display
OPTICS EXPRESS
2020; 28 (22): 32266–93
Abstract
This Roadmap article on three-dimensional integral imaging provides an overview of some of the research activities in the field of integral imaging. The article discusses various aspects of the field including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from the experts presenting various aspects of the field on sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section represents the vision of its author to describe the progress, potential, vision, and challenging issues in this field.
View details for DOI 10.1364/OE.402193
View details for Web of Science ID 000582499400003
View details for PubMedID 33114917
- Learned rotationally symmetric diffractive achromat for full-spectrum computational imaging
OPTICA
2020; 7 (8): 913–22
View details for DOI 10.1364/OPTICA.394413
View details for Web of Science ID 000564176000008
- Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing With Programmable Sensors.
IEEE transactions on pattern analysis and machine intelligence
2020; 42 (7): 1642–53
Abstract
Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors' ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we evaluate its performance for snapshot HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.
View details for DOI 10.1109/TPAMI.2020.2986944
View details for PubMedID 32305899
- Optically sensing neural activity without imaging
NATURE PHOTONICS
2020; 14 (6): 340–41
View details for DOI 10.1038/s41566-020-0642-9
View details for Web of Science ID 000536360800003
- Non-line-of-sight imaging
NATURE REVIEWS PHYSICS
2020
View details for DOI 10.1038/s42254-020-0174-8
View details for Web of Science ID 000538176800001
- SPADnet: deep RGB-SPAD sensor fusion assisted by monocular depth estimation
OPTICS EXPRESS
2020; 28 (10): 14948–62
Abstract
Single-photon light detection and ranging (LiDAR) techniques use emerging single-photon detectors (SPADs) to push 3D imaging capabilities to unprecedented ranges. However, it remains challenging to robustly estimate scene depth from the noisy and otherwise corrupted measurements recorded by a SPAD. Here, we propose a deep sensor fusion strategy that combines corrupted SPAD data and a conventional 2D image to estimate the depth of a scene. Our primary contribution is a neural network architecture, SPADnet, that uses a monocular depth estimation algorithm together with a SPAD denoising and sensor fusion strategy. This architecture, together with several techniques in network training, achieves state-of-the-art results for RGB-SPAD fusion with simulated and captured data. Moreover, SPADnet is more computationally efficient than previous RGB-SPAD fusion networks.
View details for DOI 10.1364/OE.392386
View details for Web of Science ID 000538870000067
View details for PubMedID 32403527
- Factored Occlusion: Single Spatial Light Modulator Occlusion-capable Optical See-through Augmented Reality Display
IEEE COMPUTER SOC. 2020: 1871–79
Abstract
Occlusion is a powerful visual cue that is crucial for depth perception and realism in optical see-through augmented reality (OST-AR). However, existing OST-AR systems additively overlay physical and digital content with beam combiners - an approach that does not easily support mutual occlusion, resulting in virtual objects that appear semi-transparent and unrealistic. In this work, we propose a new type of occlusion-capable OST-AR system. Rather than additively combining the real and virtual worlds, we employ a single digital micromirror device (DMD) to merge the respective light paths in a multiplicative manner. This unique approach allows us to simultaneously block light incident from the physical scene on a pixel-by-pixel basis while also modulating the light emitted by a light-emitting diode (LED) to display digital content. Our technique builds on mixed binary/continuous factorization algorithms to optimize time-multiplexed binary DMD patterns and their corresponding LED colors to approximate a target augmented reality (AR) scene. In simulations and with a prototype benchtop display, we demonstrate hard-edge occlusions, plausible shadows, and also gaze-contingent optimization of this novel display mode, which only requires a single spatial light modulator.
View details for DOI 10.1109/TVCG.2020.2973443
View details for Web of Science ID 000523746000006
View details for PubMedID 32070978
- Gaze-Contingent Ocular Parallax Rendering for Virtual Reality
ACM TRANSACTIONS ON GRAPHICS
2020; 39 (2)
View details for DOI 10.1145/3361330
View details for Web of Science ID 000583691000002
- Deep Optics: Learning Cameras and Optical Computing Systems
IEEE. 2020: 1313-1315
View details for DOI 10.1109/IEEECONF51394.2020.9443575
View details for Web of Science ID 000681731800251
- Neural Holography
ASSOC COMPUTING MACHINERY. 2020
View details for DOI 10.1145/3388534.3407295
View details for Web of Science ID 000684182700011
- Semantic Implicit Neural Scene Representations With Semi-Supervised Training
IEEE. 2020: 423-433
View details for DOI 10.1109/3DV50981.2020.00052
View details for Web of Science ID 000653085200043
- Deep Optics for Single-shot High-dynamic-range Imaging
IEEE. 2020: 1372–82
View details for DOI 10.1109/CVPR42600.2020.00145
View details for Web of Science ID 000620679501062
- Non-line-of-sight Surface Reconstruction Using the Directional Light-cone Transform
IEEE. 2020: 1404–13
View details for DOI 10.1109/CVPR42600.2020.00148
View details for Web of Science ID 000620679501065
- Comparison of head pose tracking methods for mixed-reality neuronavigation for transcranial magnetic stimulation
SPIE Medical Imaging
2020
View details for DOI 10.1117/12.2547917
- Deep Adaptive LiDAR: End-to-end Optimization of Sampling and Depth Completion at Low Sampling Rates
IEEE. 2020
View details for Web of Science ID 000589708300015
- Three-dimensional imaging through scattering media based on confocal diffuse tomography.
Nature communications
2020; 11 (1): 4517
Abstract
Optical imaging techniques, such as light detection and ranging (LiDAR), are essential tools in remote sensing, robotic vision, and autonomous driving. However, the presence of scattering places fundamental limits on our ability to image through fog, rain, dust, or the atmosphere. Conventional approaches for imaging through scattering media operate at microscopic scales or require a priori knowledge of the target location for 3D imaging. We introduce a technique that co-designs single-photon avalanche diodes, ultra-fast pulsed lasers, and a new inverse method to capture 3D shape through scattering media. We demonstrate acquisition of shape and position for objects hidden behind a thick diffuser (≈6 transport mean free paths) at macroscopic scales. Our technique, confocal diffuse tomography, may be of considerable value to the aforementioned applications.
View details for DOI 10.1038/s41467-020-18346-3
View details for PubMedID 32908155
- Cortical Observation by Synchronous Multifocal Optical Sampling Reveals Widespread Population Encoding of Actions.
Neuron
2020
Abstract
To advance the measurement of distributed neuronal population representations of targeted motor actions on single trials, we developed an optical method (COSMOS) for tracking neural activity in a largely uncharacterized spatiotemporal regime. COSMOS allowed simultaneous recording of neural dynamics at ∼30 Hz from over a thousand near-cellular resolution neuronal sources spread across the entire dorsal neocortex of awake, behaving mice during a three-option lick-to-target task. We identified spatially distributed neuronal population representations spanning the dorsal cortex that precisely encoded ongoing motor actions on single trials. Neuronal correlations measured at video rate using unaveraged, whole-session data had localized spatial structure, whereas trial-averaged data exhibited widespread correlations. Separable modes of neural activity encoded history-guided motor plans, with similar population dynamics in individual areas throughout cortex. These initial experiments illustrate how COSMOS enables investigation of large-scale cortical dynamics and that information about motor actions is widely shared between areas, potentially underlying distributed computations.
View details for DOI 10.1016/j.neuron.2020.04.023
View details for PubMedID 32433908
- Panoramic single-aperture multi-sensor light field camera
OPTICS EXPRESS
2019; 27 (26): 37257–73
Abstract
We describe a panoramic camera using one monocentric lens and an array of light field (LF) sensors to capture overlapping contiguous regions of the spherical image surface. Refractive sub-field consolidators divide the light before the image surface and concentrate the sub-images onto the optically active areas of adjacent CMOS sensors. We show the design of a 160° × 24° field-of-view (FOV) LF camera, and experimental test of a three sensor F/2.5 96° × 24° and five sensor (25 MPixel) F/4 140° × 24° camera. We demonstrate computational field curvature correction, refocusing, resolution enhancement, and depth mapping of a laboratory scene. We also present a 155° full circular field camera design compatible with LF or direct 164 MPixel sensing of 13 spherical sub-images, fitting within a one inch diameter sphere.
View details for DOI 10.1364/OE.27.037257
View details for Web of Science ID 000507254300014
View details for PubMedID 31878509
- Varifocal Occlusion-Capable Optical See-through Augmented Reality Display based on Focus-tunable Optics
IEEE COMPUTER SOC. 2019: 3125–34
Abstract
Optical see-through augmented reality (AR) systems are a next-generation computing platform that offer unprecedented user experiences by seamlessly combining physical and digital content. Many of the traditional challenges of these displays have been significantly improved over the last few years, but AR experiences offered by today's systems are far from seamless and perceptually realistic. Mutually consistent occlusions between physical and digital objects are typically not supported. When mutual occlusion is supported, it is only supported for a fixed depth. We propose a new optical see-through AR display system that renders mutual occlusion in a depth-dependent, perceptually realistic manner. To this end, we introduce varifocal occlusion displays based on focus-tunable optics, which comprise a varifocal lens system and spatial light modulators that enable depth-corrected hard-edge occlusions for AR experiences. We derive formal optimization methods and closed-form solutions for driving this tunable lens system and demonstrate a monocular varifocal occlusion-capable optical see-through AR display capable of perceptually realistic occlusion across a large depth range.
View details for DOI 10.1109/TVCG.2019.2933120
View details for Web of Science ID 000489833000010
View details for PubMedID 31502977
- Holographic Near-Eye Displays Based on Overlap-Add Stereograms
ACM TRANSACTIONS ON GRAPHICS
2019; 38 (6)
View details for DOI 10.1145/3355089.3356517
View details for Web of Science ID 000498397300063
- Learned Large Field-of-View Imaging With Thin-Plate Optics
ACM TRANSACTIONS ON GRAPHICS
2019; 38 (6)
View details for DOI 10.1145/3355089.3356526
View details for Web of Science ID 000498397300068
- Preface
COMPUTER GRAPHICS FORUM
2019; 38 (7)
View details for Web of Science ID 000496351100070
- Wave-Based Non-Line-of-Sight Imaging using Fast f-k Migration
ACM TRANSACTIONS ON GRAPHICS
2019; 38 (4)
View details for DOI 10.1145/3306346.3322937
View details for Web of Science ID 000475740600090
- Non-line-of-sight Imaging with Partial Occluders and Surface Normals
ACM TRANSACTIONS ON GRAPHICS
2019; 38 (3)
View details for DOI 10.1145/3269977
View details for Web of Science ID 000495415600004
-
A Light-Field Metasurface for High-Resolution Single-Particle Tracking
Nano Letters
2019; 19 (4): 2267–71
Abstract
Three-dimensional (3D) single-particle tracking (SPT) is a key tool for studying dynamic processes in the life sciences. However, conventional optical elements utilizing light fields impose an inherent trade-off between lateral and axial resolution, preventing SPT with high spatiotemporal resolution across an extended volume. We overcome the typical loss in spatial resolution that accompanies light-field-based approaches to obtain 3D information by placing a standard microscope coverslip patterned with a multifunctional, light-field metasurface on a specimen. This approach enables an otherwise unmodified microscope to gather 3D information at an enhanced spatial resolution. We demonstrate simultaneous tracking of multiple fluorescent particles within a large 0.5 × 0.5 × 0.3 mm³ volume using a standard epi-fluorescent microscope with submicron lateral and micron-level axial resolution.
View details for DOI 10.1021/acs.nanolett.8b04673
View details for Web of Science ID 000464769100010
View details for PubMedID 30897902
- Scene Representation Networks: Continuous 3D-Structure-Aware Neural Scene Representations
NEURAL INFORMATION PROCESSING SYSTEMS (NIPS). 2019
View details for Web of Science ID 000534424301015
- Deep Optics for Monocular Depth Estimation and 3D Object Detection
IEEE. 2019: 10192–201
View details for DOI 10.1109/ICCV.2019.01029
View details for Web of Science ID 000548549205031
- Acoustic Non-Line-of-Sight Imaging
IEEE. 2019: 6773–6782
View details for DOI 10.1109/CVPR.2019.00694
View details for Web of Science ID 000542649300024
- LiFF: Light Field Features in Scale and Depth
IEEE. 2019: 8034–43
View details for DOI 10.1109/CVPR.2019.00823
View details for Web of Science ID 000542649301065
- DeepVoxels: Learning Persistent 3D Feature Embeddings
IEEE COMPUTER SOC. 2019: 2432–41
View details for DOI 10.1109/CVPR.2019.00254
View details for Web of Science ID 000529484002061
-
Sub-picosecond photon-efficient 3D imaging using single-photon sensors
Scientific Reports
2018; 8 (1): 17726
Abstract
Active 3D imaging systems have broad applications across disciplines, including biological imaging, remote sensing and robotics. Applications in these domains require fast acquisition times, high timing accuracy, and high detection sensitivity. Single-photon avalanche diodes (SPADs) have emerged as one of the most promising detector technologies to achieve all of these requirements. However, these detectors are plagued by measurement distortions known as pileup, which fundamentally limit their precision. In this work, we develop a probabilistic image formation model that accurately models pileup. We devise inverse methods to efficiently and robustly estimate scene depth and reflectance from recorded photon counts using the proposed model along with statistical priors. With this algorithm, we not only demonstrate improvements to timing accuracy by more than an order of magnitude compared to the state-of-the-art, but our approach is also the first to facilitate sub-picosecond-accurate, photon-efficient 3D imaging in practical scenarios where widely-varying photon counts are observed.
View details for DOI 10.1038/s41598-018-35212-x
View details for Web of Science ID 000452635000028
View details for PubMedID 30531961
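The pileup distortion this abstract describes can be illustrated with the textbook first-photon model for a detector that records at most one photon per laser pulse: a photon in time bin i is recorded only if no photon arrived in any earlier bin. The sketch below uses that standard simplification, not the paper's full probabilistic image formation model.

```python
import numpy as np

def first_photon_pmf(rates):
    """Probability that the first detected photon of a pulse falls in each
    time bin, given expected photon counts per bin (classic pileup model)."""
    rates = np.asarray(rates, dtype=float)
    # Probability that no photon arrived in any earlier bin.
    survival = np.exp(-np.concatenate([[0.0], np.cumsum(rates)[:-1]]))
    return survival * (1.0 - np.exp(-rates))

# A Gaussian return pulse centred on time bin 50.
bins = np.arange(100)
pulse = np.exp(-0.5 * ((bins - 50) / 3.0) ** 2)

low = first_photon_pmf(0.01 * pulse)   # photon-starved: histogram ~ pulse shape
high = first_photon_pmf(5.0 * pulse)   # high flux: pileup skews detections early

assert abs(int(np.argmax(low)) - 50) <= 1   # low flux preserves the peak
assert np.argmax(high) < 50                 # pileup biases the peak earlier
```

The early-skewed histogram is exactly the distortion that would corrupt a naive peak-finding depth estimate, which is why a pileup-aware inverse model helps at high photon counts.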
-
Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification.
Scientific reports
2018; 8 (1): 12324
Abstract
Convolutional neural networks (CNNs) excel in a wide variety of computer vision applications, but their high performance also comes at a high computational cost. Despite efforts to increase efficiency both algorithmically and with specialized hardware, it remains difficult to deploy CNNs in embedded systems due to tight power budgets. Here we explore a complementary strategy that incorporates a layer of optical computing prior to electronic computing, improving performance on image classification tasks while adding minimal electronic computational cost or processing time. We propose a design for an optical convolutional layer based on an optimized diffractive optical element and test our design in two simulations: a learned optical correlator and an optoelectronic two-layer CNN. We demonstrate in simulation and with an optical prototype that the classification accuracies of our optical systems rival those of the analogous electronic implementations, while providing substantial savings on computational cost.
View details for PubMedID 30120316
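The optical convolutional layer amounts to a multiplication in the Fourier plane of a 4f system; numerically, that is an FFT-domain product. A minimal sketch (which ignores the physical constraints, such as nonnegative intensity PSFs, that the paper's optimized diffractive element must respect):

```python
import numpy as np
from numpy.fft import fft2, ifft2

rng = np.random.default_rng(1)
img = rng.random((32, 32))          # input scene intensity
psf = np.zeros((32, 32))
psf[:3, :3] = rng.random((3, 3))    # convolution kernel, zero-padded to image size

# A 4f correlator multiplies the field by a mask in the Fourier plane, which is
# equivalent to a circular convolution with the mask's inverse transform.
optical = np.real(ifft2(fft2(img) * fft2(psf)))

# Reference: direct circular convolution with the same 3x3 kernel.
direct = sum(psf[i, j] * np.roll(img, (i, j), axis=(0, 1))
             for i in range(3) for j in range(3))

print(np.max(np.abs(optical - direct)))  # agrees to machine precision
```

The Fourier-plane product and the direct shift-and-add convolution produce identical results, which is the mathematical identity the optical layer exploits.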
-
Hybrid optical-electronic convolutional neural networks with optimized diffractive optics for image classification
SCIENTIFIC REPORTS
2018; 8
View details for DOI 10.1038/s41598-018-30619-y
View details for Web of Science ID 000441876700027
-
End-to-end Optimization of Optics and Image Processing for Achromatic Extended Depth of Field and Super-resolution Imaging
ACM TRANSACTIONS ON GRAPHICS
2018; 37 (4)
View details for DOI 10.1145/3197517.3201333
View details for Web of Science ID 000448185000075
-
Single-Photon 3D Imaging with Deep Sensor Fusion
ACM TRANSACTIONS ON GRAPHICS
2018; 37 (4)
View details for DOI 10.1145/3197517.3201316
View details for Web of Science ID 000448185000074
-
A convex 3D deconvolution algorithm for low photon count fluorescence imaging.
Scientific reports
2018; 8 (1): 11489
Abstract
Deconvolution is widely used to improve the contrast and clarity of a 3D focal stack collected using a fluorescence microscope. But despite being extensively studied, deconvolution algorithms can introduce reconstruction artifacts when their underlying noise models or priors are violated, such as when imaging biological specimens at extremely low light levels. In this paper we propose a deconvolution method specifically designed for 3D fluorescence imaging of biological samples in the low-light regime. Our method utilizes a mixed Poisson-Gaussian model of photon shot noise and camera read noise, which are both present in low-light imaging. We formulate a convex loss function and solve the resulting optimization problem using the alternating direction method of multipliers algorithm. Among several possible regularization strategies, we show that a Hessian-based regularizer is most effective for describing locally smooth features present in biological specimens. Our algorithm also estimates noise parameters on the fly, thereby eliminating a manual calibration step required by most deconvolution software. We demonstrate our algorithm on simulated images and experimentally captured images with peak intensities of tens of photoelectrons per voxel. We also demonstrate its performance for live cell imaging, showing its applicability as a tool for biological research.
View details for PubMedID 30065270
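The paper derives its own convex loss; as a hedged stand-in, the widely used shifted-Poisson approximation to mixed Poisson-Gaussian noise also yields a convex negative log-likelihood and conveys the idea:

```python
import numpy as np

def shifted_poisson_nll(x, y, sigma):
    """Convex surrogate for mixed Poisson-Gaussian noise: treat y + sigma^2 as
    Poisson with mean x + sigma^2 (x >= 0 is the latent clean signal)."""
    m = x + sigma ** 2
    return np.sum(m - (y + sigma ** 2) * np.log(m))

rng = np.random.default_rng(2)
sigma = 2.0                                           # camera read-noise std (e-)
x_true = 20.0 * rng.random(1000)                      # tens of photoelectrons per voxel
y = rng.poisson(x_true) + sigma * rng.standard_normal(1000)  # shot + read noise

# The loss is minimized near the true signal level and grows away from it.
scales = np.linspace(0.5, 1.5, 11)
losses = [shifted_poisson_nll(s * x_true, y, sigma) for s in scales]
print(scales[int(np.argmin(losses))])                 # close to 1.0
```

This is only an illustration of convexity in the mixed-noise regime; the paper's loss, Hessian regularizer, and ADMM solver are not reproduced here.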
-
A convex 3D deconvolution algorithm for low photon count fluorescence imaging
SCIENTIFIC REPORTS
2018; 8
View details for DOI 10.1038/s41598-018-29768-x
View details for Web of Science ID 000440288600023
-
Towards a Machine-learning Approach for Sickness Prediction in 360° Stereoscopic Videos
IEEE COMPUTER SOC. 2018: 1594–1603
Abstract
Virtual reality systems are widely believed to be the next major computing platform. There are, however, some barriers to adoption that must be addressed, such as that of motion sickness, which can lead to undesirable symptoms including postural instability, headaches, and nausea. Motion sickness in virtual reality occurs as a result of moving visual stimuli that cause users to perceive self-motion while they remain stationary in the real world. There are several contributing factors to both this perception of motion and the subsequent onset of sickness, including field of view, motion velocity, and stimulus depth. We verify first that differences in vection due to relative stimulus depth remain correlated with sickness. Then, we build a dataset of stereoscopic 3D videos and their corresponding sickness ratings in order to quantify their nauseogenicity, which we make available for future use. Using this dataset, we train a machine learning algorithm on hand-crafted features (quantifying speed, direction, and depth as functions of time) from each video, learning the contributions of these various features to the sickness ratings. Our predictor generally outperforms a naïve estimate, but is ultimately limited by the size of the dataset. However, our result is promising and opens the door to future work with more extensive datasets. This and further advances in this space have the potential to alleviate developer and end user concerns about motion sickness in the increasingly commonplace virtual world.
View details for DOI 10.1109/TVCG.2018.2793560
View details for Web of Science ID 000427682500022
View details for PubMedID 29553929
-
Convolutional Sparse Coding for RGB+NIR Imaging
IEEE TRANSACTIONS ON IMAGE PROCESSING
2018; 27 (4): 1611–25
Abstract
Emerging sensor designs increasingly rely on novel color filter arrays (CFAs) to sample the incident spectrum in unconventional ways. In particular, capturing a near-infrared (NIR) channel along with conventional RGB color is an exciting new imaging modality. RGB+NIR sensing has broad applications in computational photography, such as low-light denoising; in computer vision, such as facial recognition and tracking; and it paves the way toward low-cost single-sensor RGB and depth imaging using structured illumination. However, cost-effective commercial CFAs suffer from severe spectral cross talk. This cross talk represents a major challenge in high-quality RGB+NIR imaging, rendering existing spatially multiplexed sensor designs impractical. In this work, we introduce a new approach to RGB+NIR image reconstruction using learned convolutional sparse priors. We demonstrate high-quality color and NIR imaging for challenging scenes, even including high-frequency structured NIR illumination. The effectiveness of the proposed method is validated on a large dataset of experimental captures and on simulated benchmarks, which demonstrate that this approach achieves unprecedented reconstruction quality.
View details for DOI 10.1109/TIP.2017.2781303
View details for Web of Science ID 000429463800005
View details for PubMedID 29324415
-
Saliency in VR: How do people explore virtual environments?
IEEE COMPUTER SOC. 2018: 1633–42
Abstract
Understanding how people explore immersive virtual environments is crucial for many applications, such as designing virtual reality (VR) content, developing new compression algorithms, or learning computational models of saliency or visual attention. Whereas a body of recent work has focused on modeling saliency in desktop viewing conditions, VR is very different from these conditions in that viewing behavior is governed by stereoscopic vision and by the complex interaction of head orientation, gaze, and other kinematic constraints. To further our understanding of viewing behavior and saliency in VR, we capture and analyze gaze and head orientation data of 169 users exploring stereoscopic, static omni-directional panoramas, for a total of 1980 head and gaze trajectories for three different viewing conditions. We provide a thorough analysis of our data, which leads to several important insights, such as the existence of a particular fixation bias, which we then use to adapt existing saliency predictors to immersive VR conditions. In addition, we explore other applications of our data and analysis, including automatic alignment of VR video cuts, panorama thumbnails, panorama video synopsis, and saliency-based compression.
View details for DOI 10.1109/TVCG.2018.2793599
View details for Web of Science ID 000427682500026
View details for PubMedID 29553930
-
Single-shot speckle correlation fluorescence microscopy in thick scattering tissue with image reconstruction priors
JOURNAL OF BIOPHOTONICS
2018; 11 (3)
View details for DOI 10.1002/jbio.201700224
View details for Web of Science ID 000426731000028
-
An Easy-to-Use Pipeline for an RGBD Camera and an AR Headset
PRESENCE-VIRTUAL AND AUGMENTED REALITY
2018; 27 (2): 202-205
View details for DOI 10.1162/PRES_a_00326
View details for Web of Science ID 000568215900003
-
Single-shot speckle correlation fluorescence microscopy in thick scattering tissue with image reconstruction priors.
Journal of biophotonics
2018; 11 (3)
Abstract
Deep tissue imaging in the multiple scattering regime remains at the frontier of fluorescence microscopy. Speckle correlation imaging (SCI) can computationally uncover objects hidden behind a scattering layer, but has only been demonstrated with scattered laser illumination and in geometries where the scatterer is in the far field of the target object. Here, SCI is extended to imaging a planar fluorescent signal at the back surface of a 500-μm-thick slice of mouse brain. The object is reconstructed from a single snapshot through phase retrieval using a proximal algorithm that easily incorporates image priors. Simulations and experiments demonstrate improved image recovery with this approach compared to the conventional SCI algorithm.
View details for PubMedID 29219256
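The speckle correlation principle (under the memory effect, the image autocorrelation approximates the object autocorrelation) can be sketched numerically. The toy below substitutes i.i.d. random patterns for real speckle and averages realizations for statistical stability, whereas the paper reconstructs from a single snapshot via prior-regularized phase retrieval:

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift

rng = np.random.default_rng(3)
n = 64
obj = np.zeros((n, n))
obj[30:34, 28:36] = 1.0  # hidden planar fluorescent object

def autocorr(x):
    # Autocorrelation via the Wiener-Khinchin theorem, centered and normalized.
    x = x - x.mean()
    a = fftshift(np.real(ifft2(np.abs(fft2(x)) ** 2)))
    return a / a.max()

# The camera sees obj circularly convolved with a (toy) speckle pattern. Because
# the pattern is delta-correlated on average, the image autocorrelation
# approximates the object autocorrelation.
spec = np.zeros((n, n))
for _ in range(200):
    psf = rng.random((n, n))                      # stand-in for a speckle PSF
    img = np.real(ifft2(fft2(obj) * fft2(psf)))   # single camera frame
    img -= img.mean()
    spec += np.abs(fft2(img)) ** 2
corr_img = fftshift(np.real(ifft2(spec)))
corr_img /= corr_img.max()

print(np.corrcoef(corr_img.ravel(), autocorr(obj).ravel())[0, 1])  # close to 1
```

The high correlation between the two autocorrelations is what makes the object recoverable (up to an unknown phase) from the scattered image.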
-
Time-multiplexed light field synthesis via factored Wigner distribution function
OPTICS LETTERS
2018; 43 (3): 599–602
Abstract
An optimization algorithm for preparing display-ready holographic elements (hogels) to synthesize a light field is outlined, and proof of concept is experimentally demonstrated. This method allows for higher-rank factorization, which can be used for time-multiplexing multiple frames for improved image quality, using phase-only and fully complex modulation with a single spatial light modulator.
View details for DOI 10.1364/OL.43.000599
View details for Web of Science ID 000423776600064
View details for PubMedID 29400850
-
Towards Transient Imaging at Interactive Rates with Single-Photon Detectors
IEEE. 2018
View details for Web of Science ID 000435001500006
-
Deep End-to-End Time-of-Flight Imaging
IEEE. 2018: 6383–92
View details for DOI 10.1109/CVPR.2018.00668
View details for Web of Science ID 000457843606056
-
Real-time Non-line-of-sight Imaging
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214907.3214920
View details for Web of Science ID 000455250500015
-
Confocal Non-line-of-sight Imaging
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214745.3214795
View details for Web of Science ID 000455248900001
-
Autofocals: Gaze-Contingent Eyeglasses for Presbyopes
ASSOC COMPUTING MACHINERY. 2018
View details for DOI 10.1145/3214907.3214918
View details for Web of Science ID 000455250500003
-
Snapshot Difference Imaging using Correlation Time-of-Flight Sensors
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3130800.3130885
View details for Web of Science ID 000417448700050
-
SpinVR: Towards Live-Streaming 3D Virtual Reality Video
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3130800.3130836
View details for Web of Science ID 000417448700039
-
Accommodation-invariant Computational Near-eye Displays
ACM TRANSACTIONS ON GRAPHICS
2017; 36 (4)
View details for DOI 10.1145/3072959.3073594
View details for Web of Science ID 000406432100056
-
Movie Editing and Cognitive Event Segmentation in Virtual Reality Video
ACM TRANSACTIONS ON GRAPHICS
2017; 36 (4)
View details for DOI 10.1145/3072959.3073668
View details for Web of Science ID 000406432100015
-
Optimizing virtual reality for all users through gaze-contingent and adaptive focus displays.
Proceedings of the National Academy of Sciences of the United States of America
2017; 114 (9): 2183-2188
Abstract
From the desktop to the laptop to the mobile device, personal computing platforms evolve over time. Moving forward, wearable computing is widely expected to be integral to consumer electronics and beyond. The primary interface between a wearable computer and a user is often a near-eye display. However, current generation near-eye displays suffer from multiple limitations: they are unable to provide fully natural visual cues and comfortable viewing experiences for all users. At their core, many of the issues with near-eye displays are caused by limitations in conventional optics. Current displays cannot reproduce the changes in focus that accompany natural vision, and they cannot support users with uncorrected refractive errors. With two prototype near-eye displays, we show how these issues can be overcome using display modes that adapt to the user via computational optics. By using focus-tunable lenses, mechanically actuated displays, and mobile gaze-tracking technology, these displays can be tailored to correct common refractive errors and provide natural focus cues by dynamically updating the system based on where a user looks in a virtual scene. Indeed, the opportunities afforded by recent advances in computational optics open up the possibility of creating a computing platform in which some users may experience better quality vision in the virtual world than in the real one.
View details for DOI 10.1073/pnas.1617251114
View details for PubMedID 28193871
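At its core, the gaze-contingent focus logic is vergence arithmetic in diopters. The sketch below uses hypothetical names and assumes the display's native virtual image sits at optical infinity; real systems must also handle sign conventions and the panel's fixed optical offset:

```python
def lens_power_diopters(fixation_distance_m, refractive_error_d=0.0,
                        display_offset_d=0.0):
    """Power a focus-tunable lens must add so the virtual image appears at the
    fixated distance. Vergence in diopters is the reciprocal of distance in
    meters; the user's spherical refractive error folds in additively.
    (Toy model: assumes the display's native image is at optical infinity.)"""
    return 1.0 / fixation_distance_m + refractive_error_d + display_offset_d

# Emmetrope fixating a virtual object 0.5 m away: the lens adds 2 D.
print(lens_power_diopters(0.5))                           # 2.0
# A -1.5 D myope fixating the same object needs only 0.5 D.
print(lens_power_diopters(0.5, refractive_error_d=-1.5))  # 0.5
```

This is the sense in which the display "tailors" itself: the gaze tracker supplies the fixation distance, and the tunable optics realize the resulting dioptric demand per user.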
-
Reconstructing Transient Images from Single-Photon Sensors
IEEE. 2017: 2289–97
View details for DOI 10.1109/CVPR.2017.246
View details for Web of Science ID 000418371402038
-
Aperture interference and the volumetric resolution of light field fluorescence microscopy
IEEE. 2017: 83–94
View details for Web of Science ID 000414284100009
-
Optimizing VR for All Users Through Adaptive Focus Displays
ASSOC COMPUTING MACHINERY. 2017
View details for DOI 10.1145/3084363.3085029
View details for Web of Science ID 000441139200076
-
Consensus Convolutional Sparse Coding
IEEE. 2017: 4290–98
View details for DOI 10.1109/ICCV.2017.459
View details for Web of Science ID 000425498404038
-
Computational Near-Eye Displays: Engineering the Interface to the Digital World
NATL ACADEMIES PRESS. 2017: 7–12
View details for Web of Science ID 000431842000002
-
A Wide-Field-of-View Monocentric Light Field Camera
IEEE. 2017: 3757–66
View details for DOI 10.1109/CVPR.2017.400
View details for Web of Science ID 000418371403089
-
Photonic Multitasking Interleaved Si Nanoantenna Phased Array
NANO LETTERS
2016; 16 (12): 7671-7676
Abstract
Metasurfaces provide unprecedented control over light propagation by imparting local, space-variant phase changes on an incident electromagnetic wave. They can improve the performance of conventional optical elements and facilitate the creation of optical components with new functionalities and form factors. Here, we build on knowledge from shared aperture phased array antennas and Si-based gradient metasurfaces to realize various multifunctional metasurfaces capable of achieving multiple distinct functions within a single surface region. As a key point, we demonstrate that interleaving multiple optical elements can be accomplished without reducing the aperture of each subelement. Multifunctional optical elements constructed from Si-based gradient metasurfaces are realized, including axial and lateral multifocus geometric phase metasurface lenses. We further demonstrate multiwavelength color imaging with a high spatial resolution. Finally, optical imaging functionality with simultaneous color separation has been obtained by using multifunctional metasurfaces, which opens up new opportunities for the field of advanced imaging and display.
View details for DOI 10.1021/acs.nanolett.6b03505
View details for PubMedID 27960478
-
3D Displays.
Annual review of vision science
2016; 2: 397-435
Abstract
Creating realistic three-dimensional (3D) experiences has been a very active area of research and development, and this article describes both the progress made and what remains to be solved. A central thrust of technical development has been to build displays that create the correct relationship between viewing parameters and triangulation depth cues: stereo, motion, and focus. Several disciplines are involved in the design, construction, evaluation, and use of 3D displays, but an understanding of human vision is crucial to this enterprise because in the end, the goal is to provide the desired perceptual experience for the viewer. In this article, we review research and development concerning displays that create 3D experiences and highlight areas in which further research and development is needed.
View details for DOI 10.1146/annurev-vision-082114-035800
View details for PubMedID 28532351
-
Factored Displays: Improving resolution, dynamic range, color reproduction, and light field characteristics with advanced signal processing
IEEE SIGNAL PROCESSING MAGAZINE
2016; 33 (5): 119-129
View details for DOI 10.1109/MSP.2016.2569621
View details for Web of Science ID 000384016400012
-
ProxImaL: Efficient Image Optimization using Proximal Algorithms
ACM TRANSACTIONS ON GRAPHICS
2016; 35 (4)
View details for DOI 10.1145/2897824.2925875
View details for Web of Science ID 000380112400054
-
Computational Imaging with Multi-Camera Time-of-Flight Systems
ACM TRANSACTIONS ON GRAPHICS
2016; 35 (4)
View details for DOI 10.1145/2897824.2925928
View details for Web of Science ID 000380112400003
-
Convolutional Sparse Coding for High Dynamic Range Imaging
COMPUTER GRAPHICS FORUM
2016; 35 (2): 153-163
View details for DOI 10.1111/cgf.12819
View details for Web of Science ID 000377222200015
-
Tensor low-rank and sparse light field photography
COMPUTER VISION AND IMAGE UNDERSTANDING
2016; 145: 172-181
View details for DOI 10.1016/j.cviu.2015.11.004
View details for Web of Science ID 000372378200015
-
3D Displays
ANNUAL REVIEW OF VISION SCIENCE, VOL 2
2016; 2: 397-435
View details for DOI 10.1146/annurev-vision-082114-035800
View details for Web of Science ID 000389589000018
-
Novel Optical Configurations for Virtual Reality: Evaluating User Preference and Performance with Focus-tunable and Monovision Near-eye Displays
ASSOC COMPUTING MACHINERY. 2016: 1211-1220
View details for DOI 10.1145/2858036.2858140
View details for Web of Science ID 000380532901024
-
Depth Augmented Stereo Panorama for Cinematic Virtual Reality with Focus Cues
IEEE. 2016: 1569-1573
View details for Web of Science ID 000390782001131
-
Variable Aperture Light Field Photography: Overcoming the Diffraction-limited Spatio-angular Resolution Tradeoff
IEEE. 2016: 3737–45
View details for DOI 10.1109/CVPR.2016.406
View details for Web of Science ID 000400012303086
-
Extended field-of-view and increased-signal 3D holographic illumination with time-division multiplexing
OPTICS EXPRESS
2015; 23 (25): 32573-32581
Abstract
Phase spatial light modulators (SLMs) are widely used for generating multifocal three-dimensional (3D) illumination patterns, but these are limited to a field of view constrained by the pixel count or size of the SLM. Further, with two-photon SLM-based excitation, increasing the number of focal spots penalizes the total signal linearly, requiring more laser power than is available or can be tolerated by the sample. Here we analyze and demonstrate a method of using galvanometer mirrors to time-sequentially reposition multiple 3D holograms, both extending the field of view and increasing the total time-averaged two-photon signal. We apply our approach to 3D two-photon in vivo neuronal calcium imaging.
View details for DOI 10.1364/OE.23.032573
View details for Web of Science ID 000366687200093
View details for PubMedID 26699047
View details for PubMedCentralID PMC4775739
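The signal argument is worth making concrete: because two-photon excitation scales with the square of instantaneous power, sequentially illuminating G subsets of focal spots yields a G-fold gain in time-averaged signal over exciting all spots at once. A back-of-the-envelope sketch (function names are illustrative; saturation, photodamage, and galvo settling time are ignored):

```python
def signal_per_spot(total_power, n_spots, n_groups=1):
    """Time-averaged two-photon signal per focal spot when n_spots targets are
    split into n_groups illuminated sequentially (duty cycle 1/n_groups).
    Two-photon excitation scales with the square of instantaneous power."""
    power_per_spot = total_power / (n_spots / n_groups)  # all power on the active group
    return (1.0 / n_groups) * power_per_spot ** 2

P, N = 100.0, 50
simultaneous = signal_per_spot(P, N)               # all 50 spots at once
multiplexed = signal_per_spot(P, N, n_groups=5)    # 5 sequential groups of 10
print(multiplexed / simultaneous)                  # 5.0: G groups buy a G-fold gain
```

The quadratic power dependence is what makes time-division multiplexing a net win here, in contrast to one-photon excitation, where the same split would be signal-neutral.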
-
Adaptive Color Display via Perceptually-driven Factored Spectral Projection
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (6)
View details for DOI 10.1145/2816795.2818070
View details for Web of Science ID 000363671200002
-
The Light Field Stereoscope: Immersive Computer Graphics via Factored Near-Eye Light Field Displays with Focus Cues
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (4)
View details for DOI 10.1145/2766922
View details for Web of Science ID 000358786600026
-
Doppler Time-of-Flight Imaging
ACM TRANSACTIONS ON GRAPHICS
2015; 34 (4)
View details for DOI 10.1145/2766953
View details for Web of Science ID 000358786600002
-
Fast and Flexible Convolutional Sparse Coding
IEEE. 2015: 5135-5143
View details for Web of Science ID 000387959205020
-
Vision Correcting Displays Based on Inverse Blurring and Aberration Compensation
SPRINGER-VERLAG BERLIN. 2015: 524-538
View details for DOI 10.1007/978-3-319-16199-0_37
View details for Web of Science ID 000361841100037
-
Toward BxDF Display using Multilayer Diffraction
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (6)
View details for DOI 10.1145/2661229.2661246
View details for Web of Science ID 000345855600020
-
Ultra-fast Lensless Computational Imaging through 5D Frequency Analysis of Time-resolved Light Transport
INTERNATIONAL JOURNAL OF COMPUTER VISION
2014; 110 (2): 128-140
View details for DOI 10.1007/s11263-013-0686-0
View details for Web of Science ID 000347636400004
-
Computational Schlieren Photography with Light Field Probes
INTERNATIONAL JOURNAL OF COMPUTER VISION
2014; 110 (2): 113-127
View details for DOI 10.1007/s11263-013-0652-x
View details for Web of Science ID 000347636400003
-
Light Field Reconstruction Using Sparsity in the Continuous Fourier Domain
ACM TRANSACTIONS ON GRAPHICS
2014; 34 (1)
View details for DOI 10.1145/2682631
View details for Web of Science ID 000347029500012
-
Wide field of view compressive light field display using a multilayer architecture and tracked viewers
JOURNAL OF THE SOCIETY FOR INFORMATION DISPLAY
2014; 22 (10): 525-534
View details for DOI 10.1002/jsid.285
View details for Web of Science ID 000354201900005
-
Attenuation-corrected fluorescence spectra unmixing for spectroscopy and microscopy
OPTICS EXPRESS
2014; 22 (16)
Abstract
In fluorescence measurements, light is often absorbed and scattered by the sample during both excitation and emission, distorting the measured spectra. Conventional linear unmixing methods computationally separate overlapping spectra but do not account for these effects. We propose a new algorithm for fluorescence unmixing that accounts for attenuation-related distortion of fluorescence spectra. Using a matrix representation, we derive a forward measurement model and a corresponding inverse method; the unmixing algorithm is based on nonnegative matrix factorization. We also demonstrate how this method can be extended to a higher-dimensional tensor form, which is useful for unmixing overlapping spectra observed under attenuation in spectral imaging microscopy. We evaluate the proposed methods in simulation and experiments and show that they outperform a conventional linear unmixing method when absorption and scattering contribute to the measured signals, as in deep-tissue imaging.
View details for DOI 10.1364/OE.22.019469
View details for Web of Science ID 000340714100058
View details for PubMedID 25321030
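Conventional linear unmixing, the baseline the paper improves on, can be sketched with Lee-Seung multiplicative NMF updates on synthetic Gaussian spectra; the paper's actual contribution, the attenuation model, is omitted from this toy:

```python
import numpy as np

rng = np.random.default_rng(4)
wl = np.linspace(450, 700, 120)                       # wavelength samples (nm)
# Two overlapping reference emission spectra (toy Gaussian models).
S = np.stack([np.exp(-0.5 * ((wl - 520) / 25) ** 2),
              np.exp(-0.5 * ((wl - 580) / 30) ** 2)], axis=1)   # shape (120, 2)
C_true = 0.2 + 0.8 * rng.random((2, 40))              # fluorophore abundances, 40 pixels
Y = S @ C_true                                        # noiseless measurements

# Lee-Seung multiplicative updates for the abundances keep C nonnegative:
# C <- C * (S^T Y) / (S^T S C).
C = np.full_like(C_true, 0.5)
for _ in range(500):
    C *= (S.T @ Y) / (S.T @ S @ C + 1e-12)

print(np.max(np.abs(C - C_true)))  # near zero: the spectra are distinct enough to unmix
```

When a depth-dependent attenuation multiplies the columns of S, this plain update misassigns abundances, which is the failure mode the paper's attenuation-corrected factorization addresses.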
-
A Compressive Light Field Projection System
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (4)
View details for DOI 10.1145/2601097.2601144
View details for Web of Science ID 000340000100025
-
Simultaneous whole-animal 3D imaging of neuronal activity using light-field microscopy
NATURE METHODS
2014; 11 (7): 727-U161
Abstract
High-speed, large-scale three-dimensional (3D) imaging of neuronal activity poses a major challenge in neuroscience. Here we demonstrate simultaneous functional imaging of neuronal activity at single-neuron resolution in an entire Caenorhabditis elegans and in larval zebrafish brain. Our technique captures the dynamics of spiking neurons in volumes of ∼700 μm × 700 μm × 200 μm at 20 Hz. Its simplicity makes it an attractive tool for high-speed volumetric calcium imaging.
View details for DOI 10.1038/NMETH.2964
View details for PubMedID 24836920
-
Eyeglasses-free Display: Towards Correcting Visual Aberrations with Computational Light Field Displays
ACM TRANSACTIONS ON GRAPHICS
2014; 33 (4)
View details for DOI 10.1145/2601097.2601122
View details for Web of Science ID 000340000100026
-
Compressive multi-mode superresolution display
OPTICS EXPRESS
2014; 22 (12): 14981-14992
Abstract
Compressive displays are an emerging technology exploring the co-design of new optical device configurations and compressive computation. Prior research has shown how to improve the dynamic range of displays and facilitate high-quality light field or glasses-free 3D image synthesis. In this paper, we introduce a new multi-mode compressive display architecture that supports switching between 3D and high dynamic range (HDR) modes as well as a new super-resolution mode. The proposed hardware consists of readily-available components and is driven by a novel splitting algorithm that computes the pixel states from a target high-resolution image. In effect, the display pixels present a compressed representation of the target image that is perceived as a single, high-resolution image.
View details for DOI 10.1364/OE.22.014981
View details for Web of Science ID 000338044300090
View details for PubMedID 24977592
-
Dual-coded compressive hyperspectral imaging
OPTICS LETTERS
2014; 39 (7): 2044-2047
Abstract
This Letter presents a new snapshot approach to hyperspectral imaging via dual-optical coding and compressive computational reconstruction. We demonstrate that two high-speed spatial light modulators, located conjugate to the image and spectral plane, respectively, can code the hyperspectral datacube into a single sensor image such that the high-resolution signal can be recovered in postprocessing. We show various applications by designing different optical modulation functions, including programmable spatially varying color filtering, multiplexed hyperspectral imaging, and high-resolution compressive hyperspectral imaging.
View details for DOI 10.1364/OL.39.002044
View details for Web of Science ID 000333887800086
View details for PubMedID 24686670
-
A Switchable Light Field Camera Architecture with Angle Sensitive Pixels and Dictionary-based Sparse Coding
IEEE. 2014
View details for Web of Science ID 000356494100015
-
Nonlinear Fluorescence Spectra Unmixing
IEEE. 2014
View details for Web of Science ID 000369908601232
-
Display adaptive 3D content remapping
COMPUTERS & GRAPHICS-UK
2013; 37 (8): 983-996
View details for DOI 10.1016/j.cag.2013.06.004
View details for Web of Science ID 000329541800006
-
A survey on computational displays: Pushing the boundaries of optics, computation, and perception
COMPUTERS & GRAPHICS-UK
2013; 37 (8): 1012-1038
View details for DOI 10.1016/j.cag.2013.10.003
View details for Web of Science ID 000329541800008
-
Focus 3D: Compressive Accommodation Display
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (5)
View details for DOI 10.1145/2503144
View details for Web of Science ID 000326922900007
-
Adaptive Image Synthesis for Compressive Displays
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (4)
View details for DOI 10.1145/2461912.2461925
View details for Web of Science ID 000321840100101
-
Compressive Light Field Photography using Overcomplete Dictionaries and Optimized Projections
ACM TRANSACTIONS ON GRAPHICS
2013; 32 (4)
View details for DOI 10.1145/2461912.2461914
View details for Web of Science ID 000321840100015
-
Depth of Field Analysis for Multilayer Automultiscopic Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012036
View details for Web of Science ID 000317123700036
-
Real-time Image Generation for Compressive Light Field Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012045
View details for Web of Science ID 000317123700045
-
Construction and Calibration of Optically Efficient LCD-based Multi-Layer Light Field Displays
9th International Symposium on Display Holography (ISDH)
IOP PUBLISHING LTD. 2013
View details for DOI 10.1088/1742-6596/415/1/012071
View details for Web of Science ID 000317123700071
-
On Plenoptic Multiplexing and Reconstruction
INTERNATIONAL JOURNAL OF COMPUTER VISION
2013; 101 (2): 384-400
View details for DOI 10.1007/s11263-012-0585-9
View details for Web of Science ID 000314291600009
-
Compressive Light Field Displays
IEEE COMPUTER GRAPHICS AND APPLICATIONS
2012; 32 (5): 6-11
Abstract
Light fields are the multiview extension of stereo image pairs: a collection of images showing a 3D scene from slightly different perspectives. Depicting high-resolution light fields usually requires an excessively large display bandwidth; compressive light field displays are enabled by the codesign of optical elements and computational-processing algorithms. Rather than pursuing a direct "optical" solution (for example, adding one more pixel to support the emission of one additional light ray), compressive displays aim to create flexible optical systems that can synthesize a compressed target light field. In effect, each pixel emits a superposition of light rays. Through compression and tailored optical designs, fewer display pixels are necessary to emit a given light field than a direct optical solution would require.
View details for Web of Science ID 000307910800003
View details for PubMedID 24806982
-
Tensor Displays: Compressive Light Field Synthesis using Multilayer Displays with Directional Backlighting
ACM TRANSACTIONS ON GRAPHICS
2012; 31 (4)
View details for DOI 10.1145/2185520.2185576
View details for Web of Science ID 000308250300056
-
Compressive Light Field Photography
Special-Interest-Group-on-Computer-Graphics-and-Interactive-Techniques Conference (SIGGRAPH)
ASSOC COMPUTING MACHINERY. 2012
View details for DOI 10.1145/2343045.2343101
View details for Web of Science ID 000325066900041
-
Beyond Parallax Barriers: Applying Formal Optimization Methods to Multi-Layer Automultiscopic Displays
SPIE-INT SOC OPTICAL ENGINEERING. 2012
View details for DOI 10.1117/12.907146
View details for Web of Science ID 000302558300008
-
Frequency Analysis of Transient Light Transport with Applications in Bare Sensor Imaging
SPRINGER-VERLAG BERLIN. 2012: 542-555
View details for Web of Science ID 000343418300039
-
Computational Plenoptic Imaging
COMPUTER GRAPHICS FORUM
2011; 30 (8): 2397-2426
View details for DOI 10.1111/j.1467-8659.2011.02073.x
View details for Web of Science ID 000297317200020
-
Polarization Fields: Dynamic Light Field Display using Multi-Layer LCDs
ACM TRANSACTIONS ON GRAPHICS
2011; 30 (6)
View details for DOI 10.1145/2024156.2024220
View details for Web of Science ID 000297681100064
-
Layered 3D: Tomographic Image Synthesis for Attenuation-based Light Field and High Dynamic Range Displays
ACM TRANSACTIONS ON GRAPHICS
2011; 30 (4)
View details for DOI 10.1145/1964921.1964990
View details for Web of Science ID 000297216400069
-
Refractive Shape from Light Field Distortion
IEEE. 2011: 1180-1186
View details for Web of Science ID 000300061900150