All Publications
-
Deep plug-and-play HIO approach for phase retrieval
Applied Optics
2025; 64 (5): A84-A94
Abstract
In the phase retrieval problem, the aim is to recover an unknown image from intensity-only measurements such as the Fourier intensity. Although there are several solution approaches, solving this problem is challenging due to its nonlinear and ill-posed nature. Recently, learning-based approaches have emerged as powerful alternatives to analytical methods for several inverse problems. In the context of phase retrieval, a plug-and-play approach that exploits a learning-based prior and efficient update steps, novel to our knowledge, was presented at the Computational Optical Sensing and Imaging topical meeting, with demonstrated state-of-the-art performance. The key idea was to incorporate a learning-based prior into Gerchberg-Saxton-type algorithms through plug-and-play regularization. In this paper, we present the mathematical development of the method, including the derivation of its analytical update steps based on half-quadratic splitting, and comparatively evaluate its performance through extensive simulations on a large test dataset. The results show the effectiveness of the method in terms of image quality, computational efficiency, and robustness to initialization and noise.
DOI: 10.1364/AO.545152
Web of Science ID: 001424944100002
PubMed ID: 40793019
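A minimal sketch of the plug-and-play idea summarized in the abstract above, assuming a half-quadratic-splitting-style loop that alternates an HIO-like Fourier-magnitude update with a learned denoiser acting as the prior; the `denoiser` callable, the feedback parameter `beta`, the nonnegativity constraint, and the iteration count are illustrative placeholders rather than the paper's exact update steps.

```python
import numpy as np

def fourier_magnitude_projection(x, measured_mag):
    """Replace the Fourier magnitude of x with the measured one (data-consistency step)."""
    X = np.fft.fft2(x)
    X = measured_mag * np.exp(1j * np.angle(X))
    return np.real(np.fft.ifft2(X))

def pnp_hio_hqs(measured_mag, denoiser, n_iters=100, beta=0.9):
    """Illustrative plug-and-play phase retrieval loop (half-quadratic-splitting style):
    alternate an HIO-like data update with a learned denoiser used as the prior step."""
    x = np.random.rand(*measured_mag.shape)       # random initialization
    z = x.copy()
    for _ in range(n_iters):
        # Data step: enforce the measured Fourier magnitude.
        y = fourier_magnitude_projection(z, measured_mag)
        # HIO-style negative feedback where the (here, nonnegativity) constraint is violated.
        mask = y >= 0
        x = np.where(mask, y, z - beta * y)
        # Prior step: the learned denoiser plays the role of the proximal operator.
        z = denoiser(x)
    return z
```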
-
Virtual Gram staining of label-free bacteria using dark-field microscopy and deep learning
Science Advances
2025; 11 (2): eads2757
Abstract
Gram staining is a frequently used staining protocol in microbiology, yet it is vulnerable to staining artifacts due to, e.g., operator errors and chemical variations. Here, we introduce virtual Gram staining of label-free bacteria using a trained neural network that digitally transforms dark-field images of unstained bacteria into their Gram-stained equivalents, matching bright-field image contrast. After a one-time training, the virtual Gram staining model processes an axial stack of dark-field microscopy images of label-free bacteria (never seen before) to rapidly generate Gram-stained images, bypassing several chemical steps involved in the conventional staining process. We demonstrated the success of virtual Gram staining on label-free bacteria samples containing Escherichia coli and Listeria innocua by quantifying the staining accuracy of the model and comparing the chromatic and morphological features of the virtually stained bacteria against their chemically stained counterparts. This virtual bacterial staining framework bypasses the traditional Gram staining protocol and its challenges, including stain standardization, operator errors, and sensitivity to chemical variations.
DOI: 10.1126/sciadv.ads2757
PubMed ID: 39772690
PubMed Central ID: PMC11803577
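A rough sketch of the inference flow implied by the abstract above, assuming the trained network is an image-to-image model that takes an axial stack of dark-field frames as input channels and outputs a bright-field-like RGB image; `VirtualStainingNet`, its layers, and the input dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class VirtualStainingNet(nn.Module):
    """Placeholder image-to-image model: maps a stack of dark-field planes (in_planes channels)
    to a 3-channel bright-field-like virtual Gram stain image."""
    def __init__(self, in_planes=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_planes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.net(x)

# Usage: stack axial dark-field frames along the channel dimension and run one forward pass.
darkfield_stack = torch.rand(1, 5, 512, 512)   # (batch, z-planes, H, W), synthetic example
model = VirtualStainingNet(in_planes=5)
virtual_stain_rgb = model(darkfield_stack)      # (1, 3, 512, 512) virtual Gram-stained output
```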
-
Neural network-based processing and reconstruction of compromised biophotonic image data
Light: Science & Applications
2024; 13 (1): 231
Abstract
In recent years, the integration of deep learning techniques with biophotonic setups has opened new horizons in bioimaging. A compelling trend in this field involves deliberately compromising certain measurement metrics to engineer better bioimaging tools in terms of, e.g., cost, speed, and form factor, followed by compensating for the resulting defects through deep learning models trained on a large amount of ideal, superior, or alternative data. This strategic approach has found increasing popularity due to its potential to enhance various aspects of biophotonic imaging. One of the primary motivations for employing this strategy is the pursuit of higher temporal resolution or increased imaging speed, critical for capturing fine dynamic biological processes. Additionally, this approach offers the prospect of simplifying hardware requirements and complexities, thereby making advanced imaging standards more accessible in terms of cost and/or size. This article provides an in-depth review of the diverse measurement aspects that researchers intentionally impair in their biophotonic setups, including the point spread function (PSF), signal-to-noise ratio (SNR), sampling density, and pixel resolution. By deliberately compromising these metrics, researchers aim not only to recuperate them through the application of deep learning networks but also to bolster, in return, other crucial parameters, such as the field of view (FOV), depth of field (DOF), and space-bandwidth product (SBP). Throughout this article, we discuss various biophotonic methods that have successfully employed this strategic approach. These techniques span a wide range of applications and showcase the versatility and effectiveness of deep learning in the context of compromised biophotonic data. Finally, by offering our perspectives on the exciting future possibilities of this rapidly evolving concept, we hope to motivate readers from various disciplines to explore novel ways of balancing hardware compromises with compensation via artificial intelligence (AI).
DOI: 10.1038/s41377-024-01544-9
PubMed ID: 39237561
PubMed Central ID: PMC11377739
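To make the "compromise, then compensate" strategy reviewed above concrete, here is a toy sketch: a measurement metric (sampling density) is deliberately degraded by downsampling, and a small network is trained to recover it from ideal reference data; the `SimpleUpsampler` model and the degradation choice are illustrative only and not taken from the review.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleUpsampler(nn.Module):
    """Toy restoration network: recovers a deliberately undersampled image."""
    def __init__(self, scale=4):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, low_res):
        up = F.interpolate(low_res, scale_factor=self.scale, mode='bilinear', align_corners=False)
        return up + self.refine(up)   # learned residual correction

ideal = torch.rand(8, 1, 128, 128)               # "ideal, superior" training data (synthetic)
compromised = F.avg_pool2d(ideal, kernel_size=4) # deliberately reduced sampling density
model = SimpleUpsampler(scale=4)
loss = F.mse_loss(model(compromised), ideal)     # training objective: recuperate the lost metric
loss.backward()
```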
-
All-optical image denoising using a diffractive visual processor
Light: Science & Applications
2024; 13 (1): 43
Abstract
Image denoising, one of the essential inverse problems, aims to remove noise/artifacts from input images. In general, digital image denoising algorithms, executed on computers, present latency due to the several iterations implemented in, e.g., graphics processing units (GPUs). While deep learning-enabled methods can operate non-iteratively, they also introduce latency and impose a significant computational burden, leading to increased power consumption. Here, we introduce an analog diffractive image denoiser to all-optically and non-iteratively clean various forms of noise and artifacts from input images - implemented at the speed of light propagation within a thin diffractive visual processor that axially spans <250 × λ, where λ is the wavelength of light. This all-optical image denoiser comprises passive transmissive layers optimized using deep learning to physically scatter the optical modes that represent various noise features, causing them to miss the output image field of view (FoV) while retaining the object features of interest. Our results show that these diffractive denoisers can efficiently remove salt-and-pepper noise and image rendering-related spatial artifacts from input phase or intensity images while achieving an output power efficiency of ~30-40%. We experimentally demonstrated the effectiveness of this analog denoiser architecture using a 3D-printed diffractive visual processor operating in the terahertz spectrum. Owing to their speed, power efficiency, and minimal computational overhead, all-optical diffractive denoisers can be transformative for various image display and projection systems, including, e.g., holographic displays.
DOI: 10.1038/s41377-024-01385-6
PubMed ID: 38310118
PubMed Central ID: PMC10838318
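A compact sketch of the kind of forward model behind a diffractive processor like the one described above, assuming angular-spectrum free-space propagation between trainable phase-only layers and an intensity readout at the output plane; the layer count, spacings, wavelength, and `DiffractiveDenoiser` class are placeholders, not the paper's design, and the training loss (comparing the output-FoV intensity to the noise-free target) is only indicated in a comment.

```python
import torch
import torch.nn as nn

def angular_spectrum_propagate(field, wavelength, dz, dx):
    """Free-space propagation of a complex field over distance dz (angular spectrum method)."""
    n = field.shape[-1]
    fx = torch.fft.fftfreq(n, d=dx)
    FX, FY = torch.meshgrid(fx, fx, indexing='ij')
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * torch.pi / wavelength * torch.sqrt(torch.clamp(arg, min=0.0))
    H = torch.exp(1j * kz * dz)
    return torch.fft.ifft2(torch.fft.fft2(field) * H)

class DiffractiveDenoiser(nn.Module):
    """Placeholder diffractive processor: trainable phase layers separated by free-space propagation."""
    def __init__(self, n=200, layers=4, wavelength=0.75e-3, dz=3e-3, dx=0.4e-3):
        super().__init__()
        self.phases = nn.ParameterList([nn.Parameter(torch.zeros(n, n)) for _ in range(layers)])
        self.wavelength, self.dz, self.dx = wavelength, dz, dx
    def forward(self, input_field):
        field = input_field
        for phase in self.phases:
            field = angular_spectrum_propagate(field, self.wavelength, self.dz, self.dx)
            field = field * torch.exp(1j * phase)         # phase-only modulation at each layer
        field = angular_spectrum_propagate(field, self.wavelength, self.dz, self.dx)
        return field.abs() ** 2                            # intensity at the output plane

# Training would compare the intensity inside the output FoV against the noise-free target image,
# so that noise modes are scattered outside the FoV while object features are retained.
```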
-
Super-resolution image display using diffractive decoders
Science Advances
2022; 8 (48): eadd3433
Abstract
High-resolution image projection over a large field of view (FOV) is hindered by the restricted space-bandwidth product (SBP) of wavefront modulators. We report a deep learning-enabled diffractive display based on a jointly trained pair of an electronic encoder and a diffractive decoder to synthesize/project super-resolved images using low-resolution wavefront modulators. The digital encoder rapidly preprocesses the high-resolution images so that their spatial information is encoded into low-resolution patterns, projected via a low SBP wavefront modulator. The diffractive decoder processes these low-resolution patterns using transmissive layers structured using deep learning to all-optically synthesize/project super-resolved images at its output FOV. This diffractive image display can achieve a super-resolution factor of ~4, increasing the SBP by ~16-fold. We experimentally validate its success using 3D-printed diffractive decoders that operate in the terahertz spectrum. This diffractive image decoder can be scaled to operate at visible wavelengths and used to design large SBP displays that are compact, low power, and computationally efficient.
DOI: 10.1126/sciadv.add3433
PubMed ID: 36459555
PubMed Central ID: PMC10936058
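One way to read the reported numbers: a ~4x super-resolution factor applies along each lateral dimension, so the space-bandwidth product grows by roughly 4² ≈ 16. Below is a schematic sketch of the joint encoder-decoder training idea from the abstract, with the electronic encoder as a small CNN producing a low-resolution pattern and a simple stand-in for the diffractive decoder; `ElectronicEncoder`, `diffractive_decoder`, and all sizes are illustrative assumptions, not the paper's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElectronicEncoder(nn.Module):
    """Placeholder digital encoder: compresses a high-res image into a low-res pattern
    to be displayed on a low space-bandwidth-product wavefront modulator."""
    def __init__(self, factor=4):
        super().__init__()
        self.down = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, factor, stride=factor),   # encode into a 4x-smaller pattern
        )
    def forward(self, x):
        return self.down(x)

def diffractive_decoder(pattern):
    """Stand-in for the optical decoder; in the paper this is a trained stack of
    diffractive layers that all-optically synthesizes the super-resolved image."""
    return F.interpolate(pattern, scale_factor=4, mode='bilinear', align_corners=False)

# Joint training (schematic): optimize the encoder (and, in practice, the diffractive layers)
# so that the decoded output matches the original high-resolution target.
encoder = ElectronicEncoder(factor=4)
high_res = torch.rand(4, 1, 256, 256)
low_res_pattern = encoder(high_res)              # 64 x 64 pattern for the low-SBP modulator
projected = diffractive_decoder(low_res_pattern)
loss = F.mse_loss(projected, high_res)
loss.backward()
```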
-
Phenotypic Analysis of Microalgae Populations Using Label-Free Imaging Flow Cytometry and Deep Learning
ACS Photonics
2021; 8 (4): 1232-1242
DOI: 10.1021/acsphotonics.1c00220
Web of Science ID: 000643600400035
-
Deep iterative reconstruction for phase retrieval
Applied Optics
2019; 58 (20): 5422-5431
Abstract
The classical phase retrieval problem is the recovery of a constrained image from the magnitude of its Fourier transform. Although there are several well-known phase retrieval algorithms, including the hybrid input-output (HIO) method, the reconstruction performance is generally sensitive to initialization and measurement noise. Recently, deep neural networks (DNNs) have been shown to provide state-of-the-art performance in solving several inverse problems such as denoising, deconvolution, and superresolution. In this work, we develop a phase retrieval algorithm that utilizes two DNNs together with the model-based HIO method. First, a DNN is trained to remove the HIO artifacts and is used iteratively with the HIO method to improve the reconstructions. After this iterative phase, a second DNN is trained to remove the remaining artifacts. Numerical results demonstrate the effectiveness of our approach, which has little additional computational cost compared to the HIO method. Our approach not only achieves state-of-the-art reconstruction performance but is also more robust to different initializations and noise levels.
DOI: 10.1364/AO.58.005422
PubMed ID: 31504010
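A schematic sketch of the iteration pattern described in the abstract above: a block of model-based HIO iterations followed by a DNN that suppresses HIO artifacts, applied alternately, with a second DNN as the final refinement. The callables `hio_block`, `artifact_removal_dnn`, and `final_dnn`, as well as the outer iteration count, are placeholders rather than the paper's trained models or exact schedule.

```python
import numpy as np

def deep_iterative_phase_retrieval(measured_mag, hio_block, artifact_removal_dnn, final_dnn,
                                   n_outer=5):
    """Schematic loop: alternate classical HIO reconstruction with a DNN trained to
    remove HIO artifacts, then apply a second DNN for the remaining artifacts."""
    estimate = np.random.rand(*measured_mag.shape)         # random initialization
    for _ in range(n_outer):
        estimate = hio_block(measured_mag, init=estimate)   # model-based HIO iterations
        estimate = artifact_removal_dnn(estimate)           # learned artifact suppression
    return final_dnn(estimate)                              # second DNN for final refinement
```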
-
Resolution enhancement of wide-field interferometric microscopy by coupled deep autoencoders
Applied Optics
2018; 57 (10): 2545-2552
Abstract
Wide-field interferometric microscopy is a highly sensitive, label-free, and low-cost biosensing imaging technique capable of visualizing individual biological nanoparticles such as viral pathogens and exosomes. However, further resolution enhancement is necessary to increase the detection and classification accuracy of subdiffraction-limited nanoparticles. In this study, we propose a deep-learning approach, based on coupled deep autoencoders, to improve the resolution of images of L-shaped nanostructures. During training, our method utilizes microscope image patches and their corresponding manual truth image patches to learn the transformation between them. Following training, the designed network reconstructs denoised and resolution-enhanced image patches for unseen inputs.
DOI: 10.1364/AO.57.002545
PubMed ID: 29714238
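A minimal sketch of the coupled-autoencoder idea from the abstract above, assuming one autoencoder for microscope image patches, one for the corresponding truth patches, and a learned mapping between their latent codes; `PatchAutoencoder`, the patch/latent sizes, and the loss terms are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchAutoencoder(nn.Module):
    """Simple dense autoencoder for flattened image patches."""
    def __init__(self, patch_dim=16 * 16, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(patch_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, patch_dim))
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

# Two autoencoders (microscope patches and truth patches) coupled via a latent-space mapping.
ae_lowres = PatchAutoencoder()
ae_truth = PatchAutoencoder()
latent_map = nn.Linear(64, 64)

lowres_patch = torch.rand(32, 256)   # flattened 16x16 microscope patches (synthetic)
truth_patch = torch.rand(32, 256)    # corresponding truth patches (synthetic)

recon_low, z_low = ae_lowres(lowres_patch)
recon_truth, z_truth = ae_truth(truth_patch)
loss = (F.mse_loss(recon_low, lowres_patch)        # reconstruct each domain
        + F.mse_loss(recon_truth, truth_patch)
        + F.mse_loss(latent_map(z_low), z_truth))  # couple the two latent spaces
loss.backward()

# Inference: encode a microscope patch, map its latent code, and decode with the truth decoder.
enhanced = ae_truth.decoder(latent_map(ae_lowres.encoder(lowres_patch)))
```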
ORCID: https://orcid.org/0000-0003-3367-1858