
Superresolved image reconstruction in FZA lensless camera by color-channel synthesis

Open Access

Abstract

The Fresnel-zone-aperture lensless camera using a fringe-scanning technique allows non-iterative well-conditioned image reconstruction; however, the spatial resolution is limited by the mathematical reconstruction model that ignores diffraction. To solve this resolution problem, we propose a novel image-reconstruction algorithm using the wave-optics-based design of the deconvolution filter and color-channel image synthesis. We verify a two-fold improvement of the effective angular resolution by conducting numerical simulations and optical experiments with a prototype.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Introduction

The computational lensless camera allows the development of an ultra-thin imaging system [1–8]. The optical hardware of a lensless camera simply comprises an axial stack of a coded mask and an image sensor. The coded mask encodes the spatial information of a subject while the image sensor captures a coded image. After the capture, signal processing decodes the coded image into a subject image. The lens system in a conventional camera is replaced with a mask, and the lensless camera can thus realize ultra-thin optical hardware. Furthermore, a coded mask encodes depth information through radial scaling of the coded point-spread function (PSF). This feature has been applied to digital refocusing [7], three-dimensional imaging [8–10], and extended depth-of-field imaging [11]. In addition, the lensless camera has been used as a platform for compressive sensing owing to its pseudo-random sensing system [12–14], which also allows multi-dimensional imaging [15,16].

Recently, a lensless camera with a Fresnel zone aperture (FZA) was proposed [7]. In the FZA lensless camera (FZALC), the PSF is the shadow of the FZA on the sensor, as in the Fresnel-transformation-based imaging method [17]. Compared with other lensless cameras, one merit of the FZALC is the ability to accelerate image reconstruction through Moiré-based decoding [7]. Another merit is the combinational use of the fringe-scanning (FS) technique, which yields a quasi-flat-spectrum deconvolution filter [18,19]. Although multiple exposures are required for FS, well-conditioned (noise-suppressed) numerical inversion, which corresponds to image reconstruction, is possible without an iterative algorithm such as $\ell _{1}$- or $\ell _{2}$-norm minimization [20].

One current problem with the FS-based FZALC (FS-FZALC) is the restriction of the spatial resolution due to diffraction. Past studies on FZALCs, including the FS-FZALC, treated the PSF as the geometrical shadow of the FZA [7,17,20]. In other words, past studies did not explicitly consider diffraction in the mathematical model for image reconstruction. However, with visible light, the PSF is not the geometrical shadow of the FZA but its diffracted pattern. There is thus a resolution limit and a reconstruction error. The effect of diffraction becomes more dominant as the pitch of the coded mask becomes smaller and the mask–sensor interval becomes larger. Some past studies on mask-based lensless cameras addressed the diffraction issue by experimentally calibrating the PSF and using it as a reconstruction filter. However, to our best knowledge, there has been no study that analytically formulates the diffraction effect and engineers it to overcome the resolution limit.

In this paper, we propose a novel image-reconstruction algorithm for the FS-FZALC that accounts for diffraction to improve resolution. To realize this algorithm, we derive a novel mathematical imaging model that accounts for diffraction and show that the resulting frequency response has wavelength-dependent zero-crossings in the frequency domain. To compensate for these zero-crossings, we also propose a method of color-channel image synthesis, in which the component missing at a zero-crossing frequency in one color channel is compensated for by the corresponding components of the other color channels, exploiting the correlation between color channels. Section 2.1 introduces a model of conventional FS-FZALC imaging using a geometrical-optics model. Section 2.2 derives a novel model of the proposed FS-FZALC imaging using a wave-optics model. Section 2.3 proposes a novel image-reconstruction algorithm using the newly derived imaging model and the color-channel synthesis technique for superresolution. Section 3 verifies the proposed superresolution FS-FZALC imaging through numerical simulation. Section 4 verifies the validity of the proposed concept through optical experiments using a prototype.

2. Method

2.1 Conventional FS-FZALC imaging using a geometrical-optics model

The sensing and reconstruction schemes of conventional and proposed FS-FZALC imaging techniques are shown in Fig. 1. The optical hardware of an FZALC is simply the axial stack of an image sensor and an FZA as illustrated in the top left of Fig. 1. Although the FZAs with different phases are arranged on a single mask in this figure, it is also possible to capture images with different-phase FZAs sequentially using a spatial light modulator as a mask. In analog sensing using an FZALC, objects are optically encoded by an FZA and captured by an image sensor. The amplitude transmittance of an FZA $t_\textrm {FZA}$ is modeled as

$$t_\textrm{FZA}(x_\textrm{p},y_\textrm{p};\phi)=\frac{1}{2}\left[ 1+\cos \left\{ \beta \left(x_\textrm{p}^{2} + y_\textrm{p}^{2}\right) - \phi \right\}\right],$$
where $(x_\textrm {p}, y_\textrm {p})$ denotes the spatial coordinates on the FZA plane, $\phi$ corresponds to the initial phase of an FZA, and $\beta$ is a constant coefficient that characterizes the spatial frequency of an FZA. Note that we approximate the intensity transmittance by the amplitude transmittance when denoting the mask function, as in past studies [7,17,20]. In conventional work, on the basis of geometrical optics, the PSF is approximated as the geometrical shadow of the FZA [7]:
$$\begin{aligned}h_\textrm{FZA}(x,y;\phi) &=\frac{1}{2} \left[1 +\cos \left\{\beta'\left(x^{2}+y^{2} \right)-\phi \right\}\right], \\ s.t.~~\beta'&=\frac{\beta}{\left(1+M\right)^{2}},~~ M=\frac{d}{z},\end{aligned}$$
where $(x,y)$ denotes the spatial coordinates on the sensor plane, $d$ is the axial distance between the FZA and a sensor, and $z$ is the distance of a subject from the FZA. $M$ is the magnification while $\beta '$ is the magnified version of $\beta$. Assuming shift-invariance of the PSF and a planar two-dimensional (2D) object plane at a certain distance, the process of analog sensing with an FZA is simply modeled by 2D convolution on the sensor plane:
$$\begin{aligned}g\left( x,y;\phi\right) &= \iint \frac{1}{2} \left[ 1+\cos\left\{\beta'\left(\left(x-Ms\right)^{2}+\left(y-Mt\right)^{2}\right)-\phi\right\}\right]f\left(Ms,Mt\right) \textrm{d}s\textrm{d}t \\ &=h_\textrm{FZA}\left(x,y;\phi\right) *f_\textrm{M}\left(x,y\right), \end{aligned}$$
where $g$ is the intensity of a captured coded image, $(s,t)$ denotes the spatial coordinates on the object plane, $f$ is the intensity of the subject, $*$ denotes 2D convolution, and $f_\textrm {M}$ is the intensity of the magnified subject on the sensor plane.
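As a concrete illustration of Eqs. (2) and (3), the following minimal NumPy sketch generates the geometrical-shadow PSF and simulates a coded image by 2D convolution. The grid size, pixel pitch, and $\beta'$ value are illustrative assumptions, not the Table 1 parameters.

```python
import numpy as np

# Assumed (illustrative) parameters, not those of Table 1.
N = 512              # grid size [pixels]
pitch = 3.45e-6      # sensor pixel pitch [m]
beta_p = 1.0e7       # magnified FZA coefficient beta' [1/m^2]

# Sensor-plane coordinates
x = (np.arange(N) - N / 2) * pitch
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2

def h_fza(phi):
    """Geometrical-shadow PSF of a single FZA, Eq. (2)."""
    return 0.5 * (1.0 + np.cos(beta_p * R2 - phi))

# Forward sensing of Eq. (3): circular convolution via the FFT
# (assumes a shift-invariant PSF and a magnified object f_M).
f_M = np.zeros((N, N))
f_M[N // 2, N // 2] = 1.0        # point object as a toy example
f_M[N // 2, N // 2 + 40] = 0.5   # second, weaker point

g = np.real(np.fft.ifft2(np.fft.fft2(h_fza(0.0)) * np.fft.fft2(f_M)))
```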

Fig. 1. Schematic diagram of conventional imaging and the proposed imaging with the FS-FZALC.

An original subject image can be reconstructed from the captured coded image simply through deconvolution:

$$ \hat{f}_\textrm{geo}(x,y)=\mathcal{F}^{-1} \left[ H^\textrm{inv}_\textrm{notFS}(u,v){G(u,v)}\right],$$
$$ s.t.~~H^\textrm{inv}_\textrm{notFS}(u,v)=\begin{cases} H^{-1}_\textrm{notFS}(u,v) & \left(H_\textrm{notFS}(u,v) \neq 0 \right)\\ 0 & \left( H_\textrm{notFS}(u,v) = 0 \right), \end{cases} $$
where $\hat {f}_\textrm {geo}$ is an image reconstructed under the geometrical-optics approximation, $G$ is the Fourier transform of the captured coded image ($g$), $H_\textrm {notFS}$ is the optical transfer function (OTF), i.e., the Fourier transform of the original PSF of an FZALC ($h_\textrm {FZA}$), whereas $H^{-1}_\textrm {notFS}$ is its reciprocal, $(u,v)$ denotes the frequency coordinates, and $\mathcal {F}[\cdot ]$ denotes the Fourier transform operator. $H^\textrm {inv}_\textrm {notFS}$ is a deconvolution filter in the Fourier domain. Note that the inverse filter in Eq. (5) can be replaced with any other kind of deconvolution filter, such as the least-squares filter, the Wiener filter, or the pseudo-inverse matrix. As indicated in Eq. (4), the magnitude of noise amplification in image reconstruction is mathematically determined by the amplitude spectrum of the OTF. However, the spectrum of the OTF of an FZALC is severely degraded: it has a strong zero-frequency component and many zero-crossings. The strong zero-frequency component is due to the bias signal of a captured coded image. The coded image is the result of the incoherent superposition of many coded intensity patterns, and a large bias is thus unavoidable in principle in a lensless camera. The many zero-crossings are simply due to the spatial code design of the FZA. Compressive sensing can reconstruct visually good images even with such a degraded OTF [20]; however, it has a large computational cost due to the requirement of an iterative algorithm.
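The case-wise rule of Eq. (5) simply zeroes the frequencies at which the OTF vanishes. A minimal sketch of that rule is given below; the function name and the numerical tolerance are our own assumptions.

```python
import numpy as np

def inverse_filter(H, tol=1e-8):
    """Zero-safe inverse filter in the sense of Eq. (5):
    1/H where |H| exceeds a small tolerance, and 0 where H vanishes."""
    H_inv = np.zeros_like(H, dtype=complex)
    mask = np.abs(H) > tol
    H_inv[mask] = 1.0 / H[mask]
    return H_inv

# Reconstruction of Eq. (4), assuming g (coded image) and h (PSF) on one grid:
# G = np.fft.fft2(g); H = np.fft.fft2(h)
# f_hat = np.real(np.fft.ifft2(inverse_filter(H) * G))
```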

To solve this problem, the FS technique [18], which is also referred to as a phase-shifting technique, is applied to the FZALC [19]. To apply the FS technique, four coded images are first captured using FZAs whose initial phases are $\phi =\{0, \pi /2, \pi , 3\pi/2 \}$. Next, the four captured coded images are synthesized into a single image:

$$\begin{aligned}&g_\textrm{FS}(x,y)=-\left\{ g\left(x,y;\frac{\pi}{2}\right)-g\left(x,y;\frac{3\pi}{2}\right)\right\}+j\left\{ g(x,y;0)-g(x,y;\pi)\right\} \\ &=\left[\left\{ h_\textrm{FZA}(x,y;0)-h_\textrm{FZA}(x,y;\pi)\right\}+j\left\{h_\textrm{FZA}\left(x,y;\frac{\pi}{2}\right)-h_\textrm{FZA}\left(x,y;\frac{3\pi}{2}\right)\right\}\right]*f_\textrm{M}(x,y) \\ &=h_\textrm{geo}(x,y)*f_\textrm{M}(x,y), \end{aligned}$$
where $g_\textrm {FS}$ is a numerically synthesized complex-valued coded image, $j$ is the imaginary unit, and $h_\textrm {geo}$ is the PSF of FS-based imaging based on the geometrical-optics model. $h_\textrm {geo}$ is analytically formulated as
$$h_\textrm{geo}(x,y)= \exp\left\{j\beta'\left(x^{2}+y^{2}\right)\right\}.$$
The OTF based on geometrical optics (OTF-geo) of the FS-FZALC is analytically derived by Fourier transformation and normalization [21]:
$$H_\textrm{geo}(u,v)= \exp\left\{-\frac{j \pi^{2}}{\beta'}\left(u^{2}+v^{2}\right)\right\}.$$
As indicated in Eq. (8), the OTF-geo theoretically has a flat amplitude spectrum. Using this, the image reconstruction of Eq. (4) is rewritten as
$$\begin{aligned}\hat{f}_\textrm{geo}(x,y)&=\mathcal{F}^{-1} \left[ H^\textrm{inv}_\textrm{geo}(u,v) G_\textrm{FS}(u,v)\right], \\ s.t.~~H^\textrm{inv}_\textrm{geo}(u,v)&=H^{-1}_\textrm{geo}(u,v)=H^{*}_\textrm{geo}(u,v)=\exp \left\{ \frac{j \pi^{2}}{\beta'}\left( u^{2}+v^{2} \right)\right\}, \end{aligned}$$
where $G_\textrm {FS}$ is the Fourier transform of $g_\textrm {FS}$ while $\cdot ^{*}$ denotes the complex conjugate. Note that $\left | H^\textrm {inv}_\textrm {geo}(u,v) \right |=\left | H_\textrm {geo}(u,v) \right |=\left | H^{*}_\textrm {geo}(u,v) \right |=1$. In this paper, we refer to $H^\textrm {inv}_\textrm {geo}(u,v)$ in Eq. (9) as the deconvolution filter based on geometrical optics (DF-geo) as indicated in Fig. 1.
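To make the FS pipeline of Eqs. (6)–(9) concrete, the following sketch synthesizes the complex coded image from four phase-shifted captures and deconvolves it with the DF-geo. The variable names, grid parameters, and the use of $\beta'$ are assumptions for illustration.

```python
import numpy as np

def fs_synthesize(g0, g_pi2, g_pi, g_3pi2):
    """Fringe-scanning synthesis of Eq. (6) from four phase-shifted captures."""
    return -(g_pi2 - g_3pi2) + 1j * (g0 - g_pi)

def df_geo(N, pitch, beta_p):
    """DF-geo of Eq. (9): exp{ j*pi^2/beta' * (u^2 + v^2) } on an FFT grid."""
    u = np.fft.fftfreq(N, d=pitch)          # spatial frequency [cycles/m]
    U, V = np.meshgrid(u, u)
    return np.exp(1j * np.pi**2 / beta_p * (U**2 + V**2))

# Assumed usage with four captured coded images (N x N arrays):
# g_fs  = fs_synthesize(g_0, g_pi2, g_pi, g_3pi2)
# f_geo = np.fft.ifft2(df_geo(N, pitch, beta_p) * np.fft.fft2(g_fs))
```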

In conclusion, the conventional FS-FZALC allows non-iterative image reconstruction with a flat-amplitude-spectrum deconvolution filter in lensless imaging, which makes the inverse problem well conditioned. However, the discussion so far has ignored the effect of diffraction in analog sensing. The gap between the actual analog sensing and the digital reconstruction model results in degradation of the reconstructed image. In practice, the effect of diffraction cannot be ignored when imaging with visible light. The upper limit of the reconstructable spatial frequency in past studies was determined by the first zero of the OTF, where the spatial pattern of the FZA disappears on the sensor plane. This spatial frequency will be formulated in Eq. (21) in Sec. 2.2 after wave-optics-based modeling of the forward sensing process.

2.2 Deconvolution filter based on wave optics (DF-wave) for FS-FZALC imaging

The resolution limit due to diffraction in the FZALC, which differs from the resolution limit of conventional lens-based imaging, is the main drawback of the FS-FZALC imaging system. We thus improve the mathematical imaging model used for reconstruction by taking diffraction into account. In the following discussion, we assume the incidence of a planar wave on the FZA plane for a simple formulation, which means the object is at infinity. We also assume that the case of the incidence of a spherical wave, which corresponds to the imaging of a finite-distance object, can be approximated by replacing $\beta$ with $\beta '$ as in the discussion of Sec. 2.1. The wavefront after an FZA ($w_{\textrm {FZA}}$) is modeled as

$$w_\textrm{FZA}(x,y;\phi)=\frac{1}{2}\left[ 1+\cos\left\{\beta\left(x^{2}+y^{2}\right)-\phi\right\}\right].$$
Note that the effect of the finite aperture is omitted from the discussion to focus on the FZA-specific frequency characteristics. To calculate wavefront propagation, we here modify Eq. (10) by introducing the spherical wave expression with Fresnel approximation ($w_\textrm {sph}$) [?]:
$$w_\textrm{sph}\left(x,y,z_\textrm{prop}\right)=\exp\left\{ j\frac{\pi}{\lambda z_\textrm{prop}}\left(x^{2}+y^{2}\right) \right\},$$
where $\lambda$ is the wavelength and $z_\textrm {prop}$ is the distance of the propagation of the spherical wave. Note that the coefficient of the spherical wave is omitted for simplicity. Using the spherical wave expression, Eq. (10) can be reformulated as
$$w_\textrm{FZA}(x,y;\phi)=\frac{1}{2} \left[1+{\frac{1}{2}}\left\{w_\textrm{sph}\left(x,y,\frac{\pi}{\lambda\beta}\right)\exp\left(-j\phi\right)+w_\textrm{sph}\left(x,y,-\frac{\pi}{\lambda\beta}\right)\exp\left(j\phi\right)\right\}\right].$$
Note that the spherical wave expression in Eq. (12) is a mathematical expression, which does not represent a real wavefront. The wavefront becomes $w_\textrm {sens}$ after wavefront propagation onto the sensor plane:
$$\begin{aligned}w_\textrm{sens}(x,y; \phi, \lambda)&=\frac{1}{2} \left[1+{\frac{1}{2}}\left\{w_\textrm{sph}\left(x,y,\frac{\pi}{\lambda\beta}+d\right)\exp\left(-j\phi\right)+w_\textrm{sph}\left(x,y,-\frac{\pi}{\lambda\beta}+d\right)\exp\left(j\phi\right)\right\}\right] \\ &\approx \frac{1}{2}\left[ 1+\exp\left\{ -j\frac{\beta^{2} \lambda d }{\pi}\left(x^{2}+y^{2}\right) \right\} \cos\left\{ \beta \left(x^{2}+y^{2}\right)-\phi \right\} \right], \end{aligned}$$
where we used an approximation based on the smallness of $\lambda \beta d/\pi$ (see Appendix A). The incoherent PSF of a single FZA based on wave optics ($h_\textrm {FZA}'$) is then derived as
$$\begin{aligned}&h_\textrm{FZA}'(x,y;\phi,\lambda)=w_\textrm{sens}(x,y;\phi,\lambda)w_\textrm{sens}^{*}(x,y;\phi,\lambda) \\ &=\frac{1}{4}\left[ 1+2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi}\left(x^{2}+y^{2}\right)\right\}\cos\left\{\beta\left(x^{2}+y^{2}\right)-\phi \right\}+\cos^{2}\left\{\beta\left(x^{2}+y^{2}\right)-\phi\right\}\right]. \end{aligned}$$
After FS-data synthesis (see Appendix B), the PSF becomes
$$h_\textrm{wave}(x,y;\lambda)=\cos\left\{ \frac{\beta^{2} \lambda d}{\pi} \left(x^{2}+y^{2}\right)\right\}\exp\left\{ j \beta\left(x^{2}+y^{2}\right)\right\}.$$
Therefore, the OTF of the FS-FZALC based on wave optics taking diffraction into account (OTF-wave), which is obtained by Fourier transformation of $h_{\textrm {wave}}$ and normalization, is
$$H_\textrm{wave}(u,v;\lambda) \approx \exp \left\{-\frac{j\pi^{2}}{\beta}\left(u^{2}+v^{2}\right)\right\}\cos\left\{\pi\lambda d\left(u^{2}+v^{2}\right)\right\},$$
where we used the approximation on the smallness of $\lambda \beta d/\pi$ again (see Appendix C). Note that, in practice, the magnitude of the OTF decreases from zero frequency to the cutoff frequency due to diffraction at the edge of the aperture. We here define an OTF only expressing the effect of diffraction (OTF-diff) as $H_\textrm {diff}$:
$$H_\textrm{diff}(u,v;\lambda)=\cos\left\{\pi\lambda d\left(u^{2}+v^{2}\right)\right\}.$$
We here define a new deconvolution filter ($H^\textrm {inv}_\textrm {diff}$) from the OTF-diff as
$$ H^\textrm{inv}_\textrm{diff}(u,v;\lambda)=\begin{cases} H^{-1}_\textrm{diff}(u,v;\lambda) & \left(H_\textrm{diff}(u,v;\lambda) \neq 0 \right)\\ 0 & \left( H_\textrm{diff}(u,v;\lambda) = 0 \right), \end{cases} $$
where we refer to the filter as the deconvolution filter for diffraction (DF-diff). Note that this filter can also be replaced with any other kind of deconvolution filter. Finally, the OTF-wave given by Eq. (16) can simply be decomposed as the product of the OTF-geo and OTF-diff:
$$H_\textrm{wave}(u,v;\lambda)=H_\textrm{geo}(u,v)H_\textrm{diff}(u,v;\lambda).$$
Therefore, the deconvolution filter based on wave optics (DF-wave) is defined as follows:
$$ H^\textrm{inv}_\textrm{wave}(u,v;\lambda)=\begin{cases} H^{-1}_\textrm{wave}(u,v;\lambda)=H^{-1}_\textrm{geo}(u,v)H^{-1}_\textrm{diff}(u,v;\lambda) & \left(H_\textrm{wave}(u,v;\lambda) \neq 0 \right)\\ 0 & \left( H_\textrm{wave}(u,v;\lambda) = 0 \right). \end{cases} $$
Note again that the deconvolution filter can be implemented as, e.g., the Wiener filter or the least-squares filter, instead of just the inverse filter of Eq. (20).
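The decomposition of Eqs. (19) and (20) translates directly into code: the DF-wave is the product of the geometrical filter and the zero-safe inverse of the diffraction OTF. A minimal sketch is given below; the wavelength, mask–sensor distance, and grid values are illustrative assumptions, not the Table 1 parameters.

```python
import numpy as np

def freq_grid(N, pitch):
    """Return u^2 + v^2 [1/m^2] on an FFT frequency grid."""
    u = np.fft.fftfreq(N, d=pitch)
    U, V = np.meshgrid(u, u)
    return U**2 + V**2

def otf_geo(rho2, beta):
    return np.exp(-1j * np.pi**2 / beta * rho2)      # Eq. (16), geometrical part

def otf_diff(rho2, lam, d):
    return np.cos(np.pi * lam * d * rho2)            # Eq. (17)

def zero_safe_inverse(H, tol=1e-8):
    H_inv = np.zeros_like(H, dtype=complex)
    m = np.abs(H) > tol
    H_inv[m] = 1.0 / H[m]
    return H_inv

# Assumed illustrative values (beta' may be used instead of beta for finite distance)
N, pitch, beta = 512, 3.45e-6, 1.0e7
lam, d = 550e-9, 2.0e-3

rho2 = freq_grid(N, pitch)
H_wave = otf_geo(rho2, beta) * otf_diff(rho2, lam, d)   # Eq. (19)
H_wave_inv = zero_safe_inverse(H_wave)                   # Eq. (20)
```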

As indicated in Eqs. (17) and (19), the OTF-diff and OTF-wave physically have zero-crossings caused by diffraction. The zero-valued frequencies ($u_\textrm {zero}$, $v_\textrm {zero}$) are formulated as

$$u^{2}_\textrm{zero}+v^{2}_\textrm{zero}=\left(m+\frac{1}{2}\right)\frac{1}{\lambda d},$$
where $m$ is a non-negative integer. As indicated in Eq. (21), diffraction causes wavelength-dependent periodic zero-crossings in the OTF. Although $H_{\textrm {wave}}$, which includes the diffraction effect, represents the real physical phenomenon more faithfully than $H_\textrm {geo}$, the information lost at the zero-crossings cannot be restored even by the deconvolution filter.
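Equation (21) gives the zero-crossing radii directly. As a small numerical check with assumed values of $\lambda$ = 550 nm and $d$ = 2 mm (illustrative only; the actual mask–sensor distance is given in Table 1), the first few zero radii can be computed as follows.

```python
import numpy as np

lam, d = 550e-9, 2.0e-3   # assumed wavelength and FZA-sensor distance
for m in range(3):
    # Eq. (21): radial spatial frequency of the m-th zero-crossing [cycles/m]
    rho_zero = np.sqrt((m + 0.5) / (lam * d))
    print(f"m={m}: {rho_zero / 1e3:.1f} cycles/mm")
```

Because the zero radii scale as $1/\sqrt{\lambda d}$, a shorter wavelength pushes its zero-crossings to higher frequencies, which is the wavelength dependence exploited in the next subsection.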

2.3 Superresolved reconstruction using synthesized deconvolution filter based on DF-wave and color-channel synthesis

To make use of the DF-wave to reconstruct higher-resolution images, we propose a novel method of compensating for the zero-crossings of the OTF through color-channel synthesis. In this paper, a resolution beyond the limit of the conventional FS-FZALC is referred to as superresolution. As indicated in Eq. (21), the zero-crossings in the OTF-diff are wavelength dependent. The basic idea of the proposed method is the transformation of the red (R), green (G), and blue (B) components into a luminance (Y) component to compensate for the zero-crossings, as shown in Fig. 1. The R, G, and B components of natural-scene images are highly correlated, and the information lost in one channel is thus effectively compensated for using the components of the other channels. In our proposed algorithm, superresolution is effective only for the luminance component, whereas the chrominance components are not superresolved. However, the method is meaningful for many practical applications because human visual perception is relatively insensitive to high-frequency chrominance differences compared with luminance differences. Note that it is also possible to apply a least-mean-square approach for the synthesis of RGB components [22,23].

We here define the color channels. Considering the spectral radiance distribution of a subject ($f(x,y;\lambda)$) and the spectral sensitivities of the R, G, and B channels of the image sensor ($S^{k}(\lambda)$, $k=\textrm{R},\textrm{G},\textrm{B}$), the captured color image after FS synthesis ($g^{k}_\textrm{FS}(x,y)$) can be represented as

$$\begin{aligned}g_\textrm{FS}^{k}(x,y) &= \int^{\lambda_\textrm{max}}_{\lambda_\textrm{min}}S^{k}(\lambda) \left\{ h_\textrm{wave}(x,y;\lambda)*f(x,y;\lambda) \right\} \textrm{d}\lambda \\ &{\approx}~h^{k}_\textrm{wave}(x,y)*f^{k}(x,y), \end{aligned}$$
where
$$ h^{k}_\textrm{wave}(x,y) = \int^{\lambda_\textrm{max}}_{\lambda_\textrm{min}}S^{k}(\lambda)h_\textrm{wave}(x,y;\lambda) \textrm{d}\lambda,$$
$$f^{k}(x,y) = {\int^{\lambda_\textrm{max}}_{\lambda_\textrm{min}}f(x,y;\lambda) \textrm{d}\lambda.} $$
Note that we assumed smoothness of the subject's spectrum for the approximation in Eq. (22). Here, the symbols with superscript $k$ denote the $k$-channel components of the variables, and $\lambda _\textrm {max}$ and $\lambda _\textrm {min}$ denote the longest and shortest wavelengths that the imaging system can detect. The Fourier transform of Eq. (22) becomes
$$G^{k}_\textrm{FS}(u,v) = H^{k}_\textrm{wave}(u,v)F^{k}(u,v) = H_\textrm{geo}(u,v)H^{k}_\textrm{diff}(u,v)F^{k}(u,v),$$
and specifically,
$$H^{k}_\textrm{diff}(u,v) = \int^{\lambda_\textrm{max}}_{\lambda_\textrm{min}}S^{k}(\lambda)H_\textrm{diff}(u,v;\lambda) \textrm{d}\lambda.$$
Note that $H_\textrm {geo}(u,v)$ is wavelength independent.

In addition, we define the color-space transform matrices based on the ITU-R BT.709 YCbCr model as

$$\begin{aligned}T_\textrm{R2Y} = \left( \begin{array}{ccc} 0.2126 & 0.7152 & 0.0722 \\ -0.1146 & -0.3854 & 0.5000 \\ 0.5000 & -0.4542 & -0.0458 \end{array} \right),~~ T_\textrm{Y2R} = \left( \begin{array}{ccc} 1 & 0 & 1.5748 \\ 1 & -0.1873 & -0.4681 \\ 1 & 1.8556 & 0 \end{array} \right), \end{aligned}$$
where $T_\textrm {R2Y}$ is a matrix that transforms a pixel composed of RGB values to a pixel composed of YCbCr values while $T_\textrm {Y2R}$ is the matrix for the inverse transform.
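For reference, the two matrices of Eq. (27) can be written down directly; the short sketch below also checks that they form an (approximate) round trip between RGB and YCbCr. The matrices are those of the BT.709 model quoted above; only the variable names are ours.

```python
import numpy as np

T_R2Y = np.array([[ 0.2126,  0.7152,  0.0722],
                  [-0.1146, -0.3854,  0.5000],
                  [ 0.5000, -0.4542, -0.0458]])

T_Y2R = np.array([[1.0,  0.0,     1.5748],
                  [1.0, -0.1873, -0.4681],
                  [1.0,  1.8556,  0.0   ]])

# Round-trip check: T_Y2R @ T_R2Y should be close to the identity matrix.
print(np.round(T_Y2R @ T_R2Y, 3))
```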

For the proposed image reconstruction, we first reconstruct the superresolved luminance component of the subject using a synthesized deconvolution filter, as indicated at the bottom of Fig. 1. To realize this, we deconvolve a conventionally reconstructed image $(\hat {f}_\textrm {geo})$ with the DF-diff $(H^\textrm {inv}_\textrm {diff})$ because the DF-wave is simply the product of the DF-diff and DF-geo, as expressed by Eq. (19). Before this operation, we take the absolute value of $H^\textrm {inv}_\textrm {diff}$ and apply the corresponding sign correction to $\hat {f}_\textrm {geo}$ to prevent contrast inversion in the reconstruction and to make the zero-crossing compensation through color-channel synthesis effective. Written in terms of $H_\textrm {diff}$ and $\hat {F}_\textrm {geo}$, the operation is

$$H_\textrm{|diff|}(u,v) = H_\textrm{diff}(u,v) \textrm{sign} \left[H_\textrm{diff}(u,v)\right]=\left|H_\textrm{diff}(u,v) \right|, $$
$$\hat{F}_\textrm{|geo|}(u,v) = \hat{F}_\textrm{geo}(u,v) \textrm{sign} \left[H_\textrm{diff}(u,v)\right], $$
where $\hat {F}_\textrm {geo}$ is the Fourier transform of $\hat {f}_\textrm {geo}$ and $\textrm {sign} [\cdot ]$ is the sign function. Using the processed data, we synthesize a new Y channel, referred to as the Y’ channel in this paper, to compensate for the zero-crossings of the OTF-diff used in the DF-diff. The synthesis is
$$\begin{aligned}\left( \begin{array}{cc} \hat{F}_\textrm{|geo|}^\textrm{Y'}(u,v) & H_\textrm{|diff|}^\textrm{Y'}(u,v)\\ - & -\\ - & - \end{array} \right) =T_\textrm{R2Y} \left( \begin{array}{cc} \hat{F}^\textrm{R}_\textrm{|geo|}(u,v) & H^\textrm{R}_\textrm{|diff|}(u,v)\\ \hat{F}^\textrm{G}_\textrm{|geo|}(u,v) & H^\textrm{G}_\textrm{|diff|}(u,v)\\ \hat{F}^\textrm{B}_\textrm{|geo|}(u,v) & H^\textrm{B}_\textrm{|diff|}(u,v) \end{array} \right). \end{aligned}$$
In this paper, we refer to $H_\textrm {|diff|}^\textrm {Y' inv}$ as the synthesized deconvolution filter. The Y’-channel image is deconvolved using the synthesized deconvolution filter, which is free of the zero-crossing problem, to numerically invert the diffraction effect:
$$\hat{f}^\textrm{Y'}_\textrm{wave}(x,y)=\mathcal{F}^{-1}\left[H_\textrm{|diff|}^\textrm{Y' inv}(u,v) \hat{F}_\textrm{|geo|}^\textrm{Y'}(u,v)\right],$$
where $\hat {f}_\textrm {wave}$ is the reconstructed subject image after numerical inversion of the diffraction effect, and $H_\textrm {|diff|}^\textrm {Y' inv}(u,v)$ is the synthesized deconvolution filter made from $H_\textrm {|diff|}^\textrm {Y'}(u,v)$ with the definition of Eq. (5).
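A minimal sketch of Eqs. (28)–(31) is given below. It applies the per-frequency sign correction, synthesizes the Y’-channel spectrum and filter from the first row of $T_\textrm{R2Y}$, and performs the zero-safe deconvolution; the data layout (dictionaries keyed by channel) and variable names are our own assumptions.

```python
import numpy as np

def zero_safe_inverse(H, tol=1e-8):
    """Zero-safe inverse in the sense of Eq. (5)."""
    H_inv = np.zeros_like(H, dtype=complex)
    m = np.abs(H) > tol
    H_inv[m] = 1.0 / H[m]
    return H_inv

def reconstruct_Yprime(F_geo_rgb, H_diff_rgb,
                       y_weights=(0.2126, 0.7152, 0.0722)):
    """F_geo_rgb, H_diff_rgb: dicts of 2D spectra keyed by 'R', 'G', 'B'.
    Returns the superresolved luminance image f_wave^Y' of Eq. (31)."""
    F_Yp = np.zeros_like(F_geo_rgb['R'], dtype=complex)
    H_Yp = np.zeros_like(H_diff_rgb['R'], dtype=complex)
    for w, k in zip(y_weights, ('R', 'G', 'B')):
        s = np.sign(np.real(H_diff_rgb[k]))   # sign correction, Eqs. (28)-(29)
        F_Yp += w * F_geo_rgb[k] * s           # sign-corrected channel spectrum
        H_Yp += w * np.abs(H_diff_rgb[k])      # per-channel |H_diff|, Eq. (30)
    # Eq. (31): deconvolve the Y' channel with the synthesized filter
    return np.fft.ifft2(zero_safe_inverse(H_Yp) * F_Yp)
```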

We next reconstruct the two non-superresolved chrominance components simply using the color-space transform:

$$\begin{aligned}\left( \begin{array}{c} -\\ \hat{f}_\textrm{geo}^\textrm{Cb}(x,y)\\ \hat{f}_\textrm{geo}^\textrm{Cr}(x,y) \end{array} \right) =T_\textrm{R2Y} \left( \begin{array}{c} \hat{f}^\textrm{R}_\textrm{geo}(x,y)\\ \hat{f}^\textrm{G}_\textrm{geo}(x,y)\\ \hat{f}^\textrm{B}_\textrm{geo}(x,y) \end{array} \right). \end{aligned}$$
We finally convert Y’CbCr data of the superresolved subject image into an RGB image:
$$\begin{aligned}\left( \begin{array}{c} \hat{f}^\textrm{R}(x,y)\\ \hat{f}^\textrm{G}(x,y)\\ \hat{f}^\textrm{B}(x,y) \end{array} \right) =T_\textrm{Y2R} \left( \begin{array}{c} \hat{f}^\textrm{Y'}_\textrm{wave}(x,y)\\ \hat{f}^\textrm{Cb}_\textrm{geo}(x,y)\\ \hat{f}^\textrm{Cr}_\textrm{geo}(x,y) \end{array} \right), \end{aligned}$$
where $\hat {f}$ is the final reconstructed RGB image. The final image contains superresolved luminance information together with non-superresolved chrominance information. The effect of the proposed method in terms of resolution and chromatic error will be addressed in the next section.
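The final assembly of Eqs. (32) and (33) is a plain per-pixel matrix operation. A minimal sketch that combines the superresolved Y’ image with the non-superresolved Cb/Cr images is given below; the function name and argument layout are assumptions, and the matrices are those of Eq. (27).

```python
import numpy as np

def assemble_rgb(f_wave_Yp, f_geo_rgb, T_R2Y, T_Y2R):
    """f_wave_Yp: superresolved Y' image (2D).
    f_geo_rgb:  conventional reconstruction as an (H, W, 3) RGB array.
    T_R2Y, T_Y2R: the BT.709 transform matrices of Eq. (27).
    Returns the final RGB image of Eq. (33)."""
    ycbcr = f_geo_rgb @ T_R2Y.T          # Eq. (32): per-pixel RGB -> YCbCr
    ycbcr[..., 0] = np.real(f_wave_Yp)   # replace Y with the superresolved Y'
    return ycbcr @ T_Y2R.T               # Eq. (33): Y'CbCr -> RGB
```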

In this section, all the PSFs were derived analytically to investigate and discuss their mathematical characteristics. However, in practice, they can also be obtained simply by numerical simulation of wavefront propagation. Alternatively, the PSF can be obtained by experimental calibration. Such PSFs can also be used in a deconvolution filter if the calibration error and calculation error are well suppressed.

3. Simulation

To verify the effectiveness of the proposed method, we numerically simulated FS-FZALC imaging using the proposed method. We used the geometrical setup shown in Fig. 2. We placed an RGB color image sensor, an FZA in front of the sensor, and a subject in front of the FZA. We assumed a refreshable implementation of the FZA, such as amplitude modulation using a spatial light modulator, and four image captures while changing the initial phase of the FZA to realize FS-data synthesis. The system parameters are summarized in Table 1.

Fig. 2. Geometrical setup of simulations.

Table 1. System parameters

Figure 3 shows the line profiles of the MTFs of the RGB channels ($|H^\textrm{R}_\textrm{diff}|$, $|H^\textrm{G}_\textrm{diff}|$, $|H^\textrm{B}_\textrm{diff}|$) and the synthesized MTF of the Y’ channel ($H^\textrm{Y'}_\textrm{|diff|}$). This figure visualizes the wavelength-dependent zero-crossings of the RGB MTFs and the compensation effect of the color-channel synthesis. Here, the spatial frequency is measured on the sensor plane up to the Nyquist frequency. In practice, the light transmitted through the finite FZA is diffracted at its edge, and the amplitude therefore attenuates toward the cutoff frequency of the optical system.
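A line profile like Fig. 3 can be generated directly from Eq. (17) and the first row of $T_\textrm{R2Y}$. The sketch below evaluates the RGB MTFs and the synthesized Y’ MTF along a radial frequency axis, using the narrow-band simulation wavelengths of 660, 550, and 450 nm quoted below; the mask–sensor distance and frequency range are assumed illustrative values rather than the Table 1 parameters.

```python
import numpy as np

d = 2.0e-3                                       # assumed FZA-sensor distance [m]
wavelengths = {'R': 660e-9, 'G': 550e-9, 'B': 450e-9}
y_weights   = {'R': 0.2126, 'G': 0.7152, 'B': 0.0722}

rho = np.linspace(0, 120e3, 1000)                # radial spatial frequency [cycles/m]

# |H_diff^k| of Eq. (17) for each channel
mtf = {k: np.abs(np.cos(np.pi * lam * d * rho**2))
       for k, lam in wavelengths.items()}

# Synthesized Y' MTF: luminance-weighted sum of the per-channel MTFs
mtf_Yp = sum(y_weights[k] * mtf[k] for k in mtf)

# The RGB curves hit zero at different radii (Eq. (21)); because those zeros
# do not coincide, the synthesized Y' curve avoids the individual zeros.
print(mtf_Yp.min())
```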

Fig. 3. Line profiles of the MTFs of original RGB channels and the synthesized luminance channel. The spatial frequency is measured on a sensor plane up to its Nyquist limit.

We simulated the lensless imaging of the FS-FZALC using the conventional and proposed image-reconstruction methods. We assumed shift-invariance of the PSF and simulated forward coded sensing through the 2D convolution of a PSF and the subject image. On the basis of the convolution theorem, we calculated the convolution as the Hadamard product of the OTF and the Fourier-transformed subject, and the inverse Fourier transform was applied to reconstruct the subject in real space. We assumed ideal narrow-band illumination at the three wavelengths of 660, 550, and 450 nm for the RGB components.
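The forward simulation described above reduces to a per-channel Hadamard product in the Fourier domain, followed by an inverse FFT. A minimal sketch of these two steps (the OTF arrays and the subject channel are assumed to be given on the same grid) is:

```python
import numpy as np

def simulate_channel(f_k, H_wave_k):
    """Forward coded sensing for one color channel:
    G_FS^k = H_wave^k * F^k (Hadamard product, Eq. (25)),
    then an inverse FFT gives the complex FS coded image."""
    return np.fft.ifft2(H_wave_k * np.fft.fft2(f_k))

def reconstruct_channel_geo(g_fs_k, H_geo_inv):
    """Conventional reconstruction of one channel with the DF-geo (Eq. (9))."""
    return np.fft.ifft2(H_geo_inv * np.fft.fft2(g_fs_k))
```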

Figure 4 shows the simulation results of conventional [7] and proposed FS-FZALC imaging. We used a color coral image as the subject. The scale bar at the bottom right of the reconstructed image indicates 100 mrad. The field of view was 1.03 rad (58.8 degrees). In the figure, complex-valued data are presented as absolute values for visualization. Note that the picture of the captured coded image is debiased by FS. With the conventional method using only the DF-geo, the high-frequency components were not reconstructed. This is because the filter does not match the physical OTFs owing to the geometrical-optics-based approximation. The peak signal-to-noise ratio (PSNR) was 19.6 dB. Meanwhile, with the proposed method using the synthesized deconvolution filter based on the DF-diff, a higher-resolution image was successfully reconstructed. The Cb and Cr components of our method still had low resolution; however, this is hardly noticeable to human vision, as seen in the result for a natural image. The PSNR was 20.8 dB.

Fig. 4. Simulation results of conventional and proposed FS-FZALC imaging of a natural color object. In the simulation of analog sensing, a subject image is processed into a captured coded image with a numerically generated RGB OTF-wave. In digital reconstruction, the result of the conventional method is obtained by deconvolution with the DF-geo. The proposed method is obtained by an additional deconvolution with the synthesized deconvolution filter based on DF-diff. OTFs and DFs are visualized by corresponding MTFs, respectively. Complex-valued data are visualized using their absolute values. Green rectangles indicate close-ups. The scale bar indicates 100 mrad.

Figure 5 shows additional results obtained using additional subjects. First, to evaluate the effect of noise, we simulated imaging by adding 20 dB additive white Gaussian noise (AWGN) to the coded image of the coral subject. Our method solves the zero-crossing problem of the DF-diff; therefore, the image reconstruction worked successfully even with noise. The inverse filter amplifies high-frequency components including the AWGN; therefore, the PSNR of our method was 17.8 dB whereas that of the conventional method was 18.6 dB. However, the surface structure of the branching coral, which also has high-frequency components, was successfully resolved by our method on the basis of visual assessment. Next, to verify the resolution and chromatic error, we simulated the imaging of a black-and-white Siemens star chart. The image comprised only white and black components, and the frequency-dependent chromatic error therefore appears in the reconstruction results as a resolution difference between the Y and CbCr channels. This is one drawback of the current method, which should be overcome in future work. Finally, to clarify the resolution improvement more clearly, we also simulated a 1D resolution chart. The chart was digitally generated as a set of rectangular waves whose period varies pixel by pixel, increasing linearly from the center to the right. Note that the maximum frequency of the chart corresponds to the Nyquist limit of the digitally generated subject. In the coded and reconstructed images, only the luminance component is visualized using pseudo color to focus on the resolution. Figure 5(b) shows the line profile along the dotted line in Fig. 5(a). The images and line profiles show that our method resolved a lattice period as small as 8 pixels (8.01 mrad angular resolution), whereas the conventional method resolved a period only as small as 16 pixels (16.0 mrad angular resolution). This result indicates that the proposed method increases the spatial resolution by a factor of about 2 in numerical imaging simulation with the example parameters given in Table 1. We conclude that our method realizes superresolved image reconstruction compared with the conventional reconstruction method even when using the same FZALC hardware.

Fig. 5. (a) Simulation results with a natural color object and 20 dB AWGN (left), a Siemens star chart (middle), and a one-dimensional resolution chart (right). The subject, captured image with FS, and reconstructed images are shown from top to bottom. Green rectangles show close-ups. Complex-valued data are visualized adopting absolute values. In the case of a resolution chart, only the luminance component is visualized using pseudo color. Yellow dotted lines are line profiles of intensity. The scale bar indicates 100 mrad. (b) Line profiles of reconstructed images with a resolution chart.

So far, we have assumed an ideal narrow-band spectral sensitivity of the sensor, which can be modeled by a set of delta functions. Although this assumption can be approximately implemented by using, e.g., bandpass filters or lasers, many practical applications use broader spectral sensitivities. From this viewpoint, we also investigated the performance with broader spectral sensitivity by simulating hyperspectral imaging. Figure 6 shows the results. In this simulation, we modeled the broader spectral sensitivity as a set of Gaussian functions including channel crosstalk, as shown in the bottom left of the figure. The range of the spectral sensitivity was limited from 400 nm to 700 nm. For simplicity, the spectrum of the subject's light was assumed to be flat. The second column shows the MTFs corresponding to the synthesized deconvolution filters calculated with both spectral sensitivities. As indicated there, the broader spectra resulted in attenuation of the MTF, which decreases monotonically with the spatial frequency. This corresponds to the effect of chromatic aberration of the PSF in each channel in the spatial domain. The third column shows the reconstruction results, where 30 dB AWGN was added to the captured image. Here, the Wiener filter assuming a 30 dB SNR was used for deconvolution to reconstruct a visible image even with noise. As shown in the figure, our method still worked even with the Gaussian spectral sensitivity with crosstalk, although the magnitude of noise amplification became larger owing to the MTF attenuation. The PSNR of the reconstructed image with the Gaussian sensitivity was 13.4 dB whereas that with the delta-function sensitivity was 18.0 dB. Note that we assumed that both spectral sensitivities of the sensor were perfectly calibrated in the simulations and used them for the reconstruction filters.
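The broader sensitivity enters the model only through the wavelength integral of Eq. (26). A minimal sketch that approximates this integral with Gaussian sensitivities is given below; the center wavelengths, spectral width, and distance d are assumptions (crosstalk arises simply from the overlap of the Gaussians).

```python
import numpy as np

d = 2.0e-3                                         # assumed FZA-sensor distance [m]
lams = np.linspace(400e-9, 700e-9, 301)            # 400-700 nm sensitivity range
centers = {'R': 610e-9, 'G': 540e-9, 'B': 465e-9}  # assumed center wavelengths
sigma = 30e-9                                      # assumed spectral width

rho = np.linspace(0, 120e3, 1000)                  # radial spatial frequency [cycles/m]

H_diff_k = {}
for k, c in centers.items():
    S = np.exp(-0.5 * ((lams - c) / sigma) ** 2)   # Gaussian sensitivity S^k(lambda)
    S /= np.trapz(S, lams)                         # normalize the sensitivity
    # Eq. (26): H_diff^k = integral of S^k(lambda) * cos(pi*lambda*d*rho^2) dlambda
    integrand = S[:, None] * np.cos(np.pi * lams[:, None] * d * rho[None, :] ** 2)
    H_diff_k[k] = np.trapz(integrand, lams, axis=0)

# The wavelength integration damps the cosine oscillation, so |H_diff^k| decays
# with frequency rather than hitting exact zeros (cf. the bottom row of Fig. 6).
```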

Fig. 6. Simulation results of the proposed method using spectral sensitivities of (top) delta functions and (bottom) Gaussian functions. The second column shows the MTFs corresponding to the synthesized deconvolution filters. The third column shows images reconstructed from noisy captured images using the Wiener filter. Green rectangles indicate close-ups. The scale bar indicates 100 mrad.

4. Experiment

We experimentally verified the proposed concept. Figure 7 shows the experimental setup of an FS-FZALC prototype. We aligned four FZAs whose initial phases ($\phi$) were 0, $\pi/2$, $\pi$, and $3\pi/2$ as a 2 $\times$ 2 array like the one shown in the top left of Fig. 1. These were placed in front of a $2048 \times 2048$ pixel CMOS color image sensor with a Bayer color filter array. Note that the mask also implemented 12 dummy patterns surrounding the four patterns actually used in the experiments, as in the bottom-right picture of Fig. 7. No barrier was inserted between the FZALC units. In this prototype, the FZA was implemented on a transparency film on a plastic plate. The other parameters of each FZA-sensor unit were the same as those given in Table 1 for the simulations. The sensor had a 12-bit analog–digital converter bit depth. An IR cut-off filter was added to the camera to eliminate undesired light from the subject. The subject was placed 280 mm in front of the FZA plane.

Fig. 7. Experimental setup (left) and close-up pictures of an FS-FZALC prototype (right).

The captured signal after exposure was transferred to a connected computer, where the captured images were demosaiced and reconstructed using Matlab software. For FS, a captured image was divided into four images corresponding to the four initial-phase FZAs and computationally registered. For the digital reconstruction filters, the DF-geo was analytically calculated as described in Sec. 2, and the DF-diff was experimentally measured by capturing a point source generated with a light-emitting-diode light source and a pinhole.

Figure 8 shows experimental results obtained using a Siemens star chart displayed on a liquid-crystal display (LCD) monitor and a natural diffusive object as subjects. Each result includes the appearance of the subject, the absolute data of the captured coded image after FS, and the results of reconstruction using the conventional and proposed methods. For image reconstruction, we performed the frequency-domain inverse filtering described in Sec. 2. As shown in the results for the star chart, compared with the result of the conventional method, the higher-resolution pattern was successfully reconstructed using the proposed method. When using the contrast value at half the maximum of the peak intensity as a resolution criterion, the angular resolution achieved using the conventional method was 36.1 mrad whereas that achieved using the proposed method was 16.0 mrad. We thus experimentally confirmed a 2.25-fold improvement of the effective angular resolution. Not only a display but also a color three-dimensional diffuse object (i.e., a toy with a green leaf) was imaged. Although quantitative assessment is difficult in contrast with the case of the star chart, the image reconstructed using the proposed method appears sharper than that reconstructed using the conventional method by visual assessment. The structures of the leaf vein and vine are observed more clearly in the image obtained with the proposed method. The proposed method slightly enhanced ringing noise compared with the conventional method. This is because the proposed method amplifies the high-frequency components, which include noise. The chromatic error was hardly noticeable for the actual colored object, as shown in the result, though it is noticeable in the case of the black-and-white star chart. By visual assessment, the resolution improvement for the 3D natural scene seems smaller than that for the 2D LCD monitor. The reason could be imperfection of the PSF calibration for the 3D natural scene compared with the 2D LCD-monitor target, although the PSF itself is independent of the scene spectrum in principle, as indicated in Eq. (23).

Fig. 8. Experimental results obtained for a Siemens star target on a liquid-crystal display (left) and a natural object (right). Complex-valued captured images with FS are visualized using their absolute values. The scale bars indicate an angle of 100 mrad in the field of view of the FS-FZALC.

Figure 9 shows a detailed analysis of the experimental result for the Siemens star chart using line profiles. To focus on the spatial change in intensity, the reconstructed images were grayscaled and the intensity of each image was normalized. From the images reconstructed with the conventional and proposed methods, three line profiles corresponding to different spatial frequencies were measured and plotted. Line 1 corresponds to a spatial frequency that was successfully resolved by both methods. Even at this frequency, the proposed method had higher contrast because the proposed filter correctly inverted the diffraction-based attenuation in the OTF. Line 2 corresponds to the first zero of the OTF-diff. As indicated by the physical and analytical mathematical model, the conventional method was unable to reconstruct the information at this frequency. In contrast, the proposed method, using the diffraction inverse filter with zero-crossing compensation, was able to reconstruct it. Line 3 corresponds to the negative-valued region of the OTF-diff. The conventional method returned a contrast-inverted image in this region, whereas the proposed method, by considering diffraction, numerically corrected this inversion in the reconstruction. In conclusion, we experimentally confirmed that the proposed method can reconstruct a higher-resolution image without contrast inversion thanks to the correct diffraction modeling and color-channel synthesis.

Fig. 9. Experimentally obtained reconstructed 2D images with a Siemens star chart, where the images were grayscaled and visualized using pseudo color to clarify the spatial change in intensity (top). Line profiles of the spatial frequency within the conventional resolution limit (bottom left), on the conventional limit (bottom middle), and beyond the conventional limit (bottom right). The intensity of each image was normalized.

5. Conclusion

We proposed a novel image-reconstruction algorithm for the FS-FZALC that reconstructs a higher-resolution image compared with the conventional reconstruction method. In the conventional method, the spatial resolution was limited by diffraction at the FZA, which was not numerically inverted. We designed a new inverse filter that also inverts the diffraction effect. In the inverse-filtering process, color-channel synthesis was applied to compensate for the zero-crossings in the original diffraction inverse filter. In the numerical simulation of imaging, we confirmed an improvement in spatial resolution by a factor of about 2 without changing the hardware of the FS-FZALC. We also experimentally verified a resolution improvement of about 2.25 times using a prototype and a star-chart target. The FS-FZALC was proposed as a thin and noise-robust (flat OTF) camera; however, the resolution limit has been a problem for practical applications. Our work improves the resolution without any change in the hardware of the FS-FZALC and contributes to making the camera more practical.

Appendix A: Detail of the derivation of the wavefront on a sensor plane

The wavefront on a sensor plane through an FZA is written as in Eq. (13):

$$w_\textrm{sens}(x,y; \phi, \lambda)=\frac{1}{2} \left[1+{\frac{1}{2}}\left\{w_\textrm{sph}\left(x,y,\frac{\pi}{\lambda\beta}+d\right)\exp\left(-j\phi\right)+w_\textrm{sph}\left(x,y,-\frac{\pi}{\lambda\beta}+d\right)\exp\left(j\phi\right)\right\}\right].$$
We here define $C$ as
$$C=\frac{\beta \lambda d}{\pi}.$$
Using $C$ and applying several transformations, the wavefront can be rewritten as
$$\begin{aligned}w_\textrm{sens}(x,y; \phi, \lambda)&=\frac{1}{2} \left[1+ \frac{1}{2} \left[ \exp\left\{ j\frac{\beta\left( 1-C\right)}{1-C^{2}} \left(x^{2}+y^{2}\right) \right\} \exp \left(-j\phi \right) + \exp\left\{ -j\frac{\beta\left( 1+C\right)}{1-C^{2}} \left(x^{2}+y^{2}\right) \right\} \exp \left(j\phi \right) \right] \right] \\ &=\frac{1}{2}\left[ 1+\exp \left\{ -j\frac{\beta C}{1-C^{2}}\left( x^{2}+y^{2} \right) \right\} \cos\left\{ \frac{\beta}{1-C^{2}}\left( x^{2}+y^{2} \right)-\phi \right\}\right]. \end{aligned}$$
When we make the approximation that $1-C^{2} \approx 1$, the wavefront becomes
$$w_\textrm{sens}(x,y; \phi, \lambda) \approx \frac{1}{2}\left[ 1+\exp\left\{ -j\frac{\beta^{2} \lambda d }{\pi}\left(x^{2}+y^{2}\right) \right\} \cos\left\{ \beta \left(x^{2}+y^{2}\right)-\phi \right\} \right].$$

Appendix B: Detail of FS-data synthesis using wave-optics-based PSFs

As indicated in Eq. (14), the wave-optics-based PSF is modeled as

$$h_\textrm{FZA}'(x,y;\phi,\lambda)=\frac{1}{4}\left[ 1+2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi}\left(x^{2}+y^{2}\right)\right\}\cos\left\{\beta\left(x^{2}+y^{2}\right)-\phi \right\}+\cos^{2}\left\{\beta\left(x^{2}+y^{2}\right)-\phi\right\}\right].$$
FS-data synthesis corresponds to the following operation:
$$h_\textrm{FSFZA}'(x,y;\lambda) = \left\{ h_\textrm{FZA}'(x,y;0,\lambda) - h_\textrm{FZA}'(x,y;\pi,\lambda)\right\} + j \left\{ h_\textrm{FZA}'\left(x,y;\frac{\pi}{2},\lambda \right) - h_\textrm{FZA}'\left(x,y;\frac{3\pi}{2},\lambda \right)\right\}.$$
By calculating the operation, the PSF after FS-data synthesis is obtained as
$$\begin{aligned}h_\textrm{FSFZA}'(x,y;\lambda)&=\frac{1}{4} \left[ 2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi} \left( x^{2} + y^{2}\right) \right\} \cos\left\{ \beta \left( x^{2} + y^{2}\right) \right\} + \cos^{2}\left\{ \beta \left( x^{2} + y^{2}\right) \right\}\right. \\ &\left.+2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi} \left( x^{2} + y^{2}\right) \right\} \cos\left\{ \beta \left( x^{2} + y^{2}\right) \right\} - \cos^{2}\left\{ \beta \left( x^{2} + y^{2}\right) \right\}\right. \\ &\left.+j\left[ 2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi} \left( x^{2} + y^{2}\right) \right\} \sin\left\{ \beta \left( x^{2} + y^{2}\right) \right\} + \sin^{2}\left\{ \beta \left( x^{2} + y^{2}\right) \right\}\right.\right. \\ &\left.\left.+2\cos\left\{ \frac{\beta^{2}\lambda d}{\pi} \left( x^{2} + y^{2}\right) \right\} \sin\left\{ \beta \left( x^{2} + y^{2}\right) \right\} - \sin^{2}\left\{ \beta \left( x^{2} + y^{2}\right) \right\}\right] \right] \\ &=\cos\left\{ \frac{\beta^{2} \lambda d}{\pi} \left(x^{2}+y^{2}\right)\right\}\exp\left\{ j \beta\left(x^{2}+y^{2}\right)\right\}. \end{aligned}$$

Appendix C: Fourier transform of the wave-optics-based PSF after FS

The wave-optics-based PSF after FS can be transformed from Eq. (15) as

$$\begin{aligned}h_\textrm{FSFZA}'(x,y;\lambda)&=\cos\left\{ \frac{\beta^{2} \lambda d}{\pi} \left(x^{2}+y^{2}\right)\right\}\exp\left\{ j \beta\left(x^{2}+y^{2}\right)\right\} \\ &= \frac{1}{2}\left[ \exp\left\{ j\pi \left( \frac{\beta}{\pi} + \frac{\beta^{2} \lambda d}{\pi^{2}} \right)\left( x^{2} + y^{2} \right) \right\} + \exp\left\{ j\pi \left( \frac{\beta}{\pi} - \frac{\beta^{2} \lambda d}{\pi^{2}} \right)\left( x^{2} + y^{2} \right) \right\}\right]. \end{aligned}$$
We here define $a_1$ and $a_2$ as
$$a_1 = \frac{\beta}{\pi}\left( 1+\frac{\beta \lambda d}{\pi}\right),~~a_2 = \frac{\beta}{\pi}\left( 1-\frac{\beta \lambda d}{\pi}\right).$$
The Fourier transformation of the PSF, which corresponds to the OTF before normalization, is [21]
$$\begin{aligned}H'_\textrm{wave}(u,v;\lambda) &= \mathcal{F}\left[ \frac{1}{2}\left[ \exp\left\{ j\pi a_{1} \left( x^{2} + y^{2} \right) \right\} + \exp\left\{ j\pi a_{2} \left( x^{2} + y^{2} \right) \right\} \right] \right] \\ &= \frac{j}{2} \left[ \frac{1}{|a_{1}|} \exp \left\{ -j\frac{\pi}{a_1} \left( u^{2} + v^{2} \right) \right\} + \frac{1}{|a_{2}|} \exp \left\{ -j\frac{\pi}{a_2} \left( u^{2} + v^{2} \right) \right\} \right] \\ &=\frac{j \pi}{2 \beta} \left[ \frac{1}{\left|1+C\right|} \exp \left\{ -j\frac{\pi}{a_1} \left( u^{2} + v^{2} \right) \right\} + \frac{1}{\left|1-C\right|} \exp \left\{ -j\frac{\pi}{a_2} \left( u^{2} + v^{2} \right) \right\} \right]. \end{aligned}$$
Taking the system parameters of Table 1 into account, we approximate the coefficients of the exponential functions of Eq. (43) as
$$\frac{1}{\left| 1 \pm C \right|} \approx 1.$$
Equation (43) then becomes
$$H'_\textrm{wave}(u,v;\lambda) \approx \frac{j \pi}{2 \beta} \left[ \exp\left\{ -j\frac{\pi}{a_1}\left( u^{2}+v^{2}\right)\right\} + \exp\left\{ -j\frac{\pi}{a_2}\left( u^{2}+v^{2}\right)\right\} \right].$$
Using sum-to-product identities of trigonometric functions, Eq. (45) can be simplified as
$$\begin{aligned}H'_\textrm{wave}(u,v;\lambda) &= \frac{j \pi}{2 \beta} \left[ \cos\left\{ -\frac{\pi}{a_1}\left( u^{2}+v^{2}\right)\right\} +\cos\left\{ -\frac{\pi}{a_2}\left( u^{2}+v^{2}\right)\right\}\right. \\ &\left.+j\sin\left\{ -\frac{\pi}{a_1}\left( u^{2}+v^{2}\right)\right\} +j\sin\left\{ -\frac{\pi}{a_2}\left( u^{2}+v^{2}\right)\right\} \right] \\ &=\frac{j \pi}{\beta} \exp \left\{- \frac{j\pi^{2}}{\beta} \left( u^{2}+v^{2} \right) \frac{1}{1-C^{2}} \right\} \cos \left\{ \pi \lambda d \left( u^{2}+v^{2} \right) \frac{1}{1-C^{2}} \right\}. \end{aligned}$$
Applying the approximation $1-C^{2} \approx 1$ to Eq. (46) again and normalizing, the final form of the wave-optics-based OTF after FS (Eq. (16)) is derived as
$$H_\textrm{wave}(u,v;\lambda) \approx \exp \left\{ -\frac{j\pi^{2}}{\beta} \left( u^{2}+v^{2} \right) \right\} \cos \left\{ \pi \lambda d \left( u^{2}+v^{2} \right) \right\}.$$

Funding

Japan Society for the Promotion of Science (18H03257); Konica Minolta Imaging Science Foundation.

Disclosures

The authors declare no conflicts of interest.

References

1. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009).

2. R. Fergus, A. Torralba, and W. T. Freeman, “Random Lens Imaging,” MIT Computer Science and Artificial Intelligence Laboratory Technical Report (2006).

3. D. G. Stork and P. R. Gill, “Optical, Mathematical, and Computational Foundations of Lensless Ultra-Miniature Diffractive Imagers and Sensors,” Int. J. on Adv. Syst. Meas. 7, 201–208 (2014).

4. M. J. DeWeert and B. P. Farm, “Lensless coded aperture imaging with separable doubly Toeplitz masks,” Opt. Eng. 54(2), 023102 (2015).

5. M. S. Asif, A. Ayremlou, A. Sankaranarayanan, A. Veeraraghavan, and R. G. Baraniuk, “FlatCam: Thin, Lensless Cameras Using Coded Aperture and Computation,” IEEE Trans. Computat. Imaging 3(3), 384–397 (2017).

6. G. Kim and R. Menon, “Computational imaging enables a "see-through" lens-less camera,” Opt. Express 26(18), 22826–22836 (2018).

7. T. Shimano, Y. Nakamura, K. Tajima, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with Fresnel zone aperture: quasi-coherent coding,” Appl. Opt. 57(11), 2841–2850 (2018).

8. N. Antipa, G. Kuo, H. Reinhard, B. Mildenhall, E. Bostan, R. Ng, and L. Waller, “DiffuserCam: lensless single-exposure 3D imaging,” Optica 5(1), 1–9 (2018).

9. P. Potuluri, M. Xu, and D. J. Brady, “Imaging with random 3D reference structures,” Opt. Express 11(18), 2134–2141 (2003).

10. M. S. Asif, “Lensless 3D imaging using mask-based cameras,” in IEEE International Conference on Acoustics, Speech and Signal Processing, (IEEE, 2018), pp. 6498–6502.

11. T. Nakamura, S. Igarashi, S. Torashima, and M. Yamaguchi, “Extended depth-of-field lensless camera using a radial amplitude mask,” in Imaging and Applied Optics Congress, (OSA, 2020), p. CW3B.2.

12. D. L. Donoho, “Compressed Sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006).

13. A. Stern, Optical Compressive Imaging (CRC Press, 2016).

14. T. Nakamura, K. Kagawa, S. Torashima, and M. Yamaguchi, “Super Field-of-View Lensless Camera by Coded Image Sensors,” Sensors 19(6), 1329 (2019).

15. S. K. Sahoo, D. Tang, and C. Dang, “Single-shot multispectral imaging with a monochromatic camera,” Optica 4(10), 1209–1213 (2017).

16. N. Antipa, P. Oare, E. Bostan, R. Ng, and L. Waller, “Video from Stills: Lensless Imaging with Rolling Shutter,” in International Conference on Computational Photography, (IEEE, 2019), pp. 55–62.

17. L. Mertz and N. O. Young, “Fresnel Transformations of Images,” in Optical Instruments and Techniques, (1962), p. 305.

18. I. Yamaguchi and T. Zhang, “Phase-shifting digital holography,” Opt. Lett. 22(16), 1268–1270 (1997).

19. K. Tajima, T. Shimano, Y. Nakamura, M. Sao, and T. Hoshizawa, “Lensless light-field imaging with multi-phased fresnel zone aperture,” in IEEE International Conference on Computational Photography, (IEEE, 2017), pp. 76–82.

20. J. Wu, H. Zhang, W. Zhang, G. Jin, L. Cao, and G. Barbastathis, “Single-shot lensless imaging with fresnel zone aperture and incoherent illumination,” Light: Sci. Appl. 9(1), 53 (2020).

21. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

22. N. P. Galatsanos and R. T. Chin, “Digital restoration of multichannel images,” IEEE Trans. Acoust., Speech, Signal Process. 37(3), 415–421 (1989).

23. M. Yachida, N. Ohyama, and T. Honda, “Color image restoration using synthetic method,” Opt. Commun. 72(1-2), 22–26 (1989).
