## Abstract

An Airy beam can be used to implement a non-diffracting self-bending point-spread function (PSF), which can be utilized for computational 3D imaging. However, the parabolic depth-dependent spot trajectory limits the range and resolution in rangefinding. In this Letter, we propose a novel pupil-phase-modulation method to realize a non-diffracting linear-shift PSF. For the modulation, we use a focus-multiplexed computer-generated hologram, which is calculated by multiplexing multiple lens-function holograms with 2D sweeping of the foci. With this method, the depth-dependent trajectory of the non-diffracting spot is straightened, which improves the range and resolution in rangefinding. The proposed method was verified by numerical simulations and optical experiments. The method can be applied to laser-based microscopy, time-of-flight rangefinding, and so on.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Computational imaging is a cooperative imaging method involving optical encoding and computational decoding. In this field, 3D imaging by point-spread-function (PSF) engineering is an important topic for applications such as microscopy, rangefinding, and so on. PSF engineering is mostly implemented by pupil-phase modulation (PPM) to achieve high photon efficiency with simple optical hardware [1], although it can also be realized by amplitude masking [2], temporal coding [3], or constructing an unconventional imaging system [4].

The use of a depth-deforming PSF, in which the shape of the PSF changes depending on the depth, is one approach for PPM-based 3D imaging. Originally, a double-helix (DH) PSF was proposed and applied to super-resolution localization 3D microscopy [1]. A DH PSF consists of a double spot that rotates with depth. Since then, several improvements and applications of the DH PSF have been reported [5,6].

In addition, other types of PSFs whose center position laterally shifts in accordance with the depth of an object have also been used for depth sensing. To date, a corkscrew PSF [7], a self-bending (SB) PSF implemented with an Airy beam [8], and a shifting PSF implemented with a combination of a decentered lens and an electrically tunable lens [9] have been proposed and applied to 3D imaging. By applying depth encoding with such engineered PSFs, depth information can be decoded simply by analyzing the spatial information in an image. In particular, an Airy beam has a non-diffracting property, and therefore, depth encoding is applicable to deep-range and robust measurement.

As mentioned above, an Airy beam is a powerful tool for depth sensing due to its non-diffracting property. However, the parabolic trajectory of the center spot in Airy-beam propagation destroys the uniqueness of the solution in depth decoding because the front focus and rear focus are indistinguishable. Furthermore, the axial resolution degrades around the extremum of the parabolic trajectory. To avoid these problems, the imaging depth range is limited in practice [8]. If the depth-dependent trajectory could be made linear while the non-diffracting property is preserved, these limitations of Airy-beam-based depth encoding would, in principle, be solved.

In this Letter, we report a novel PPM method for generating a non-diffracting spot whose depth-dependent trajectory is straight. We call this PSF a non-diffracting linear-shifting (NDLS) PSF. The concept of the NDLS PSF is illustrated in Fig. 1. Here, we assume that the imaging optics use a phase-only computer-generated hologram (CGH) for PPM, an objective lens for focusing in the object space, and an image-side lens for focusing in the image space. Note that the lenses can also be implemented by the CGH if necessary. The NDLS PSF encodes depth information of an object by the linear lateral shift of a non-diffracting spot on an image plane. In the calculation of the CGH, multiple lens-function CGHs with sweeping of both lateral and axial foci are generated and numerically multiplexed into a single CGH. In this Letter, we refer to this CGH as a focus-multiplexed (FM) CGH.

Here, we formulate the calculation of the FMCGH. The assumed optical setup and the definition of the coordinates are illustrated in Fig. 1. In the following calculation, we only consider the relation of ${z}_{\mathrm{o}}$ with ${x}_{\mathrm{i}}$, and omit ${x}_{\mathrm{o}}$, ${y}_{\mathrm{o}}$, and ${y}_{\mathrm{i}}$ for the sake of simplicity. Here, ${x}_{\mathrm{i}}$ and ${z}_{\mathrm{o}}$ are parameters for focus sweeping and multiplexing in the subsequent CGH calculation process. We fix the sensor at a constant position ${z}_{\mathrm{i}}$. First, we define the complex wavefront ${\mathrm{\varphi}}_{\mathrm{div}}(x,y)[{z}^{\prime},{x}^{\prime}]$ diverging from a certain point $({x}^{\prime},{z}^{\prime})$ as follows:

We refer to this hologram as the *lens-function CGH*. Note that we set ${z}_{\mathrm{i}}={z}_{\mathrm{fi}}$ to form an in-focus spot. Next, using the lens-function CGH, we calculate the FMCGH, in which the foci (${z}_{\mathrm{o}}$ and ${x}_{\mathrm{i}}$) are swept and the resulting CGHs are numerically multiplexed. To ensure linearity of the depth-dependent lateral PSF shift, the foci are related as in Eq. (4), where $\alpha$ is a real-valued constant; by this relation, the depth-dependent trajectory of the proposed PSF becomes linear. Based on Eqs. (1)–(4), we formulate the FMCGH ${\mathrm{\varphi}}_{\mathrm{FMCGH}}$ as follows:
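As an illustration, the focus sweeping and coherent multiplexing described above can be sketched numerically. The paraxial lens-plus-tilt phase, the grid size, and the reduced sweep range below are our assumptions for a toy geometry, not the paper's exact Eqs. (1)–(5):

```python
import numpy as np

# Sketch of the focus-multiplexed CGH (FMCGH): lens-function holograms with
# jointly swept foci (x_i = alpha * z_o) are summed coherently, and the
# argument of the sum is kept as a phase-only CGH.  The paraxial lens+tilt
# phase below is an assumed stand-in for the paper's lens-function CGH.

wavelength = 526e-9                     # 526 nm source, as in the Letter
k = 2 * np.pi / wavelength
z_i = 550e-3                            # fixed image distance, z_i = z_fi
alpha = 6.00e-3 / 600e-3                # slope from the paper's sweep ranges

n = 256                                 # pupil samples (assumption)
pitch = 8.0e-6                          # SLM pixel pitch from the experiment
x = (np.arange(n) - n // 2) * pitch
X, Y = np.meshgrid(x, x)

def lens_function_cgh(dz_o, x_i):
    """Paraxial phase focusing a source defocused by dz_o to lateral x_i."""
    z_s = z_i + dz_o                    # effective source distance (assumption)
    focus = -k * (X**2 + Y**2) / 2.0 * (1.0 / z_s + 1.0 / z_i)
    tilt = k * X * x_i / z_i            # shifts the focal spot laterally
    return focus + tilt

# Focus sweep with coherent multiplexing; a reduced range and 64 (not 4096)
# holograms keep this toy geometry well-behaved and fast.
field = np.zeros((n, n), dtype=complex)
for dz_o in np.linspace(-60e-3, 60e-3, 64):
    field += np.exp(1j * lens_function_cgh(dz_o, alpha * dz_o))

phi_fmcgh = np.angle(field)             # phase-only FMCGH in (-pi, pi]
```

Keeping only the argument of the coherent sum is what makes the result displayable on a phase-only modulator.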

${\mathrm{\varphi}}_{\mathrm{FMCGH}}$ is complex-valued; therefore, the operation in Eq. (5) corresponds to coherent hologram multiplexing, even though incoherent imaging is assumed in the application. The wavefront reconstructed by the CGH of Eq. (5) includes not only the designed in-focus spots but also crosstalk in the form of defocused and shifted spots. Therefore, the PSF of the proposed system is a superposition of in-focus and shifted defocused spots. Because the PSF is generated by superposing defocused spots, it can potentially exhibit features similar to the quasi-depth-invariant (non-diffracting) PSFs of focal-sweep imaging systems, in which multi-focus images are integrated into a single image [10–12]. Furthermore, incoherent intensity imaging with such PSFs can be simply modeled as convolution with a depth-invariant blur kernel, which enables blur compensation by deconvolution [10]. Note that this principle of depth-invariant PSF engineering is strictly different from that of conventional focal-sweep PSFs because the reconstructed defocused spots are superimposed coherently, as indicated in Eq. (5). Therefore, we investigate the validity of this analogy-based assumption of depth invariance of the PSF by simulations and experiments.

Here, we consider incoherent intensity imaging of point objects placed on the optical axis by using the FMCGH. We denote the intensity vector of the object on the optical axis by ${\mathit{f}}_{\mathrm{z}}\in {\mathbb{R}}^{{N}_{\mathrm{zo}}\times 1}$ and the captured image on a 1D slice along ${x}_{\mathrm{i}}$ by ${\mathit{g}}_{\mathrm{x}}\in {\mathbb{R}}^{{N}_{\mathrm{xi}}\times 1}$, where ${N}_{\mathrm{zo}}$ and ${N}_{\mathrm{xi}}$ are positive integers expressing the total numbers of elements of the corresponding vectors. We also define the transmission matrix of the proposed optical system as ${H}_{{\mathrm{\varphi}}_{\mathrm{FMCGH}}}\in {\mathbb{R}}^{{N}_{\mathrm{xi}}\times {N}_{\mathrm{zo}}}$. The imaging process with the FMCGH calculated based on Eq. (5) can be simply modeled as follows:

Assuming that the PSF implements quasi depth invariance and the depth-dependent linear-shift property, Eq. (6) can be decomposed into the depth-dependent shift function ${H}_{{z}_{\mathrm{o}}\to {x}_{\mathrm{i}}}\in {\mathbb{R}}^{{N}_{\mathrm{xi}}\times {N}_{\mathrm{zo}}}$ and the convolution with the quasi-depth-invariant blur kernel $\overline{\mathit{h}}\in {\mathbb{R}}^{{N}_{\mathrm{xi}}\times 1}$. Considering this, Eq. (6) can be approximated as follows:

For simulation, we calculated an FMCGH based on Eqs. (1)–(5). The geometry of the optical system illustrated in Fig. 1 was assumed. As indicated in the figure, we defined the origin of ${x}_{\mathrm{i}}$ as the intersection of the optical axis and the image sensor and the origin of ${z}_{\mathrm{o}}$ as the position $z={z}_{\mathrm{fo}}$ in the object space. Specifically, we set ${z}_{\mathrm{fo}}={z}_{\mathrm{fi}}=550\text{\hspace{0.17em}}\mathrm{mm}$ as example parameters. In the CGH calculation, we swept the axial focusing depth ${z}_{\mathrm{o}}$ in the object space from $-600\text{\hspace{0.17em}}\mathrm{mm}$ to $+600\text{\hspace{0.17em}}\mathrm{mm}$. Simultaneously, we also swept the lateral focusing point ${x}_{\mathrm{i}}$ in the image space from $-6.00\text{\hspace{0.17em}}\mathrm{mm}$ to $+6.00\text{\hspace{0.17em}}\mathrm{mm}$. Within these ranges, we calculated 4096 lens-function CGHs and multiplexed them into a single FMCGH. To simulate imaging, we calculated diffraction with the Fresnel approximation. The wavelength was set to 526 nm.
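The shift-then-blur approximation of Eq. (7) can be illustrated with a toy discrete model; the vector sizes, shift slope, and blur kernel below are illustrative assumptions:

```python
import numpy as np

# Toy version of the approximated imaging model: a depth vector f_z is first
# mapped to lateral position by a linear shift matrix (playing the role of
# H_{z_o -> x_i}), then blurred by a depth-invariant kernel h_bar.

n_zo, n_xi = 32, 128
shift = np.zeros((n_xi, n_zo))
for j in range(n_zo):
    shift[4 * j, j] = 1.0               # linear map: 4 pixels of shift per depth sample

h_bar = np.array([0.05, 0.25, 0.4, 0.25, 0.05])   # quasi-depth-invariant blur (sums to 1)

f_z = np.zeros(n_zo)
f_z[[5, 16, 27]] = 1.0                  # three point sources at different depths

# g_x = h_bar * (H f_z): shift first, then convolve with the common kernel.
g_x = np.convolve(shift @ f_z, h_bar, mode="same")
```

Because the blur kernel is shared by all depths, a single deconvolution filter suffices to sharpen every peak before the linear map is inverted.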

The calculated phase-only FMCGH is shown in Fig. 2(a). From the result of the FMCGH calculation based on focus sweeping and multiplexing, the zone-like structures were distorted and decentered. Interestingly, the phase map of the FMCGH could be unwrapped into a continuous phase map, as shown in Fig. 2(b). This suggests the possibility of implementing the proposed CGH with a refractive or reflective optical element, including a deformable mirror.
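The observation that the wrapped phase map unwraps into a continuous surface can be reproduced on a toy smooth phase. NumPy's 1D `unwrap` applied row- and column-wise is a crude stand-in for a proper 2D phase unwrapper, and the decentered-lens-like test phase is an assumption:

```python
import numpy as np

# A smooth phase (quadratic + tilt, loosely resembling a decentered lens) is
# wrapped into (-pi, pi] and then recovered, up to a global 2*pi*k offset, by
# sequential 1D unwrapping.  This only works when adjacent-sample phase
# differences stay below pi, as they do here.

x = np.linspace(-1, 1, 512)
X, Y = np.meshgrid(x, x)
phi_cont = 40.0 * (X**2 + Y**2) + 15.0 * X      # smooth continuous phase
phi_wrapped = np.mod(phi_cont + np.pi, 2 * np.pi) - np.pi

phi_unwrapped = np.unwrap(np.unwrap(phi_wrapped, axis=1), axis=0)
offset = phi_cont[0, 0] - phi_unwrapped[0, 0]   # global 2*pi multiple
```

A continuous surface like `phi_unwrapped` is exactly what a refractive element or deformable mirror would need to realize.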

We simulated imaging with the FMCGH for the NDLS PSF. For comparison, we also simulated imaging without PPM for the lens-type PSF and with cubic-phase modulation (CPM) for the SB PSF. The 2D images of the PSFs with changing object depths are shown in Fig. 3(a). Note that the intensities of the PSF images were normalized in each image for the purpose of visualizing their shapes. To suppress the effect of the horizontal sidelobe in the SB PSF for depth decoding, the CGH for CPM was rotated by 45 deg (similar to Ref. [8]) from the original one [13]. The coefficient of the CPM was chosen to implement approximately the same full width at half-maximum (FWHM) of the spot as that obtained by the simulated NDLS PSF at ${z}_{\mathrm{o}}=+57.6\text{\hspace{0.17em}}\mathrm{mm}$. As shown in Fig. 3(a), the focused spots in the SB PSFs and NDLS PSFs are quasi non-diffracting, whereas the lens PSF is obviously defocused at the out-of-focus depth. The focused spots of the SB PSF and the NDLS PSF were horizontally shifted with changing object depth, which indicates the ability to achieve depth encoding. The 1D line profiles of the horizontal slices of the PSFs are shown in Fig. 3(b). The horizontal direction corresponds to the focus sweep direction in the CGH calculation (${x}_{\mathrm{i}}$). The slice was obtained along the central horizontal line in each image [“slice” in Fig. 3(a)]. As shown in the plots, the positions of the peak intensity are shifted with depth for the SB PSF and NDLS PSF. Because of the parabolic $z$–$x$ trajectory of the Airy beam, the position shift of the peak wraps at ${z}_{\mathrm{o}}=0.00\text{\hspace{0.17em}}\mathrm{mm}$. This effect makes depth decoding difficult, which results in degradation of resolution and robustness in depth sensing. By contrast, the depth-dependent peak shift in the NDLS PSF is linear with depth; thus, the captured signal can easily be decoded into the correct ${z}_{\mathrm{o}}$ over a deep range.
Note that the FWHM of the spot-like NDLS PSF slightly changes with the depth, although the PSF implements a quasi non-diffracting property compared with the lens PSF.
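Depth decoding with the NDLS PSF then reduces to locating the lateral peak and inverting the linear relation ${x}_{\mathrm{i}}=\alpha {z}_{\mathrm{o}}$; the value of $\alpha$ here is taken from the sweep ranges above, while the pixel geometry and synthetic profile are illustrative assumptions:

```python
import numpy as np

# With a linear trajectory, each peak position maps to exactly one depth;
# the parabolic Airy trajectory would instead give two depth candidates.

alpha = 6.00 / 600.0                    # mm of lateral shift per mm of depth
pixel_pitch_mm = 4.8e-3                 # sensor pitch from the experiment

def decode_depth(line_profile, x0_px):
    """Decode z_o (mm) from a 1D slice of the captured NDLS PSF."""
    peak_px = int(np.argmax(line_profile))
    x_i_mm = (peak_px - x0_px) * pixel_pitch_mm
    return x_i_mm / alpha               # invert x_i = alpha * z_o

# Synthetic slice: a Gaussian spot shifted +500 px from the optical axis.
profile = np.exp(-0.5 * ((np.arange(2048) - 1524) / 10.0) ** 2)
z_o_mm = decode_depth(profile, x0_px=1024)
```

In practice a sub-pixel peak estimator (e.g., a centroid around the maximum) would replace the bare `argmax`.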

Figure 4 shows the ${z}_{\mathrm{o}}$-stack image of the PSF slices obtained in the simulation. Figure 4(a) presents a stack of lens PSF images. Note that the intensity is normalized in each stack image, and the lens PSF stack image is oversaturated to visualize the defocus, as indicated in the scale bar. As shown in the figure, the shape of the defocused PSFs spatially spreads, and the central position of the PSF does not shift. Figures 4(b) and 4(c) are the ${z}_{\mathrm{o}}$-stack images for the SB PSF and the NDLS PSF, respectively. In comparison, the depth-dependent trajectory of the spot in the NDLS PSF is linear while keeping approximately the same spot size, whereas the trajectory of the SB PSF for the Airy beam is parabolic, with an extremum at ${z}_{\mathrm{o}}=0.00\text{\hspace{0.17em}}\mathrm{mm}$. In the NDLS PSF image stack, periodic artifacts in the form of vertical dark lines were observed. These were probably caused by interference effects in the coherent multiplexing of the CGHs.

Using the simulated PSFs, we verified the effectiveness of deconvolution for improving the axial resolution in depth sensing. Here, we consider three incoherent point sources located at different depths (${z}_{\mathrm{o}}=-22.7,0.00,+22.7\text{\hspace{0.17em}}\mathrm{mm}$) that are simultaneously captured on an image sensor after applying the proposed PSF engineering. The captured image is the incoherent superposition of multiple defocused PSFs. Figure 5 shows the intensity profile before and after deconvolution and the corresponding ground truth. Compared with the ground truth, localization of the three incoherent point sources from the captured signal before deconvolution is visually difficult due to the superimposed blur. In this simulation, Richardson–Lucy (RL) deconvolution [14] was applied to the captured signal for blur compensation. The RL method is a Bayesian iterative deconvolution algorithm that is robust to noise. The filter was generated from the PSF at ${z}_{\mathrm{o}}=0.00\text{\hspace{0.17em}}\mathrm{mm}$. Note that Gaussian filtering was performed before deconvolution to eliminate the periodic artifacts in the PSFs appearing in Fig. 4(c). Owing to the resolution restoration by deconvolution, the three point sources were well separated after processing, and the average contrast of the signals became 6.82 times higher. As indicated in the figure, the lateral positions of the deconvolved peak signals coincide with the ground truth. Therefore, the axial resolution and robustness in depth sensing can be improved by deconvolution with our proposed method. The effectiveness of deconvolution in our method is attributed to the fact that the designed PSF has a quasi-non-diffracting behavior, which maintains the PSF profile over different depth positions. Note that using multiple deconvolution filters instead of a single filter can improve the precision of ranging, although at an increased computational cost.
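A minimal 1D sketch of the RL blur compensation described above, with a Gaussian kernel standing in for the quasi-depth-invariant NDLS kernel; the scene layout and iteration count are assumptions:

```python
import numpy as np

# 1D Richardson-Lucy deconvolution: iteratively refines an estimate f so
# that f convolved with the known PSF matches the captured signal g.

def richardson_lucy_1d(g, psf, n_iter=300):
    """RL update: f <- f * (psf_flipped * (g / (psf * f)))."""
    psf_flip = psf[::-1]
    f = np.full_like(g, g.mean() + 1e-6)       # flat non-negative start
    for _ in range(n_iter):
        est = np.convolve(f, psf, mode="same")
        ratio = g / np.maximum(est, 1e-12)
        f = f * np.convolve(ratio, psf_flip, mode="same")
    return f

x = np.arange(101)
psf = np.exp(-0.5 * ((x - 50) / 4.0) ** 2)     # stand-in blur kernel
psf /= psf.sum()

f_true = np.zeros(101)
f_true[[30, 50, 70]] = 1.0                     # three point sources
g = np.convolve(f_true, psf, mode="same")      # blurred, overlapping peaks

f_est = richardson_lucy_1d(g, psf)             # peaks re-concentrate at 30, 50, 70
```

The multiplicative update keeps the estimate non-negative, which is one reason RL behaves well on intensity data.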

The generation of the NDLS PSF was also verified by an optical experiment using a phase-only SLM. A photograph of the experimental setup is shown in Fig. 6(a). In the experiment, the lens function was also implemented by a CGH. A semiconductor laser with a wavelength of 526 nm and an objective lens with a focal length of 2.91 mm mounted on a motorized stage were used to implement a point source with a controllable depth. The emitted light was diffracted by a reflection-type phase-only liquid-crystal-on-silicon (LCoS) SLM (PLUTO by HOLOEYE Photonics; $1080\times 1080$ pixels of the total $1920\times 1080$ pixels with 8.0 μm pitch were used), and a CMOS image sensor (CM3-U3-13Y3C by FLIR, $1280\times 1024$ pixels with 4.8 μm pitch) was used to capture the diffracted light via a half mirror. To separate the directly reflected and the diffracted light, a wavefront tilt function along the vertical direction was added to the FMCGH. Figure 6(b) shows the experimentally observed ${z}_{\mathrm{o}}$-stack of sliced PSFs. Comparing this result with Fig. 4, we confirmed the consistency between the simulation and the experimental results. Note that the ${z}_{\mathrm{o}}$-axis was flipped because of the reflection geometry of the optical setup.
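The order-separating tilt added to the FMCGH in the experiment amounts to superposing a linear phase ramp before wrapping; the separation angle and the zero placeholder CGH below are assumptions:

```python
import numpy as np

# Adding a linear (blazed-grating-like) phase ramp along the vertical
# direction steers the designed diffraction order away from the direct SLM
# reflection, which is then blocked outside the sensor's field of view.

wavelength = 526e-9
pitch = 8.0e-6                          # LCoS pixel pitch
n = 1080                                # square region of the 1920x1080 panel
y = (np.arange(n) - n // 2) * pitch
_, Y = np.meshgrid(y, y)                # Y varies along the vertical axis

phi_fmcgh = np.zeros((n, n))            # placeholder for the computed FMCGH
tilt_angle = np.deg2rad(0.5)            # assumed separation angle
phi_tilt = 2 * np.pi * Y * np.sin(tilt_angle) / wavelength

phi_slm = np.mod(phi_fmcgh + phi_tilt, 2 * np.pi)   # wrapped pattern for the SLM
```

Because the ramp is vertical, it leaves the horizontal (depth-encoding) peak shift untouched.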

In this Letter, we proposed a method of realizing an NDLS PSF based on PPM using an FMCGH. The CGH was calculated by multiplexing multiple lens-function CGHs whose foci in object and image spaces were simultaneously swept. The NDLS property can be used for depth sensing with a deeper range and higher axial resolution than that achievable with an Airy beam because its $z$–$x$ profile is not bent but straight. Similarly to an Airy beam, the blur of the PSF can be reduced by deconvolution exploiting the depth invariance of a blur kernel. The proposed FMCGH and NDLS PSF were verified by numerical simulations and optical experiments by using a phase-only LCoS SLM.

As mentioned above, the features of the NDLS spot are promising for extending the axial range and resolution of PPM-based 3D sensing and imaging. The applications of the proposed method are not limited to performance improvement of Airy-beam-based 3D microscopy but also include laser-based time-of-flight ranging, stereoscopic 3D imaging for machine vision, and so on. In future work, the removal of the periodic artifact in the ${z}_{\mathrm{o}}$-stack image will be addressed by further optimization of the CGH. Implementing the proposed phase modulator with a refractive or reflective optical element to increase the photon efficiency is also an issue to be addressed in the future.

## Funding

Precursory Research for Embryonic Science and Technology (PRESTO) (JPMJPR15P8, JPMJPR1677); NJRC Mater. & Dev. (20181086).

## REFERENCES

**1. **S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, Proc. Natl. Acad. Sci. USA **106**, 2995 (2009). [CrossRef]

**2. **A. Levin, R. Fergus, F. Durand, and W. T. Freeman, ACM Trans. Graph. **26**, 70 (2007). [CrossRef]

**3. **P. Llull, X. Yuan, L. Carin, and D. J. Brady, Optica **2**, 822 (2015). [CrossRef]

**4. **B. Hajj, M. El Beheiry, and M. Dahan, Biomed. Opt. Express **7**, 726 (2016). [CrossRef]

**5. **H. Li, D. Chen, G. Xu, B. Yu, and H. Niu, Opt. Express **23**, 787 (2015). [CrossRef]

**6. **Y. Zhou, P. Zammit, G. Carles, and A. R. Harvey, Opt. Express **26**, 7563 (2018). [CrossRef]

**7. **M. D. Lew, S. F. Lee, M. Badieirostami, and W. E. Moerner, Opt. Lett. **36**, 202 (2011). [CrossRef]

**8. **S. Jia, J. C. Vaughan, and X. Zhuang, Nat. Photonics **8**, 302 (2014). [CrossRef]

**9. **G. Sancataldo, L. Scipioni, T. Ravasenga, L. Lanzanò, A. Diaspro, A. Barberis, and M. Duocastella, Optica **4**, 367 (2017). [CrossRef]

**10. **G. Häusler, Opt. Commun. **6**, 38 (1972). [CrossRef]

**11. **S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, IEEE Trans. Pattern Anal. Mach. Intell. **33**, 58 (2011). [CrossRef]

**12. **A. Levin and F. Durand, *IEEE Conference on Computer Vision and Pattern Recognition* (IEEE, 2010), pp. 1831–1838.

**13. **E. R. Dowski and W. T. Cathey, Appl. Opt. **34**, 1859 (1995). [CrossRef]

**14. **W. H. Richardson, J. Opt. Soc. Am. **62**, 55 (1972). [CrossRef]