Abstract

An Airy beam can be used to implement a non-diffracting self-bending point-spread function (PSF), which can be utilized for computational 3D imaging. However, the parabolic depth-dependent spot trajectory limits the range and resolution in rangefinding. In this Letter, we propose a novel pupil-phase-modulation method to realize a non-diffracting linear-shift PSF. For the modulation, we use a focus-multiplexed computer-generated hologram, which is calculated by multiplexing multiple lens-function holograms with 2D sweeping of the foci. With this method, the depth-dependent trajectory of the non-diffracting spot is straightened, which improves the range and resolution in rangefinding. The proposed method was verified by numerical simulations and optical experiments. The method can be applied to laser-based microscopy, time-of-flight rangefinding, and so on.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

Computational imaging is a cooperative imaging method involving optical encoding and computational decoding. In this field, 3D imaging by point-spread-function (PSF) engineering is an important topic for applications such as microscopy and rangefinding. PSF engineering is mostly implemented by pupil-phase modulation (PPM) to achieve high photon efficiency with simple optical hardware [1], although it can also be realized by amplitude masking [2], temporal coding [3], or constructing an unconventional imaging system [4].

The use of a depth-deforming PSF, in which the shape of the PSF changes depending on the depth, is one approach for PPM-based 3D imaging. Originally, a double-helix (DH) PSF was proposed and applied to super-resolution localization 3D microscopy [1]. A DH PSF consists of a double spot that rotates with depth. Since then, several improvements and applications of the DH PSF have been reported [5,6].

In addition, other types of PSFs whose center position shifts laterally in accordance with the depth of an object have also been used for depth sensing. To date, a corkscrew PSF [7], a self-bending (SB) PSF implemented with an Airy beam [8], and a shifting PSF implemented with a combination of a decentered lens and an electrically tunable lens [9] have been proposed and applied to 3D imaging. With depth encoding by such engineered PSFs, depth information can be decoded simply by analyzing the spatial information in an image. In particular, an Airy beam has a non-diffracting property, and depth encoding with it is therefore applicable to deep-range and robust measurement.

As mentioned above, an Airy beam is a powerful tool for depth sensing due to its non-diffracting property. However, the parabolic trajectory of the center spot in Airy-beam propagation destroys the uniqueness of the depth-decoding solution because the front and rear foci are indistinguishable. Furthermore, the axial resolution degrades around the extremum of the parabolic trajectory. To avoid these problems, the imaging depth range is limited in practice [8]. If the depth-dependent trajectory could be made linear while keeping the non-diffracting property, these limitations of Airy-beam-based depth encoding would, in principle, be solved.

In this Letter, we report a novel PPM method for generating a non-diffracting spot whose depth-dependent trajectory is straight. We call this PSF a non-diffracting linear-shifting (NDLS) PSF. The concept of the NDLS PSF is illustrated in Fig. 1. Here, we assume that the imaging optics use a phase-only computer-generated hologram (CGH) for PPM, an objective lens for focusing in the object space, and an image-side lens for focusing in the image space. Note that the lenses can also be implemented by the CGH if necessary. The NDLS PSF encodes the depth information of an object as a linear lateral shift of a non-diffracting spot on the image plane. In the calculation of the CGH, multiple lens-function CGHs with sweeping of both lateral and axial foci are generated and numerically multiplexed into a single CGH. In this Letter, we refer to this CGH as a focus-multiplexed (FM) CGH.


Fig. 1. Concept of the proposed optical system generating the NDLS PSF using a focus-multiplexed CGH. The color represents the depth of the object.


Here, we formulate the calculation of the FMCGH. The assumed optical setup and the definition of the coordinates are illustrated in Fig. 1. In the following calculation, we consider only the relation of z_o with x_i, and omit x_o, y_o, and y_i for simplicity. Here, x_i and z_o are the parameters for focus sweeping and multiplexing in the subsequent CGH-calculation process. We fix the sensor at a constant position z_i. First, we define the complex wavefront ϕ_div(x,y)[z, x′] diverging from a point (x′, z) as follows:

\[ \phi_{\mathrm{div}}(x,y)[z,x'] = \exp\!\left( \frac{jk\left\{ (x-x')^{2}+y^{2} \right\}}{2z} \right), \tag{1} \]
where j is the imaginary unit and k is the wavenumber. Equation (1) is the analytical solution of a spherical wave under the Fresnel approximation. Next, using ϕ_div, we formulate the complex amplitude of a holographic lens ϕ_lens that images a point source at (x_o, z_o) to a spot at (x_i, z_i) as follows:
\[ \phi_{\mathrm{lens}}(x,y;z_{o},x_{i}) = \phi_{\mathrm{div}}^{\,*}(x,y)[z_{o},0]\;\phi_{\mathrm{div}}^{\,*}(x,y)[z_{i},x_{i}]. \tag{2} \]
In Eq. (2), we set x_o = 0 to design an optical system optimized for on-axis imaging. Next, to implement the lens function with an optical system including two focusing lenses, as in Fig. 1, we derive the CGH pattern ϕ_eCGH for PPM from the holographic lens ϕ_lens as follows:
\[ \phi_{\mathrm{eCGH}}(x,y;z_{o},x_{i}) = \phi_{\mathrm{lens}}(x,y;z_{o},x_{i})\;\phi_{\mathrm{div}}(x,y)[z_{\mathrm{fo}},0]\;\phi_{\mathrm{div}}(x,y)[z_{\mathrm{fi}},0], \tag{3} \]
where z_fo is the axial focusing depth of the objective lens in the object space, and z_fi is the axial focusing position of the image-side lens in the image space. We call the CGH ϕ_eCGH a lens-function CGH. Note that we set z_i = z_fi to form an in-focus spot. Next, using the lens-function CGH, we calculate the FMCGH, in which the foci (z_o and x_i) are swept and the CGHs are numerically multiplexed. To ensure linearity of the depth-dependent lateral PSF shift, the foci are related by
\[ x_{i} = \alpha z_{o}, \tag{4} \]
where α is a real-valued constant. By Eq. (4), the depth-dependent trajectory of the proposed PSF becomes linear. Based on Eqs. (1)–(4), we formulate the FMCGH ϕ_FMCGHc as follows:
\[ \phi_{\mathrm{FMCGHc}}(x,y) = \sum_{z_{o}=z_{\min}}^{z_{\max}} \phi_{\mathrm{eCGH}}(x,y;z_{o},\alpha z_{o}), \tag{5} \]
where z_min and z_max denote the range of the axial focal sweep. Finally, we approximate the phase-only CGH ϕ_FMCGH from ϕ_FMCGHc by discarding the amplitude so that a phase-only spatial light modulator (SLM) can be used. Note that the difference between these two CGHs is minor because the information required for the PPM is carried mostly by the phase angle of ϕ_FMCGHc. In addition, since the aperture of an optical system is generally circular, we truncate the CGH to a circular one.
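As an illustration of Eqs. (1)–(5), the following Python sketch numerically builds a focus-multiplexed CGH. The grid size, pixel pitch, α, the number of multiplexed CGHs, and the treatment of z_o as an offset from z_fo are illustrative assumptions, not the parameters of the reported system.

```python
import numpy as np

def calc_fmcgh(n=256, pitch=8e-6, wl=526e-9, zfo=0.55, zfi=0.55,
               alpha=0.01, z_span=0.6, n_mux=64):
    """Sketch of Eqs. (1)-(5): sum focus-swept lens-function CGHs, keep phase only."""
    k = 2.0 * np.pi / wl
    c = (np.arange(n) - n / 2) * pitch
    x, y = np.meshgrid(c, c)

    def phi_div(z, xs):
        # Eq. (1): paraxial spherical wave diverging from (xs, z)
        return np.exp(1j * k * ((x - xs) ** 2 + y ** 2) / (2.0 * z))

    acc = np.zeros((n, n), dtype=complex)
    for zo in np.linspace(-z_span, z_span, n_mux):
        z_src = zfo + zo              # absolute source depth (z_o origin at z = zfo)
        if abs(z_src) < 1e-3:         # skip the singular source-on-pupil term
            continue
        xi = alpha * zo               # Eq. (4): linear focus relation
        # Eq. (2): holographic lens imaging (0, z_src) to (xi, zfi)
        lens = np.conj(phi_div(z_src, 0.0)) * np.conj(phi_div(zfi, xi))
        # Eq. (3): divide out the two physical lens functions; Eq. (5): multiplex
        acc += lens * phi_div(zfo, 0.0) * phi_div(zfi, 0.0)
    return np.exp(1j * np.angle(acc))  # phase-only approximation of the FMCGH
```

The returned array is the unit-amplitude, phase-only approximation of ϕ_FMCGH; in practice it would additionally be truncated to the circular aperture.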

ϕ_eCGH is composed of complex values; therefore, the operation of Eq. (5) is coherent hologram multiplexing, whereas incoherent imaging is assumed in the application. The wavefront reconstructed by the CGH of Eq. (5) includes not only the designed in-focus spots but also crosstalk components, which are defocused and shifted spots. Therefore, the PSF of the proposed system is a superposition of in-focus and shifted defocused spots. Because the PSF is generated by superposing defocused spots, it has the potential to implement features similar to the quasi-depth-invariant (non-diffracting) PSFs of focal-sweep imaging systems, where multi-focus images are integrated into a single image [10–12]. Furthermore, incoherent intensity imaging with such PSFs can be simply modeled as convolution with a depth-invariant blur kernel, which enables blur compensation by deconvolution [10]. Note that this principle of depth-invariant PSF engineering differs strictly from that of conventional focal-sweep PSFs because the reconstructed defocused spots are coherently superimposed, as indicated in Eq. (5). Therefore, we investigate the validity of this analogy-based assumption of depth invariance of the PSF by simulations and experiments.

Here, we consider incoherent intensity imaging of point objects placed on the optical axis by using the FMCGH. We denote the intensity vector of the object on the optical axis by f_z ∈ R^(N_zo×1) and the captured image on a 1D slice along x_i by g_x ∈ R^(N_xi×1), where N_zo and N_xi are the numbers of elements of the corresponding vectors. We also define the transmission matrix of the proposed optical system as H_ϕFMCGH ∈ R^(N_xi×N_zo). The imaging process with the FMCGH calculated from Eq. (5) can then be simply modeled as follows:

\[ g_{x} = H_{\phi_{\mathrm{FMCGH}}}[f_{z}]. \tag{6} \]
Assuming that the PSF implements the quasi-depth-invariance and depth-dependent linear-shift properties, Eq. (6) can be decomposed into the depth-dependent shift matrix H_zoxi ∈ R^(N_xi×N_zo) and a convolution with the quasi-depth-invariant blur kernel h̄ ∈ R^(N_xi×1). Considering this, Eq. (6) can be approximated as follows:
\[ g_{x} \approx \bar{h} * H_{z_{o}x_{i}}[f_{z}], \tag{7} \]
where * is the convolution operator. The decoding of depth information f̂_z from the captured data g_x is simply the numerical inversion of Eq. (7) with known H_zoxi and h̄ as follows:
\[ \hat{f}_{z} = H_{z_{o}x_{i}}^{-1}\left[ \bar{h}^{-1} * g_{x} \right], \tag{8} \]
where h̄⁻¹ denotes an inverse filter derived from the blur kernel h̄.
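The forward model of Eq. (7) and the decoding of Eq. (8) can be sketched with a 1D toy example. The signal length, the circulant 8-pixel shift standing in for H_zoxi, the Gaussian stand-in for the blur kernel h̄, and the Wiener-style regularized inverse filter are all illustrative assumptions.

```python
import numpy as np

# Toy 1D illustration of Eqs. (6)-(8); sizes, shift, and blur width are assumptions.
n = 128
f_z = np.zeros(n)
f_z[[30, 64, 98]] = 1.0                      # three point sources at different depths

# H_{zo xi}: linear depth-to-lateral-shift map (here a circulant 8-pixel shift)
S = np.roll(np.eye(n), shift=8, axis=0)

# h_bar: quasi depth-invariant blur kernel (Gaussian stand-in), centered at x = 0
x = np.arange(n) - n // 2
h_bar = np.exp(-x ** 2 / (2 * 3.0 ** 2))
h_bar /= h_bar.sum()
H = np.fft.fft(np.fft.ifftshift(h_bar))      # zero-phase OTF of the kernel

# Eq. (7): captured signal = blur applied to the depth-shifted sources
g_x = np.real(np.fft.ifft(np.fft.fft(S @ f_z) * H))

# Eq. (8): regularized inverse filter, then inversion of the shift map
deblur = np.real(np.fft.ifft(np.fft.fft(g_x) * np.conj(H) / (np.abs(H) ** 2 + 1e-3)))
f_hat = S.T @ deblur                         # permutation inverse = transpose
```

After deblurring and inverting the shift map, the peaks of f_hat fall back at the original depth bins of the three sources.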

For the simulations, we calculated an FMCGH based on Eqs. (1)–(5), assuming the geometry of the optical system illustrated in Fig. 1. As indicated in the figure, we defined the origin of x_i as the intersection of the optical axis and the image sensor, and the origin of z_o as the position z = z_fo in the object space. Specifically, we set z_fo = z_fi = 550 mm as example parameters. In the CGH calculation, we swept the axial focusing depth z_o in the object space from −600 mm to +600 mm. Simultaneously, we swept the lateral focusing point x_i in the image space from −6.00 mm to +6.00 mm. Within these ranges, we calculated 4096 lens-function CGHs and multiplexed them into a single FMCGH. To simulate imaging, we calculated diffraction with the Fresnel approximation. The wavelength was set to 526 nm.
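Diffraction under the Fresnel approximation is commonly computed with the FFT-based transfer-function method; the sketch below propagates a simple lens-type pupil field as an example. The grid, pitch, and distances are placeholders, not the reported simulation parameters.

```python
import numpy as np

def fresnel_prop(u, z, wl, pitch):
    """Propagate field u by distance z with the Fresnel transfer-function method."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    fxx, fyy = np.meshgrid(fx, fx)
    tf = np.exp(-1j * np.pi * wl * z * (fxx ** 2 + fyy ** 2))  # Fresnel kernel in Fourier space
    return np.fft.ifft2(np.fft.fft2(u) * tf)

# Example: in-focus spot of a plain quadratic (lens-type) pupil phase, assumed values
n, pitch, wl = 256, 8e-6, 526e-9
c = (np.arange(n) - n / 2) * pitch
x, y = np.meshgrid(c, c)
zo, zi = 0.55, 0.55
src = np.exp(1j * 2 * np.pi / wl * (x ** 2 + y ** 2) / (2 * zo))   # wave from on-axis point
lens = np.exp(-1j * 2 * np.pi / wl * (x ** 2 + y ** 2) / 2 * (1 / zo + 1 / zi))
u = src * lens
psf = np.abs(fresnel_prop(u, zi, wl, pitch)) ** 2
```

Since the transfer function is unit-magnitude, this propagation conserves energy, and the lens-type pupil produces an in-focus spot at the center of the sensor grid.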

The calculated phase-only FMCGH is shown in Fig. 2(a). As a result of the focus sweeping and multiplexing, the zone-like structures of the FMCGH are distorted and decentered. Interestingly, the phase map of the FMCGH could be unwrapped into a continuous phase map, as shown in Fig. 2(b). This suggests the possibility of implementing the proposed CGH with a refractive or reflective optical element, including a deformable mirror.
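In one dimension, such phase unwrapping can be illustrated with NumPy's np.unwrap; the quadratic (lens-like) phase profile and its parameters below are assumptions for illustration, not the FMCGH itself.

```python
import numpy as np

# Wrapped quadratic (lens-like) phase along one line, then unwrapped; assumed values.
wl, f = 526e-9, 0.55                           # wavelength and focal length (placeholders)
x = np.linspace(-1e-3, 1e-3, 1024)
phase = (2 * np.pi / wl) * x ** 2 / (2 * f)    # continuous quadratic phase
wrapped = np.angle(np.exp(1j * phase))         # wrapped into (-pi, pi]
unwrapped = np.unwrap(wrapped)                 # continuous again, up to a 2*pi offset
```

Provided the sampled phase step between neighboring pixels stays below π, the unwrapped profile reproduces the continuous one up to a constant multiple of 2π, which is what enables a refractive or deformable-mirror implementation.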


Fig. 2. Calculated phase-only FMCGH. Phase maps (a) without and (b) with phase unwrapping.


We simulated imaging with the FMCGH for the NDLS PSF. For comparison, we also simulated imaging without PPM for the lens-type PSF and with cubic-phase modulation (CPM) for the SB PSF. The 2D images of the PSFs at different object depths are shown in Fig. 3(a). Note that the intensities of the PSF images were normalized in each image to visualize their shapes. To suppress the effect of the horizontal sidelobe of the SB PSF on depth decoding, the CGH for CPM was rotated by 45 deg (similar to Ref. [8]) from the original one [13]. The coefficient of the CPM was chosen so that the full width at half-maximum (FWHM) of the spot was approximately the same as that of the simulated NDLS PSF at z_o = +57.6 mm. As shown in Fig. 3(a), the focused spots of the SB and NDLS PSFs are quasi-non-diffracting, whereas the lens PSF is obviously defocused at out-of-focus depths. The focused spots of the SB PSF and the NDLS PSF shift horizontally with changing object depth, which indicates that depth encoding is achieved. The 1D line profiles of the horizontal slices of the PSFs are shown in Fig. 3(b). The horizontal direction corresponds to the focus-sweep direction in the CGH calculation (x_i). Each slice was taken along the central horizontal line of the corresponding image ["slice" in Fig. 3(a)]. As shown in the plots, the positions of the peak intensity shift with depth for the SB and NDLS PSFs. Because of the parabolic z–x trajectory of the Airy beam, the position shift of the SB peak wraps around at z_o = 0.00 mm. This effect makes depth decoding difficult, degrading resolution and robustness in depth sensing. By contrast, the depth-dependent peak shift of the NDLS PSF is linear with depth; thus, the captured signal can easily be decoded into the correct z_o over a deep range. Note that the FWHM of the spot-like NDLS PSF changes slightly with depth, although the PSF implements a quasi-non-diffracting property compared with the lens PSF.


Fig. 3. (a) 2D PSFs with defocus by non-PPM (lens PSF), CPM (SB PSF, also called Airy beam), and FMCGH (NDLS PSF). (b) 1D line profiles on the “slice” line of the PSFs. Scale bar indicates 300 μm.


Figure 4 shows the zo-stack images of the PSF slices obtained in the simulation. Figure 4(a) presents the stack of lens-PSF images. Note that the intensity is normalized in each stack image, and the lens-PSF stack image is oversaturated to visualize the defocus, as indicated by the scale bar. As shown in the figure, the defocused lens PSF spreads spatially, and its central position does not shift. Figures 4(b) and 4(c) are the zo-stack images of the SB PSF and the NDLS PSF, respectively. The depth-dependent trajectory of the spot in the NDLS PSF is linear while keeping approximately the same spot size, whereas the trajectory of the SB PSF of the Airy beam is parabolic, with an extremum at z_o = 0.00 mm. In the NDLS PSF stack image, periodic artifacts appearing as vertical dark lines were observed; these were probably caused by interference effects in the coherent multiplexing of the CGHs.


Fig. 4. Simulation results of the zo-stack of 1D slices of the (a) lens, (b) SB, and (c) NDLS PSFs.


Using the simulated PSFs, we verified the effectiveness of deconvolution for improving the axial resolution in depth sensing. Here, we consider three incoherent point sources located at different depths (z_o = −22.7, 0.00, +22.7 mm) that are simultaneously captured by an image sensor after the proposed PSF engineering is applied. The captured image is the incoherent superposition of multiple defocused PSFs. Figure 5 shows the intensity profile before and after deconvolution and the corresponding ground truth. Compared with the ground truth, localizing the three point sources from the captured signal before deconvolution is visually difficult because of the superimposed blur. In this simulation, Richardson–Lucy (RL) deconvolution [14] was applied to the captured signal for blur compensation. The RL method is a Bayesian iterative deconvolution algorithm that provides noise robustness. The filter was generated from the PSF at z_o = 0.00 mm. Note that Gaussian filtering was performed before deconvolution to eliminate the periodic artifacts in the PSFs, which appear in Fig. 4(c). Owing to the resolution restoration by deconvolution, the three point sources were well separated after processing, and the average contrast of the signals became 6.82 times higher. As indicated in the figure, the lateral positions of the deconvolved peaks coincide with the ground truth. Therefore, the axial resolution and robustness in depth sensing can be improved by deconvolution in our proposed method. The effectiveness of deconvolution in our method is attributed to the fact that the designed PSF has a quasi-non-diffracting behavior, which maintains the PSF profile at different depth positions. Note that using multiple deconvolution filters instead of a single filter can improve the ranging precision, although it increases the computational cost.
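A minimal 1D sketch of the RL update, using FFT-based circular convolution, illustrates the iteration; the kernel shape, test signal, and iteration count are assumptions for illustration.

```python
import numpy as np

def richardson_lucy(g, psf, n_iter=100):
    """Richardson-Lucy deconvolution of a nonnegative 1D signal g by kernel psf."""
    psf = psf / psf.sum()
    otf = np.fft.fft(np.fft.ifftshift(psf))          # zero-phase OTF (centered kernel)
    conv = lambda v, o: np.real(np.fft.ifft(np.fft.fft(v) * o))
    f = np.full_like(g, max(g.mean(), 1e-12))        # flat nonnegative initialization
    for _ in range(n_iter):
        ratio = g / np.maximum(conv(f, otf), 1e-12)  # data / current blurred estimate
        f = f * conv(ratio, np.conj(otf))            # multiplicative RL update
    return f
```

The multiplicative update preserves nonnegativity of the estimate, which is the property that makes RL well suited to intensity (photon-count) data.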


Fig. 5. Line profile of a captured image generated from multi-depth point sources and deconvolution result.


The generation of the NDLS PSF was also verified by an optical experiment using a phase-only SLM. A photograph of the experimental setup is shown in Fig. 6(a). In the experiment, the lens function was also implemented by a CGH. A semiconductor laser with a wavelength of 526 nm and an objective lens with a focal length of 2.91 mm mounted on a motorized stage were used to implement a point source whose depth was controllable. The emitted light was diffracted by a reflection-type phase-only liquid-crystal-on-silicon (LCoS) SLM (PLUTO by HOLOEYE Photonics; 1080×1080 pixels of the total 1920×1080 pixels with 8.0 μm pitch were used), and a CMOS image sensor (CM3-U3-13Y3C by FLIR; 1280×1024 pixels with 4.8 μm pitch) was used to capture the diffracted light via a half mirror. To separate the reflected and diffracted light, a wavefront tilt function along the vertical direction was added to the FMCGH. Figure 6(b) shows the experimentally observed zo-stack of sliced PSFs. Comparing this result with Fig. 4, we confirmed the consistency between the simulation and experimental results. Note that the zo-axis is flipped because of the reflection geometry of the optical setup.


Fig. 6. (a) Setup for optical experiments and (b) the experimentally observed zo-stack of the 1D slices of the NDLS PSFs.


In this Letter, we proposed a method of realizing an NDLS PSF based on PPM using an FMCGH. The CGH was calculated by multiplexing multiple lens-function CGHs whose foci in object and image spaces were simultaneously swept. The NDLS property can be used for depth sensing with a deeper range and higher axial resolution than that achievable with an Airy beam because its zx profile is not bent but straight. Similarly to an Airy beam, the blur of the PSF can be reduced by deconvolution exploiting the depth invariance of a blur kernel. The proposed FMCGH and NDLS PSF were verified by numerical simulations and optical experiments by using a phase-only LCoS SLM.

As mentioned above, the features of the NDLS spot are promising for extending the axial range and resolution of PPM-based 3D sensing and imaging. The applications of the proposed method are not limited to improving the performance of Airy-beam-based 3D microscopy; they also include laser-based time-of-flight ranging, stereoscopic 3D imaging for machine vision, and so on. In future work, the removal of the periodic artifacts in the zo-stack image will be addressed by further optimization of the CGH. Implementing the proposed phase modulator with a refractive or reflective optical element to increase the photon efficiency is another issue to be addressed.

Funding

Precursory Research for Embryonic Science and Technology (PRESTO) (JPMJPR15P8, JPMJPR1677); NJRC Mater. & Dev. (20181086).

REFERENCES

1. S. R. P. Pavani, M. A. Thompson, J. S. Biteen, S. J. Lord, N. Liu, R. J. Twieg, R. Piestun, and W. E. Moerner, Proc. Natl. Acad. Sci. USA 106, 2995 (2009).

2. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, ACM Trans. Graph. 26, 70 (2007).

3. P. Llull, X. Yuan, L. Carin, and D. J. Brady, Optica 2, 822 (2015).

4. B. Hajj, M. El Beheiry, and M. Dahan, Biomed. Opt. Express 7, 726 (2016).

5. H. Li, D. Chen, G. Xu, B. Yu, and H. Niu, Opt. Express 23, 787 (2015).

6. Y. Zhou, P. Zammit, G. Carles, and A. R. Harvey, Opt. Express 26, 7563 (2018).

7. M. D. Lew, S. F. Lee, M. Badieirostami, and W. E. Moerner, Opt. Lett. 36, 202 (2011).

8. S. Jia, J. C. Vaughan, and X. Zhuang, Nat. Photonics 8, 302 (2014).

9. G. Sancataldo, L. Scipioni, T. Ravasenga, L. Lanzanò, A. Diaspro, A. Barberis, and M. Duocastella, Optica 4, 367 (2017).

10. G. Häusler, Opt. Commun. 6, 38 (1972).

11. S. Kuthirummal, H. Nagahara, C. Zhou, and S. K. Nayar, IEEE Trans. Pattern Anal. Mach. Intell. 33, 58 (2011).

12. A. Levin and F. Durand, IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 1831–1838.

13. E. R. Dowski and W. T. Cathey, Appl. Opt. 34, 1859 (1995).

14. W. H. Richardson, J. Opt. Soc. Am. 62, 55 (1972).
