Abstract

The use of complementary engineered point spread functions is proposed for the joint tasks of depth estimation and image recovery over an extended depth of field. A digital imaging system with a dynamically adjustable pupil is demonstrated experimentally. A broadband, passive camera is implemented with a fractional ranging error of 4/10^4 at a working distance of 1 m. Once the depth and brightness information of a scene are obtained, a synthetic camera is defined and images are rendered computationally to emphasize particular features, such as focusing at different depths.

© 2012 Optical Society of America




Supplementary Material (1)

Media 1: MOV (271 KB)



Figures (10)

Fig. 1.

Axial dependence of three PSFs. A, The typical camera PSF using a clear, circular aperture; B, the DH-PSF; and C, the cubic-phase PSF (CP-PSF) are compared through an equivalent range of defocus. The standard camera (clear aperture) PSF is symmetric about focus, the DH-PSF has a unique rotation angle at each position, and the CP-PSF has an axially invariant pattern.

Fig. 2.

A, A rotation angle is associated with the estimated DH-PSF, ĥ_DH, by calculating the angle subtended by the centroids of the two lobes relative to a frame of reference on the detector (here, the horizontal axis). B, The DH-PSF rotation angle varies as a function of axial position; the angle-versus-depth curve can be calibrated experimentally to account for possible aberrations.
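The estimator sketched in this caption can be illustrated with a few lines of NumPy. The sketch below is not the authors' code: the two-lobe centroid extraction (splitting the thresholded PSF image about its intensity-weighted center) and the interpolated angle-versus-depth calibration are illustrative assumptions.

    import numpy as np

    def lobe_centroids(psf_img, thresh=0.5):
        # Keep only the bright lobes, split the image about its global
        # intensity-weighted centroid, and compute one centroid per half.
        # (Illustrative heuristic; a practical estimator might fit two Gaussians.)
        img = np.where(psf_img > thresh * psf_img.max(), psf_img, 0.0)
        ys, xs = np.indices(img.shape)
        cx = (xs * img).sum() / img.sum()
        cents = []
        for half in (xs < cx, xs >= cx):
            w = img * half
            cents.append(((ys * w).sum() / w.sum(), (xs * w).sum() / w.sum()))
        return cents  # [(y1, x1), (y2, x2)]

    def rotation_angle(psf_img):
        # Angle subtended by the lobe centroids relative to the horizontal axis.
        (y1, x1), (y2, x2) = lobe_centroids(psf_img)
        return np.degrees(np.arctan2(y2 - y1, x2 - x1))

    def angle_to_depth(theta_deg, calib_angles, calib_z):
        # Invert a measured, monotonic angle-versus-depth calibration curve.
        order = np.argsort(calib_angles)
        return np.interp(theta_deg,
                         np.asarray(calib_angles)[order],
                         np.asarray(calib_z)[order])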

Fig. 3.

Block diagram of the dual-channel complementary PSF engineering digital optical system.

Fig. 4.

CRLB analysis of the axial estimation precision for a point source (top panel) and a die-cast car (bottom panel). The performance of the DH-PSF and of the standard camera is reported as the object is translated through focus at z = 1 m. The relative performance gain from using DH-PSF optics depends on the spatial frequency content of the object.
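Curves like these follow from the bound in Eq. (7) below. As a hedged illustration, the following sketch computes the bound numerically for any forward model i(z) under additive Gaussian noise of standard deviation sigma_n; the forward_model callable is a placeholder, not part of the original work.

    import numpy as np

    def crlb_axial_variance(forward_model, z, sigma_n, dz=1e-3):
        # Central-difference sensitivity of every pixel to the axial position,
        # summed into the Fisher information for z, then inverted (Eq. (7)).
        di_dz = (forward_model(z + dz) - forward_model(z - dz)) / (2.0 * dz)
        fisher = np.sum(di_dz ** 2) / sigma_n ** 2
        return 1.0 / fisher  # variance bound; take sqrt for the precision limit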

Fig. 5.

Experimental configuration of the serialized engineered-PSF optical imager. The imaging lens (L1) is paired with a color filter (F) and a polarization analyzer (P). The intermediate image (ii) is reimaged through a Fourier-transform lens (L2) so that the phase of the spatial frequency components can be modulated with a spatial light modulator (SLM). An iris (I) is placed before the SLM to confine the incident light to the modulated region of the liquid-crystal device. The final Fourier-transform lens (L3) completes the optical encoding and forms a signal on the detector (D) that is related to the object and encoded with the desired engineered PSF.

Fig. 6.

A, Experimental DH-PSF image; B, experimental CP-PSF. Both are shown as measured after L3 in the engineered PSF imaging system.

Fig. 7.

A, The restored image contains the spatial frequency information used to compute local confidence measures for the axial estimation; B, the corresponding confidence measures for the restored CP image.
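One plausible reading of these confidence measures, consistent with Eq. (8) below, is the fraction of a local patch's spectral energy lying above minimum spatial frequencies ε_x and ε_y. The sketch below implements that reading only; treating δ(x, y) as the local spectrum and the choice of cutoffs are assumptions, not the authors' definition.

    import numpy as np

    def confidence_measure(patch, eps_x, eps_y):
        # Ratio of high-frequency spectral energy to total spectral energy
        # in a local patch of the restored image (one reading of Eq. (8)).
        spec = np.abs(np.fft.fft2(patch)) ** 2
        fy = np.fft.fftfreq(patch.shape[0])[:, None]
        fx = np.fft.fftfreq(patch.shape[1])[None, :]
        high = (np.abs(fx) >= eps_x) & (np.abs(fy) >= eps_y)
        return spec[high].sum() / spec.sum()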

Fig. 8.

Experimental images from both the DH-PSF channel (A) and the CP-PSF channel (B) are used to calculate depth estimation results (C). Histograms for the dense collection of axial position estimates for each of the two cars are shown in (D). Two broad distributions, indicative of the locations of the two cars, are readily apparent (Media 1).

Fig. 9.

The restored object estimate of the scene from the cubic-phase channel (A) is used for image segmentation. After the objects within the scene are segmented, each car is assigned an average axial distance taken from the depth-estimation channel (B).
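A minimal sketch of this segment-then-assign step, assuming a simple intensity threshold for segmentation and a dense depth map from the DH channel; the threshold value and the use of SciPy connected-component labeling are illustrative choices, not the authors' pipeline.

    import numpy as np
    from scipy import ndimage

    def assign_object_depths(restored, depth_map, thresh=0.2):
        # Segment bright objects in the restored CP image and assign each the
        # mean of the depth estimates that fall inside its mask.
        mask = restored > thresh * restored.max()
        labels, n_obj = ndimage.label(mask)
        depths = {}
        for k in range(1, n_obj + 1):
            in_obj = (labels == k) & np.isfinite(depth_map)
            depths[k] = float(depth_map[in_obj].mean()) if in_obj.any() else float("nan")
        return labels, depths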

Fig. 10.

Digital refocusing of the scene as a postprocessing step. The proposed system returns a grid of axial estimates associated with the diffraction-limited information content (A) to define the surface and brightness of the scene. Images can then be generated synthetically, emphasizing focus to convey visual information; as a demonstration, (B) and (C) focus on each of the two objects in the scene in turn.
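Given the surface (depth) and brightness estimates, refocused views like (B) and (C) can be emulated by applying a depth-dependent blur. The sketch below uses a Gaussian blur whose width grows with distance from the chosen focal plane and composites a small number of depth layers; the Gaussian kernel and the blur-per-meter scaling are stand-ins for the true defocus model, not the rendering used in the paper.

    import numpy as np
    from scipy import ndimage

    def refocus(image, depth_map, z_focus, blur_per_meter=25.0, n_layers=16):
        # Synthesize a photograph focused at z_focus from an all-in-focus image
        # and a per-pixel depth map: blur each depth layer in proportion to its
        # distance from the focal plane, then composite the layers.
        out = np.zeros_like(image, dtype=float)
        weight = np.zeros_like(image, dtype=float)
        edges = np.linspace(depth_map.min(), depth_map.max(), n_layers + 1)
        for lo, hi in zip(edges[:-1], edges[1:]):
            layer = ((depth_map >= lo) & (depth_map <= hi)).astype(float)
            if layer.sum() == 0:
                continue
            sigma = blur_per_meter * abs(0.5 * (lo + hi) - z_focus)
            out += ndimage.gaussian_filter(image * layer, sigma)
            weight += ndimage.gaussian_filter(layer, sigma)
        return out / np.maximum(weight, 1e-9)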

Equations (8)


(1)  $i(u,v) = C \int_{z} \iint h(u-x,\, v-y;\, z)\, o(x,y,z)\, \mathrm{d}x\, \mathrm{d}y\, \mathrm{d}z + n,$
(2)  $p_{\mathrm{CP}}(m,n) = e^{\,i\alpha (m^{3}+n^{3})},$
(3)  $h_{\mathrm{CP}}(x,y;z) = \left|\, \mathcal{F}\!\left\{ p_{\mathrm{CP}}(m,n)\, e^{\,i\frac{\pi}{\lambda}(m^{2}+n^{2})\left(\frac{1}{f}-\frac{1}{z}-\frac{1}{d_{i}}\right)} \right\} \right|^{2},$
(4)  $\Psi = \frac{\pi}{\lambda}(m^{2}+n^{2})\left(\frac{1}{f}-\frac{1}{z}-\frac{1}{d_{i}}\right),$
(5)  $\hat{o}(x,y) = \mathcal{F}^{-1}\!\left\{ \frac{\mathcal{F}\{i_{\mathrm{CP}}\}}{\mathcal{F}\{h_{\mathrm{CP}}\}} \left[ \frac{|\mathcal{F}\{h_{\mathrm{CP}}\}|^{2}}{|\mathcal{F}\{h_{\mathrm{CP}}\}|^{2} + \mathrm{SNR}(f)^{-1}} \right] \right\},$
(6)  $\hat{h}_{\mathrm{DH}}(x,y;z) = \mathcal{F}^{-1}\!\left\{ \frac{\mathcal{F}\{i_{\mathrm{CP}}\}^{*}\, \mathcal{F}\{i_{\mathrm{DH}}\}}{|\mathcal{F}\{i_{\mathrm{CP}}\}|^{2} + \mathrm{SNR}(f)^{-1}}\, \mathcal{F}\{h_{\mathrm{CP}}\} \right\},$
(7)  $\sigma^{2}_{\mathrm{CRLB}}(z) = \left[ \sum_{m=1}^{M} \sum_{n=1}^{N} \frac{1}{\sigma_{N}^{2}} \left( \frac{\partial i[m,n]}{\partial z} \right)^{2} \right]^{-1},$
(8)  $\mathrm{CM} = \frac{\int_{\epsilon_{x}}^{\infty} \int_{\epsilon_{y}}^{\infty} |\delta(x,y)|^{2}\, \mathrm{d}x\, \mathrm{d}y}{\int_{0}^{\infty} \int_{0}^{\infty} |\delta(x,y)|^{2}\, \mathrm{d}x\, \mathrm{d}y}.$
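The two Wiener-type estimators, Eqs. (5) and (6), translate directly into FFT operations. The sketch below assumes a frequency-independent SNR constant and that the centered PSF array has the same shape as the image; it is a plain restatement of the equations, not the authors' processing chain.

    import numpy as np

    def wiener_restore(i_cp, h_cp, snr=100.0):
        # Object estimate from the cubic-phase channel, Eq. (5), written in the
        # algebraically identical form I * conj(H) / (|H|^2 + 1/SNR).
        I = np.fft.fft2(i_cp)
        H = np.fft.fft2(np.fft.ifftshift(h_cp))
        O_hat = I * np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
        return np.real(np.fft.ifft2(O_hat))

    def estimate_dh_psf(i_cp, i_dh, h_cp, snr=100.0):
        # Scene-based DH-PSF estimate from the two channels, Eq. (6): regularized
        # cross-spectrum of the CP and DH images, re-blurred by the CP transfer
        # function so the result carries the rotating double-helix pattern.
        I_cp = np.fft.fft2(i_cp)
        I_dh = np.fft.fft2(i_dh)
        H_cp = np.fft.fft2(np.fft.ifftshift(h_cp))
        H_dh_hat = np.conj(I_cp) * I_dh / (np.abs(I_cp) ** 2 + 1.0 / snr) * H_cp
        return np.real(np.fft.ifft2(H_dh_hat))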
