Abstract

In this paper, we propose a new method for passive depth estimation that combines a camera with longitudinal chromatic aberration and an original depth from defocus (DFD) algorithm. A chromatic lens, combined with an RGB sensor, produces three images with spectrally variable in-focus planes, which eases the task of depth extraction with DFD. We first propose an original DFD algorithm dedicated to color images with spectrally varying defocus blur. We then describe the design of a prototype chromatic camera built to experimentally evaluate the effectiveness of the proposed approach for depth estimation. We provide comparisons with the results of an active ranging sensor, as well as reconstructions of real indoor and outdoor scenes.
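As a toy illustration of the cue the abstract describes: with longitudinal chromatic aberration, the three RGB channels have different in-focus planes, so the relative sharpness of the channels varies with object distance. The sketch below is not the paper's DFD algorithm (which relies on a proper defocus-blur model); it only demonstrates the underlying principle with a crude gradient-energy sharpness measure, and all function names and parameters are illustrative.

```python
import numpy as np

def local_sharpness(channel, window=9):
    """Crude sharpness proxy: mean squared gradient over a centered window."""
    gy, gx = np.gradient(channel.astype(float))
    energy = gx ** 2 + gy ** 2
    h = window // 2
    cy, cx = channel.shape[0] // 2, channel.shape[1] // 2
    return energy[cy - h:cy + h + 1, cx - h:cx + h + 1].mean()

def sharpest_channel(r, g, b):
    """With a chromatic lens, the channel whose in-focus plane lies closest to
    the object is the sharpest, so this gives a coarse depth zone."""
    scores = {"R": local_sharpness(r), "G": local_sharpness(g), "B": local_sharpness(b)}
    return max(scores, key=scores.get)

def ramp_image(size=32, transition=1.0):
    """Vertical edge smoothed over `transition` pixels (wider = more defocus)."""
    x = np.clip((np.arange(size) - size // 2) / transition + 0.5, 0.0, 1.0)
    return np.tile(x, (size, 1))

# Simulate one scene edge seen through a chromatic lens: each channel is
# defocused by a different amount because its in-focus plane differs.
r = ramp_image(transition=8)  # strongly defocused
g = ramp_image(transition=4)  # moderately defocused
b = ramp_image(transition=1)  # nearly in focus
print(sharpest_channel(r, g, b))  # prints "B": the blue channel is sharpest
```

An actual DFD method would turn this qualitative ordering into a metric depth estimate by comparing the observed blurs against calibrated per-channel point spread functions.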

© 2013 Optical Society of America


References


  1. A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
  2. Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).
  3. A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.
  4. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1988), pp. 149–155.
  5. C. Zhou and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2009), pp. 325–332.
  6. P. Favaro and S. Soatto, 3D Shape Estimation and Image Restoration (Springer, 2007).
  7. P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).
  8. H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.
  9. S. Quirin and R. Piestun, “Depth estimation and image recovery using broadband, incoherent illumination with engineered point spread functions,” Appl. Opt. 52, A367–A376 (2013).
  10. S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in International Conference on Computer Analysis of Images and Patterns (Springer, 2009), pp. 889–897.
  11. A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).
  12. A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).
  13. M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
  14. P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.
  15. M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.
  16. A. Chakrabarti and T. Zickler, “Depth and deblurring from a spectrally varying depth of field,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), pp. 648–661.
  17. J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.
  18. M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).
  19. B. Milgrom, N. Konforti, M. Golub, and E. Marom, “Novel approach for extending the depth of field of Barcode decoders by using RGB channels of information,” Opt. Express 18, 17027–17039 (2010).
  20. O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2010), pp. 1–8.
  21. F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).
  22. J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).
  23. L. Waller, S. S. Kou, C. J. R. Sheppard, and G. Barbastathis, “Phase from chromatic aberrations,” Opt. Express 18, 22817–22825 (2010).
  24. S. Kebin, L. Peng, Y. Shizhuo, and L. Zhiwen, “Chromatic confocal microscopy using supercontinuum light,” Opt. Express 12, 2096–2101 (2004).
  25. P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, a theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.
  26. J. Idier, Bayesian Approach to Inverse Problems (Wiley, 2008).
  27. A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.
  28. G. Wahba, “A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem,” Ann. Stat. 13, 1378–1402 (1985).
  29. A. Neumaier, “Solving ill-conditioned and singular linear systems: a tutorial on regularization,” SIAM Rev. 40, 636–666 (1998).
  30. F. Champagnat, “Inference with Gaussian improper distributions,” Internal Onera Report No.  (2012).
  31. L. Condat, “Color filter array design using random patterns with blue noise chromatic spectra,” Image Vis. Comput. 28, 1196–1202 (2010).
  32. D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).
  33. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).
  34. N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
  35. M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
  36. Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.
  37. H. Tang and K. N. Kutulakos, “What does an aberrated photo tell us about the lens and the scene?” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2013), p. 86.
  38. J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: Microsoft Kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
  39. J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
  40. P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (IEEE, 2013), pp. 953–960.

2013 (1)

2012 (2)

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

2011 (1)

M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
[CrossRef]

2010 (3)

2009 (1)

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

2008 (3)

J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).

M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).

Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).

2007 (3)

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

2004 (1)

2003 (1)

D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

2001 (1)

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

1998 (1)

A. Neumaier, “Solving ill-conditioned and singular linear systems: a tutorial on regularization,” SIAM Rev. 40, 636–666 (1998).
[CrossRef]

1987 (1)

A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
[CrossRef]

1985 (1)

G. Wahba, “A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem,” Ann. Stat. 13, 1378–1402 (1985).
[CrossRef]

Agrawal, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

Almansa, A.

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

Ang, K.

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

Bando, Y.

Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).

Barbastathis, G.

Binefa, X.

J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.

Bishop, T.

M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.

Cao, F. C.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Chakrabarti, A.

A. Chakrabarti and T. Zickler, “Depth and deblurring from a spectrally varying depth of field,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), pp. 648–661.

Champagnat, F.

F. Champagnat, “Inference with gaussian improper distributions,” Internal Onera Report No.  (2012).

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, an theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.

Chen, B. Y.

Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).

Chow, J.

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

Condat, L.

L. Condat, “Color filter array design using random patterns with blue noise chromatic spectra,” Image Vis. Comput. 28, 1196–1202 (2010).

Cossairt, O.

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2010), p. 1–8.

Delbracio, M.

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

Druart, G.

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

Durand, F.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.

Favaro, P.

M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
[CrossRef]

P. Favaro and S. Soatto, 3D Shape Estimation and Image Restoration (Springer, 2007).

M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.

Fergus, R.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

Freeman, W.

A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.

Freeman, W. T.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

Garcia, J.

J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.

Georgiev, T.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

Gilblom, D. L.

D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

Golub, M.

Goodman, J. W.

J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

Gray, R. M.

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

Green, P.

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

Guenter, B.

Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.

Guichard, F.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Idier, J.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, an theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

J. Idier, Bayesian Approach to Inverse Problems (Wiley, 2008).

Ishiguro, H.

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

Joshi, N.

Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.

N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2008), pp. 1–8.

Kang, J.

J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).

Kebin, S.

Konforti, N.

Kou, S. S.

Kriegman, D. J.

N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2008), pp. 1–8.

Kutulakos, K. N.

H. Tang and K. N. Kutulakos, “What does an aberrated photo tell us about the lens and the scene?” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2013), p. 86.

Le Besnerais, G.

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, an theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.

Levin, A.

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.

Li, J.

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

Lichti, D.

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

Lim, J.

J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).

Lumsdaine, A.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

Marom, E.

Martinello, M.

M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
[CrossRef]

M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.

Matusik, W.

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

Milgrom, B.

Mohan, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

Morel, J.

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

Musé, P.

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

Nagahara, H.

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

Nayar, S.

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

C. Zhou and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2009), pp. 325–332.

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2010), p. 1–8.

Neumaier, A.

A. Neumaier, “Solving ill-conditioned and singular linear systems: a tutorial on regularization,” SIAM Rev. 40, 636–666 (1998).
[CrossRef]

Nguyen, H. P.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Nishita, T.

Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).

Ok, H.

J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).

Orriols, X.

J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.

Peng, L.

Pentland, A.

A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
[CrossRef]

Piestun, R.

Pyanet, M.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Quirin, S.

Raskar, R.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

Robinson, M.

M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).

Sánchez, J.

J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.

Sang, K.

D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

Sheppard, C. J. R.

Shih, Y.

Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.

Shizhuo, Y.

Sim, T.

S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in International Conference on Computer Analysis of Images and Patterns (Springer2009), pp. 889–897.

Soatto, S.

P. Favaro and S. Soatto, 3D Shape Estimation and Image Restoration (Springer, 2007).

Stork, D.

M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).

Subbarao, M.

M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1988), pp. 149–155.

Sun, W.

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

Szeliski, R.

N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE2008), pp. 1–8.

Tang, H.

H. Tang and K. N. Kutulakos, “What does an aberrated photo tell us about the lens and the scene?” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2013), p. 86.

Tarchouna, I.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Teskey, W.

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

Tessières, R.

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

Trouvé, P.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, an theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

Tumblin, J.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

Veeraraghavan, A.

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

Ventura, P.

D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

Wahba, G.

G. Wahba, “A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem,” Ann. Stat. 13, 1378–1402 (1985).
[CrossRef]

Waller, L.

Wang, J. Z.

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

Watanabe, C. T.

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

Weiss, Y.

A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.

Wiederhold, G.

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

Zhiwen, L.

Zhou, C.

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

C. Zhou and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2009), pp. 325–332.

Zhuo, S.

S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in International Conference on Computer Analysis of Images and Patterns (Springer2009), pp. 889–897.

Zickler, T.

A. Chakrabarti and T. Zickler, “Depth and deblurring from a spectrally varying depth of field,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), pp. 648–661.

ACM Trans. Graph. (4)

Y. Bando, B. Y. Chen, and T. Nishita, “Extracting depth and matte using a color-filtered aperture,” ACM Trans. Graph. 27, 1–9 (2008).

A. Veeraraghavan, R. Raskar, A. Agrawal, A. Mohan, and J. Tumblin, “Dappled photography: mask enhanced cameras for heterodyned light fields and coded aperture refocusing,” ACM Trans. Graph. 26, 1–12 (2007).

A. Levin, R. Fergus, F. Durand, and W. T. Freeman, “Image and depth from a conventional camera with a coded aperture,” ACM Trans. Graph. 26, 1–9 (2007).

P. Green, W. Sun, W. Matusik, and F. Durand, “Multi-aperture photography,” ACM Trans. Graph. 26, 1–7 (2007).

Ann. Stat. (1)

G. Wahba, “A comparison of GCV and GML for choosing the smoothing parameter in the generalized spline smoothing problem,” Ann. Stat. 13, 1378–1402 (1985).
[CrossRef]

Appl. Opt. (1)

IEEE Trans. Pattern Anal. Mach. Intell. (2)

J. Z. Wang, J. Li, R. M. Gray, and G. Wiederhold, “Unsupervised multiresolution segmentation for images with low depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 85–90 (2001).
[CrossRef]

A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-9, 523–531 (1987).
[CrossRef]

Image Vis. Comput. (1)

L. Condat, “Color filter array design using random patterns with blue noise chromatic spectra,” Image Vis. Comput. 28, 1196–1202 (2010).

Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. (1)

J. Chow, K. Ang, D. Lichti, and W. Teskey, “Performance analysis of low cost triangulation-based 3D camera: microsoft kinect system,” Int. Arc. Photogramm. Remote Sens. Spatial Inf. Sci. 39, 175–180 (2012).
[CrossRef]

Int. J. Comput. Vis. (1)

M. Delbracio, P. Musé, A. Almansa, and J. Morel, “The non-parametric sub-pixel local point spread function estimation is a well posed problem,” Int. J. Comput. Vis. 96, 175–194 (2012).
[CrossRef]

Lect. Notes Comput. Sci. (1)

M. Martinello and P. Favaro, “Single image blind deconvolution with higher-order texture statistics,” Lect. Notes Comput. Sci. 7082, 124–151 (2011).
[CrossRef]

Opt. Express (3)

Proc. SPIE (4)

F. Guichard, H. P. Nguyen, R. Tessières, M. Pyanet, I. Tarchouna, and F. C. Cao, “Extended depth-of-field using sharpness transport across color channels,” Proc. SPIE 7250, 72500N (2009).

J. Lim, J. Kang, and H. Ok, “Robust local restoration of space-variant blur image,” Proc. SPIE 6817, 68170S (2008).

D. L. Gilblom, K. Sang, and P. Ventura, “Operation and performance of a color image sensor with layered photodiodes,” Proc. SPIE 5074, 318–331 (2003).

M. Robinson and D. Stork, “Joint digital-optical design of imaging systems for grayscale objects,” Proc. SPIE 7100, 710011 (2008).

SIAM Rev. (1)

A. Neumaier, “Solving ill-conditioned and singular linear systems: a tutorial on regularization,” SIAM Rev. 40, 636–666 (1998).
[CrossRef]

Other (20)

F. Champagnat, “Inference with Gaussian improper distributions,” Internal Onera Report No.  (2012).

H. Nagahara, C. Zhou, C. T. Watanabe, H. Ishiguro, and S. Nayar, “Programmable aperture camera using LCoS,” in Proceedings of the IEEE European Conference on Computer Vision (IEEE, 2010), pp. 337–350.

S. Zhuo and T. Sim, “On the recovery of depth from a single defocused image,” in International Conference on Computer Analysis of Images and Patterns (Springer, 2009), pp. 889–897.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Chromatic depth from defocus, a theoretical and experimental study,” in Computational Optical Sensing and Imaging Conference, Imaging and Applied Optics Technical Papers (2012), paper CM3B.3.

J. Idier, Bayesian Approach to Inverse Problems (Wiley, 2008).

A. Levin, Y. Weiss, F. Durand, and W. Freeman, “Understanding and evaluating blind deconvolution algorithms,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2009), pp. 88–101.

P. Trouvé, F. Champagnat, G. Le Besnerais, and J. Idier, “Single image local blur identification,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2011), pp. 613–616.

M. Martinello, T. Bishop, and P. Favaro, “A Bayesian approach to shape from coded aperture,” in Proceedings of IEEE Conference on Image Processing (IEEE, 2010), pp. 3521–3524.

A. Chakrabarti and T. Zickler, “Depth and deblurring from a spectrally varying depth of field,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), pp. 648–661.

J. Garcia, J. Sánchez, X. Orriols, and X. Binefa, “Chromatic aberration and depth extraction,” in Proceedings of IEEE International Conference on Pattern Recognition (IEEE, 2000), pp. 762–765.

A. Lumsdaine and T. Georgiev, “The focused plenoptic camera,” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2009), pp. 1–8.

M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 1988), pp. 149–155.

C. Zhou and S. Nayar, “Coded aperture pairs for depth from defocus,” in Proceedings of the IEEE International Conference on Computer Vision (IEEE, 2009), pp. 325–332.

P. Favaro and S. Soatto, 3D Shape Estimation and Image Restoration (Springer, 2007).

J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, 1996).

N. Joshi, R. Szeliski, and D. J. Kriegman, “PSF estimation using sharp edge prediction,” in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.

Y. Shih, B. Guenter, and N. Joshi, “Image enhancement using calibrated lens simulations,” in Proceedings of IEEE European Conference on Computer Vision (IEEE, 2012), p. 42.

H. Tang and K. N. Kutulakos, “What does an aberrated photo tell us about the lens and the scene?” in Proceedings of IEEE International Conference on Computational Photography (IEEE, 2013), p. 86.

P. Trouvé, F. Champagnat, G. Le Besnerais, G. Druart, and J. Idier, “Design of a chromatic 3D camera with an end-to-end performance model approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition workshops (IEEE, 2013), pp. 953–960.

O. Cossairt and S. Nayar, “Spectral focal sweep: extended depth of field from chromatic aberrations,” in Proceedings of IEEE Conference on Computational Photography (IEEE, 2010), pp. 1–8.



Figures (18)

Fig. 1. Illustration of the DFD principle.

Fig. 2. Theoretical blur variation ε given by Eq. (1) with respect to depth, for a conventional imaging system with a focal length of 25 mm, an f-number of 3, a sensor pixel size of 5 μm, and the in-focus plane at 2 m. (a) Illustration of the depth estimation ambiguity. (b) Illustration of the dead zone in the DoF region.

Fig. 3. Example of theoretical blur variation ε given by Eq. (1) with respect to depth for the RGB channels: (a) with the chromatic aperture of [16] and (b) with a chromatic lens. In (a) the focal length is 25 mm and the f-number of the green channel is 4.5, while it is 3 for the red and blue channels. For the chromatic lens (b), the green-channel focal length is 25 mm and the RGB channels are focused at 1.5, 2, and 3 m, respectively, with an f-number of 3. In both cases the sensor pixel size is 5 μm.

Fig. 4. Generic flowchart of the DFD algorithm for one image patch.

Fig. 5. Comparison of the CC-DFD and GC-DFD algorithms using the error metric and standard deviation (std) computed over a collection of (a) grayscale scenes and (b) color scenes.

Fig. 6. Comparison of the CC-DFD algorithm using either an achromatic lens (AL) or a chromatic lens (CL). Lens parameters for both systems are given in Section 3.C.

Fig. 7. Theoretical blur variation with respect to depth for the designed chromatic camera.

Fig. 8. Focal shift and lateral chromatic shift between the red and blue wavelengths of the proposed chromatic lens, estimated with the optical design software Zemax.

Fig. 9. Chromatic lens layout given by Zemax, according to the optical design of Section 4.A. Colored rays correspond to point sources at different field angles.

Fig. 10. The customized chromatic lens (left) mounted on a Stingray color sensor to form a chromatic camera (right) of dimensions 4.5 × 4.5 × 8 cm.

Fig. 11. (a) Random pattern of [35] used for RGB PSF calibration. (b)–(d) Examples of calibrated PSFs at (b) 4.7 m, (c) 2.7 m, and (d) 2 m.

Fig. 12. Evaluation of depth estimation accuracy on real fronto-parallel color scenes. Axes are in m.

Fig. 13. Evaluation of depth estimation accuracy on real fronto-parallel grayscale scenes. Axes are in m.

Fig. 14. Results of the prototype chromatic camera on indoor scenes. (a) Raw image acquired with the chromatic lens. (b) Kinect depth map. (c) Raw depth map from CC-DFD with a patch size of 21 × 21 and 50% overlap. Depth labels are in m; the black label corresponds to homogeneous regions rejected by the algorithm.

Fig. 15. Results of the prototype chromatic camera on outdoor scenes. (Left) Raw image acquired with the chromatic lens. (Right) Raw depth map from CC-DFD with a patch size of 9 × 9 and 50% overlap. Depth labels are in m; the black label corresponds to homogeneous regions rejected by the algorithm.

Fig. 16. Example of restoration of a real image acquired with the prototype chromatic camera using the high-frequency transfer approach of Eq. (21). (a) Raw image. (b) Image after restoration. (c) and (e) Zoomed detail patches from the raw red channel. (d) and (f) Restored details of the red channel.

Fig. 17. Weights of the high-frequency transfer equation (21) as a function of depth.

Fig. 18. Result of depth map regularization. From top to bottom: acquired image, raw depth map, regularized depth map: (a) with a 3 × 3 median filter, (b) after minimization of Eq. (22). Depth labels are in m; the black label corresponds to homogeneous regions rejected by the algorithm.

Tables (1)

Table 1. Mean Error Metric (Err) and Standard Deviation (Std) of Depth Estimation Results over a Set of True Depths Varying from 1.3 to 3.5 m with a Step of 20 cm, for Various Values of μ

Equations (22)

$$\varepsilon = Ds\left|\frac{1}{f}-\frac{1}{d}-\frac{1}{s}\right|, \tag{1}$$

$$(\hat{d}_k,\hat{\alpha}) = \operatorname*{arg\,min}_{k,\alpha}\, GL(d_k,\alpha). \tag{2}$$

$$Y=\begin{bmatrix} y_R \\ y_G \\ y_B \end{bmatrix}=\begin{bmatrix} H_R(d) & 0 & 0 \\ 0 & H_G(d) & 0 \\ 0 & 0 & H_B(d) \end{bmatrix}\begin{bmatrix} x_R \\ x_G \\ x_B \end{bmatrix}+N=H_C X+N, \tag{3}$$

$$p(Y|X,\sigma_n^2)=(2\pi)^{-N/2}(\sigma_n^2)^{-N/2}\exp\!\left(-\frac{\|Y-H_C X\|^2}{2\sigma_n^2}\right). \tag{4}$$

$$Y=\begin{bmatrix} y_R \\ y_G \\ y_B \end{bmatrix}=\begin{bmatrix} H_R(d) \\ H_G(d) \\ H_B(d) \end{bmatrix}x+N=H_C^g\,x+N. \tag{5}$$

$$p(x,\sigma_x^2)\propto\exp\!\left(-\frac{\|Dx\|^2}{2\sigma_x^2}\right), \tag{6}$$

$$p(Y|H_C^g(d),\sigma_n^2,\sigma_x^2)=\int p(Y|x,H_C^g(d),\sigma_n^2)\,p(x,\sigma_x^2)\,\mathrm{d}x. \tag{7}$$

$$p(Y|H_C^g(d),\sigma_n^2,\alpha)\propto|Q(\alpha,d,\sigma_n^2)|_+^{1/2}\,e^{-\frac{Y^T Q(\alpha,d,\sigma_n^2)\,Y}{2}}, \tag{8}$$

$$Q(\alpha,d,\sigma_n^2)=\frac{1}{\sigma_n^2}\left(I_{N,N}-H_C^g(d)\left(H_C^g(d)^T H_C^g(d)+\alpha D^T D\right)^{-1}H_C^g(d)^T\right), \tag{9}$$

$$P(\alpha,d)=I_{N,N}-H_C^g(d)\left(H_C^g(d)^T H_C^g(d)+\alpha D^T D\right)^{-1}H_C^g(d)^T. \tag{10}$$

$$p(Y|H_C^g(d),\alpha)\propto|P(\alpha,d)|_+^{1/2}\left(Y^T P(\alpha,d)\,Y\right)^{-\frac{3N-1}{2}}. \tag{11}$$

$$GL_G(d,\alpha)=\frac{Y^T P(\alpha,d)\,Y}{|P(\alpha,d)|_+^{1/(3N-1)}}. \tag{12}$$

$$\begin{bmatrix} x_R \\ x_G \\ x_B \end{bmatrix}=(T\otimes I_{M,M})X_{LC}=(T\otimes I_{M,M})\begin{bmatrix} x_l \\ x_{c_1} \\ x_{c_2} \end{bmatrix}, \tag{13}$$

$$T=\begin{bmatrix} \tfrac{1}{3} & \tfrac{1}{2} & \tfrac{1}{6} \\ \tfrac{1}{3} & -\tfrac{1}{2} & \tfrac{1}{6} \\ \tfrac{1}{3} & 0 & -\tfrac{2}{6} \end{bmatrix}, \tag{14}$$

$$Y=H_C^c(d)X_{LC}+N, \tag{15}$$

$$H_C^c(d)=H_C(d)(T\otimes I_{M,M}). \tag{16}$$

$$p(X_{LC},\sigma_x^2,\mu)\propto\exp\!\left(-\frac{\|D_C(\mu)X_{LC}\|^2}{2\sigma_x^2}\right)\quad\text{with}\quad D_C=\begin{bmatrix} \mu D & 0 & 0 \\ 0 & D & 0 \\ 0 & 0 & D \end{bmatrix}. \tag{17}$$

$$GL_C(d,\alpha)=\frac{Y^T P(\alpha,d)\,Y}{|P(\alpha,d)|_+^{1/(3N-3)}}, \tag{18}$$

$$P(\alpha,d)=I_{N,N}-H_C^c(d)\left(H_C^c(d)^T H_C^c(d)+\alpha D_C^T D_C\right)^{-1}H_C^c(d)^T. \tag{19}$$

$$(\hat{d}_k,\hat{\alpha})=\operatorname*{arg\,min}_{k,\alpha}\, GL_X(d_k,\alpha), \tag{20}$$

$$y_{c,\mathrm{out}}(p)=y_{c,\mathrm{in}}(p)+a_{d(p),R}\,HP_R(p)+a_{d(p),G}\,HP_G(p)+a_{d(p),B}\,HP_B(p), \tag{21}$$

$$E(d)=\sum_p GL_C(d(p))+\lambda\sum_{p,\,q\in N_p}\exp\!\left(-\frac{\|y_g(p)-y_g(q)\|^2}{2\sigma^2}\right)\bigl(1-\delta(d_p,d_q)\bigr), \tag{22}$$
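The defocus blur model of Eq. (1) can be illustrated numerically. The sketch below (Python/NumPy; a minimal illustration under thin-lens assumptions, not code from the paper) uses the parameter values quoted for Fig. 2 and prints the theoretical blur diameter, in pixels, as a function of depth.

```python
import numpy as np

# Parameters quoted for Fig. 2 (conventional imaging system).
f = 25e-3        # focal length [m]
f_number = 3.0   # f-number
D = f / f_number # aperture diameter [m]
px = 5e-6        # sensor pixel size [m]
d0 = 2.0         # in-focus plane [m]

# Thin-lens equation gives the sensor distance s for the in-focus plane d0.
s = 1.0 / (1.0 / f - 1.0 / d0)

def blur_pixels(d):
    """Geometric defocus blur diameter of Eq. (1), converted to pixels."""
    eps = D * s * np.abs(1.0 / f - 1.0 / d - 1.0 / s)  # blur diameter [m]
    return eps / px

for d in [1.0, 1.5, 2.0, 3.0, 5.0]:
    print(f"d = {d:4.1f} m  ->  blur = {blur_pixels(d):6.2f} px")
```

The blur vanishes at the in-focus plane and the same blur value reappears on both sides of it, which is exactly the depth-estimation ambiguity illustrated in Fig. 2(a).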
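A one-dimensional, single-channel analogue of the generalized-likelihood criterion of Eqs. (10)–(12) can be sketched as follows (Python/NumPy; the circular convolution, Gaussian candidate blur, and first-difference prior are illustrative assumptions, not the paper's exact discretization). The sketch builds P(α, d), checks that the improper constant direction yields exactly one null eigenvalue, which is why the pseudo-determinant |P|+ appears in Eqs. (8)–(12), and evaluates a GL score on a synthetic patch.

```python
import numpy as np

def circ_conv_matrix(kernel, n):
    """Circulant matrix implementing circular convolution with `kernel`."""
    col = np.zeros(n)
    k = len(kernel)
    for i, v in enumerate(kernel):
        col[(i - k // 2) % n] = v
    return np.stack([np.roll(col, j) for j in range(n)], axis=1)

def gauss_kernel(sigma, radius=6):
    t = np.arange(-radius, radius + 1)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()

n, alpha = 64, 1e-2
D = circ_conv_matrix(np.array([1.0, -1.0]), n)  # first-difference prior operator
H = circ_conv_matrix(gauss_kernel(2.0), n)      # candidate blur matrix H(d)

# P(alpha, d) = I - H (H^T H + alpha D^T D)^{-1} H^T  (Eq. (10), one channel).
P = np.eye(n) - H @ np.linalg.solve(H.T @ H + alpha * D.T @ D, H.T)

# The constant (DC) direction is unpenalized by D, so P has exactly one
# null eigenvalue; |P|_+ is the product of the remaining positive ones.
w = np.linalg.eigvalsh(P)
pos = w[w > 1e-9]
print("null directions:", n - len(pos))  # -> 1

rng = np.random.default_rng(0)
y = H @ rng.standard_normal(n)           # synthetic observed patch
gl = (y @ P @ y) / np.exp(np.log(pos).sum() / (n - 1))  # Eq. (12) analogue
print("GL score:", gl)
```

In the paper's algorithm this score is evaluated for every candidate depth d_k (with three stacked channels and the chromatic operators of Eqs. (13)–(19)) and minimized as in Eq. (20).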
