Abstract

Shape-from-focus (SFF) uses a sequence of space-variantly defocused observations captured with relative motion between the camera and the scene. It assumes that there is no motion parallax among the frames, a restriction that constrains the working environment. Moreover, SFF cannot recover structure when data are missing from the frames due to CCD sensor damage or unavoidable occlusions. The ability to fill in plausible information in regions devoid of data is of critical importance in many applications. Images of 3D scenes captured by off-the-shelf cameras with relative motion commonly exhibit parallax-induced pixel motion. We demonstrate the interesting possibility of exploiting this motion parallax cue in the images captured in SFF with a practical camera to jointly inpaint the focused image and the depth map.

© 2010 Optical Society of America


References


  1. S. K. Nayar and Y. Nakagawa, “Shape from focus,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 824–831 (1994).
    [CrossRef]
  2. A. Nedzved, V. Bucha, and S. Ablameyko, “Augmented 3D endoscopy video,” in 3DTV Conference: The True Vision-Capture, Transmission and Display of 3D Video, 28–30 May 2008, Istanbul, Turkey (2008), pp. 349–352.
    [CrossRef]
  3. M. Watanabe and S. K. Nayar, “Telecentric optics for focus analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1360–1365 (1997).
    [CrossRef]
  4. R. G. Willson and S. A. Shafer, “What is the center of the image?” J. Opt. Soc. Am. A 11, 2946–2955 (1994).
    [CrossRef]
  5. T. Darrell and K. Wohn, “Pyramid based depth from focus,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1988), pp. 504–509.
  6. R. Kingslake, Optical System Design (Academic, 1983).
  7. R. R. Sahay and A. N. Rajagopalan, “Extension of the shape from focus method for reconstruction of high-resolution images,” J. Opt. Soc. Am. A 24, 3649–3657 (2007).
    [CrossRef]
  8. M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, “Simultaneous structure and image inpainting,” IEEE Trans. Image Process. 12, 882–889 (2003).
    [CrossRef]
  9. M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques (ACM Press, 2000), pp. 417–424.
  10. J. Verdera, V. Caselles, M. Bertalmio, and G. Sapiro, “Inpainting surface holes,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2003), pp. 903–906.
  11. A. Criminisi, P. Perez, and K. Toyama, “Region filling and object removal by exemplar-based image inpainting,” IEEE Trans. Image Process. 13, 1200–1212 (2004).
    [CrossRef] [PubMed]
  12. K. A. Patwardhan, G. Sapiro, and M. Bertalmio, “Video inpainting under constrained camera motion,” IEEE Trans. Image Process. 16, 545–553 (2007).
    [CrossRef] [PubMed]
  13. S. Esedoglu and J. Shen, “Digital inpainting based on the Mumford-Shah-Euler image model,” Eur. J. Appl. Math. 13, 353–370 (2002).
    [CrossRef]
  14. C. A. Z. Barcelos and M. A. Batista, “Image restoration using digital inpainting and noise removal,” Image Vis. Comput. 25, 61–69 (2007).
    [CrossRef]
  15. A. C. Kokaram, “On missing data treatment for degraded video and film archives: a survey and a new Bayesian approach,” IEEE Trans. Image Process. 13, 397–415 (2004).
    [CrossRef] [PubMed]
  16. M. K. Ng, H. Shen, E. Y. Lam, and L. Zhang, “A total variation regularization based super-resolution reconstruction algorithm for digital video,” EURASIP J. Adv. Signal Process. 2007, Article ID 74585, 16 pages (2007).
  17. G. D. Finlayson, M. S. Drew, and C. Lu, “Entropy minimization for shadow removal,” Int. J. Comput. Vis. 85, 35–57 (2009).
    [CrossRef]
  18. Y. Shor and D. Lischinski, “The shadow meets the mask: Pyramid-based shadow removal,” Comput. Graph. Forum 27, 577–586 (2008).
    [CrossRef]
  19. P. Favaro and S. Soatto, “Seeing beyond occlusions (and other marvels of a finite lens aperture),” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE, 2003), pp. 579–586.
  20. S. S. Bhasin and S. Chaudhuri, “Depth from defocus in presence of partial self occlusion,” in Proceedings of the International Conference on Computer Vision (IEEE, 2001), pp. 488–493.
  21. S. W. Hasinoff and K. N. Kutulakos, “Confocal stereo,” Int. J. Comput. Vis. 81, 82–104 (2009).
    [CrossRef]
  22. M. Mcguire, W. Matusik, H. Pfister, J. F. Hughes, and F. Durand, “Defocus video matting,” ACM Trans. Graph. 24, 567–576 (2005).
    [CrossRef]
  23. S. W. Hasinoff and K. N. Kutulakos, “A layer-based restoration framework for variable-aperture photography,” in Proceedings of the 11th International Conference on Computer Vision (IEEE, 2007), pp. 1–8.
  24. M. Levoy, B. Chen, V. Vaish, M. Horowitz, I. McDowall, and M. Bolas, “Synthetic aperture confocal imaging,” ACM Trans. Graph. 23, 825–834 (2004).
    [CrossRef]
  25. V. Vaish, M. Levoy, R. Szeliski, C. L. Zitnick, and S. B. Kang, “Reconstructing occluded surfaces using synthetic apertures: Stereo, focus and robust measures,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE, 2006), pp. 2331–2338.
  26. L. Wang, H. Jin, R. Yang, and M. Gong, “Stereoscopic inpainting: joint color and depth completion from stereo images,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE, 2008), pp. 1–8.
  27. P. Brodatz, Textures: A Photographic Album for Artists and Designers (Dover, 1966).
  28. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).
    [CrossRef] [PubMed]
  29. S. Chaudhuri and A. N. Rajagopalan, Depth from Defocus: A Real Aperture Imaging Approach (Springer-Verlag, 1999).
    [CrossRef]
  30. S. Z. Li, Markov Random Field Modeling in Computer Vision (Springer-Verlag, 1995).
  31. J. Besag, “Spatial interaction and the statistical analysis of lattice systems,” J. R. Stat. Soc. Ser. B (Methodol.) 36, 192–236 (1974).
  32. Y. Boykov, O. Veksler, and R. Zabih, “Fast approximate energy minimization via graph cuts,” IEEE Trans. Pattern Anal. Mach. Intell. 23, 1222–1239 (2001).
    [CrossRef]
  33. V. Kolmogorov and R. Zabih, “What energy functions can be minimized via graph cuts?” IEEE Trans. Pattern Anal. Mach. Intell. 26, 147–159 (2004).
    [CrossRef] [PubMed]
  34. A. Raj and R. Zabih, “A graph cut algorithm for generalized image deconvolution,” in Proceedings of the International Conference on Computer Vision (IEEE, 2005), pp. 1048–1054.
  35. C. Rother, V. Kolmogorov, V. Lempitsky, and M. Szummer, “Optimizing binary MRFs via extended roof duality,” in Proceedings of the International Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.
  36. V. Kolmogorov and C. Rother, “Minimizing nonsubmodular functions with graph cuts-A review,” IEEE Trans. Pattern Anal. Mach. Intell. 29, 1274–1279 (2007).
    [CrossRef] [PubMed]

Figures (7)

Fig. 1

(a,b) Frames 1 and 40 (unscaled stack). (c,d) Reconstructed depth profile and focused image using [1]. (e,f) Frames 20 and 40 from scaled stack. (g,h) Inpainted focused image and completed depth profile, respectively, using our method.

Fig. 2

Wooden face specimen. (a,b) Second and eighth frames, respectively. (c,d) Corresponding frames with thicker scratches.

Fig. 3

(a) Working principle of SFF. (b) Schematic showing mechanism of structure-dependent pixel motion in SFF.

Fig. 4

(a,b,c) Observations corresponding to the second frame for three different specimens. (d) Inpainted focused image corresponding to specimen (c). (e) Completed depth map. (f) Cylindrical fit to the estimated depth profile.

Fig. 5

(a) Inpainted focused image. (b) Inpainted depth map. (c,d) Novel view depiction. (e,f) Inpainted focused image and depth map, respectively, obtained using the observations in Figs. 2c, 2d.

Fig. 6

(a,b) Observations of a clay model of a bunny occluded by a pin. (c) Inpainted focused image. (d) Estimated shape profile. (e,f) Novel views of the bunny.

Fig. 7

Wooden Buddha statue occluded by a pin. (a,b) Frames 2 and 8. (c,d) Inpainted focused image and shape profile. (e,f) Novel views after texture mapping.

Tables (1)

Table 1 Errors for Synthetic and Real Specimens

Equations (9)

(1)  x = \frac{v X_P}{Z_P}, \qquad x' = \frac{v X_Q}{Z_Q}, \qquad \text{and} \qquad y = \frac{v Y_P}{Z_P}, \qquad y' = \frac{v Y_Q}{Z_Q},

(2)  x' = \frac{x (w_d - \bar{d})}{(w_d - \bar{d}) + m \Delta d}, \qquad y' = \frac{y (w_d - \bar{d})}{(w_d - \bar{d}) + m \Delta d}.

(3)  \mathbf{y}_m^{\mathrm{vis}} = O_m \left[ H_m(\bar{d}) W_m(\bar{d}) \mathbf{x} + \mathbf{n}_m \right], \qquad m = 0, \ldots, N-1,

(4)  \sigma_m(k,l) = \rho R v \left( \frac{1}{w_d} - \frac{1}{w_d + m \Delta d - \bar{d}(k,l)} \right),

(5)  h_x \geq \max_{0 \le m \le N-1} \left| \frac{x_{\min}\, m \Delta d}{w_d - \bar{d}_{\min} + m \Delta d} \right|, \quad \text{or}

(6)  h_y \geq \max_{0 \le m \le N-1} \left| \frac{y_{\min}\, m \Delta d}{w_d - \bar{d}_{\min} + m \Delta d} \right|,

(7)  h_x \ \text{or} \ h_y \le \min_{(x,y) \in S} \left( \max_{0 \le m \le N-1} \left( 6 \sigma_m(x,y) + 1 \right) \right).

(8)  U_p(\bar{d}, \mathbf{x}) = \sum_{m \in O} \frac{\left\| \mathbf{y}_m^{\mathrm{vis}} - O_m \left[ H_m(\bar{d}) W_m(\bar{d}) \mathbf{x} \right] \right\|^2}{2 \sigma_\eta^2} + \lambda_{\bar{d}} \sum_{c \in C_{\bar{d}}} V_c^{\bar{d}}(\bar{d}) + \lambda_{x} \sum_{c \in C_{x}} V_c^{x}(\mathbf{x}),

(9)  \sum_{c \in C_{\bar{d}}} V_c^{\bar{d}}(\bar{d}) = \sum_{i=1}^{M} \sum_{j=1}^{M} \Bigl[ (\bar{d}(i,j) - \bar{d}(i,j-1))^2 + (\bar{d}(i,j+1) - \bar{d}(i,j))^2 + (\bar{d}(i+1,j) - \bar{d}(i,j))^2 + (\bar{d}(i,j) - \bar{d}(i-1,j))^2 \Bigr],
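To make the listing above concrete, the following short Python sketch shows how the structure-dependent pixel motion of Eq. (2) and the space-variant blur parameter of Eq. (4) could be evaluated for a single scene point. All numerical values (working distance w_d, translation step Delta d, and the constants rho, R, v) are hypothetical placeholders chosen only for illustration; they are not parameters reported in the article.

# Illustrative sketch of Eqs. (2) and (4): parallax-induced pixel motion and
# space-variant blur across an SFF stack. Parameter values are assumptions,
# not values from the article.

def shifted_coords(x, y, d_bar, w_d, m, delta_d):
    # Eq. (2): image coordinates of a point at depth d_bar after the stage
    # has translated by m * delta_d along the optical axis.
    scale = (w_d - d_bar) / ((w_d - d_bar) + m * delta_d)
    return x * scale, y * scale

def blur_parameter(d_bar, w_d, m, delta_d, rho, R, v):
    # Eq. (4): blur parameter sigma_m at a pixel whose depth is d_bar
    # (magnitude taken, since the blur radius is non-negative).
    return rho * R * v * abs(1.0 / w_d - 1.0 / (w_d + m * delta_d - d_bar))

if __name__ == "__main__":
    w_d, delta_d = 25.0, 0.5      # hypothetical working distance and step (mm)
    rho, R, v = 30.0, 1.5, 2.5    # assumed camera/lens constants
    d_bar = 3.0                   # assumed depth of the surface point (mm)
    for m in (0, 5, 10):
        xp, yp = shifted_coords(10.0, -4.0, d_bar, w_d, m, delta_d)
        sigma = blur_parameter(d_bar, w_d, m, delta_d, rho, R, v)
        print(f"frame {m:2d}: (x', y') = ({xp:.2f}, {yp:.2f}), sigma_m = {sigma:.3f}")

The frame in which sigma_m is smallest at a pixel is the frame where that pixel is best focused, which is the focus cue SFF exploits; in the method itself these quantities enter the observation model of Eq. (3) and the energy of Eq. (8).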
