Abstract

Depth maps captured by range scanning devices or by using optical cameras often suffer from missing regions due to occlusions, reflectivity, limited scanning area, sensor imperfections, etc. In this paper, we propose a fast and reliable algorithm for depth map inpainting using the tensor voting (TV) framework. For less complex missing regions, local edge and depth information is utilized for synthesizing missing values. The depth variations are modeled by local planes using 3D TV, and missing values are estimated using plane equations. For large and complex missing regions, we collect and evaluate depth estimates from self-similar (training) datasets. We align the depth maps of the training set with the target (defective) depth map and evaluate the goodness of depth estimates among candidate values using 3D TV. We demonstrate the effectiveness of the proposed approaches on real as well as synthetic data.
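The plane-based filling idea for simple missing regions can be illustrated in a few lines. This is a minimal sketch under stated assumptions, not the authors' algorithm: the paper models local depth variations with planes inferred via 3D tensor voting, whereas here a single least-squares plane is fitted to all known pixels; the function name and toy data are ours.

```python
import numpy as np

def fill_hole_with_plane(depth, mask):
    """Fill a missing region (mask == True) by fitting one plane
    z = p0*x + p1*y + p2 to the known depth values."""
    ys, xs = np.nonzero(~mask)                  # known pixel coordinates
    zs = depth[~mask]                           # known depth values
    G = np.column_stack([xs, ys, np.ones_like(xs)])
    p, *_ = np.linalg.lstsq(G, zs, rcond=None)  # least-squares plane fit
    out = depth.copy()
    my, mx = np.nonzero(mask)                   # missing pixel coordinates
    out[mask] = p[0] * mx + p[1] * my + p[2]    # evaluate plane equation
    return out

# Toy example: a planar depth map with a square hole
yy, xx = np.mgrid[0:8, 0:8]
truth = 0.5 * xx + 0.25 * yy + 2.0
hole = np.zeros_like(truth, dtype=bool)
hole[3:5, 3:5] = True
defective = truth.copy()
defective[hole] = 0.0
restored = fill_hole_with_plane(defective, hole)
print(np.abs(restored - truth).max())  # near zero for exactly planar data
```

For exactly planar data the fit recovers the hole up to floating-point error; real depth maps would use locally fitted planes near each hole instead of one global fit.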

© 2013 Optical Society of America


References


  1. Z. J. Chen and J. Samarabandu, “Planar region depth filling using edge detection with embedded confidence technique and Hough transform,” in International Conference on Multimedia and Expo (IEEE, 2003), pp. 89–92.
  2. R. Duda and P. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Commun. ACM 15, 11–15 (1972).
  3. J. Wang and M. Oliveira, “A hole-filling strategy for reconstruction of smooth surfaces in range images,” in Brazilian Symposium on Computer Graphics and Image Processing (IEEE, 2003), pp. 11–18.
  4. P. Stavrou, P. Mavridis, G. Papaioannou, G. Passalis, and T. Theoharis, “3D object repair using 2D algorithms,” in Proceedings of International Conference on Computational Science (ACM, 2006), pp. 271–278.
  5. R. Sahay and A. Rajagopalan, “Joint image and depth completion in shape-from-focus: taking a cue from parallax,” J. Opt. Soc. Am. A 27, 1203–1213 (2010).
  6. A. Bhavsar and A. Rajagopalan, “Inpainting large missing regions in range images,” in IEEE Conference on Pattern Recognition (IEEE, 2010), pp. 3464–3467.
  7. J. Davis, S. Marschner, M. Garr, and M. Levoy, “Filling holes in complex surfaces using volumetric diffusion,” in First International Symposium on 3D Data Processing, Visualization, and Transmission (IEEE, 2002), pp. 428–441.
  8. A. Sharf, M. Alexa, and D. Cohen-Or, “Context-based surface completion,” ACM Trans. Graph. 23, 878–887 (2004).
  9. X. Liu, X. Yang, and H. Zhang, “Fusion of depth maps based on confidence,” in International Conference on Electronics, Communications, and Control (IEEE, 2011), pp. 2658–2661.
  10. C. Frueh, S. Jain, and A. Zakhor, “Data processing algorithms for generating textured 3D building facade meshes from laser scans and camera images,” Int. J. Comput. Vis. 61, 159–184 (2005).
  11. C. Frueh, R. Sammon, and A. Zakhor, “Automated texture mapping of 3D city models with oblique aerial imagery,” in Proceedings of 2nd International Symposium on 3D Data Processing, Visualization, and Transmission (ACM, 2004), pp. 396–403.
  12. A. Abdelhafiz, B. Riedel, and W. Niemeier, “Towards a 3D true colored space by the fusion of laser scanner point cloud and digital photos,” in Proceedings of the ISPRS Working Group V/4 Workshop (ISPRS, 2005).
  13. A. Brunton, S. Wuhrer, and C. Shu, “Image-based model completion,” in Proceedings of the 6th International Conference on 3-D Digital Imaging and Modeling (IEEE, 2007), pp. 305–311.
  14. P. Dias, V. Sequeira, F. Vaz, and J. Goncalves, “Registration and fusion of intensity and range data for 3D modelling of real world scenes,” in Proceedings 4th International Conference on 3-D Digital Imaging and Modeling (IEEE, 2003), pp. 418–425.
  15. S. Xu, A. Georghiades, H. Rushmeier, J. Dorsey, and L. McMillan, “Image guided geometry inference,” in Third International Symposium on 3D Data Processing, Visualization, and Transmission (ACM, 2006), pp. 310–317.
  16. I. Tosic, B. A. Olshausen, and B. J. Culpepper, “Learning sparse representations of depth,” IEEE J. Sel. Top. Signal Process. 5, 941–952 (2011).
  17. F. Qi, J. Han, P. Wang, G. Shi, and F. Li, “Structure guided fusion for depth map inpainting,” Pattern Recogn. Lett. 34, 70–76 (2013).
  18. R. Koch, I. Schiller, B. Bartczak, F. Kellner, and K. Köser, “MixIn3D: 3D mixed reality with ToF-camera,” Lect. Notes Comput. Sci. 5742, 126–141 (2009).
  19. J. Jia and C.-K. Tang, “Image repairing: robust image synthesis by adaptive ND tensor voting,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2003), pp. 643–650.
  20. M. Kulkarni, A. Rajagopalan, and G. Rigoll, “Depth inpainting with tensor voting using local geometry,” in Proceedings of International Conference on Computer Vision Theory and Applications (SciTePress, 2012), pp. 22–30.
  21. M. Bertalmío, G. Sapiro, V. Caselles, and C. Ballester, “Image inpainting,” in Proceedings of ACM SIGGRAPH (ACM, 2000), pp. 417–424.
  22. G. Medioni, M. Lee, and C. Tang, A Computational Framework for Segmentation and Grouping (Elsevier, 2000).
  23. G. Medioni and G. Guy, “Inference of surfaces, 3D curves, and junctions from sparse, noisy, 3-D data,” IEEE Trans. Pattern Anal. Mach. Intell. 19, 1265–1277 (1997).
  24. G. Medioni, C. K. Tang, and M. S. Lee, “Tensor voting—theory and applications,” Int. J. Comput. Inf. Sci. 5, 1–10 (2000).
  25. S. Lee and G. Medioni, “Non-uniform skew estimation by tensor voting,” in Proceedings of Workshop on Document Image Analysis (ACM, 1997), pp. 1–4.
  26. D. Scharstein and R. Szeliski, “A taxonomy and evaluation of dense two-frame stereo correspondence algorithms,” Int. J. Comput. Vis. 47, 7–42 (2002).
  27. K.-J. Oh, S. Yea, and Y.-S. Ho, “Hole-filling method using depth based in-painting for view synthesis in free viewpoint television (FTV) and 3D video,” in Picture Coding Symposium (IEEE, 2009), pp. 1–4.
  28. P. Besl and N. McKay, “A method for registration of 3-D shapes,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 239–256 (1992).
  29. J. Shi and C. Tomasi, “Good features to track,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 1994), pp. 593–600.
  30. L. Yin, X. Wei, Y. Sun, J. Wang, and M. J. Rosato, “A 3D facial expression database for facial behavior research,” in 7th International Conference on Automatic Face and Gesture Recognition (IEEE, 2006), pp. 211–216.
  31. D. Scharstein and C. Pal, “Learning conditional random fields for stereo,” in IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2007), pp. 1–8.




Figures (15)

Fig. 1.

Robust edge detection. (a) Real depth map. (b) Reference edge pixels. (c) Candidate edge pixels. (d) Result of TV-based edge detection.

Fig. 2.

Missing-region segmentation. Green indicates the missing region.

Fig. 3.

Possible edge interconnection in the missing region: edges are thickened for visibility. (a) Broken edge components around the missing region shown in (c). (b) Extrapolated edge within the missing region. (c)–(f) Order of filling in from lower to higher depth values (final reconstruction error=0.0224).

Fig. 4.

Results on face data. (a), (c) Defective face range maps. (b), (d) Corresponding inpainted face range maps [final reconstruction errors: (b) 0.0075, (d) 0.0077].

Fig. 5.

Results on real data. (a), (b) Optical images of a staircase, (c) defective range map, and (d) inpainted range map (final reconstruction error=0.0127).

Fig. 6.

Results on real data. (a) Optical image of a sculpted elephant, (b) damaged depth map, (c) missing regions to be inpainted shown in red, and (d) inpainted range map.

Fig. 7.

Results for Kinect, PMD, and OSU data. (a), (b) Kinect depth maps with real missing regions, (c) depth map from OSU dataset with real missing region, (d) Kinect depth map with synthetic missing region, (e) depth map from PMD TOF camera, (f), (g) inpainted outputs for Kinect depth maps in (a) and (b), respectively, (h) inpainted output for OSU depth map of (c) (final reconstruction error=0.0019), (i) inpainting result for Kinect depth map of (d) (final reconstruction error=0.006), and (j) inpainted output for PMD depth map of (e) (final reconstruction error=0.0016).

Fig. 8.

Comparison results: (a) defective range map, (b) color image of the scene, (c) output of the method proposed in [6], and (d) our output.

Fig. 9.

Comparison results: (a) defective range map, (b) Meshlab’s hole-filling output, and (c) output of our method.

Fig. 10.

Comparison results: (a) defective range map, (b) inpainting output using [21], and (c) our result.

Fig. 11.

Symmetry depiction.

Fig. 12.

Results for a case in which a large region around the nose is missing, as shown in (a). (b)–(d) Output as iterations progress (final reconstruction error=0.0123).

Fig. 13.

Results for a case in which a similar range pattern is available in the target image itself. (a) Large missing region. (b)–(d) Output as iterations progress (final reconstruction error=0.0026).

Fig. 14.

Results on real data. (a) Image of a statue with a damaged palm portion, (b) reconstructed depth map from multiview stereo, (c) inpainting result for the depth map in (b) with our single range inpainting approach of Section 3, (d) image of a statue with the palm portion intact, (e) reconstructed depth map of (d) from multiview stereo, and (f) inpainting result for the depth map in (e) with our single range inpainting method of Section 3.

Fig. 15.

Inpainted depth map of Fig. 14(c) using the method of Section 4.

Equations (17)


$$B_{\text{vote}} = \left(I - \hat{v}\hat{v}^T\right)\exp\!\left[-\frac{l^2}{\sigma^2}\right],$$
$$\gamma = \exp\!\left[-\frac{s^2 + m k^2}{\sigma^2}\right],$$
$$S_{\text{vote}} = \gamma\,(v v^T),$$
$$T_{2\times 2} = (\lambda_1 - \lambda_2)\,e_1 e_1^T + \lambda_2\left(e_1 e_1^T + e_2 e_2^T\right),$$
$$T_{3\times 3} = (\lambda_1 - \lambda_2)\,e_1 e_1^T + (\lambda_2 - \lambda_3)\left(e_1 e_1^T + e_2 e_2^T\right) + \lambda_3\left(e_1 e_1^T + e_2 e_2^T + e_3 e_3^T\right),$$
$$\eta = \lambda_1 - \lambda_2,$$
$$y = a x^2 + b x + c,$$
$$\text{Edge}(i) = [x_i, y_i], \quad 1 \le i \le H,$$
$$\text{Error}_j = \sum_{i=1}^{H}\left(y_i - a_j x_i^2 - b_j x_i - c_j\right)^2,$$
$$\left|y - a x^2 - b x - c\right| \le \alpha,$$
$$\text{If } \left(Y - a_i X^2 - b_i X - c_i\right) \ge 0,\ M(X,Y) \in L;\ \text{else } M(X,Y) \in R,$$
$$A x + B y + C z + D = 0,$$
$$\text{Data}(i) = [x_i, y_i, z_i], \quad 1 \le i \le J,$$
$$\left|A_k x + B_k y + C_k z + D_k\right| \le \beta,$$
$$z_k = \frac{-A_k x - B_k y - D_k}{C_k}, \quad 1 \le k \le K,$$
$$\mu = \lambda_1 - \lambda_2,$$
$$\text{Error} = \text{Avg}\left(\hat{z} - z\right)^2.$$
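The stick vote and saliency computations above can be sketched numerically. This is a simplified illustration, not the paper's implementation: the vote decays only with distance (the curvature term $mk^2$ and the arc rotation of the full tensor voting field are omitted), and all function names are ours.

```python
import numpy as np

def stick_vote(voter, receiver, normal, sigma=2.0):
    """Simplified 2D stick vote S_vote = gamma * v v^T cast by `voter`
    (with unit normal `v`) at `receiver`, with gamma = exp(-l^2/sigma^2)."""
    l = np.linalg.norm(receiver - voter)        # distance between tokens
    gamma = np.exp(-(l ** 2) / sigma ** 2)      # distance-decay weight
    v = normal / np.linalg.norm(normal)
    return gamma * np.outer(v, v)

def stick_saliency(T):
    """Stick saliency eta = lambda1 - lambda2 of an accumulated 2x2 tensor."""
    lam = np.sort(np.linalg.eigvalsh(T))[::-1]  # eigenvalues, descending
    return lam[0] - lam[1]

# Collinear tokens sharing one normal vote at a receiver at the origin;
# because all votes agree on orientation, the stick saliency is high.
normal = np.array([0.0, 1.0])
voters = [np.array([x, 0.0]) for x in (-2.0, -1.0, 1.0, 2.0)]
T = sum(stick_vote(p, np.zeros(2), normal) for p in voters)
print(round(stick_saliency(T), 3))
```

Votes from tokens with conflicting orientations would instead spread the eigenvalues, lowering the saliency, which is how the framework separates coherent structure from noise.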
