Abstract

Three-dimensional (3D) reconstruction of dynamic objects based on the structured light (SL) technique has attracted intensive research interest. Because a single projector covers only a limited area of the scene, multiple projectors must be employed simultaneously to extend the imaging area of a 3D object. However, the patterns projected by different projectors superpose one another in the overlapping illumination area, which makes it difficult to recognize the original patterns and obtain correct depth maps. To solve this problem, we propose a method for designing hierarchical patterns that can be separated from one another. In the proposed patterns, each pixel of a binary pattern based on the de Bruijn sequence is replaced by a small bin of limited size. The superposed patterns can then be separated by identifying the distribution of colors within each bin, and depth maps are obtained by decoding the separated patterns. To verify the performance of the proposed method, we design two hierarchical patterns and conduct several experiments in different scenes. The experimental results demonstrate that the proposed patterns can be separated in a multiple-projector SL system to obtain accurate depth maps, and that they are robust under different conditions.

© 2014 Optical Society of America
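
To make the pattern construction concrete, the sketch below (Python/NumPy) builds a toy pair of bin-level patterns in the spirit of the abstract: a binary column code derived from the 31-element sequence listed in the Equations section of this page, with each binary pixel expanded into a small bin whose pixels come from {white, red} for one projector and {white, blue} for the other (cf. the sets P_left and P_right below). The 2×2 bin size and the bit-to-bin mapping are illustrative assumptions, and the same binary code is reused for both projectors for brevity; the paper's own left/right binary patterns, bins, and code words are defined in its Figs. 5, 6, and 8.

```python
import numpy as np

# 31-element binary sequence V from the Equations section of this page.
V = np.array([1, 0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1,
              0, 0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1])

WHITE = (255, 255, 255)
RED   = (255, 0, 0)
BLUE  = (0, 0, 255)

def binary_column_pattern(width, height):
    """Binary stripe pattern: column j carries the bit
    (V[(j-1) mod 31] + V[(j-1) mod 4]) mod 2, as in the a(i, j) equation."""
    j = np.arange(1, width + 1)
    bits = (V[(j - 1) % 31] + V[(j - 1) % 4]) % 2
    return np.tile(bits, (height, 1))

def to_bin_level(binary, on_color, off_color, bin_size=2):
    """Expand every binary pixel into a bin_size x bin_size block of colors.
    ASSUMPTION: bit 1 maps to a bin filled with on_color and bit 0 to
    off_color; the paper instead assigns each bit a distinct color layout
    inside the bin (its Figs. 6 and 8)."""
    h, w = binary.shape
    img = np.empty((h * bin_size, w * bin_size, 3), dtype=np.uint8)
    img[...] = off_color
    mask = np.kron(binary, np.ones((bin_size, bin_size), dtype=int)).astype(bool)
    img[mask] = on_color
    return img

binary = binary_column_pattern(width=320, height=240)
left_pattern = to_bin_level(binary, on_color=RED, off_color=WHITE)    # {white, red}
right_pattern = to_bin_level(binary, on_color=BLUE, off_color=WHITE)  # {white, blue}
```

Restricting each projector to its own two-color alphabet is what makes the superposed image separable by examining the color content of each bin, which is the idea the experiments in Figs. 12-19 evaluate.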

Figures (19)

Fig. 1. Structured light system with a single projector and camera.

Fig. 2. SL system with multiple projectors and cameras.

Fig. 3. Triangulation based on the SL technique.

Fig. 4. Verification experiment for the interference problem. (a) Depth map obtained without interference. (b) Depth map obtained with interference.

Fig. 5. Parts of the binary patterns based on Eq. (3). (a) Left binary pattern. (b) Right binary pattern.

Fig. 6. Two pairs of bins: left_0 and left_1 are used for the left binary pattern, and right_0 and right_1 for the right binary pattern.

Fig. 7. Parts of the bin-level binary patterns. (a) Left and right bin-level binary patterns. (b) Example of the resulting pattern when the two patterns are superposed.

Fig. 8. Proposed bins and the corresponding hierarchical patterns. (a) Bins with their corresponding code words. (b) Proposed hierarchical patterns.

Fig. 9. Proposed algorithm for obtaining depth maps.

Fig. 10. Arrangement of projectors and cameras for the dynamic-object experiment.

Fig. 11. Small holes in the depth map of the sculpture.

Fig. 12. (a) and (e) Captured images with superposed patterns. (b) and (f) Separated patterns on the surfaces of the object. (c) and (g) Depth maps obtained by decoding the separated patterns. (d) and (h) 3D point clouds of the sculpture [(d) is for the left viewpoint and (h) for the right viewpoint].

Fig. 13. Holes in the depth maps of the dynamic object.

Fig. 14. (a) and (d) Captured images with superposed patterns. (b) and (e) Separated patterns on the surfaces of the object. (c) and (f) Depth maps obtained by decoding the separated patterns. (g) and (h) 3D point cloud of the Happy Buddha.

Fig. 15. Arrangement of projectors and cameras in the contrast experiment.

Fig. 16. Images used in both the MRCC and the plane-sweeping algorithms. (a) Projected patterns. (b) Captured images in the Bunny scene. (c) Captured images in the Bonsai scene.

Fig. 17. Depth maps obtained by different methods from the left and right viewpoints. (a) Depth maps obtained by the MRCC stereo matching algorithm without interference. (b) Depth maps obtained by the hierarchical patterns without interference. (c) Depth maps obtained by the MRCC algorithm with interference. (d) Depth maps obtained by the plane-sweeping algorithm, reconstructing the depth maps in (c). (e) Depth maps obtained by the hierarchical patterns with interference. (f) Depth maps obtained by applying a median filter to the depth maps in (e). (g) Ground-truth depth maps.

Fig. 18. Depth maps obtained by different methods from the left and right viewpoints. (a) Depth maps obtained by the MRCC stereo matching algorithm without interference. (b) Depth maps obtained by the hierarchical patterns without interference. (c) Depth maps obtained by the MRCC algorithm with interference. (d) Depth maps obtained by the plane-sweeping algorithm, reconstructing the depth maps in (c). (e) Depth maps obtained by the hierarchical patterns with interference. (f) Depth maps obtained by applying a median filter to the depth maps in (e). (g) Ground-truth depth maps.

Fig. 19. (a) Depth maps obtained by the hierarchical patterns. (b) Depth maps obtained by the patterns used for the MRCC algorithm.

Tables (4)

Table 1. PSNR Results of Different Depth Maps in Fig. 17

Table 2. BPR Results of Different Depth Maps in Fig. 17

Table 3. PSNR Results of Different Depth Maps in Fig. 18

Table 4. BPR Results of Different Depth Maps in Fig. 18

Equations (5)

$$h = \frac{\mathrm{Disparity} \times S}{L + \mathrm{Disparity}}.$$

$$V_n = [\,1\ 0\ 0\ 0\ 1\ 1\ 1\ 1\ 1\ 0\ 0\ 1\ 1\ 0\ 1\ 0\ 0\ 1\ 0\ 0\ 0\ 0\ 1\ 0\ 1\ 0\ 1\ 1\ 1\ 0\ 1\,].$$

$$a(i,j) = 255 \times \operatorname{mod}\!\left( V\!\left(j - \left\lfloor \tfrac{j-1}{31} \right\rfloor \times 31\right) + V\!\left(j - \left\lfloor \tfrac{j-1}{4} \right\rfloor \times 4\right),\ 2 \right),$$

$$P_{\mathrm{left}} = \left\{\, [a, b, c, d] \;\middle|\; a, b, c, d \in \{\text{white}, \text{red}\} \,\right\},$$

$$P_{\mathrm{right}} = \left\{\, [a, b, c, d] \;\middle|\; a, b, c, d \in \{\text{white}, \text{blue}\} \,\right\}.$$
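
As a quick numeric illustration of the relations above, the following sketch (a toy example, not the paper's implementation) evaluates the triangulation formula for a few placeholder disparities and enumerates the 16 code words of each bin alphabet. The numeric values of S and L are arbitrary placeholders, and their geometric roles (baseline versus reference-plane distance) are fixed by the paper's Fig. 3 rather than assumed here.

```python
import numpy as np
from itertools import product

def depth_from_disparity(disparity, S, L):
    """Evaluate h = (Disparity x S) / (L + Disparity) from the equation above.
    S and L are the fixed system distances of the triangulation geometry
    (see Fig. 3 of the paper); the numbers used below are placeholders."""
    disparity = np.asarray(disparity, dtype=float)
    return disparity * S / (L + disparity)

# Larger disparity -> larger height h above the reference plane.
d = np.array([0.0, 5.0, 10.0, 20.0])               # toy disparities
print(depth_from_disparity(d, S=1000.0, L=200.0))  # toy values for S and L

# The two bin alphabets P_left and P_right: every 4-pixel bin is a word
# over {white, red} or {white, blue}, giving 2^4 = 16 code words each.
P_left  = list(product(["white", "red"],  repeat=4))
P_right = list(product(["white", "blue"], repeat=4))
print(len(P_left), len(P_right))                   # 16 16
```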
