Abstract

Three-dimensional (3D) body scanning is of great interest to many fields, yet generating accurate human body models in a convenient manner remains a challenge. In this paper, we present an accurate 3D body scanning system based on structured light technology. Four-step phase shifting combined with Gray-code patterns is applied to match pixels between the camera and projector planes, and the calculation of the 3D point coordinates is derived. The main contribution of this paper is twofold. First, an improved registration algorithm is proposed to align the point clouds reconstructed from different views. Second, a graph optimization algorithm is proposed to further minimize the registration errors. Experimental results demonstrate that our system produces accurate 3D body models conveniently.

© 2018 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


Figures (14)

Fig. 1 Configuration of the proposed 3D body scanning system.
Fig. 2 Topology of the proposed 3D body scanning system.
Fig. 3 Process of absolute phase calculation. Left: four-step phase-shifting images. Bottom middle: phase map within [0, 2π). Top middle: six Gray-code images. Right: absolute phase map within [0, 2Nπ).
Fig. 4 Checkerboard images captured by the camera under different lighting conditions: (a) natural light, (b) fringe pattern, and (c) red light.
Fig. 5 Mean reprojection errors of module calibration.
Fig. 6 Visualization of the extrinsic parameters. Colored planes represent checkerboards at different poses.
Fig. 7 3D reconstruction of a flat board. (a) Relative phase map obtained by the four-step phase-shifting method. (b) Absolute phase map after Gray-code decoding. (c) 3D point cloud of the board.
Fig. 8 3D reconstruction of a mannequin. (a) Relative phase map obtained by the four-step phase-shifting method. (b) Absolute phase map after Gray-code decoding. (c, d) Two views of the reconstructed 3D point cloud.
Fig. 9 3D reconstruction of a human body. (a) Absolute phase map. (b, c) Two views of the reconstructed 3D point cloud.
Fig. 10 3D model of a human body. Points in different colors represent parts reconstructed by different modules. Left: before optimization. Right: after optimization.
Fig. 11 Pose graphs of the modules (a) before and (b) after optimization. Numbered circles represent the poses of the modules. Circle 1′ represents the pose of the first module after point cloud registration.
Fig. 12 Different views of the final model after combining all point clouds.
Fig. 13 (a) A geometrical object to be measured. (b) Photograph of the real object. (c) Digitized model obtained with our scanning system.
Fig. 14 Final results for two mannequins (a, b) and six human bodies (c–h).

Tables (2)

Table 1 Positions of the modules in the world coordinate frame (in meters).

Table 2 Measurement results of a geometrical object (in millimeters).

Equations (22)


$$I_i(x,y) = I' + I''\sin\!\left[\phi(x,y) + \frac{\pi}{2}(i-1)\right],\qquad i = 1,2,\ldots,4,\tag{1}$$
$$\phi(x,y) = \frac{2\pi N x}{W},\tag{2}$$
$$R_i(u,v) = R' + R''\sin\!\left[\frac{u_0}{u + u_0\tan\theta_0}\,O(u,v) + \frac{\pi}{2}(i-1)\right],\qquad i = 1,2,\ldots,4,\tag{3}$$
$$\varphi(u,v) = \frac{u_0}{u + u_0\tan\theta_0}\,O(u,v).\tag{4}$$
$$R_i(u,v) = R' + R''\sin\!\left[\varphi(u,v) + \frac{\pi}{2}(i-1)\right],\qquad i = 1,2,\ldots,4.\tag{5}$$
$$\varphi(u,v) = \arctan\!\left(\frac{R_4(u,v) - R_2(u,v)}{R_1(u,v) - R_3(u,v)}\right).\tag{6}$$
$$\Phi(u,v) = \varphi(u,v) + 2\pi B(u,v).\tag{7}$$
$$x = \frac{W\,\Phi(u,v)}{2N\pi}.\tag{8}$$
$$\phi(x,y) = \frac{2\pi N y}{H},\tag{9}$$
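As a concrete illustration of the pixel matching described by Eqs. (6)–(8), the NumPy sketch below computes the wrapped phase from the four captured fringe images, adds the Gray-code period offset, and converts the absolute phase into a projector column. The function name, the fringe-period count N, and the projector width W are illustrative assumptions rather than values taken from the paper, and the Gray-code index B is assumed to be decoded already.

```python
import numpy as np

def decode_projector_column(R1, R2, R3, R4, B, N=64, W=1920):
    """Recover the matched projector column for every camera pixel.

    R1..R4 : float arrays, the four phase-shifted fringe images (Eq. 5).
    B      : int array, Gray-code period index already decoded per pixel.
    N, W   : number of fringe periods and projector width in pixels
             (illustrative values, not taken from the paper).
    """
    # Wrapped phase following Eq. (6); arctan2 extends arctan to the
    # full quadrant range (-pi, pi].
    phi = np.arctan2(R4 - R2, R1 - R3)
    # Shift into [0, 2*pi) so that adding 2*pi*B gives a monotonic phase.
    phi = np.mod(phi, 2.0 * np.pi)
    # Absolute phase, Eq. (7).
    Phi = phi + 2.0 * np.pi * B
    # Matched projector column, Eq. (8).
    return W * Phi / (2.0 * N * np.pi)
```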
$$Z_C\begin{bmatrix}u_c\\ v_c\\ 1\end{bmatrix} = \begin{bmatrix}a_x & 0 & u_0\\ 0 & a_y & v_0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X_C\\ Y_C\\ Z_C\end{bmatrix},\tag{10}$$
$$\begin{cases}x_c = \dfrac{X_C}{Z_C} = \dfrac{u_c - u_0}{a_x},\\[2mm] y_c = \dfrac{Y_C}{Z_C} = \dfrac{v_c - v_0}{a_y}.\end{cases}\tag{11}$$
$$\begin{cases}x_p = \dfrac{X_P}{Z_P} = \dfrac{u_p - u_0}{a_x},\\[2mm] y_p = \dfrac{Y_P}{Z_P} = \dfrac{v_p - v_0}{a_y},\end{cases}\tag{12}$$
$$\begin{bmatrix}X_P\\ Y_P\\ Z_P\\ 1\end{bmatrix} = T_P\begin{bmatrix}X_C\\ Y_C\\ Z_C\\ 1\end{bmatrix} = \begin{bmatrix}r_{11} & r_{12} & r_{13} & t_1\\ r_{21} & r_{22} & r_{23} & t_2\\ r_{31} & r_{32} & r_{33} & t_3\\ 0 & 0 & 0 & 1\end{bmatrix}\begin{bmatrix}X_C\\ Y_C\\ Z_C\\ 1\end{bmatrix},\tag{13}$$
$$Z_C = \frac{t_1 - t_3 x_p}{(r_{31}x_c + r_{32}y_c + r_{33})\,x_p - (r_{11}x_c + r_{12}y_c + r_{13})} = \frac{t_2 - t_3 y_p}{(r_{31}x_c + r_{32}y_c + r_{33})\,y_p - (r_{21}x_c + r_{22}y_c + r_{23})}.\tag{14}$$
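A minimal sketch of the depth recovery in Eq. (14), assuming the normalized camera coordinates of Eq. (11), the matched normalized projector coordinate of Eq. (12), and the camera-to-projector transform of Eq. (13) are already available; the function and variable names are illustrative:

```python
import numpy as np

def triangulate(xc, yc, xp, T_P):
    """Depth and 3D point in the camera frame for one pixel match.

    xc, yc : normalized camera coordinates (Eq. 11).
    xp     : matched normalized projector column (Eq. 12).
    T_P    : 4x4 camera-to-projector transform (Eq. 13).
    """
    R, t = T_P[:3, :3], T_P[:3, 3]
    denom_row = R[2, 0] * xc + R[2, 1] * yc + R[2, 2]   # r31*xc + r32*yc + r33
    num_row   = R[0, 0] * xc + R[0, 1] * yc + R[0, 2]   # r11*xc + r12*yc + r13
    # Eq. (14), using the x_p form; the y_p form yields the same depth.
    Zc = (t[0] - t[2] * xp) / (denom_row * xp - num_row)
    return np.array([xc * Zc, yc * Zc, Zc])
```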
$$P_W = T\,P_C,\tag{15}$$
$$E(R,t) = \arg\min_{R,t}\;\frac{1}{n}\sum_{i=1}^{n}\left(1 - \frac{D(u_i,v_i)}{D_{\max}}\right)\bigl\lVert u_i - R v_i - t\bigr\rVert^{2},\tag{16}$$
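For a fixed set of correspondences, the depth-weighted objective of Eq. (16) has a closed-form solution via a weighted SVD (Kabsch) alignment; the sketch below shows that single step only and is not the paper's full registration algorithm. The weight array w is assumed to already hold 1 − D(u_i, v_i)/D_max.

```python
import numpy as np

def weighted_rigid_align(u, v, w):
    """Rigid (R, t) minimizing sum_i w_i * ||u_i - R v_i - t||^2.

    u, v : (n, 3) arrays of corresponding points (target and source).
    w    : (n,) non-negative weights, e.g. 1 - D(u_i, v_i) / D_max.
    """
    w = w / w.sum()
    mu_u = (w[:, None] * u).sum(axis=0)           # weighted centroids
    mu_v = (w[:, None] * v).sum(axis=0)
    H = (w[:, None] * (v - mu_v)).T @ (u - mu_u)  # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T                            # proper rotation, det = +1
    t = mu_u - R @ mu_v
    return R, t
```

Iterating this step with re-estimated correspondences gives an ICP-style refinement of the alignment.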
$$x_i = T_{i,j}\,x_j.\tag{17}$$
$$e_i = x_i - T_{i,j}\,x_j.\tag{18}$$
$$E_o(x) = \sum_{i=1}^{8} e_i^{\mathsf{T}} e_i.\tag{19}$$
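Eqs. (17)–(19) cast the eight module poses as a graph whose edges carry measured relative transforms. The sketch below only evaluates the total residual, under the assumption that each pose x_i is represented as a 4×4 homogeneous matrix and e_i is the flattened matrix difference of Eq. (18); the optimizer that actually adjusts the poses is not reproduced here.

```python
import numpy as np

def pose_graph_error(poses, edges):
    """Total error of Eq. (19) for a ring of scanning modules.

    poses : list of 4x4 homogeneous pose matrices x_i (one per module).
    edges : list of (i, j, T_ij), where T_ij is the measured relative
            transform so that ideally x_i = T_ij @ x_j (Eq. 17).
    """
    total = 0.0
    for i, j, T_ij in edges:
        e = (poses[i] - T_ij @ poses[j]).ravel()  # residual of Eq. (18)
        total += float(e @ e)                     # e_i^T e_i
    return total
```

A nonlinear least-squares solver can then perturb the module poses to drive this sum down, which is the role the graph optimization step plays in the pipeline.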
$$K_C = \begin{bmatrix}a_x & 0 & u_0\\ 0 & a_y & v_0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}967.9 & 0 & 623.7\\ 0 & 971.7 & 439.8\\ 0 & 0 & 1\end{bmatrix},\tag{20}$$
$$K_P = \begin{bmatrix}a_x & 0 & u_0\\ 0 & a_y & v_0\\ 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}1364.3 & 0 & 961.7\\ 0 & 1353.9 & 1097.5\\ 0 & 0 & 1\end{bmatrix}.\tag{21}$$
$$T_P = \begin{bmatrix}r_{11} & r_{12} & r_{13} & t_1\\ r_{21} & r_{22} & r_{23} & t_2\\ r_{31} & r_{32} & r_{33} & t_3\\ 0 & 0 & 0 & 1\end{bmatrix} = \begin{bmatrix}0.9463 & 0.0671 & 0.3164 & 409.82\\ 0.0570 & 0.9284 & 0.3672 & 19.27\\ 0.3184 & 0.3655 & 0.8747 & 70.54\\ 0 & 0 & 0 & 1\end{bmatrix}.\tag{22}$$
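As a usage illustration of the reported calibration, the sketch below back-projects a camera pixel with an assumed depth into the camera frame using Eq. (20) and reprojects it into the projector image using Eqs. (21)–(22). The pixel coordinates and depth are made-up demonstration values, and the calibration entries are used exactly as printed above.

```python
import numpy as np

# Calibration values reported in Eqs. (20)-(22).
K_C = np.array([[967.9, 0.0, 623.7],
                [0.0, 971.7, 439.8],
                [0.0, 0.0, 1.0]])
K_P = np.array([[1364.3, 0.0, 961.7],
                [0.0, 1353.9, 1097.5],
                [0.0, 0.0, 1.0]])
T_P = np.array([[0.9463, 0.0671, 0.3164, 409.82],
                [0.0570, 0.9284, 0.3672, 19.27],
                [0.3184, 0.3655, 0.8747, 70.54],
                [0.0, 0.0, 0.0, 1.0]])

# Hypothetical camera pixel and depth (illustrative values only).
u_c, v_c, Z_C = 640.0, 480.0, 1500.0

# Back-project into the camera frame (Eqs. 10-11).
X_C = (u_c - K_C[0, 2]) / K_C[0, 0] * Z_C
Y_C = (v_c - K_C[1, 2]) / K_C[1, 1] * Z_C

# Transform into the projector frame (Eq. 13) and reproject (Eq. 12).
P_P = T_P @ np.array([X_C, Y_C, Z_C, 1.0])
u_p = K_P[0, 0] * P_P[0] / P_P[2] + K_P[0, 2]
v_p = K_P[1, 1] * P_P[1] / P_P[2] + K_P[1, 2]
print(u_p, v_p)
```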
