Abstract

3D point reconstruction is a crucial component of optical inspection. A direct reconstruction process is proposed that combines two similarity invariants in active vision. A planar reference with an isosceles-right-angle pattern and a coplanar laser are adopted to generate the laser projection point on the measured object. The first invariant is the image of the conic dual to the circular points (ICDCP), which is derived from the lines in two pairs of perpendicular directions on the reference pattern. This invariant provides the transform from the projective space to the similarity space. Then the ratio of the line segments formed by the laser projection points and the reference points is constructed as the second similarity invariant, by which the laser projection point is converted from the similarity space to the Euclidean space. The solution for the laser point is modeled by the segment-ratio invariant and improved by a special point selection that avoids nonlinear equations. Finally, the benchmark-camera distance, the benchmark-generator distance, the benchmark length, the image noise, and the number of orthogonal lines are experimentally investigated to evaluate the effectiveness and reconstruction error of the method. Average reconstruction errors of 0.94, 1.22, 1.77, and 2.15 mm are observed for benchmark-camera distances from 600 mm to 750 mm at a 50 mm interval, which demonstrates the validity and practicability of the reconstruction method.

© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement


References


  1. Y. Ko and S. Yi, “Development of color 3D scanner using laser structured-light imaging method,” Curr. Opt. Photon. 2(6), 554–562 (2018).
  2. A. Glowacz and Z. Glowacz, “Diagnostics of stator faults of the single-phase induction motor using thermal images, moasos and selected classifiers,” Measurement 93, 86–93 (2016).
  3. T. H. Lin, “Automatic 3D color shape measurement system based on a stereo camera,” Appl. Opt. 59(7), 2086–2096 (2020).
  4. A. Costanzo, M. Minasi, G. Casula, M. Musacchio, and M. F. Buongiorno, “Combined use of terrestrial laser scanning and IR thermography applied to a historical building,” Sensors 15(1), 194–213 (2014).
  5. G. Xu, J. Yuan, X. Li, and J. Su, “Optimization reconstruction method of object profile using flexible laser plane and bi-planar references,” Sci. Rep. 8(1), 1526 (2018).
  6. G. Zhan, H. Tang, K. Zhong, Z. Li, Y. Shi, and C. Wang, “High-speed FPGA-based phase measuring profilometry architecture,” Opt. Express 25(9), 10553–10564 (2017).
  7. Z. G. Ren, J. R. Liao, and L. L. Cai, “Three-dimensional measurement of small mechanical parts under a complicated background based on stereo vision,” Appl. Opt. 49(10), 1789–1801 (2010).
  8. J. S. Hyun and S. Zhang, “Influence of projector pixel shape on ultrahigh-resolution 3D shape measurement,” Opt. Express 28(7), 9510–9520 (2020).
  9. T. T. Wu and J. Y. Qu, “Optical imaging for medical diagnosis based on active stereo vision and motion tracking,” Opt. Express 15(16), 10421–10426 (2007).
  10. V. I. Venzel’, M. F. Danilov, A. A. Savel’eva, and A. A. Semenov, “Use of coordinate measuring machines for the assembly of axisymmetric two-mirror objectives with aspherical mirrors,” J. Opt. Technol. 86(2), 119–123 (2019).
  11. A. Glowacz, A. Glowacz, and Z. Glowacz, “Recognition of thermal images of direct current motor with application of area perimeter vector and Bayes classifier,” Meas. Sci. Rev. 15(3), 119–126 (2015).
  12. G. Xu, J. Yuan, X. T. Li, and J. Su, “Profile reconstruction method adopting parameterized re-projection errors of laser lines generated from bi-cuboid references,” Opt. Express 25(24), 29746–29760 (2017).
  13. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2003).
  14. Z. Y. Zhang, “A flexible new technique for camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 22(11), 1330–1334 (2000).
  15. Z. Y. Zhang, “Camera calibration with one-dimensional objects,” IEEE Trans. Pattern Anal. Machine Intell. 26(7), 892–899 (2004).
  16. Y. Bok, H. G. Jeon, and I. S. Kweon, “Geometric calibration of micro-lens-based light field cameras using line features,” IEEE Trans. Pattern Anal. Machine Intell. 39(2), 287–300 (2017).
  17. P. Mukhopadhyay and B. B. Chaudhuri, “A survey of Hough transform,” Pattern Recogn. 48(3), 993–1010 (2015).
  18. X. Q. Meng and Z. Y. Hu, “A new easy camera calibration technique based on circular points,” Pattern Recogn. 36(5), 1155–1164 (2003).
  19. J. S. Kim, P. Gurdjos, and I. S. Kweon, “Geometric and algebraic constraints of projected concentric circles and their applications to camera calibration,” IEEE Trans. Pattern Anal. Machine Intell. 27(4), 637–642 (2005).
  20. Y. Bok, Y. Jeong, D. G. Choi, and I. S. Kweon, “Capturing village-level heritages with a hand-held camera-laser fusion sensor,” Int. J. Comput. Vis. 94(1), 36–53 (2011).
  21. T. T. Nguyen, D. C. Slaughter, N. Max, J. N. Maloof, and N. Sinha, “Structured light-based 3D reconstruction system for plants,” Sensors 15(8), 18587–18612 (2015).
  22. M. Y. Kim, S. M. Ayaz, J. Park, and Y. Roh, “Adaptive 3D sensing system based on variable magnification using stereo vision and structured light,” Opt. Lasers Eng. 55, 113–127 (2014).
  23. G. Xu, Y. P. Zhu, X. T. Li, and R. Chen, “Vision reconstruction based on planar laser with nonrestrictive installation position and posture relative to 2D reference,” Opt. Express 27(26), 38567–38578 (2019).
  24. R. A. Horn and C. R. Johnson, Matrix Analysis (Cambridge University Press, 2012).
  25. C. Steger, “An unbiased detector of curvilinear structures,” IEEE Trans. Pattern Anal. Machine Intell. 20(2), 113–125 (1998).



Figures (9)

Fig. 1. Instrumentation, coordinate system definition, geometry-invariant generation, and point reconstruction based on the planar laser and the metrical rectification with the ICDCP in the similarity space.
Fig. 2. The reconstruction process based on the planar laser and the metrical rectification with the ICDCP in the similarity space.
Fig. 3. The experiments of object reconstruction and accuracy verification with the benchmark plate. (a) reconstruction test, (b) verification test.
Fig. 4. The experiments of the comparison methods. (a), (c) calibration of the methods in Refs. [23,14]; (b), (d) error estimation of the methods in Refs. [23,14].
Fig. 5. The reconstructions of four objects using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. (a) a vacuum bottle, (b) a cubic box, (c) a tea cup, (d) a flat surface, (e)-(h) the image-processing results of (a)-(d), (i)-(l) the reconstructions of (a)-(d).
Fig. 6. The reconstruction errors using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. The benchmark-generator distances are from 100 mm to 400 mm with an interval of 100 mm. (a)-(d), (e)-(h), (i)-(l), (m)-(p) the camera-reference distances are 600, 650, 700, and 750 mm, respectively.
Fig. 7. The error means of the reconstruction method using the geometry invariant generated from the planar laser and the metrical rectification in the similarity space. (a)-(d) the reference-camera distances are from 600 mm to 750 mm with an interval of 50 mm.
Fig. 8. The reconstruction errors influenced by the pair number of the orthogonal lines under different distances. The benchmark-generator distances are from 100 mm to 400 mm with an interval of 100 mm. (a)-(d), (e)-(h), (i)-(l), (m)-(p) the camera-reference distances are 600, 650, 700, and 750 mm, respectively.
Fig. 9. The impact of the image noise on the error means of the reconstruction method. (a)-(d) the reference-camera distances are from 600 mm to 750 mm with an interval of 50 mm.

Tables (3)

Table 1. The parameters of the ICDCP.

Table 2. The error statistics of the proposed method and comparison methods. M1 is the proposed method. M2 is the reconstruction method by the planar laser with the nonrestrictive installation pose relative to the 2D reference. M3 is the reconstruction method with the homography solution. D1 is the reference-camera distance. D2 is the benchmark-generator distance. L is the benchmark length.

Table 3. The statistics of error means of the reconstruction based on the geometry invariant generated from the planar laser and the metrical rectification influenced by the Gaussian noise.

Equations (19)


(1)  $\mathbf{l}_i^{T} G_I \mathbf{m}_i = 0$

(2)  $G_I = (H_P H_A)\, G\, (H_A^{T} H_P^{T})$

(3)  $V = \begin{pmatrix} K & \mathbf{0} \\ \mathbf{w}^{T} K & 1 \end{pmatrix}$

(4)  $\begin{pmatrix} K K^{T} & K K^{T} \mathbf{w} \\ \mathbf{w}^{T} K K^{T} & \mathbf{w}^{T} K K^{T} \mathbf{w} \end{pmatrix} = \begin{pmatrix} a_1 & a_2/2 & a_4/2 \\ a_2/2 & a_3 & a_5/2 \\ a_4/2 & a_5/2 & a_6 \end{pmatrix}$

(5)  $K K^{T} = \begin{pmatrix} a_1 & a_2/2 \\ a_2/2 & a_3 \end{pmatrix}$

(6)  $K K^{T} \mathbf{w} = \begin{pmatrix} a_4/2 \\ a_5/2 \end{pmatrix}$

(7)  $K = \begin{pmatrix} \sqrt{a_1} & 0 \\ a_2/(2\sqrt{a_1}) & \sqrt{a_3 - a_2^2/(4 a_1)} \end{pmatrix}$

(8)  $\mathbf{w} = \begin{pmatrix} (2 a_3 a_4 - a_2 a_5)/(4 a_1 a_3 - a_2^2) \\ (2 a_1 a_5 - a_2 a_4)/(4 a_1 a_3 - a_2^2) \end{pmatrix}$

(9)  $V = \begin{pmatrix} \sqrt{a_1} & 0 & 0 \\ a_2/(2\sqrt{a_1}) & \sqrt{a_3 - a_2^2/(4 a_1)} & 0 \\ a_4/(2\sqrt{a_1}) & (2 a_1 a_5 - a_2 a_4)\big/\big(2\sqrt{4 a_1^2 a_3 - a_2^2 a_1}\big) & 1 \end{pmatrix}$

(10)  $A_q^{S} = V^{-1} A_q^{I}$

(11)  $B_q^{S} = V^{-1} B_q^{I}$

(12)  $X_j^{S} = V^{-1} \mathbf{x}_j$
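The rectifying transform built from the six ICDCP conic coefficients can be checked numerically. The following Python sketch (the function names `rectify_matrix` and `to_similarity` are illustrative, not from the paper) assembles V from a1..a6 with the closed-form entries above and maps homogeneous image points into the similarity space through V⁻¹:

```python
import numpy as np

def rectify_matrix(a):
    """Build the rectifying transform V from the six ICDCP conic
    coefficients a1..a6, following V = [[K, 0], [w^T K, 1]] with the
    closed-form Cholesky entries for K and the closed-form w^T K row."""
    a1, a2, a3, a4, a5, a6 = a
    s1 = np.sqrt(a1)
    return np.array([
        [s1, 0.0, 0.0],
        [a2 / (2 * s1), np.sqrt(a3 - a2**2 / (4 * a1)), 0.0],
        [a4 / (2 * s1),
         (2 * a1 * a5 - a2 * a4) / (2 * np.sqrt(4 * a1**2 * a3 - a2**2 * a1)),
         1.0],
    ])

def to_similarity(V, p_img):
    """Map a homogeneous image point into the similarity space: p^S = V^{-1} p^I."""
    p = np.linalg.solve(V, p_img)
    return p / p[2]  # normalize the homogeneous coordinate
```

A round trip verifies the formulas: picking a lower-triangular K with positive diagonal and a vector w, forming the rank-2 dual conic V·diag(1,1,0)·Vᵀ, reading off a1..a6, and rebuilding V reproduces the original matrix.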
(13)  $\dfrac{\left| X_{j,q}^{R} B_q^{R} \right|}{\left| A_q^{R} B_q^{R} \right|} = \dfrac{\left| X_j^{S} B_q^{S} \right|}{\left| A_q^{S} B_q^{S} \right|} = k_{j,q}$

(14)  $(x_{j,q}^{R})^2 + (y_{B,q}^{R} - y_{j,q}^{R})^2 = (k_{j,q} h_q)^2$

(15)  $(x_{j,q}^{R})^2 + (y_{B,q+1}^{R} - y_{j,q}^{R})^2 = (k_{j,q+1} h_{q+1})^2$

(16)  $y_{j,q}^{R} = \dfrac{(k_{j,q+1} h_{q+1})^2 - (k_{j,q} h_q)^2 + (y_{B,q}^{R})^2 - (y_{B,q+1}^{R})^2}{2\,(y_{B,q}^{R} - y_{B,q+1}^{R})}$

(17)  $y_j^{R} = \dfrac{1}{m-1} \sum_{q=1}^{m-1} y_{j,q}^{R}$

(18)  $x_{j,q}^{R} = \sqrt{(k_{j,q} h_q)^2 - (y_{B,q}^{R} - y_{j,q}^{R})^2}$

(19)  $x_j^{R} = \dfrac{1}{m-1} \sum_{q=1}^{m-1} x_{j,q}^{R}$
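The circle-intersection step above can be sketched as a short Python routine. Here `solve_point`, `yB`, and `r` are illustrative names: `yB[q]` is the y-coordinate of reference point B_q on the reference axis, and `r[q]` stands for the scaled distance k·h for pair q. Each consecutive pair of circle equations yields y linearly (x² cancels), x follows from one circle, and the m−1 estimates are averaged:

```python
import numpy as np

def solve_point(yB, r):
    """Recover (x, y) of a laser point from its distances r[q] to reference
    points B_q = (0, yB[q]); consecutive pairs give a linear solution for y,
    then x from one circle equation, averaged over the m-1 pairs."""
    yB, r = np.asarray(yB, float), np.asarray(r, float)
    xs, ys = [], []
    for q in range(len(yB) - 1):
        # subtracting the two circle equations eliminates x^2
        y = (r[q + 1]**2 - r[q]**2 + yB[q]**2 - yB[q + 1]**2) \
            / (2 * (yB[q] - yB[q + 1]))
        x = np.sqrt(r[q]**2 - (yB[q] - y)**2)
        ys.append(y)
        xs.append(x)
    return np.mean(xs), np.mean(ys)
```

For example, a point at (3, 2) with reference points at y = 0, 5, 10 has distances √13, √18, √73, and both pairs recover (3, 2) exactly; with noisy inputs the averaging damps the per-pair error.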