Abstract

Depth recovery based on structured light using stripe patterns, especially for a region-based codec, demands accurate estimation of the true boundary of a light pattern captured in a camera image, because the accuracy of the estimated boundary has a direct impact on the accuracy of the depth recovery. However, recovering the true boundary of a light pattern is difficult due to the deformation incurred primarily by texture-induced variation of the light reflectance at surface locales. Especially for heavily textured surfaces, the deformation of pattern boundaries becomes severe. We present here a novel (to the best of our knowledge) method for estimating the true boundaries of a light pattern that are severely deformed by heavy textures. First, a general formula that models the deformation of the projected light pattern at the imaging end is presented, taking into account not only the light reflectance variation but also the blurring along the optical paths. The local reflectance indices are then estimated by applying the model to two specially chosen reference projections, all-bright and all-dark. The estimated reflectance indices are used to transform the edge-deformed captured pattern signal into an edge-corrected canonical pattern signal, where a canonical pattern is the virtual pattern that would have resulted had there been neither reflectance variation nor blurring in the imaging optics. Finally, we estimate the boundaries of a light pattern by intersecting the canonical form of the light pattern with that of its inverse pattern. Experimental results show that the proposed method yields significant improvements in the accuracy of the estimated boundaries under various adverse conditions.
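
As a concrete illustration of the final intersection step, the following minimal Python sketch (our own illustrative code, not from the paper) locates sub-pixel boundaries where a stripe signal and the signal of its inverse pattern cross:

import numpy as np

def zero_crossing_boundaries(f_pattern, f_inverse):
    # Sub-pixel boundary locations where the stripe signal and the signal
    # of its inverse pattern intersect (zero crossings of their difference).
    d = np.asarray(f_pattern, dtype=float) - np.asarray(f_inverse, dtype=float)
    i = np.where(np.signbit(d[:-1]) != np.signbit(d[1:]))[0]
    # Linear interpolation between the two samples that bracket each crossing.
    return i + d[i] / (d[i] - d[i + 1])

In the proposed method, f_pattern and f_inverse would be the canonically recovered signals rather than the raw captured ones.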

© 2011 Optical Society of America


References


  1. J. Salvi, S. Fernandez, T. Pribanic, and X. Llado, “A state of the art in structured light patterns for surface profilometry,” Pattern Recogn. 43, 2666-2680 (2010).
  2. M. Trobina, “Error model of a coded-light range sensor,” Technical Report BIWI-TR-164 (Communication Technology Laboratory, 1995).
  3. R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision (Cambridge University Press, 2004).
  4. W. H. Richardson, “Bayesian-based iterative method of image restoration,” J. Opt. Soc. Am. 62, 55-59 (1972).
  5. L. B. Lucy, “An iterative technique for the rectification of observed distributions,” Astron. J. 79, 745-754 (1974).
  6. L. Sukhan, C. Jongmoo, K. Daesik, J. Byungchan, N. Jaekeun, and K. Hoonmo, “An active 3D robot camera for home environment,” in Proceedings of IEEE Sensors (IEEE, 2004), Vol. 1, pp. 477-480.

Figures (10)

Fig. 1

Two boundary estimation methods: (a) thresholding the signal, using the average of the two reference signals (all-bright and all-dark, shown as the top and bottom signals) as the threshold, and (b) taking the zero crossing of two signals: the signal (blue) captured from the original bright-and-dark pattern and its inverse signal (red), captured with the bright and dark stripes of the original pattern inverted.
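
For comparison with the zero-crossing sketch after the abstract, here is a minimal sketch of the thresholding method (a); the function name and the midway placement of edges are illustrative assumptions:

import numpy as np

def threshold_boundaries(f_s, f_0, f_1):
    # Method (a): label a pixel bright where the captured stripe signal
    # exceeds the mean of the all-dark (f_0) and all-bright (f_1) references,
    # then place boundaries midway between pixels where the label flips.
    bright = np.asarray(f_s, dtype=float) > 0.5 * (np.asarray(f_0, dtype=float) + np.asarray(f_1, dtype=float))
    return np.where(bright[:-1] != bright[1:])[0] + 0.5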

Fig. 2

Illustration of the potential error in conventional zero-crossing-based boundary estimation due to the asymmetric deformation between the original (blue) and its inverse (red) signals incurred by the reflectance variation at/around the boundary: (a) no asymmetric deformation present at the boundary on a textureless surface and (b) asymmetric deformation present at the boundary on a textured surface.

Fig. 3

Schematic of ray tracing from the pattern projected by a projector to the image captured by a camera.

Fig. 4

Edge blurring as the result of the convolution between a step function and a Gaussian blur kernel.
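
For reference, convolving the ideal step s(x) defined in the Equations section below with a Gaussian kernel of width \sigma has a standard closed form, which is the smooth edge profile shown in the figure:

(s \otimes g)(x) = \frac{H+L}{2} + \frac{H-L}{2}\,\operatorname{erf}\!\left(\frac{x}{\sqrt{2}\,\sigma}\right)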

Fig. 5

Block diagram of the proposed boundary estimation: (a) the process of estimating boundaries with asymmetric deformation, based on the zero crossing of canonically recovered signals, and (b) the overall process of estimating boundaries, implemented by combining the two classes of boundaries, those with and those without asymmetric deformation. The conventional zero-crossing method is applied to the latter for computational efficiency.

Fig. 6(a)-6(f)

(Color online) Deformed edges (blue, left), their canonically recovered versions (middle), and the recovered versions after smoothing to eliminate noise (right).

Fig. 7

Profile of error in boundary estimation as a function of the degree of asymmetric signal deformation, represented by d_x: the relative pixel distance, on the image, between the edge of a light stripe and the edge of a white-black texture or other abrupt change in reflectance. The smaller d_x is, i.e., the more significant the asymmetric signal deformation, the larger the errors incurred by the conventional zero-crossing method (blue). The proposed zero-crossing method (red), based on canonically recovered signals, keeps the errors within a small bound regardless of d_x.

Fig. 8

Performance of the proposed boundary estimation: (a) light pattern projected on a checker-patterned calibration block to produce asymmetric signal deformation in the experiments, (b) boundaries of light stripes estimated by the proposed zero-crossing method with canonical signal representation, and (c) boundaries estimated by the conventional zero-crossing method.

Fig. 9

3D point clouds of the calibration block using HOC-B (left) and HOC (right) in different views: (a) full view of the visible faces; (b) front view and (c) top view of the left face (plane X = 0); (d) front view and (e) top view of the right face (plane Y = 0).

Fig. 10

Horizontal section of the 3D points of the calibration block, reconstructed using (a) the HOC-B version and (b) the original HOC version.

Tables (1)


Table 1 Errors of Reconstructed 3D Data Using HOC and HOC-B

Equations (16)


Notation: \otimes denotes convolution, and g(x, \sigma) is the Gaussian kernel of Eq. (3).

(1)  s(x) = \begin{cases} H, & x \ge 0 \\ L, & x < 0 \end{cases}

(2)  f_s(x) = \big( (s(x) \otimes g_p(x, \sigma_p))\, R(x) + A(x) \big) \otimes g_c(x, \sigma_c) + W(x)

(3)  g_i(x, \sigma_i) = \frac{1}{\sigma_i \sqrt{2\pi}} \, e^{-x^2 / (2\sigma_i^2)}

(4)  f_1(x) = (H\, R(x) + A(x)) \otimes g(x, \sigma_c) + W_1(x), \qquad f_0(x) = (L\, R(x) + A(x)) \otimes g(x, \sigma_c) + W_0(x)

(5)  f_1(x) - f_0(x) \approx ((H - L)\, R(x)) \otimes g(x, \sigma_c)

(6)  R(x) \approx \frac{\mathrm{deconvlucy}(f_1(x) - f_0(x),\, \sigma_c)}{H - L}

(7)  f_s(x) - f_0(x) = \big[ (s(x) \otimes g(x, \sigma_p) - L)\, R(x) \big] \otimes g(x, \sigma_c) + (W_s(x) - W_0(x))

(8)  f_s(x) - f_0(x) \approx \big[ (s(x) \otimes g(x, \sigma_p) - L)\, R(x) \big] \otimes g(x, \sigma_c)

(9)  f_c(x) \approx \frac{\mathrm{deconvlucy}(f_s(x) - f_0(x),\, \sigma_c)}{R(x)}

(10) f_c(x) \approx \frac{\mathrm{deconvlucy}(f_s(x) - f_0(x),\, \sigma_c)}{\mathrm{deconvlucy}(f_1(x) - f_0(x),\, \sigma_c)}\, (H - L)

(11) f_1(x) - f_0(x) = ((H - L)\, R(x)) \otimes g(x, \sigma_c) + (W_1(x) - W_0(x)) \approx ((H - L)\, R(x)) \otimes g(x, \sigma_c)

(12) f_1(x) - f_0(x) \approx ((H - L)\, R) \otimes g(x, \sigma_c) = (H - L)\, R = \text{constant}

(13) \frac{\partial (f_1(x) - f_0(x))}{\partial x} = 0

(14) \left| \frac{\partial (f_1(x) - f_0(x))}{\partial x} \right| = \left| \frac{\partial}{\partial x} \big( ((H - L)\, R(x)) \otimes g(x, \sigma_c) \big) \right| \neq \text{constant}

(15) f_s(x) = \big( (s(x) \otimes g_p(x, \sigma_p))\, R(x) \big) \otimes g_c(x, \sigma_c) + \big( A(x) \otimes g_c(x, \sigma_c) + W(x) \big)

(16) f_s(x) = \big( (s(x) \otimes g_p(x, \sigma_p))\, R \big) \otimes g_c(x, \sigma_c) + \big( A(x) \otimes g_c(x, \sigma_c) + W(x) \big) = R\, \big( s(x) \otimes g_p(x, \sigma_p) \otimes g_c(x, \sigma_c) \big) + \big( A(x) \otimes g_c(x, \sigma_c) + W(x) \big)
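
The deconvlucy-based relations in Eqs. (6) and (10) can be sketched numerically as follows. This is a minimal illustration assuming 1D scanline signals, a hand-rolled Richardson-Lucy loop (Refs. 4 and 5) in place of MATLAB's deconvlucy, and a projector contrast normalized to H - L = 1; all function names are ours, not from the paper.

import numpy as np

def gaussian_kernel(sigma, radius=None):
    # Discrete, unit-sum version of the Gaussian g(x, sigma) in Eq. (3).
    radius = int(4 * sigma) if radius is None else radius
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g / g.sum()

def richardson_lucy_1d(signal, psf, n_iter=30):
    # Minimal 1D Richardson-Lucy loop (Refs. 4 and 5), standing in for the
    # deconvlucy(...) operator used in Eqs. (6), (9), and (10).
    signal = np.asarray(signal, dtype=float)
    estimate = np.full_like(signal, max(signal.mean(), 1e-6))
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = signal / np.maximum(blurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf[::-1], mode="same")
    return estimate

def canonical_signal(f_s, f_0, f_1, sigma_c, n_iter=30):
    # Eq. (10) with the projector contrast normalized to H - L = 1
    # (the scale cancels when intersecting a pattern with its inverse).
    g_c = gaussian_kernel(sigma_c)
    num = richardson_lucy_1d(np.asarray(f_s, float) - np.asarray(f_0, float), g_c, n_iter)
    den = richardson_lucy_1d(np.asarray(f_1, float) - np.asarray(f_0, float), g_c, n_iter)
    return num / np.maximum(den, 1e-12)

Boundaries are then taken at the intersections of canonical_signal applied to the pattern and to its inverse pattern, as in the zero-crossing sketch after the abstract.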
