Abstract

We describe research on two imaging range sensors that use defocus to estimate range. One technique is completely passive and provides dense range measurements in textured areas with an rms error of 2.5%. This method can be useful when stereo is unreliable because of correspondence errors or when imaging geometry prevents the use of multiple viewpoints (e.g., microscopy). The second technique uses structured light and provides dense range measurements with an rms error of 0.5%. This method can be useful when imaging geometry prevents the use of multiple viewpoints and for acquiring range imagery of rapidly moving objects, as the structured light can be delivered in a single stroboscopic flash.

© 1994 Optical Society of America


References


  1. R. Jarvis, “A perspective on range-finding techniques for computer vision,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-5, 122–139 (1983).
  2. E. Krotkov, “Focusing,” Int. J. Comput. Vision, 223–237 (1987).
  3. T. Darrell, “Pyramid based depth from focus,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1988), pp. 504–509.
  4. A. Pentland, “Depth of scene from depth of field,” in Proceedings, Image Understanding Workshop (Science Applications, Washington, D.C., 1982), pp. 253–259.
  5. A. Pentland, “A new sense for depth of field,” IEEE Trans. Pattern Anal. Mach. Intell. 9, 523–531 (1987).
  6. A. Pentland, T. Darrell, M. Turk, W. Huang, “A simple, real-time range camera,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1989), pp. 256–261.
  7. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE International Conference on Computer Vision (Institute of Electrical and Electronics Engineers, New York, 1988), pp. 149–155.
  8. M. Subbarao, T. Wei, “Depth from defocus and rapid autofocusing: a practical approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1992), pp. 773–776.
  9. M. Bove, “Discrete Fourier transform based depth-from-focus,” in Image Understanding and Machine Vision, Vol. 14 of 1989 OSA Technical Digest Series (Optical Society of America, Washington, D.C., 1989), pp. 118–121.
  10. Y. Xiong, S. Shafer, “Depth from focusing and defocusing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1993), pp. 68–73.
  11. M. Bove, “Entropy-based depth from focus,” J. Opt. Soc. Am. A 10, 561–566 (1993).
  12. K. Prasad, R. Mammone, “Depth restoration from defocused images using simulated annealing,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1990), pp. 227–229.
  13. J. Ens, “An investigation of methods for determining depth from focus,” Ph.D. dissertation (University of British Columbia, Vancouver, B.C., 1990).
  14. G. Surya, M. Subbarao, “Depth from defocus by changing camera aperture: a spatial domain approach,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Institute of Electrical and Electronics Engineers, New York, 1993), pp. 61–68.
  15. B. Girod, S. Scherock, “Depth from defocus of structured light,” in Optics, Illumination, and Image Sensing for Machine Vision IV, D. J. Svetkoff, ed., Proc. Soc. Photo-Opt. Instrum. Eng. 1194, 209–215 (1989).
  16. P. Burt, E. Adelson, “The Laplacian pyramid as a compact image code,” IEEE Trans. Commun. COM-31, 532–540 (1983).
  17. S. Scherock, “Depth from defocus of structured light,” M.S. thesis (MIT, Cambridge, Massachusetts, 1991).
  18. A. Sužiedėlis, “A stroboscopic system for measuring depth from defocus,” B.S. thesis (MIT, Cambridge, Massachusetts, 1990).
  19. A. Pentland, “Interpolation using wavelet bases,” IEEE Trans. Pattern Anal. Mach. Intell. 16, 410–414 (1994).



Figures (11)

Fig. 1
For most real lens systems range information is not lost during image formation.

Fig. 2
Ray-trace diagram for the case in which the plane of best focus is between the object and the lens.

Fig. 3
Plot of theoretical circle-of-confusion radius versus distance from the plane of best focus, computed with geometrical-optics formulas for an f/2, 6.25-mm lens system with a focal length of 25 mm focused at 1 m.

Fig. 4
Geometry for depth from defocus of structured light.

Fig. 5
(a) Passive-depth-from-defocus hardware layout; (b) a picture of the device.

Fig. 6
(a) Active-depth-from-defocus hardware layout; (b) the actual device.

Fig. 7
(a) Lines pattern; (b) lines pattern uniformly blurred.

Fig. 8
Cross section of the actual lines pattern.

Fig. 9
Recovered range for a ramp, obtained with (a) the passive range camera and (b) the active range camera.

Fig. 10
Two images of a scene obtained with (a) long and (b) short depth of field (data from Bove [9]). (c) Range extracted by comparison of the amount of blurring in the two images, as described in the text. (d) Range after regularization.

Fig. 11
Rolling-ball sequence and recovered range.

Equations


i(x,y) = \iint h[x-\xi,\, y-\eta,\, d(\xi,\eta)]\, s(\xi,\eta)\, d\xi\, d\eta,
i(x,y) = \iint h(x-\xi,\, y-\eta)\, s(\xi,\eta)\, d\xi\, d\eta.
i(x,y) = h(x,y) * s(x,y),
I(\omega_x, \omega_y) = H(\omega_x, \omega_y)\, S(\omega_x, \omega_y),
i(x,y) \leftrightarrow I(\omega_x, \omega_y), \quad h(x,y) \leftrightarrow H(\omega_x, \omega_y), \quad s(x,y) \leftrightarrow S(\omega_x, \omega_y)
h_g(x,y) = \begin{cases} \frac{1}{\pi r_c^2} & x^2 + y^2 \le r_c^2 \\ 0 & \text{otherwise.} \end{cases}
H_g(\omega_x, \omega_y) = \frac{2 J_1[r_c(\omega_x^2 + \omega_y^2)^{1/2}]}{r_c(\omega_x^2 + \omega_y^2)^{1/2}},
H_g(\omega_r) = \frac{2 J_1(\omega_r r_c)}{\omega_r r_c},
\frac{1}{u} + \frac{1}{v} = \frac{1}{F}, \quad \frac{1}{u_0} + \frac{1}{v_0} = \frac{1}{F},
\frac{r_c}{\delta} = \frac{r}{v_0},
\delta + v_0 = v,
d + u = u_0.
r_c = \frac{r(vF + F u_0 - v u_0)}{F u_0}.
r_c = \frac{vF + F u_0 - v u_0}{2 f u_0} = \frac{vF + F(u - d) - v(u - d)}{2 f (u - d)}.
u_0 = \frac{vF}{-2 f r_c + v - F}, \quad d = -u + \frac{vF}{-2 f r_c + v - F},
\delta + v = v_0, \quad d + u_0 = u,
\Delta r_c \propto \frac{\Delta d}{d^2},
\frac{r_c}{d} = \frac{r}{u}.
\Delta r_c = \Delta d\, \frac{r}{u},
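The thin-lens and blur-circle relations above can be sanity-checked numerically. The sketch below is mine, not the authors' code: the function names are invented, quantities are in meters, and the signs assume the object lies beyond the plane of best focus (the far-side case), with f the f-number.

```python
# Sketch of the geometric-optics blur relations, assuming the far-side
# sign convention: r_c = (v0 - F - F*v0/D) / (2*fnum), inverted as
# D = F*v0 / (v0 - F - 2*r_c*fnum).  All lengths in meters.

def blur_radius_from_depth(D, F, fnum, v0):
    """Blur-circle radius r_c for an object at depth D, given focal length F,
    f-number fnum, and sensor distance v0 behind the lens."""
    return (v0 - F - F * v0 / D) / (2.0 * fnum)

def depth_from_blur_radius(r_c, F, fnum, v0):
    """Invert the relation above to recover object depth from measured blur."""
    return F * v0 / (v0 - F - 2.0 * r_c * fnum)

F, fnum = 0.025, 2.0                  # 25-mm lens at f/2, as in Fig. 3
v0 = 1.0 / (1.0 / F - 1.0 / 1.0)      # sensor position when focused at 1 m
r_c = blur_radius_from_depth(2.0, F, fnum, v0)
print(r_c)                            # blur radius for an object at 2 m (~8e-5 m)
print(depth_from_blur_radius(r_c, F, fnum, v0))  # recovers the 2-m depth
```

Round-tripping depth through the blur radius and back recovers the input exactly, which is a useful check on the algebra of the two equations.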
\frac{i_1(x,y)}{i_2(x,y)} = \frac{s(x,y) * h_1(x,y,d)}{s(x,y) * h_2(x,y,d)}.
\frac{I_1(\omega_x, \omega_y)}{I_2(\omega_x, \omega_y)} = \frac{H_1(\omega_x, \omega_y, d)}{H_2(\omega_x, \omega_y, d)}.
\frac{I_1(\omega_x, \omega_y)}{I_2(\omega_x, \omega_y)} = \frac{G(\omega_x, \omega_y, r_{c_1})}{G(\omega_x, \omega_y, r_{c_2})} = \frac{r_{c_2}^2}{r_{c_1}^2} \exp[(\omega_x^2 + \omega_y^2)\, 2\pi^2 (r_{c_2}^2 - r_{c_1}^2)],
\frac{I_1(\omega_r)}{I_2(\omega_r)} = \frac{G(\omega_r, r_{c_1})}{G(\omega_r, r_{c_2})} = \frac{r_{c_2}^2}{r_{c_1}^2} \exp[\omega_r^2\, 2\pi^2 (r_{c_2}^2 - r_{c_1}^2)],
\ln \frac{r_{c_2}^2}{r_{c_1}^2} + \omega_r^2\, 2\pi^2 (r_{c_2}^2 - r_{c_1}^2) = \ln I_1(\omega_r) - \ln I_2(\omega_r).
k_1 r_{c_2}^2 + k_2 \ln r_{c_2} + k_3 = \ln I_1(\omega_r) - \ln I_2(\omega_r),
A = 2\pi^2 (r_{c_2}^2 - r_{c_1}^2), \quad B = \ln \frac{r_{c_2}^2}{r_{c_1}^2}, \quad C_i = \ln I_1(\omega_i) - \ln I_2(\omega_i).
A = \frac{\sum_i (\omega_i^2 - \bar{\omega}^2) C_i}{\sum_i (\omega_i^2 - \bar{\omega}^2)^2},
r_{c_2} = (A / 2\pi^2)^{1/2},
D = \frac{F v_0}{v_0 - F - 2 r_c f},
i(x,y) = h_g(x,y) * \delta(x, 0)
= \int h_g(x, \eta)\, d\eta
= \begin{cases} \int_{-\sqrt{r_c^2 - x^2}}^{+\sqrt{r_c^2 - x^2}} \frac{1}{\pi r_c^2}\, d\eta & -r_c < x < r_c \\ 0 & \text{otherwise} \end{cases}
= \begin{cases} \frac{2\sqrt{r_c^2 - x^2}}{\pi r_c^2} & -r_c < x < r_c \\ 0 & \text{otherwise,} \end{cases}
h(x) = \begin{cases} \frac{2\sqrt{r_c^2 - x^2}}{\pi r_c^2} & -r_c < x < r_c \\ 0 & \text{otherwise.} \end{cases}
x_1[n] = \max(x_i[n-1],\, x_i[n],\, x_i[n+1]),
x_2[n] = \frac{x_1[n-1] + x_1[n] + x_1[n+1]}{3},
x_0[n] = \min(x_2[n-1],\, x_2[n],\, x_2[n+1]),
I_y = \int_{-r_c}^{r_c} h^2(x)\, dx,
I_y \approx \int_{-r_{\max}}^{r_{\max}} h^2(x)\, dx,
I_y = \int_{-r_c}^{r_c} \frac{4}{\pi^2 r_c^4} (r_c^2 - x^2)\, dx = \left(\frac{16}{3\pi^2}\right) \frac{1}{r_c}.