Abstract

We studied whether the blur/sharpness of an occlusion boundary between a sharply focused surface and a blurred surface is used as a relative depth cue. Observers judged relative depth in pairs of images that differed only in the blurriness of the common boundary between two adjoining texture regions, one blurred and one sharply focused. Two experiments were conducted; in both, observers consistently used the blur of the boundary as a cue to relative depth. However, the strength of the cue, relative to other cues, varied across observers. The occlusion edge blur cue can resolve the near/far ambiguity inherent in depth-from-focus computations.

© 1996 Optical Society of America

References


  1. V. Bruce, P. R. Green, Visual Perception: Physiology, Psychology, and Ecology, 2nd ed. (Erlbaum, Hillsdale, N.J., 1990).
  2. L. R. Wanger, J. A. Ferwerda, D. P. Greenberg, “Perceiving spatial relationships in computer-generated images,” IEEE Comput. Graphics Applic. 12, 44–58 (1992).
    [Crossref]
  3. A. P. Pentland, “The focal gradient: optics ecologically salient,” Invest. Ophthalmol. Vis. Sci. Suppl. 26, 243 (1985).
  4. P. Grossman, “Depth from focus,” Pattern Recognition Lett. 5, 63–69 (1987).
    [Crossref]
  5. K. Nakayama, S. Shimojo, G. H. Silverman, “Stereoscopic depth: its relation to image segmentation, grouping, and the recognition of occluded objects,” Perception 18, 55–68 (1989).
    [Crossref] [PubMed]
  6. M. S. Landy, L. T. Maloney, E. B. Johnston, M. Young, “Measurement and modeling of depth cue combination: in defense of weak fusion,” Vision Res. 35, 389–412 (1995).
    [Crossref] [PubMed]
  7. V. Klymenko, N. Weisstein, “Spatial frequency differences can determine figure–ground organization,” J. Exp. Psychol. Hum. Percept. Perf. 12, 324–330 (1986).
    [Crossref]
  8. N. Weisstein, E. Wong, “Figure–ground organization and the spatial and temporal responses of the visual system,” in Pattern Recognition by Humans and Machines: Visual Perception, E. C. Schwab, H. C. Nusbaum, eds. (Academic, Orlando, Fla., 1986), Vol. 2, pp. 31–64.
  9. E. Wong, N. Weisstein, “Sharp targets are detected better against a figure and blurred targets are detected better against a ground,” J. Exp. Psychol. Hum. Percept. Perf. 9, 194–202 (1983).
    [Crossref]
  10. J. M. Brown, N. Weisstein, “A spatial frequency effect on perceived depth,” Percept. Psychophys. 44, 157–166 (1988).
    [Crossref] [PubMed]
  11. E. Rubin, “Figure and ground,” reprinted in Readings in Perception, D. C. Beardslee, M. Wertheimer, eds. (Van Nostrand, Princeton, N.J., 1958), pp. 194–203 (1921).
  12. T. Darrell, K. Wohn, “Pyramid based depth from focus,” in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Ann Arbor, Mich., 1988 (IEEE Computer Society Press, Washington, D.C., 1988), pp. 504–509.
  13. J. Ens, P. Lawrence, “An investigation of methods for determining depth from focus,” IEEE Trans. Patt. Anal. Mach. Intell. 15, 97–108 (1993).
    [Crossref]
  14. B. K. P. Horn, “Focusing,” Artificial Intelligence Memo 160 (Massachusetts Institute of Technology, Cambridge, Mass., 1968).
  15. R. A. Jarvis, “A perspective on range finding techniques for computer vision,” IEEE Trans. Patt. Anal. Mach. Intell. PAMI-5, 122–139 (1983).
    [Crossref]
  16. S.-H. Lai, C. W. Fu, S. Chang, “A generalized depth estimation algorithm with a single image,” IEEE Trans. Patt. Anal. Mach. Intell. 14, 405–411 (1992).
    [Crossref]
  17. A. P. Pentland, “A new sense for depth of field,” IEEE Trans. Patt. Anal. Mach. Intell. PAMI-9, 523–531 (1987).
    [Crossref]
  18. M. Subbarao, “Parallel depth recovery by changing camera parameters,” in Proceedings of the IEEE 2nd International Conference on Computer Vision (IEEE Computer Society Press, Washington, D.C., 1988), pp. 149–155.
  19. M. Subbarao, N. Gurumoorthy, “Depth recovery from blurred edges,” in Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition, Ann Arbor, Mich., 1988 (IEEE Computer Society Press, Washington, D.C., 1988), pp. 498–503.
  20. J. A. Marshall, K. E. Martin, D. Ariely, C. A. Burbeck, J. P. Rolland, “Blur attachment as a visual depth cue,” Invest. Ophthalmol. Vis. Sci. Suppl. 33/4, 1369 (1992).


Figures (7)

Fig. 1

Which central square looks closer? The only difference between the two images is the degree of blur/sharpness of the outer edge of the central square. (a) The image was derived by simulating the lens optics that arise when one focuses on a small line-textured square in front of a circle-textured object. This gives rise to a sharp boundary. (b) The image was derived by simulating the lens optics that arise when one focuses on a line-textured object visible through a small square hole in a circle-textured object. This gives rise to an occlusion edge boundary with a blurry appearance.

Fig. 2

Two kinds of boundaries at a depth discontinuity. If one object at a depth discontinuity is in sharp focus, then the other object is blurred. (a) A sharp optical boundary (left) at the depth discontinuity, and an optical boundary with occlusion edge blur (right) at the depth discontinuity. (b) The sharp boundary (left) is produced when the sharply focused object is nearer than the blurred object. The boundary with occlusion edge blur (right) is produced when the sharply focused object is farther than the blurred object. The occlusion edge blur arises because image points near the boundary receive a mixture of light rays from both the sharp and the blurred objects. The cone of light rays that converge on a sample image point near the boundary is shown by the shading. (c) The amount of contribution from the sharp and the blurred textures is plotted as a function of position across the boundary.
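The complementary mixing plotted in (c) can be illustrated with a short numerical sketch: convolving the blurred object's step-edge silhouette with a blur kernel gives the blurred-texture weight at each image position, and its complement gives the sharp-texture weight, so the two contributions always sum to one. The Gaussian kernel and the function name `boundary_mixture` are illustrative assumptions, not the stimulus-generation method of the article.

```python
import numpy as np

def boundary_mixture(n=200, sigma=8.0):
    """Fraction of light reaching each image point from the blurred
    (out-of-focus) texture vs. the sharp texture, across the boundary.
    The blurred object's geometric edge is a step; defocus spreads it
    into a smooth ramp, so points near the boundary receive a mixture
    of rays from both surfaces (Fig. 2)."""
    x = np.arange(n) - n // 2
    step = (x < 0).astype(float)              # 1 on the blurred object's side
    # Gaussian kernel standing in for the lens point-spread function
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    w_blur = np.convolve(step, k, mode="same")  # blurred-texture weight
    w_sharp = 1.0 - w_blur                      # sharp-texture weight
    return w_blur, w_sharp
```

Away from the boundary the weights saturate at 1 and 0; at the boundary itself each surface contributes about half, which is the smooth crossover sketched in (c).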

Fig. 3

Blurred inner square.

Fig. 4

Experiment 1 results: plots of the percentage of trials in which each observer judged that the image containing a sharp boundary between the inner and the outer squares appeared closer than the image containing a blurred boundary. Two conditions are shown: images in which the inner square’s texture is in sharp focus and the outer square’s texture is blurred and images in which the inner square’s texture is blurred and the outer square’s texture is sharp. (Results from two subjects that were highly variable across trial blocks are not shown.)

Fig. 5

Blurred boundary between two texture regions.

Fig. 6

Experiment 2 results: plots of the percentage of trials in which each subject judged that the blurred texture region was in front of the sharp-texture region, for five subjects under two conditions: images in which the boundary between the two texture regions is blurred and images in which the boundary is sharp. One observer (stars) completed 120 trials; a second observer (triangles) completed 136 trials. The other three observers completed 144 trials, as described in Subsection 3.B.

Fig. 7

Detail of blur and vignetting at boundary. Left, contribution to the image from a near blurred object; right, contribution to the image from a far sharp object.

Equations (1)

I = G ∗ B + (Ḡ ∗ B) × S.
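The composite-image model behind this equation, a blurred near-object contribution plus a vignetted sharp far-texture contribution as in Figs. 2 and 7, can be sketched numerically. This is a hedged illustration only: the Gaussian point-spread function, the function name `composite`, and the reading of the first term as the near object's texture windowed by its silhouette and of the second term's weight as the blurred complement of that silhouette are assumptions, not the article's exact definitions.

```python
import numpy as np

def composite(near_tex, far_tex, mask, sigma=3.0):
    """Composite image at an occlusion boundary where the FAR surface is
    in focus: the near surface (texture and silhouette) is defocus-blurred,
    and the sharp far texture is weighted by the blurred complement of the
    silhouette -- the vignetting near the boundary. Symbol roles here are
    one interpretation of the article's model, not its exact notation."""
    def blur(img):
        # separable Gaussian blur standing in for the lens PSF
        n = int(6 * sigma) | 1                      # odd kernel length
        x = np.arange(n) - n // 2
        k = np.exp(-0.5 * (x / sigma) ** 2)
        k /= k.sum()
        out = np.apply_along_axis(
            lambda r: np.convolve(r, k, mode="same"), 1, img)
        return np.apply_along_axis(
            lambda c: np.convolve(c, k, mode="same"), 0, out)
    # blurred near contribution + vignetted sharp far contribution
    return blur(near_tex * mask) + blur(1.0 - mask) * far_tex
```

Deep inside the occluder the image is the blurred near texture; well outside it is the sharp far texture; in between, both contribute, producing the occlusion edge blur the experiments probe.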
