Abstract

Kersten [Vision Res. 27, 1029 (1987)] reported that absolute efficiency for the detection of static, one-dimensional bandpass noise was high and approximately constant for stimulus bandwidths ranging from 1 to 6 octaves. This result implies that human observers integrated information efficiently across a wide range of spatial frequencies. One interpretation of this result, and of similar results obtained with auditory stimuli [J. Acoust. Soc. Am. 32, 121 (1960)], is that human observers, like ideal observers, can detect stimuli using an internal filter that has an adjustable bandwidth. The current experiments replicate Kersten’s findings, extend them to the case where observers are uncertain about stimulus bandwidth, and use the classification image technique to estimate the filter used to detect noise stimuli that differ in bandwidth. Our results suggest that observers do not adjust channel bandwidth to match the stimulus and that detection thresholds are consistent with the predictions of a multiple-channel model.
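As a rough illustration of the classification image technique named above, the sketch below implements the generic weighted-sums estimator in the style of Ahumada and Lovell [13] for a yes/no detection-in-noise task. The function name, the array layout, and the yes/no framing are assumptions made for illustration; the estimator actually used in the paper (e.g., for its specific task design, and any subsequent smoothing) may differ in detail.

```python
import numpy as np

def classification_image(noise_fields, signal_present, responded_present):
    """Generic weighted-sums classification-image estimate for a yes/no
    detection-in-noise task (after Ahumada and Lovell).

    noise_fields      : (n_trials, n_samples) array, external noise shown on each trial
    signal_present    : (n_trials,) bool, True when the target was shown
    responded_present : (n_trials,) bool, True when the observer said "present"

    Assumes every stimulus-response cell contains at least one trial.
    """
    ci = np.zeros(noise_fields.shape[1])
    for present in (False, True):
        in_class = signal_present == present
        yes = noise_fields[in_class & responded_present].mean(axis=0)
        no = noise_fields[in_class & ~responded_present].mean(axis=0)
        # Noise that pushed the observer toward "present" minus noise that
        # pushed toward "absent", computed separately within each stimulus class.
        ci += yes - no
    return ci
```

With simulated data from a linear-template observer, this estimate recovers (up to sampling noise) the template the observer correlates with the stimulus, which is how the resulting classification images are interpreted as estimates of the detection filter.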

© 2009 Optical Society of America


References


  1. R. DeValois and K. DeValois, Spatial Vision (Oxford Univ. Press, 1988).
  2. H. Wilson and J. Bergen, “A four mechanism model for threshold spatial vision,” Vision Res. 19, 19-32 (1979).
  3. H. Wilson and D. Gelb, “Modified line-element theory for spatial-frequency and width discrimination,” J. Opt. Soc. Am. A 1, 124-131 (1984).
  4. B. Wandell, Foundations of Vision (Wiley, 1995).
  5. H. Wilson and F. Wilkinson, “Evolving concepts of spatial channels in vision: from independence to nonlinear interactions,” Perception 26, 939-960 (1997).
  6. N. V. S. Graham, Visual Pattern Analyzers (Oxford Univ. Press, 1989).
  7. N. Graham and J. Nachmias, “Detection of grating patterns containing two spatial frequencies: a comparison of single-channel and multiple-channels models,” Vision Res. 11, 251-259 (1971).
  8. N. Graham, J. G. Robson, and J. Nachmias, “Grating summation in fovea and periphery,” Vision Res. 18, 815-825 (1978).
  9. D. Kersten, “Statistical efficiency for the detection of visual noise,” Vision Res. 27, 1029-1040 (1987).
  10. D. Green, “Auditory detection of a noise signal,” in Signal Detection and Recognition by Human Observers, J. Swets, ed. (Wiley, 1960), pp. 523-547.
  11. D. Green, “Auditory detection of a noise signal,” J. Acoust. Soc. Am. 32, 121-131 (1960).
  12. M. Eckstein, “The footprints of visual attention in the Posner cueing paradigm revealed by classification images,” J. Vision 2, 25-45 (2002).
  13. A. Ahumada, Jr., and J. Lovell, “Stimulus features in signal detection,” J. Acoust. Soc. Am. 49, 1751-1756 (1971).
  14. D. Brainard, “The Psychophysics Toolbox,” Spatial Vis. 10, 443-446 (1997).
  15. D. Pelli, “The VideoToolbox software for visual psychophysics: transforming numbers into movies,” Spatial Vis. 10, 437-442 (1997).
  16. C. W. Tyler, H. Chan, L. Liu, B. McBride, and L. L. Kontsevich, “Bit stealing: how to get 1786 or more gray levels from an 8-bit color monitor,” Proc. SPIE 1666, 351-364 (1992).
  17. G. H. Wetherill and H. Levitt, “Sequential estimation of points on a psychometric function,” Br. J. Math. Stat. Psychol. 18, 1-10 (1965).
  18. B. Efron and R. Tibshirani, An Introduction to the Bootstrap (Chapman & Hall, 1994).
  19. E. Davis and N. Graham, “Spatial frequency uncertainty effects in the detection of sinusoidal gratings,” Vision Res. 21, 705-712 (1981).
  20. R. Hubner, “The efficiency of different cue types for reducing spatial-frequency uncertainty,” Vision Res. 36, 401-408 (1996).
  21. V. A. Lamme and P. R. Roelfsema, “The distinct modes of vision offered by feedforward and recurrent processing,” Trends Neurosci. 23, 571-579 (2000).
  22. M. Banks, W. Geisler, and P. Bennett, “The physical limits of grating visibility,” Vision Res. 27, 1915-1924 (1987).
  23. B. Beard and A. Ahumada, Jr., “Detection in fixed and random noise in foveal and parafoveal vision explained by template learning,” J. Opt. Soc. Am. A 16, 755-763 (1999).
  24. J. M. Gold, R. F. Murray, P. J. Bennett, and A. B. Sekuler, “Deriving behavioural receptive fields for visually completed contours,” Curr. Biol. 10, 663-666 (2000).
  25. A. B. Sekuler, C. M. Gaspar, J. M. Gold, and P. J. Bennett, “Inversion leads to quantitative, not qualitative, changes in face processing,” Curr. Biol. 14, 391-396 (2004).
  26. C. K. Abbey and M. P. Eckstein, “Classification image analysis: estimation and statistical inference for two-alternative forced-choice experiments,” J. Vision 2, 66-78 (2002).
  27. C. Abbey, M. Eckstein, and F. Bochud, “Estimation of human-observer templates in two-alternative forced-choice experiments,” Proc. SPIE 3663, 284-295 (1999).
  28. E. Barth, B. L. Beard, and A. J. Ahumada, “Nonlinear features in vernier acuity,” Proc. SPIE 3644, 88-96 (1999).




Figures (15)

Fig. 1

The left pattern is an example of a narrow-bandwidth (0.5 octave) stimulus, and the right pattern is an example of a wide-bandwidth (4 octave) stimulus.

Fig. 2

Noise detection threshold versus stimulus bandwidth for observer AMC in experiment 1. The different symbols represent thresholds obtained with different levels of masking noise. The variance of the masking noise is indicated in the legend. The dashed curves have a slope of 0.25 and have been shifted vertically to fit the data.

Fig. 3

Plot of absolute efficiency [Eq. (1)] versus stimulus bandwidth for observer AMC from experiment 1. The different symbols represent efficiency obtained with different levels of masking noise. The variance of the masking noise is indicated in the legend.

Fig. 4

Plot of absolute efficiency versus bandwidth for observer JMT from experiment 1. The different symbols represent efficiency obtained with different levels of masking noise. The variance of the masking noise is indicated in the legend.

Fig. 5

Plot of absolute efficiency versus bandwidth for observer AG from experiment 1. The different symbols represent efficiency obtained with different levels of masking noise. The variance of the masking noise is indicated in the legend.

Fig. 6

Detection thresholds for observer JMT in experiment 2 in the randomized-bandwidth (unfilled symbols) and blocked-bandwidth (filled symbols) conditions. Error bars (±one standard error) are plotted but are smaller than the symbols. The dashed curves have a slope of 0.25 and have been shifted vertically to fit the thresholds obtained in the randomized condition.

Fig. 7

Detection thresholds for observer SST in experiment 2. Plotting conventions are the same as in Fig. 6.

Fig. 8

Threshold versus bandwidth functions for the three observers in experiment 3. The dashed curves are provided as a reference and illustrate the expected slope (0.25) if the summation of spatial frequency information were optimal.

Fig. 9

Absolute efficiency versus pattern bandwidth for three observers detecting 15 cy/deg center-frequency patterns.

Fig. 10

Classification images for one observer (AL) in the 5 cy/deg center-frequency conditions. Each panel shows the classification image obtained with a different stimulus bandwidth.

Fig. 11

Classification images for one observer (AL) in the 15 cy/deg center-frequency condition, shown in each of the four noise bandwidth conditions. The order of the panels is the same as in Fig. 10, from 1 octave to 4 octaves.

Fig. 12

Classification images for one observer (AL) generated from the signal-absent noise fields.

Fig. 13

Filled circles represent the TvB function for one observer. The solid curve is the TvB function produced by the Wilson–Gelb model, which has no free parameters and is not shifted to fit the data. The dashed curves represent a slope of 0.25.

Fig. 14

Filled circles represent the TvB function for one observer. The solid curve is the TvB function produced by the model, which has no free parameters and is not shifted to fit the data. The dashed curves represent a slope of 0.25.

Fig. 15

Classification images for the Wilson–Gelb model applied to the noise detection task.

Tables (4)

Table 1 Threshold versus Bandwidth Slopes for Three Observers in Experiment 1

Table 2 Slopes of Threshold versus Bandwidth Functions from Experiment 2

Table 3 Center Frequencies and 95% Confidence Intervals for Three Observers

Table 4 Octave Bandwidths and 95% Confidence Intervals for Three Observers

Equations (2)

\eta = \left( \frac{c_{\text{ideal}}}{c_{\text{observer}}} \right)^{2}, \tag{1}

\text{Power}(x \mid \mu, \sigma) = \frac{1}{\log(\sigma)\sqrt{2\pi}} \exp\!\left[ -\frac{\left(\log(x) - \mu\right)^{2}}{2\,\log(\sigma)^{2}} \right]. \tag{2}
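The two equations can be turned into a short numerical sketch: Eq. (1) gives absolute efficiency as the squared ratio of ideal and human contrast thresholds, and Eq. (2) gives the target power spectrum of the 1-D noise stimuli as a Gaussian on log spatial frequency. In the code below, the octave-to-width conversion, the sampling parameters, and the unit-variance normalization (which absorbs the constant factor in Eq. (2)) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def absolute_efficiency(c_ideal, c_observer):
    """Eq. (1): squared ratio of the ideal and human contrast thresholds."""
    return (c_ideal / c_observer) ** 2

def bandpass_noise_1d(n=512, center_cpd=5.0, octaves=2.0, pixels_per_deg=60.0, rng=None):
    """Draw a 1-D noise profile whose expected power spectrum follows Eq. (2).

    The normalizing constant of Eq. (2) is dropped because the sample is
    rescaled to unit variance at the end.
    """
    rng = np.random.default_rng() if rng is None else rng
    freqs = np.fft.rfftfreq(n, d=1.0 / pixels_per_deg)   # cycles/deg
    mu = np.log(center_cpd)                              # mean of log frequency
    # Assumed conversion: a full bandwidth of B octaves at half height corresponds
    # to a log-frequency SD of B*ln(2) / (2*sqrt(2*ln 2)); this value plays the
    # role of log(sigma) in Eq. (2).
    sd_logf = octaves * np.log(2.0) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    power = np.zeros_like(freqs)
    nonzero = freqs > 0
    power[nonzero] = np.exp(-((np.log(freqs[nonzero]) - mu) ** 2) / (2.0 * sd_logf ** 2))
    # Shape white Gaussian noise by the square root of the target power spectrum.
    shaped = np.fft.rfft(rng.standard_normal(n)) * np.sqrt(power)
    sample = np.fft.irfft(shaped, n)
    return sample / sample.std()

# Example: efficiency is 0.16 (16%) if the ideal observer's contrast threshold
# is 0.02 and the human observer's is 0.05.
# print(absolute_efficiency(0.02, 0.05))
```

The generator follows the usual recipe of filtering white noise by the square root of the desired power spectrum in the frequency domain, which keeps the phases random while imposing the Eq. (2) amplitude profile.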
