Abstract
An influential assumption for the front end of models in vision, visual search, and object recognition is an analysis of independent features corresponding to basic image properties, such as motion, shape, and color. Empirically, one common test of independent features (a cue-summation study) measures performance as the number of available cues or features increases; improving performance is taken as evidence of summation across independent features. In a study by Shimozaki et al. [J. Vision 2, 354–370 (2002)], both ideal and human observers showed no summation with large stimulus differences, contradicting independent-feature models and suggesting that stimulus information (as assessed by an ideal observer) might affect cue-summation studies. Extending that summation study, observers performed a visual search among four Gabors differing only in orientation, only in spatial frequency, or in both orientation and spatial frequency, across a range of target–distractor differences. An ideal observer underpredicted human summation for small differences, whereas models with independent orientation and spatial-frequency features overpredicted human summation for large differences. An ideal observer with channels jointly tuned to spatial frequency and orientation predicted human performance across both small and large target–distractor differences.
© 2003 Optical Society of America