The problem of how visual information such as orientation is combined across space bears on key visual abilities, such as texture perception. Orientation signals can be derived from both luminance and contrast, but it is not well understood how such information is pooled or how these different orientation signals interact during integration. We measured orientation discrimination thresholds for arrays of equivisible first-order and second-order Gabors. Thresholds were measured as the orientation variability in the arrays increased, and from these we estimated the number of samples (or efficiency) and the internal noise of the mechanism being used. Observers were able to judge the mean orientation of arrays of either first- or second-order Gabors. For both first- and second-order arrays, the estimated number of samples used increased with the number of Gabors. When judging the orientation of arrays of either order, observers were able to ignore randomly oriented Gabors of the opposite order. If observers did not know which Gabor type carried the more useful orientation information, they tended to rely on the first-order Gabors, even when these carried the poorer information. Observers were unable to combine information from first- and second-order Gabors, even though doing so would have improved their performance. The visual system appears to have separate integrators for combining local orientation across space for luminance- and contrast-defined features.
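The sampling and internal-noise estimates described above typically come from an equivalent-noise analysis, in which discrimination threshold is modeled as a function of the external orientation variability in the array. A minimal sketch of the standard formulation follows; the function and parameter names (`sigma_int`, `n_samples`) are illustrative, and this is the generic equivalent-noise model rather than necessarily the authors' exact fitting procedure.

```python
import math

def predicted_threshold(sigma_ext, sigma_int, n_samples):
    """Equivalent-noise model of mean-orientation discrimination.

    sigma_ext  -- external noise: std. dev. of orientations in the array (deg)
    sigma_int  -- internal (local) noise on each orientation estimate (deg)
    n_samples  -- number of elements effectively pooled by the observer

    Squared threshold grows with external variability and shrinks as more
    samples are averaged: T^2 = (sigma_int^2 + sigma_ext^2) / n_samples.
    """
    return math.sqrt((sigma_int ** 2 + sigma_ext ** 2) / n_samples)

# With no external variability, threshold reflects internal noise alone:
baseline = predicted_threshold(0.0, sigma_int=4.0, n_samples=4)   # 2.0 deg
# With large external variability, threshold approaches sigma_ext / sqrt(n):
noisy = predicted_threshold(16.0, sigma_int=4.0, n_samples=4)
```

Fitting this two-parameter curve to thresholds measured at several levels of orientation variability yields the internal-noise and sample-number (efficiency) estimates reported for each Gabor type.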
© 2003 Optical Society of America