Abstract

In this paper, computational methods are proposed to compute color edge saliency based on the information content of color edges. The methods are evaluated on bottom-up saliency in a psychophysical experiment and on the more complex task of salient object detection in real-world images. The psychophysical experiment demonstrates the relevance of using information theory as a saliency processing model and shows that the proposed methods predict color saliency significantly better than state-of-the-art models, with a human-method correspondence of up to 74.75% and an observer agreement of 86.8%. Furthermore, the results on salient object detection confirm that an early fusion of color and contrast provides accurate visual saliency, with a hit rate of up to 95.2%.

© 2010 Optical Society of America

References

  1. E. Titchener, Lectures on the Elementary Psychology of Feeling and Attention (Adamant Media Corporation, 2005).
  2. A. Koene and L. Zhaoping, “Feature-specific interactions in salience from combined feature contrasts: evidence for a bottom-up saliency map in V1,” J. Vision 7(7), 6 (2007).
    [CrossRef]
  3. A. Torralba, A. Oliva, M. Castelhano, and J. Henderson, “Contextual guidance of eye movements and attention in real-world scenes: the role of global features in object search,” Psychol. Rev. 113, 766-786 (2006).
    [CrossRef] [PubMed]
  4. T. Kadir, A. Zisserman, and M. Brady, “An affine invariant salient region detector,” in European Conference on Computer Vision, 2004, pp. 228-241.
  5. G. Fritz, C. Seifert, L. Paletta, and H. Bischof, “Attentive object detection using an information theoretic saliency measure,” in Attention and Performance in Computational Vision: Second International Workshop, WAPCV 2004, Revised Selected Papers, 2005, pp. 29-41.
    [CrossRef]
  6. M. Mancas, B. Unay, B. Gosselin, and D. Macq, “Computational attention for defect localisation,” in Proceedings of ICVS Workshop on Computational Attention & Applications, 2007.
  7. N. Bruce and J. Tsotsos, “Saliency based on information maximization,” Adv. Neural Inf. Process. Syst. 18, 155-162 (2006).
  8. D. Gao and N. Vasconcelos, “Discriminant saliency for visual recognition from cluttered scenes,” Adv. Neural Inf. Process. Syst. 17, 481-488 (2005).
  9. D. Gao, V. Mahadevan, and N. Vasconcelos, “On the plausibility of the discriminant center-surround hypothesis for visual saliency,” J. Vision 8(7), 13 (2008).
    [CrossRef]
  10. L. Zhang, M. Tong, T. Marks, H. Shan, and G. Cottrell, “SUN: a Bayesian framework for saliency using natural statistics,” J. Vision 8(7), 32 (2008).
    [CrossRef]
  11. D. Gao and J. Zhou, “Adaptive background estimation for real-time traffic monitoring,” in Proceedings of the IEEE Intelligent Transportation Systems Conference, 2001, pp. 330-333.
  12. T. Jost, N. Ouerhani, R. Wartburg, R. Muri, and H. Hugli, “Assessing the contribution of color in visual attention,” Comput. Vis. Image Underst. 100, 107-123 (2005).
    [CrossRef]
  13. F. Wichmann, L. Sharpe, and K. Gegenfurtner, “The contributions of color to recognition memory for natural scenes,” J. Exp. Psychol. Learn. Mem. Cogn. 28, 509-520 (2002).
    [CrossRef]
  14. J. Wolfe and T. Horowitz, “What attributes guide the deployment of visual attention and how do they do it?” Nat. Rev. Neurosci. 5, 495-501 (2004).
    [CrossRef] [PubMed]
  15. H. Greenspan, S. Belongie, R. Goodman, P. Perona, S. Rakshit, and C. Anderson, “Overcomplete steerable pyramid filters and rotation invariance,” in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1994, pp. 222-228.
  16. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254-1259 (1998).
    [CrossRef]
  17. J. Wolfe, “Guided Search 4.0: current progress with a model of visual search,” in Integrated Models of Cognitive Systems, 2007, pp. 99-119.
  18. Z. Li, “A saliency map in primary visual cortex,” Trends Cogn. Sci. 6, 9-16 (2002).
    [CrossRef] [PubMed]
  19. J. Krummenacher, H. J. Muller, and D. Heller, “Visual search for dimensionally redundant pop-out targets: evidence for parallel-coactive processing of dimensions,” Percept. Psychophys. 63, 901-917 (2001).
    [CrossRef] [PubMed]
  20. J. van de Weijer, T. Gevers, and A. Bagdanov, “Boosting color saliency in image feature detection,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 150-156 (2006).
    [CrossRef] [PubMed]
  21. T. Liu, J. Sun, N. Zheng, X. Tang, and H. Shum, “Learning to detect a salient object,” in Proceedings of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2007, pp. 1-8.
  22. D. Parkhurst, K. Law, and E. Niebur, “Modeling the role of salience in the allocation of overt visual attention,” Vision Res. 42, 107-123 (2002).
    [CrossRef] [PubMed]
  23. M. J. Wright, “Saliency predicts change detection in pictures of natural scenes,” Spatial Vis. 18, 413-430 (2005).
    [CrossRef]
  24. H. C. Nothdurft, “Salience from feature contrast: additivity across dimensions,” Vision Res. 40, 1183-1201 (2000).
    [CrossRef] [PubMed]
  25. G. Wyszecki and W. Stiles, Color Science: Concepts and Methods, Quantitative Data and Formulae, 2nd ed. (Wiley, 2000).
  26. M. Lucassen, P. Bijl, and J. Roelofsen, “The perception of static colored noise: detection and masking described by CIE94,” Color Res. Appl. 33, 178-191 (2008).
    [CrossRef]
  27. M. Stokes, M. Anderson, S. Chandrasekar, and R. Motta, “A standard default color space for the Internet sRGB,” Microsoft and Hewlett-Packard Joint Report, Version 1, 1996.


Figures (7)

Fig. 1

Histogram of the distribution of RGB derivatives computed over the 40,000 images of the COREL image data set. The iso-salient derivatives form an ellipsoid-like distribution whose longest axis lies along the luminance direction.

Fig. 2

(a) Original image, (b) global color saliency, (c) local color saliency, (d) multiscale local color saliency. Global saliency amplifies the red edges of the flag. Based on the local statistics of this image, local saliency increases the saliency of the pie. The multiscale approach further suppresses the colorful edges of the American flag, so the pie is detected more clearly. This corresponds to the part of the scene selected as most salient by humans [21].
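The sketch below (Python with NumPy/SciPy) illustrates one way such global and multiscale maps could be assembled; it is not the authors' implementation, and the derivative filter, the choice of scales, and the approximation of local statistics by per-scale image statistics are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def rgb_derivatives(img):
    """Horizontal derivatives of the R, G, B channels; img is H x W x 3, float."""
    return np.stack([np.gradient(img[..., c], axis=1) for c in range(3)], axis=-1)

def global_color_saliency(fx):
    """Whiten the derivative distribution so that the norm |g(f_x)| of a
    transformed derivative reflects its rarity (information content)."""
    d = fx.reshape(-1, 3)
    N = d.T @ d / len(d)                        # second-moment matrix of derivatives
    lam, U = np.linalg.eigh(N)                  # N = U Lambda U^T
    g = d @ U @ np.diag((lam + 1e-12) ** -0.5)  # g = Lambda^{-1/2} U^T f_x, row-wise
    return np.linalg.norm(g, axis=1).reshape(fx.shape[:2])

def multiscale_color_saliency(img, scales=(1, 2, 4)):
    """Simplified multiscale variant: re-estimate the statistics at several
    Gaussian scales and accumulate the whitened edge strength."""
    s = np.zeros(img.shape[:2])
    for sigma in scales:
        smoothed = gaussian_filter(img, sigma=(sigma, sigma, 0))
        s += global_color_saliency(rgb_derivatives(smoothed))
    return s
```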

Fig. 3

(a) Example of a synthetic image with a specified distribution of color transitions in CIELab color space, which forms the surround. (b) Two different transformations of the color distribution shown in (a) form two different centers. (c) Layout of the psychophysical experiment, showing two center-surround color patterns side by side. The surrounds are identical; the centers are different. Subjects indicate which of the two centers stands out most from its surround.

Fig. 4

Relative saliency (in descending order) of the 13 centers for the surrounds S_L, S_a, S_b, and S_eq, averaged over observers. Error bars represent the standard error of the mean. The images on the right-hand side show the most salient (top) and least salient (bottom) centers in a small portion of the surround.

Fig. 5

Correspondence between the computational saliency models, as computed with Eq. (10). The models are sorted in descending order of correspondence. Error bars indicate the standard error of the mean (eight subjects).

Fig. 6

Labeled images from image set B, obtained from [21], consisting of 5000 images labeled by nine users.

Fig. 7

Color saliency example. First row: original image. Second row: RGB edges. Third row: computational global saliency M_o^c (see Table 2). Fourth row: learned global saliency M_o^h (see Table 2). The overlap between the learned and computational maps over all images reaches 97.43%, whereas the overlap with the RGB edges is 83.11%.

Tables (3)

Table 1. Surrounds with Systematic Changes in the Standard Deviations (σ) Along the L*, a*, and b* Axes of Perceptual Color Space

Table 2. Results Obtained for the Learned Global Saliency Measure M_o^h, the Computational Global Saliency M_o^c, and the Computational Local Saliency M_l^c

Table 3. Hit and Miss Values Obtained in the Test Set for All Proposed Saliency Transformations, as Well as for RGB Edges, the Itti Saliency Measure [16], and a Random Selection of the Most Salient Location
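For the hit and miss values in Table 3, a plausible reading is that a prediction counts as a hit when the most salient location falls inside the labeled salient-object region. The short Python sketch below implements that criterion under this assumption; the exact rule used in the paper may differ.

```python
import numpy as np

def hit_rate(saliency_maps, boxes):
    """Percentage of images where the maximum of the saliency map lies inside
    the labeled bounding box (x0, y0, x1, y1) of the salient object."""
    hits = 0
    for s, (x0, y0, x1, y1) in zip(saliency_maps, boxes):
        y, x = np.unravel_index(np.argmax(s), s.shape)  # most salient location
        hits += int(x0 <= x <= x1 and y0 <= y <= y1)
    return 100.0 * hits / len(saliency_maps)
```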

Equations (12)

Equations on this page are rendered with MathJax.

I = -\log[p(f_x)],

p(f_{x_1}) = p(f_{x_2}) \Leftrightarrow |g(f_{x_1})| = |g(f_{x_2})|,

p(f_{x_1}) < p(f_{x_2}) \Leftrightarrow |g(f_{x_1})| > |g(f_{x_2})|,

N = \overline{f_x (f_x)^T} = \begin{pmatrix} \overline{R_x R_x} & \overline{R_x G_x} & \overline{R_x B_x} \\ \overline{R_x G_x} & \overline{G_x G_x} & \overline{G_x B_x} \\ \overline{R_x B_x} & \overline{G_x B_x} & \overline{B_x B_x} \end{pmatrix},

\overline{R_x R_x} = \sum_{i \in S} \sum_{x \in X^i} R_x^i(x) R_x^i(x),

g(f_x) = \Lambda^{-1/2} U^T f_x,

\overline{g(f_x) [g(f_x)]^T} = \Lambda^{-1/2} U^T U \Lambda^{1/2} \Lambda^{1/2} U^T U \Lambda^{-1/2} = I,

U^T = \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} & 0 \\ 1/\sqrt{6} & 1/\sqrt{6} & -2/\sqrt{6} \\ 1/\sqrt{3} & 1/\sqrt{3} & 1/\sqrt{3} \end{pmatrix},

s(x) = \sum_{\sigma} \sum_{x' \in N(x)} M^{\sigma} \left[ f^{\sigma}(x) - f^{\sigma}(x') \right],

\begin{pmatrix} \sigma_{L'} \\ \sigma_{a'} \\ \sigma_{b'} \end{pmatrix} = \begin{pmatrix} \alpha & 0 & 0 \\ 0 & \beta & 0 \\ 0 & 0 & \gamma \end{pmatrix} \begin{pmatrix} \sigma_L \\ \sigma_a \\ \sigma_b \end{pmatrix},

\mathrm{Cor}(s, m) = 100 \, \frac{\sum_{i=1}^{468} a_i}{468},

P_M^i = \frac{A(b^i) \, f_M^i}{A(f^i) \, b_M^i}.
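As a small numerical check of the whitening property \overline{g(f_x)[g(f_x)]^T} = I (a sketch, not the authors' code), the Python snippet below draws synthetic RGB derivatives elongated along the luminance direction, as in Fig. 1, and applies g(f_x) = \Lambda^{-1/2} U^T f_x; the synthetic data and sample size are assumptions standing in for the COREL statistics.

```python
import numpy as np

# Synthetic RGB derivatives (an assumption, standing in for the COREL
# statistics): strongly correlated channels, i.e. an ellipsoid-like cloud
# elongated along the luminance direction, as in Fig. 1.
rng = np.random.default_rng(0)
lum = rng.normal(scale=3.0, size=(100_000, 1))
fx = lum + rng.normal(scale=0.5, size=(100_000, 3))

# Second-moment matrix N = mean of f_x f_x^T, eigendecomposed as N = U Lambda U^T.
N = fx.T @ fx / len(fx)
lam, U = np.linalg.eigh(N)

# Whitening transform g(f_x) = Lambda^{-1/2} U^T f_x, applied row-wise.
g = fx @ U @ np.diag(lam ** -0.5)

# The second-moment matrix of the transformed derivatives is ~identity, so
# equal-probability derivatives now map to vectors of equal norm.
print(np.round(g.T @ g / len(g), 2))
```

In this whitened space the vector norm |g(f_x)| can be read directly as a saliency value, which is the basis of the global color saliency maps shown in Fig. 2.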
