Abstract

In this paper, we propose a salient object detection algorithm that considers both background and foreground cues. It integrates coarse salient region extraction and a top-down background weight map, obtained via boundary label propagation, into a unified optimization framework that produces a refined saliency map. The coarse saliency map is formed by fusing three prior components: a local contrast map that aligns more closely with physiological findings, a global focus prior map, and a global color prior map. To build the background weight map, we first construct an affinity matrix and select the nodes lying on the image border as background labels, and then propagate these labels to obtain a regional background weight map. The proposed model is evaluated on four benchmark datasets, and the experimental results demonstrate its excellent performance.
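To make the boundary label propagation step concrete, the sketch below builds a color-based affinity matrix over superpixels, row-normalizes it, and repeatedly propagates labels seeded at the image border to obtain a per-region background weight. It is only a minimal illustration, not the authors' implementation: the superpixel colors, adjacency lists, and border indices (`colors`, `neighbors`, `boundary`) are assumed to come from a separate over-segmentation step such as SLIC [22], and the parameter values are placeholders.

```python
import numpy as np

def background_weight_map(colors, neighbors, boundary, sigma=0.1, iters=50):
    """Minimal sketch of boundary label propagation (not the authors' code).

    colors    : (N, 3) array of mean superpixel colors (e.g., CIELab)
    neighbors : list of sets; neighbors[i] holds the superpixels adjacent to i
    boundary  : iterable of indices of superpixels touching the image border
    """
    N = len(colors)
    boundary = set(boundary)
    W = np.zeros((N, N))
    for i in range(N):
        # connect i to its spatial neighbors; border nodes are additionally
        # connected among themselves, as in the graph model of Fig. 7
        linked = set(neighbors[i]) | (boundary if i in boundary else set())
        for j in linked:
            if i != j:
                W[i, j] = np.exp(-np.linalg.norm(colors[i] - colors[j]) / sigma ** 2)
    W = np.maximum(W, W.T)                                    # keep the graph symmetric
    A = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # A = D^{-1} W

    f = np.zeros(N)
    f[list(boundary)] = 1.0        # border superpixels act as background labels
    for _ in range(iters):
        f = A @ f                  # f^{t+1}(s_i) = sum_j a_ij f^t(s_j)
        f[list(boundary)] = 1.0    # clamp the labeled nodes at each iteration
    return f                       # larger values = more background-like regions
```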

© 2017 Optical Society of America

References


  1. A. Borji, M. M. Cheng, H. Jiang, and J. Li, “Salient object detection: a benchmark,” IEEE Trans. Image Process. 24, 5706–5722 (2015).
    [Crossref]
  2. J. Yang and M. H. Yang, “Top-down visual saliency via joint CRF and dictionary learning,” in Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 2296–2303.
  3. A. Borji, “Boosting bottom-up and top-down visual features for saliency estimation,” in Computer Vision and Pattern Recognition (CVPR) (IEEE, 2012), pp. 438–445.
  4. N. Bruce and J. Tsotsos, “Saliency based on information maximization,” in Advances in Neural Information Processing Systems (NIPS) (2005), Vol. 18, pp. 155–162.
  5. T. Liu, Z. Yuan, J. Sun, J. Wang, N. Zheng, X. Tang, and H.-Y. Shum, “Learning to detect a salient object,” IEEE Trans. Pattern Anal. Mach. Intell. 33, 353–367 (2011).
    [Crossref]
  6. D. Gao, V. Mahadevan, and N. Vasconcelos, “The discriminant center-surround hypothesis for bottom-up saliency,” in Proceedings of the Advances in Neural Information Processing Systems (NIPS) (2007), pp. 497–504.
  7. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998).
    [Crossref]
  8. J. Harel, C. Koch, and P. Perona, “Graph-based visual saliency,” in Advances in Neural Information Processing Systems (NIPS) (2006), pp. 545–552.
  9. R. Achanta, F. Estrada, P. Wils, and S. Süsstrunk, “Salient region detection and segmentation,” in Computer Vision Systems (Springer, 2008), pp. 66–75.
  10. S. Goferman, L. Zelnik-Manor, and A. Tal, “Context-aware saliency detection,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 1915–1926 (2012).
    [Crossref]
  11. L. Wang, H. Lu, X. Ruan, and M. H. Yang, “Deep networks for saliency detection via local estimation and global search,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2015), pp. 3183–3192.
  12. R. Achanta, S. Hemami, F. Estrada, and S. Susstrunk, “Frequency-tuned salient region detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2009), pp. 1597–1604.
  13. M.-M. Cheng, N. J. Mitra, X. Huang, P. H. S. Torr, and S. M. Hu, “Global contrast based salient region detection,” IEEE Trans. Pattern Anal. Mach. Intell. 37, 569–582 (2015).
    [Crossref]
  14. W. Zhu, S. Liang, Y. Wei, and J. Sun, “Saliency optimization from robust background detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2014), pp. 2814–2821.
  15. X. Li, H. Lu, L. Zhang, X. Ruan, and M. H. Yang, “Saliency detection via dense and sparse reconstruction,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV) (IEEE, 2013), pp. 2976–2983.
  16. B. Rasolzadeh, A. T. Targhi, and J. O. Eklundh, “An attentional system combining top-down and bottom-up influences,” in International Workshop on Attention in Cognitive Systems (Springer-Verlag, 2007), pp. 123–140.
  17. H. Tian, Y. Fang, Y. Zhao, W. Lin, R. Ni, and Z. Zhu, “Salient region detection by fusing bottom-up and top-down features extracted from a single image,” IEEE Trans. Image Process. 23, 4389–4398 (2014).
    [Crossref]
  18. L. Xu, Q. Yan, Y. Xia, and J. Jia, “Structure extraction from texture via relative total variation,” ACM Trans. Graph. 31, 139 (2012).
  19. F. W. Campbell and J. G. Robson, “Application of Fourier analysis to the visibility of gratings,” J. Physiol. 197, 551–566 (1968).
    [Crossref]
  20. J. Chen, S. Shan, C. He, G. Zhao, M. Pietikainen, X. Chen, and W. Gao, “WLD: a robust local image descriptor,” IEEE Trans. Pattern Anal. Mach. Intell. 32, 1705–1720 (2010).
    [Crossref]
  21. A. K. Jain, Fundamentals of Digital Image Processing (Prentice-Hall, 1989).
  22. R. Achanta, A. Shaji, K. Smith, A. Lucchi, P. Fua, and S. Süsstrunk, “SLIC superpixels compared to state-of-the-art superpixel methods,” IEEE Trans. Pattern Anal. Mach. Intell. 34, 2274–2282 (2012).
    [Crossref]
  23. L. Zhang, Z. Gu, and H. Li, “SDSP: a novel saliency detection method by combining simple priors,” in IEEE International Conference on Image Processing (IEEE, 2013), pp. 171–175.
  24. P. Jiang, H. Ling, J. Yu, and J. Peng, “Salient region detection by UFO: uniqueness, focusness and objectness,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV) (IEEE, 2013), pp. 1976–1983.
  25. J. MacQueen, “Some methods for classification and analysis of multivariate observations,” in Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability (1967), pp. 281–297.
  26. Y. Wei, F. Wen, W. Zhu, and J. Sun, “Geodesic saliency using background priors,” in European Conference on Computer Vision (2012), pp. 29–42.
  27. D. R. Martin, C. C. Fowlkes, and J. Malik, “Learning to detect natural image boundaries using local brightness, color, and texture cues,” IEEE Trans. Pattern Anal. Mach. Intell. 26, 530–549 (2004).
    [Crossref]
  28. N. Otsu, “A threshold selection method from gray-level histograms,” Automatica 11, 285–296 (1975).
    [Crossref]
  29. H. Li, H. Lu, Z. Lin, X. Shen, and B. Price, “Inner and inter label propagation: salient object detection in the wild,” IEEE Trans. Image Process. 24, 3176–3186 (2015).
    [Crossref]
  30. X. Li, Y. Li, C. Shen, A. Dick, and A. Van Den Hengel, “Contextual hypergraph modeling for salient object detection,” in Proceedings of the IEEE International Conference on Computer Vision (ICCV) (IEEE, 2013), pp. 3328–3335.
  31. H. Jiang, J. Wang, Z. Yuan, Y. Wu, N. Zheng, and S. Li, “Salient object detection: a discriminative regional feature integration approach,” in IEEE Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013), pp. 2083–2090.
  32. C. Yang, L. Zhang, H. Lu, X. Ruan, and M.-H. Yang, “Saliency detection via graph-based manifold ranking,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013), pp. 3166–3173.
  33. Q. Yan, L. Xu, J. Shi, and J. Jia, “Hierarchical saliency detection,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (IEEE, 2013), pp. 1155–1162.
  34. J. Lou, M. Ren, and H. Wang, “Regional principal color based saliency detection,” PLoS ONE 9, e112475 (2014).
    [Crossref]
  35. Y. Xie, H. Lu, and M.-H. Yang, “Bayesian saliency via low and mid level cues,” IEEE Trans. Image Process. 22, 1689–1698 (2013).
    [Crossref]
  36. A. Borji, “What is a salient object? A dataset and a baseline model for salient object detection,” IEEE Trans. Image Process. 24, 742–756 (2015).
    [Crossref]
  37. A. Borji, D. N. Sihite, and L. Itti, “Salient object detection: a benchmark,” in European Conference on Computer Vision (ECCV) (2012), pp. 414–429.


Figures (15)

Fig. 1. Comparison of different approaches to saliency: (a) input, (b) GT, (c) CA [10], (d) IG [12], (e) method in [5], and (f) proposed.

Fig. 2. Framework of our proposed saliency detection algorithm.

Fig. 3. Examples of the proposed method: (a) input, (b) coarse saliency extraction, (c) background weight map, and (d) final result.

Fig. 4. Texture suppression.

Fig. 5. Filters in WLD.

Fig. 6. Context-based smoothing.

Fig. 7. Graph model with superpixels as nodes. (The purple and green lines represent connections to first- and second-order neighbors, respectively; the red line indicates that all boundary nodes are connected to one another.)

Fig. 8. Boundary node selection.

Fig. 9. Quantitative evaluations by (a) PR curves, (b) adaptive threshold, and (c) MAE on the MSRA1000 dataset.

Fig. 10. Quantitative evaluations by (a) PR curves, (b) adaptive threshold, and (c) MAE on the ECSSD dataset.

Fig. 11. Quantitative evaluations by (a) PR curves, (b) adaptive threshold, and (c) MAE on the Judd dataset.

Fig. 12. Quantitative evaluations by (a) PR curves, (b) adaptive threshold, and (c) MAE on the SED2 dataset.

Fig. 13. Qualitative comparisons on MSRA1000 (columns 1–15 show the input, GT, AC, CA, CHM, DRFI, DSR, GB, GMR, HC, HS, IG, RPC, Xie, and our results, respectively). All input images are available at http://mmcheng.net/zh/.

Fig. 14. Qualitative comparisons on ECSSD, Judd, and SED2 (columns 1–15 show the input, GT, AC, CA, CHM, DRFI, DSR, GB, GMR, HC, HS, IG, RPC, Xie, and our results, respectively). All images are available at http://mmcheng.net/zh/.

Fig. 15. Validation of individual module combinations.

Tables (2)

Algorithm 1. Label Propagation via Boundary Nodes

Table 1. Execution Time Comparison

Equations (24)

Equations on this page are rendered with MathJax.

\[ df_{00} = \sum_{i=0}^{n-1} (\Delta p_i) = \sum_{i=0}^{n-1} (p_i - p_c). \]
\[ \xi(p_c) = \arctan\!\left[\frac{df_{00}}{df_{01}}\right] = \arctan\!\left[\sum_{i=0}^{n-1}\left(\frac{p_i - p_c}{p_c}\right)\right]. \]
\[ \varphi(p_c) = \arctan\!\left(\frac{pf_{11}}{pf_{10}}\right), \]
\[ S(s_i) = \left(\sum_{j=1}^{K_i} \omega_{ij}\, f(d(s_i, n_j))\right) \times t(x,y) \times h(u). \]
\[ f(\varphi) = \log(1 - \varphi). \]
\[ t(x,y) = \exp\!\left(-(x - x_0)^2/(2\sigma_x^2) - (y - y_0)^2/(2\sigma_y^2)\right), \]
\[ h(u) = \begin{cases} \exp\!\left(-\dfrac{u}{\lambda \times E}\right), & u - E \le \eta \\ 0, & \text{otherwise}, \end{cases} \]
\[ I_a(x) = \frac{I_a(x) - \min_a}{\max_a - \min_a} \quad \text{and} \quad I_b(x) = \frac{I_b(x) - \min_b}{\max_b - \min_b}, \]
\[ S_C(x) = 1 - \exp\!\left\{-\frac{I_a^2(x) + I_b^2(x)}{\sigma_C^2}\right\}, \]
\[ f = f(j) - (j - 1), \quad j = 2, \ldots, 16. \]
\[ F_r(K_i) = \frac{1}{m_i} \sum_{p \in B_i} |g(p)| \cdot \exp\!\left(-\frac{1}{m_i + n_i} \sum_{q \in (B_i \cup E_i)} F_p(q)\right), \]
\[ \mathrm{SM}_c = \exp(\mathrm{CPM} + \mathrm{FPM}) \times \mathrm{LCM}, \]
\[ \tilde{\gamma}_i = \rho \sum_{j=1}^{N_c} \omega_{i l_j} \tilde{\gamma}_{l_j} + (1 - \rho)\,\gamma_i, \]
\[ \omega_{i l_j} = \frac{\exp\!\left(-\dfrac{\|x_i - x_{l_j}\|^2}{2\sigma_x^2}\right)\left(1 - \delta(l_j - i)\right)}{\sum_{j=1}^{N_c} \exp\!\left(-\dfrac{\|x_i - x_{l_j}\|^2}{2\sigma_x^2}\right)}, \]
\[ w_{ij} = \begin{cases} \exp\!\left(-\dfrac{\|c_i - c_j\|}{\sigma^2}\right), & j \in N(i) \ \text{or} \ i, j \in B \\ 0, & i = j \ \text{or otherwise}, \end{cases} \]
\[ A = D^{-1} \cdot W. \]
\[ f^{t+1}(s_i) = \sum_{j=1}^{N} a_{ij}\, f^{t}(s_j), \]
\[ s^{*} = \arg\min_{s} \left( \sum_{i=1}^{N} \mathrm{SM}_c^{i}\, (s_i - 1)^2 + \sum_{i=1}^{N} \mathrm{BG}_i\, s_i^2 + \sum_{i,j} \omega_{ij} (s_i - s_j)^2 \right), \]
\[ \omega_{ij} = \exp\!\left(-\frac{d_{\mathrm{app}}^2(p_i, p_j)}{2\sigma^2}\right) + \tau. \]
\[ \mathrm{Precision} = \frac{|B \cap GT|}{|B|}, \quad \mathrm{Recall} = \frac{|B \cap GT|}{|GT|}, \]
\[ F_{\beta} = \frac{(1 + \beta^2)\,\mathrm{Precision} \times \mathrm{Recall}}{\beta^2 \times \mathrm{Precision} + \mathrm{Recall}}, \]
\[ \mathrm{MAE} = \frac{1}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left| \mathrm{Sal}(i,j) - GT(i,j) \right|. \]
\[ T_a = \frac{2}{W \times H} \sum_{i=1}^{W} \sum_{j=1}^{H} \left| \mathrm{Sal}(i,j) \right|. \]
\[ S(s_i) = \left(\sum_{j=1}^{K_i} \omega_{ij}\, f(d(s_i, n_j))\right) \times h(u). \]
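The refinement objective for s* above is quadratic in s, so it admits a closed-form solution via a sparse linear system. The sketch below only illustrates that step under stated assumptions (dense NumPy arrays, smoothness summed over ordered pairs); it is not the authors' solver. Here `fg` stands for the coarse saliency values SM_c, `bg` for the propagated background weights, and `omega` for the appearance-based weights ω_ij defined above.

```python
import numpy as np

def refine_saliency(fg, bg, omega):
    """Sketch of the closed-form minimizer of the refinement objective.

    fg    : (N,) foreground cue per superpixel (coarse saliency SM_c)
    bg    : (N,) background weight per superpixel (from label propagation)
    omega : (N, N) pairwise smoothness weights omega_ij

    Treating the smoothness term as a sum over ordered pairs, setting the
    gradient of the objective to zero gives (diag(fg) + diag(bg) + 2L) s = fg,
    where L = D - omega is the graph Laplacian of the smoothness term.
    """
    omega = 0.5 * (omega + omega.T)           # enforce symmetric pairwise weights
    L = np.diag(omega.sum(axis=1)) - omega    # graph Laplacian
    A = np.diag(fg) + np.diag(bg) + 2.0 * L
    s = np.linalg.solve(A, fg)                # stationary point of the quadratic objective
    return np.clip(s, 0.0, 1.0)               # final saliency values in [0, 1]
```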

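The evaluation measures listed above (precision, recall, F_β, MAE, and the adaptive threshold T_a) translate directly into code. The following is a small illustrative sketch rather than the benchmark scripts used in the paper; `sal` and `gt` are hypothetical inputs, and the β² value is a common choice in the saliency literature rather than a setting stated here.

```python
import numpy as np

def evaluate(sal, gt, beta2=0.3):
    """Illustrative sketch of the listed evaluation measures.

    sal   : (H, W) saliency map scaled to [0, 1]
    gt    : (H, W) binary ground-truth mask
    beta2 : beta^2 in the F-measure; 0.3 is common in the literature
            (the paper's exact setting is not given here)
    """
    t_a = 2.0 * sal.mean()                 # adaptive threshold: twice the mean saliency
    binary = sal >= t_a

    tp = np.logical_and(binary, gt > 0).sum()
    precision = tp / max(binary.sum(), 1)
    recall = tp / max((gt > 0).sum(), 1)
    f_beta = ((1 + beta2) * precision * recall) / max(beta2 * precision + recall, 1e-12)

    mae = np.abs(sal - gt).mean()          # mean absolute error against the ground truth
    return precision, recall, f_beta, mae
```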