Abstract

The effects of design decisions in the development of systems that generate images for human consumption, such as cameras and displays, are often evaluated using real-world images. However, human observers can react differently to complex pictorial stimuli over the course of a lengthy experiment. This study was conducted to understand how pictorial stimuli should be designed for effective and efficient perceptual experiments. The goals were to understand the impact of image content on visual attention and on the consistency of experimental results, and to apply this understanding to develop guidelines for pictorial target design in perceptual image comparison experiments. The efficacy of the proposed guidelines was then evaluated. While the fixation consistency results were generally as expected, fixation consistency did not always equate to consistency in experimental responses. Along with scene complexity, the image modifications and the difficulty of the image equivalency decisions played a role in the experimental response.

© 2014 Optical Society of America


References


  1. D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Wiley, 1966), pp. 137–176.
  2. T. Judd, F. Durand, and A. Torralba, “Fixations on low-resolution images,” J. Vis. 11(4), 14 (2011).
  3. S. J. Luck and E. K. Vogel, “The capacity of visual working memory for features and conjunctions,” Nature 390, 279–281 (1997).
  4. G. A. Miller, “The magical number seven, plus or minus two: some limits on our capacity for processing information,” Psychol. Rev. 63, 81–97 (1956).
  5. G. A. Alvarez and P. Cavanagh, “The capacity of visual short-term memory is set both by visual information load and by number of objects,” Psychol. Sci. 15, 106–111 (2004).
  6. J. Duncan, “Selective attention and the organization of visual information,” J. Exp. Psychol. 113, 501–517 (1984).
  7. W. Einhäuser, U. Rutishauser, and C. Koch, “Task-demands can immediately reverse the effects of sensory-driven saliency in complex visual stimuli,” J. Vis. 8(2), 2 (2008).
  8. I. Biederman, R. J. Mezzanotte, and J. C. Rabinowitz, “Scene perception: detecting and judging objects undergoing relational violations,” Cogn. Psychol. 14, 143–177 (1982).
  9. P. G. Engeldrum, Psychometric Scaling: A Toolkit for Imaging Systems (Imcotek, 2000).
  10. J. S. Babcock, J. B. Pelz, and M. D. Fairchild, “Eye tracking observers during color image evaluation tasks,” in Proceedings of SPIE-IS&T Human Vision and Electronic Imaging VIII (International Society for Optics and Photonics, 2003), pp. 218–230.
  11. J. S. Babcock, J. B. Pelz, and M. D. Fairchild, “Eye tracking observers during rank order, paired comparison, and graphical rating tasks,” in Image Processing, Image Quality, Image Capture Systems Conference (PICS) Proceedings (Society for Imaging Science and Technology, 2003), pp. 10–15.
  12. M. I. Posner, C. R. R. Snyder, and B. J. Davidson, “Attention and the detection of signals,” J. Exp. Psychol. 109, 160–174 (1980).
  13. F. S. Frey and S. P. Farnand, “Benchmarking art image interchange cycles: final report,” 2011, http://artimaging.rit.edu.
  14. S. Werner and B. Thies, “Is ‘change blindness’ attenuated by domain-specific expertise? An expert-novices comparison of change detection in football images,” Vis. Cogn. 7, 163–174 (2000).
  15. S. P. Farnand, J. Jiang, and F. S. Frey, “Comparing hardcopy and softcopy results in the study of the impact of workflow on perceived reproduction quality of fine art images,” Proc. SPIE 7867, 786705 (2011).
  16. H. Kivinen, M. Nuutinen, and P. Oittinen, “Comparison of colour difference methods for natural images,” in Proceedings of Conference on Color in Graphics, Image and Vision (CGIV) (Society for Imaging Science and Technology, 2010), pp. 510–515.
  17. S. P. Farnand, Designing Pictorial Stimuli for Perceptual Image Difference Experiments (RIT, 2013).
  18. J. Rigau, M. Feixas, and M. Sbert, “An information theoretic framework for image complexity,” in Computational Aesthetics in Graphics, Visualization and Imaging (Eurographics Association, 2005).
  19. L. G. Ugarriza, E. Saber, S. R. Vantaram, V. Amuso, M. Shaw, and R. Bhaskar, “Automatic image segmentation by dynamic region growth and multiresolution merging,” IEEE Trans. Image Process. 18, 2275–2288 (2009).
  20. G. T. Buswell, How People Look at Pictures: A Study of the Psychology of Perception in Art (University of Chicago, 1935).
  21. A. Yarbus, Eye Movements and Vision (Plenum, 1967).
  22. L. Itti, C. Koch, and E. Niebur, “A model of saliency-based visual attention for rapid scene analysis,” IEEE Trans. Pattern Anal. Mach. Intell. 20, 1254–1259 (1998).
  23. C. A. Rothkopf, D. H. Ballard, and M. M. Hayhoe, “Task and context determine where you look,” J. Vis. 7(14), 16 (2007).
  24. W. Einhäuser, M. Spain, and P. Perona, “Objects predict fixations better than early saliency,” J. Vis. 8(14), 1–26 (2008).
  25. S. Triantaphillidou, E. Allen, and R. E. Jacobson, “Image quality comparison between JPEG and JPEG2000. II. Scene dependency, scene analysis, and classification,” J. Imaging Sci. Technol. 51, 259–270 (2007).
  26. J. Redi, H. Liu, R. Zunino, and H. Heynderickx, “Interactions of visual attention and quality perception,” in Proceedings of SPIE-IS&T Human Vision and Electronic Imaging XVI (International Society for Optics and Photonics, 2011).
  27. G. J. Zelinsky, R. P. N. Rao, M. M. Hayhoe, and D. H. Ballard, “Eye movements and visual search in natural scenes,” in Proceedings of IS&T/OSA Optics and Imaging in the Information Age (Optical Society of America, 1996), pp. 1–5.
  28. K. Rayner, “Eye movements in reading and information processing: 20 years of research,” Psychol. Bull. 124, 372–422 (1998).
  29. C. Fredembach, J. Wang, and G. J. Woolfe, “Saliency, visual attention and image quality,” in Proceedings of the Eighteenth Color Imaging Conference (IS&T, 2010), pp. 128–133.
  30. D. Parkhurst and E. Niebur, “Texture contrast attracts overt visual attention in natural scenes,” Eur. J. Neurosci. 19, 783–789 (2004).
  31. C. Fredembach, “Saliency as compact regions for local image enhancement,” in Proceedings of the Nineteenth Color and Imaging Conference: Color Science and Engineering Systems, Technologies, and Applications (IS&T, 2011).




Figures (12)

Fig. 1.

N5 target from the large gamut image set (ISO/CD 12640-3 CIELAB SCID).

Fig. 2.

Firelight painting used in the Current Practices in Fine Art Reproduction study, sponsored by The Andrew W. Mellon Foundation.

Fig. 3.

Examples of scenes used in Experiment I. The full scene is on the left, followed by the mid-cropped image and the two closely cropped images. Examples are from the Landscape (top, photograph contributed by Dr. Mark Fairchild) and Still Life (bottom, obtained from the Corel® database at RIT) categories.

Fig. 4.

Stimulus triplet format. The top image was identified to observers as the “original.” The task was to select which of the bottom two images more closely resembled the original. This figure also shows the scanpaths of five observers; the circles represent fixations, with circle size indicating fixation duration. The stimuli were 1660 × 1040 pixel, 8-bit TIFF files.

Fig. 5.

Fixation count averaged over all observers by crop (left). Fixation consistency as a function of scene crop as represented by the percentage of unique areas that were included in the top fixated areas (right).
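
The caption above describes fixation consistency as the percentage of unique areas appearing among the top fixated areas. As an illustration only (the paper's exact computation is not given here), one way to express such a metric is to pool each observer's top fixated areas and report what fraction of the pool is distinct; a lower percentage means observers converged on the same regions, i.e., higher consistency. The function name and data layout below are hypothetical.

```python
def fixation_consistency(top_areas_per_observer):
    """Illustrative sketch, not the paper's exact metric: pool every
    observer's top fixated areas and return the percentage of distinct
    areas in that pool. Lower values indicate higher consistency."""
    pooled = [area for areas in top_areas_per_observer for area in areas]
    if not pooled:
        return 0.0
    return 100.0 * len(set(pooled)) / len(pooled)

# Three observers each reporting their top-2 fixated areas:
consistent = fixation_consistency([["face", "sky"], ["face", "sky"], ["face", "sky"]])
scattered = fixation_consistency([["face", "sky"], ["tree", "car"], ["dog", "wall"]])
print(consistent, scattered)  # ~33.3 (high consistency) vs 100.0 (no overlap)
```

Under this reading, a closely cropped scene with few candidate areas would naturally drive the percentage down, consistent with the trend the figure reports.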

Fig. 6.

Percentage of images in Experiment 2 that (1) were described by observers using the same word, (2) had the same Main AOI, (3) had neither of these commonalities, or (4) had both.

Fig. 7.

Examples of high (left) and low (right) observer scanpath consistency, qualitatively assessed.

Fig. 8.

Fixation time percentage on the main AOI by category.

Fig. 9.

Number of the eight possible areas in a given scene represented in the Top 3 AOIs by scene category (left) and the number of different areas named, on average, for the scenes in each category (right).

Fig. 10.

Frequency with which each AOI appeared among the top-three AOIs. Lighter shades represent more frequent occurrence; darker shades, less frequent occurrence.

Fig. 11.

Number of experimental inconsistencies (wraps and switches) versus the fixation time percentage on the main interest area. An example of a wrap is choosing the contrast rendition over hue, hue over red, and red over contrast.
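
A "wrap" as defined in the caption is an intransitive cycle in paired-comparison responses: rendition A is chosen over B, B over C, yet C over A. Such cycles (circular triads in psychometric scaling terminology) can be detected mechanically. The sketch below is an illustrative implementation, not the analysis code from the study; the function name and data format are assumptions.

```python
from itertools import permutations

def count_wraps(preferences):
    """Count circular triads ('wraps') in a set of paired-comparison
    outcomes. `preferences` holds (winner, loser) tuples, one per pair.
    A wrap is any triple A > B > C > A, signaling an intransitive
    (internally inconsistent) set of observer responses."""
    items = {x for pair in preferences for x in pair}
    wraps = set()
    for a, b, c in permutations(items, 3):
        if (a, b) in preferences and (b, c) in preferences and (c, a) in preferences:
            # Store as a frozenset so the three rotations of one cycle
            # count as a single wrap.
            wraps.add(frozenset((a, b, c)))
    return len(wraps)

# The caption's example: contrast > hue, hue > red, red > contrast.
print(count_wraps({("contrast", "hue"), ("hue", "red"), ("red", "contrast")}))  # 1
```

A transitive response set, e.g. {("a", "b"), ("b", "c"), ("a", "c")}, yields zero wraps, so the count serves as a per-observer inconsistency measure like the one plotted in the figure.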

Fig. 12.

Example of scene having standard (left) and blurred (right) renditions. (Scene obtained from the Corel database at RIT.)

Tables (2)


Table 1. Experiments, Objectives, and Results Used in Subsequent Testing and Guideline Generation


Table 2. Image Characteristics to Include and Avoid in Pictorial Stimuli for Image Comparison Experiments
