Abstract

Quality estimators aspire to quantify the perceptual resemblance, but not the usefulness, of a distorted image when compared to a reference natural image. However, humans can successfully accomplish tasks (e.g., object identification) using visibly distorted images that are not necessarily of high quality. A suite of novel subjective experiments reveals that quality does not accurately predict utility (i.e., usefulness). Thus, even accurate quality estimators cannot accurately estimate utility. In the absence of utility estimators, leading quality estimators are assessed as both quality and utility estimators and dismantled to understand those image characteristics that distinguish utility from quality. A newly proposed utility estimator demonstrates that a measure of contour degradation is sufficient to accurately estimate utility and is argued to be compatible with shape-based theories of object perception.

© 2011 Optical Society of America


References


  1. In this paper, “natural images” are formed using imaging devices that sense the natural environment over the visible portion of the electromagnetic spectrum (e.g., digital cameras). Computer-generated images and other types of synthetic images are not considered natural images.
  2. C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
    [CrossRef]
  3. C. Ford, P. Raush, and K. Davis, eds., Video Quality in Public Safety Conference (Institute for Telecommunication Sciences, 2009).
  4. A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video: evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
    [CrossRef]
  5. J. K. Petersen, Understanding Surveillance Technologies (CRC, 2001).
  6. J. P. Davis and T. Valentine, “CCTV on trial: matching video images with the defendant in the dock,” Appl. Cogn. Psychol. 23, 482-505 (2009).
    [CrossRef]
  7. J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Imaging Systems (Artech House, 2001).
  8. J. Johnson, “Analysis of image forming systems,” in Image Intensifier Symposium (Fort Belvoir, 1958).
  9. L. M. Biberman, ed., Perception of Displayed Information (Plenum, 1973).
  10. A. van Meeteren, “Characterization of task performance with viewing instruments,” J. Opt. Soc. Am. A 7, 2016-2023 (1990).
    [CrossRef]
  11. J. C. Leachtenauer, “Resolution requirements and the Johnson criteria revisited,” Proc. SPIE 1-15 (2003).
    [CrossRef]
  12. R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
    [CrossRef]
  13. J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
    [CrossRef]
  14. P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202 (2008).
    [CrossRef]
  15. T. Stockham, “Image processing in the context of a visual model,” Proc. IEEE 60, 828-842 (1972).
    [CrossRef]
  16. J. L. Mannos and D. J. Sakrison, “The effects of a visual fidelity criterion on the encoding of images,” IEEE Trans. Inf. Theory 20, 525-536 (1974).
    [CrossRef]
  17. D. Granrath, “The role of human visual models in image processing,” Proc. IEEE 69, 552-561 (1981).
    [CrossRef]
  18. H. de Ridder and G. M. Majoor, “Numerical category scaling: an efficient method for assessing digital image coding impairments,” Proc. SPIE 1249, 65-77 (1990).
    [CrossRef]
  19. J. A. J. Roufs, “Perceptual image quality: concept and measurement,” Philips J. Res. 47, 35-62 (1992).
  20. S. A. Klein, “Image quality and image compression: a psychophysicist's viewpoint,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 73-88.
  21. T. N. Pappas and R. J. Safranek, “Perceptual criteria for image quality evaluation,” in Handbook of Image and Video Processing, A. C. Bovik, ed. (Academic, 2000).
  22. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
    [CrossRef]
  23. H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15, 430-444 (2006).
    [CrossRef]
  24. The National Imagery Interpretability Rating Scale (NIIRS) has been associated with image quality. However, the NIIRS characterizes an image's quality based on the ability of a photo interpreter to detect, recognize, and identify objects in an image. Various versions of the NIIRS have been designed for specific image applications. The NIIRS is more compatible with the definition of utility used in this paper.
  25. H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.
  26. D. Chandler, “The CSIQ database,” http://vision.okstate.edu/index.php?loc=csiq.
  27. A visually lossless image is visually indistinguishable from a reference image.
  28. T. M. Murphy and L. H. Finkel, “Shape representation by a network of V4-like cells,” Neural Netw. 20, 851-867 (2007).
    [CrossRef]
  29. G. Loffler, “Perception of contours and shapes: low and intermediate stage mechanisms,” Vis. Res. 48, 2106-2127 (2008).
    [CrossRef]
  30. S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
    [CrossRef]
  31. The experiments described in this paper augment the experiments described in previous publications by the authors.
  32. D. M. Chandler and S. S. Hemami, “Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions,” J. Opt. Soc. Am. A 20, 1164-1180 (2003).
    [CrossRef]
  33. W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard (Van Nostrand Reinhold, 1993).
  34. “Independent JPEG Group,” http://www.ijg.org.
  35. International Organization for Standardization, “Information technology--digital compression and coding of continuous-tone still images--requirements and guidelines,” ITU-T T.81 (International Telecommunication Union, 1992).
  36. D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards, and Practice (Kluwer Academic, 2002).
  37. L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D (Amsterdam) 60, 259-268 (1992).
    [CrossRef]
  38. G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
    [CrossRef]
  39. J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
    [CrossRef]
  40. D. M. Rouse and S. S. Hemami, “Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM,” Proc. SPIE 6806, 680615.1-680615.14 (2008).
  41. J. S. Bruner and M. C. Potter, “Interference in visual recognition,” Science 144, 424-425 (1964).
    [CrossRef]
  42. R. A. Bradley and M. E. Terry, “The rank analysis of incomplete block designs I: The method of paired comparisons,” Biometrika 39, 324-345 (1952).
  43. D. E. Critchlow and M. A. Fligner, “Paired comparisons, triple comparisons, and ranking experiments as generalized linear models, and their implementation on GLIM,” Psychometrika 56, 517-533 (1991).
    [CrossRef]
  44. D. Strohmeier and G. Tech, “Sharp, bright, three-dimensional: open profiling of quality for mobile 3DTV coding methods,” Proc. SPIE 75420T (2010).
    [CrossRef]
  45. International Telecommunication Union, “Subjective video quality assessment methods for multimedia applications,” ITU-T P.910 (International Telecommunication Union, 2008).
  46. Numerical category scaling, adjective category scale, and categorical sort are alternative names describing the ACR test method. The subjective assessment methodology for video quality (SAMVIQ) generally obtains more accurate perceived quality scores and avoids problems such as observers' reluctance to use the ends of the quality scale. Both ACR and SAMVIQ yield very similar perceived quality scores for our collection of distorted images.
  47. “Multimedia group test plan” (2008), draft version 1.21, http://www.vqeg.org.
  48. Prior work in the context of perceived quality often denotes a perceived quality score as a mean opinion score.
  49. The perceived quality of unrecognizable images with perceived utility scores less than −15 range from 1 to 1.4 with the average, standard deviation, and median being 1.07, 0.089, and 1.04, respectively.
  50. G. W. Snedecor and W. G. Cochran, Statistical Methods, 8th ed. (Iowa State, 1989).
  51. C. M. Jarque and A. K. Bera, “Efficient tests for normality, homoscedasticity, and serial independence of regression residuals,” Econ. Lett. 6, 255-259 (1980).
    [CrossRef]
  52. E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481 (1957).
  53. J. L. Devore, Probability and Statistics for Engineering and the Sciences, 5th ed. (Duxbury, 2000).
  54. Only six BLOCK distorted images have perceived utility scores greater than −15, so results corresponding to the BLOCK distorted images provide little insight into the relationship between quality and utility. Furthermore, these images have perceived quality scores in the range [1,1.3] (i.e., “bad” quality) and perceived utility scores in the range [−13,4] (i.e., effectively useless).
  55. Values of Conf(S_TS(γ) > S_TS+HPF(γ)) less than 0.025 or greater than 0.975 indicate that the subjective scores for TS and TS+HPF distorted images with equal γ are statistically different at the 95% confidence level (i.e., a two-sided z test). Values of Conf(S_TS(γ) > S_TS+HPF(γ)) less than 0.05 indicate that the subjective score for the TS distorted image is statistically smaller than the subjective score for a TS+HPF distorted image formed from the same reference image using the same γ at the 95% confidence level (i.e., a one-sided z test). Similarly, values of Conf(S_TS(γ) > S_TS+HPF(γ)) greater than 0.95 indicate that the subjective score for the TS distorted image is statistically greater than the subjective score for a TS+HPF distorted image with the same γ.
  56. D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” J. Opt. Soc. Am. A 4, 2379-2394 (1987).
    [CrossRef]
  57. C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
    [CrossRef]
  58. C. Poynton, “The rehabilitation of gamma,” Proc. SPIE 3299, 232-249 (1998).
    [CrossRef]
  59. A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun. 43, 2959-2965 (1995).
    [CrossRef]
  60. I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233 (2002).
    [CrossRef]
  61. R. L. De Valois and K. K. De Valois, Spatial Vision (Oxford University, 1990).
  62. G. Legge and J. Foley, “Contrast masking in human vision,” J. Opt. Soc. Am. 70, 1458-1470 (1980).
    [CrossRef]
  63. M. A. Georgeson and G. D. Sullivan, “Contrast constancy: deblurring in human vision by spatial frequency channels,” J. Physiol. 252, 627-656 (1975).
  64. N. Brady and D. J. Field, “What's constant in contrast constancy? The effects of scaling on the perceived contrast of bandpass patterns,” Vis. Res. 35, 739-756 (1995).
    [CrossRef]
  65. W. A. Pearlman, “A visual system model and a new distortion measure in the context of image processing,” J. Opt. Soc. Am. 68, 374-386 (1978).
    [CrossRef]
  66. R. J. Safranek and J. D. Johnston, “A perceptually tuned sub-band image coder with image dependent quantization and post-quantization data compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1989), pp. 1945-1948.
  67. S. J. Daly, “The visible difference predictor: an algorithm for the assessment of image fidelity,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 179-206.
  68. J. Lubin, “The use of psychophysical data and models in the analysis of display system performance,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 163-178.
  69. A. B. Watson, “DCT quantization matrices visually optimized for individual images,” Proc. SPIE 1913, 202-216 (1993).
    [CrossRef]
  70. P. Teo and D. Heeger, “Perceptual image distortion,” Proc. SPIE 2179, 127-141 (1994).
    [CrossRef]
  71. A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
    [CrossRef]
  72. N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
    [CrossRef]
  73. Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems, and Computers (IEEE, 2003), Vol. 2, pp. 1398-1402.
  74. D. M. Chandler and S. S. Hemami, “VSNR: a wavelet-based visual signal-to-noise ratio for natural images,” IEEE Trans. Image Process. 16, 2284-2298 (2007).
    [CrossRef]
  75. M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256 (2008).
    [CrossRef]
  76. D. Navon, “Forest before trees: the precedence of global features in visual perception,” Cogn. Psychol. 9, 353-383 (1977).
    [CrossRef]
  77. D. M. Rouse and S. S. Hemami, “Understanding and simplifying the structural similarity metric,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2008), pp. 1188-1191.
  78. D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
    [CrossRef]
  79. K. Grill-Spector, “The neural basis of object perception,” Curr. Opin. Neurobiol. 13, 159-166 (2003).
    [CrossRef]
  80. I. Biederman and G. Ju, “Surface versus edge-based determinants of visual recognition,” Cogn. Psychol. 20, 38-64 (1988).
    [CrossRef]
  81. D. M. Rouse and S. S. Hemami, “Quantifying the use of structure in cognitive tasks,” Proc. SPIE 6492, 64921O (2007).
    [CrossRef]
  82. D. M. Rouse and S. S. Hemami, “Natural image utility assessment using image contours,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2009), pp. 2217-2220.
  83. W. K. Pratt, Digital Image Processing: PIKS Inside, 3rd ed. (Wiley, 2001).
  84. C. Giardina and E. Dougherty, Morphological Methods in Image and Signal Processing (Prentice Hall, 1998).
  85. The Hamming distance counts the number of dissimilar elements between two vectors.
  86. D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. Ser. B 207, 187-217 (1980).
    [CrossRef]
  87. J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679-698 (1986).
    [CrossRef]
  88. E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 1995), Vol. 3, pp. 444-447.
  89. The high-pass residual generated by the steerable pyramid is not used.
  90. S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 710-732 (1992).
    [CrossRef]
  91. M. D. Gaubatz, D. M. Rouse, and S. S. Hemami, “MeTriX MuX,” http://foulard.ece.cornell.edu/gaubatz/metrix_mux.
  92. Video Quality Experts Group, “VQEG final report of FR-TV phase II validation test” (2003), http://www.vqeg.org.
  93. Video Quality Experts Group, “Final report from the VQEG on the validation of objective models of multimedia quality assessment, phase I,” (2008), version 2.6., http://www.vqeg.org.
  94. M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
    [CrossRef]
  95. M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc. 69, 364-367 (1974).
    [CrossRef]
  96. D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Peninsula, 1988).
  97. J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology (Oak Brook, Ill.) 143, 29-36 (1982).
  98. T. Fawcett, “An introduction to ROC analysis,” Pattern Recogn. Lett. 27, 861-874 (2006).
    [CrossRef]
  99. The notation MS-NICE_{S≤2} is used to refer to both MS-NICE_1 and MS-NICE_2.
  100. D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.
  101. The local variance comparison used by SSIM corresponds to an analysis of high-frequency content and does not need to be removed.
  102. H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451 (2006).
    [CrossRef]
  103. E. C. Larson and D. M. Chandler, “The most apparent distortion: a dual strategy for full reference image quality,” Proc. SPIE 7242, 72420S (2009).
    [CrossRef]
  104. We use “NICE” to generically refer to both the single-scale and multiscale implementations of NICE, and specific implementations of NICE (e.g., NICE_Canny) will be identified when necessary.
  105. Using the fine-scale steerable pyramid filters to identify image contours for MS-NICE leads to performance statistically similar to that of the single-scale implementations of NICE using the Sobel and Canny edge detectors.
  106. A. B. Watson and J. A. Solomon, “Model of visual contrast gain control and pattern masking,” J. Opt. Soc. Am. A 14, 2379-2391 (1997).
    [CrossRef]
  107. H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128 (2005).
    [CrossRef]
  108. The subscript k for N_k accounts for decimated wavelet decompositions, such as the steerable pyramid, whose channels in coarser image scales have fewer coefficients than channels in finer image scales.
  109. U. Polat and D. Sagi, “Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments,” Vis. Res. 33, 993-999 (1993).
    [CrossRef]
  110. U. Polat and D. Sagi, “The architecture of perceptual spatial interactions,” Vis. Res. 34, 73-78 (1994).
    [CrossRef]
  111. V. Kayargadde and J.-B. Martens, “Perceptual characterization of images degraded by blur and noise: experiments,” J. Opt. Soc. Am. A 13, 1166-1177 (1996).
    [CrossRef]
  112. D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
    [CrossRef]
  113. S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
    [CrossRef]
  114. W. Ma and B. S. Manjunath, “Edgeflow: a technique for boundary detection and segmentation,” IEEE Trans. Image Process. 9, 1375-1388 (2000).
    [CrossRef]
  115. E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
    [CrossRef]
  116. C. A. Collin and P. A. McMullen, “Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization,” Percept. Psychophys. 67, 354-364 (2005).
  117. C. A. Collin, “Spatial-frequency thresholds for object categorisation at basic and subordinate levels,” Perception 35, 41-52 (2006).
    [CrossRef]
  118. A. Torralba, “How many pixels make an image?,” Vis. Neurosci. 26, 123-131 (2009).
    [CrossRef]
  119. F. A. Rosell and R. H. Willson, “Recent psychophysical experiments and the display signal-to-noise ratio concept,” in Perception of Displayed Information, L.Biberman, ed. (Plenum, 1973), pp. 167-232.
  120. The Johnson criteria were based on a study with a specific set of objects, and it is possible that different objects would suggest different criteria for object recognition.
  121. S. Ullman, High-Level Vision: Object Recognition and Visual Cognition (MIT, 1996).
  122. S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289 (2008).
    [CrossRef]
  123. D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local “association field”,” Vis. Res. 33, 173-193 (1993).
    [CrossRef]
  124. The subscript k for N_k accounts for decimated wavelet decompositions, such as the steerable pyramid, whose channels in coarser image scales have fewer coefficients than channels in finer image scales.
  125. M. J. Wainwright and E. P. Simoncelli, “Scale mixtures of Gaussians and the statistics of natural images,” in Advances in Neural Information Processing Systems, S. A. Solla, T. K. Leen, and K.-R. Müller, eds. (MIT, 2000), pp. 855-861.
  126. M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
    [CrossRef]
  127. B. W. Keelan, Handbook of Image Quality: Characterization and Prediction (CRC, 2002).
  128. D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
    [CrossRef]
  129. R. W. Hamming, “Error detecting and error correcting codes,” Bell Syst. Tech. J. 29, 147-160 (1950).
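The statistical test described in note 55 can be made concrete with a short sketch. Assuming Conf(S_TS(γ) > S_TS+HPF(γ)) is the standard normal CDF of the z statistic for the difference of two mean subjective scores (the function name and the use of standard errors below are illustrative assumptions, not the paper's implementation):

```python
from math import erf, sqrt

def conf_greater(mean_a, se_a, mean_b, se_b):
    """P(S_A > S_B) under a normal model for two independent mean scores."""
    # z statistic for the difference of the two means
    z = (mean_a - mean_b) / sqrt(se_a ** 2 + se_b ** 2)
    # standard normal CDF expressed via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Decision rules from note 55 (95% confidence level):
#   conf < 0.025 or conf > 0.975 -> scores statistically different (two-sided)
#   conf < 0.05                  -> TS score statistically smaller (one-sided)
#   conf > 0.95                  -> TS score statistically greater (one-sided)
```

Equal means give a confidence of 0.5; the further the TS score exceeds the TS+HPF score relative to the combined standard error, the closer the confidence approaches 1.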
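The Hamming distance of note 85 (counting dissimilar elements between two vectors) is straightforward to sketch; the function below is an illustrative implementation, not code from the paper:

```python
def hamming_distance(a, b):
    """Number of positions at which two equal-length sequences differ."""
    if len(a) != len(b):
        raise ValueError("sequences must have the same length")
    return sum(x != y for x, y in zip(a, b))
```

For example, hamming_distance([1, 0, 1, 1], [1, 1, 1, 0]) returns 2, since the sequences disagree in the second and fourth positions.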

2010 (2)

D. Strohmeier and G. Tech, “Sharp, bright, three-dimensional: open profiling of quality for mobile 3DTV coding methods,” Proc. SPIE 75420T (2010).
[CrossRef]

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

2009 (5)

A. Torralba, “How many pixels make an image?,” Vis. Neurosci. 26, 123-131 (2009).
[CrossRef]

E. C. Larson and D. M. Chandler, “The most apparent distortion: a dual strategy for full reference image quality,” Proc. SPIE 7242, 72420S (2009).
[CrossRef]

C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
[CrossRef]

J. P. Davis and T. Valentine, “CCTV on trial: matching video images with the defendant in the dock,” Appl. Cogn. Psychol. 23, 482-505 (2009).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

2008 (6)

M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256(2008).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM,” Proc. SPIE 6806, 680615.1-680615.14 (2008).

P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202(2008).
[CrossRef]

G. Loffler, “Perception of contours and shapes: low and intermediate stage mechanisms,” Vis. Res. 48, 2106-2127(2008).
[CrossRef]

S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
[CrossRef]

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289(2008).
[CrossRef]

2007 (3)

T. M. Murphy and L. H. Finkel, “Shape representation by a network of V4-like cells,” Neural Netw. 20, 851-867 (2007).
[CrossRef]

D. M. Chandler and S. S. Hemami, “VSNR: a wavelet-based visual signal-to-noise ratio for natural images,” IEEE Trans. Image Process. 16, 2284-2298 (2007).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Quantifying the use of structure in cognitive tasks,” Proc. SPIE 6492, 64921O (2007).
[CrossRef]

2006 (5)

H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15, 430-444(2006).
[CrossRef]

C. A. Collin, “Spatial-frequency thresholds for object categorisation at basic and subordinate levels,” Perception 35, 41-52(2006).
[CrossRef]

D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
[CrossRef]

T. Fawcett, “An introduction to ROC analysis,” Pattern Recogn. Lett. 27, 861-874 (2006).
[CrossRef]

H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451(2006).
[CrossRef]

2005 (4)

C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
[CrossRef]

C. A. Collin and P. A. McMullen, “Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization,” Percept. Psychophys. 67, 354-364 (2005).

H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128(2005).
[CrossRef]

J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
[CrossRef]

2004 (4)

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
[CrossRef]

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

2003 (4)

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

J. C. Leachtenauer, “Resolution requirements and the Johnson criteria revisited,” Proc. SPIE 1-15 (2003).
[CrossRef]

K. Grill-Spector, “The neural basis of object perception,” Curr. Opin. Neurobiol. 13, 159-166 (2003).
[CrossRef]

D. M. Chandler and S. S. Hemami, “Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions,” J. Opt. Soc. Am. A 20, 1164-1180(2003).
[CrossRef]

2002 (2)

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233(2002).
[CrossRef]

2001 (1)

M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
[CrossRef]

2000 (2)

W. Ma and B. S. Manjunath, “Edgeflow: a technique for boundary detection and segmentation,” IEEE Trans. Image Process. 9, 1375-1388 (2000).
[CrossRef]

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

1999 (1)

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video; evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

1998 (1)

C. Poynton, “The rehabilitation of gamma,” Proc. SPIE 3299, 232-249 (1998).
[CrossRef]

1997 (2)

A. B. Watson and J. A. Solomon, “Model of visual contrast gain control and pattern masking,” J. Opt. Soc. Am. A 14, 2379-2391 (1997).
[CrossRef]

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

1996 (1)

1995 (2)

N. Brady and D. J. Field, “What's constant in contrast constancy? The effects of scaling on the perceived contrast of bandpass patterns,” Vis. Res. 35, 739-756 (1995).
[CrossRef]

A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun. 43, 2959-2965(1995).
[CrossRef]

1994 (2)

U. Polat and D. Sagi, “The architecture of perceptual spatial interactions,” Vis. Res. 34, 73-78 (1994).
[CrossRef]

P. Teo and D. Heeger, “Perceptual image distortion,” Proc. SPIE 2179, 127-141 (1994).
[CrossRef]

1993 (3)

A. B. Watson, “DCT quantization matrices visually optimized for individual images,” Proc. SPIE 1913, 202-216 (1993).
[CrossRef]

U. Polat and D. Sagi, “Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments,” Vis. Res. 33, 993-999 (1993).
[CrossRef]

D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local “association field”,” Vis. Res. 33, 173-193 (1993).
[CrossRef]

1992 (3)

S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 710-732 (1992).
[CrossRef]

J. A. J. Roufs, “Perceptual image quality: concept and measurement,” Philips J. Res. 47, 35-62 (1992).

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithm,” Physica D (Amsterdam) 60, 259-268 (1992).
[CrossRef]

1991 (1)

D. E. Critchlow and M. A. Fligner, “Paired comparisons, triple comparisons, and ranking experiments as generalized linear models, and their implementation on GLIM,” Psychometrika 56, 517-533 (1991).
[CrossRef]

1990 (2)

H. de Ridder and G. M. Majoor, “Numerical category scaling: an efficient method for assessing digital image coding impairments,” Proc. SPIE 1249, 65-77 (1990).
[CrossRef]

A. van Meeteren, “Characterization of task performance with viewing instruments,” J. Opt. Soc. Am. A 7, 2016-2023 (1990).
[CrossRef]

1988 (1)

I. Biderman and G. Ju, “Surface versus edge-based determinants of visual recognition,” Cogn. Psychol. 20, 38-64 (1988).
[CrossRef]

1987 (1)

D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” J. Opt. Soc. Am. A 4, 2379-2394 (1987).
[CrossRef]

1986 (1)

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679-698 (1986).
[CrossRef]

1982 (1)

J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology (Oak Brook, Ill.) 143, 29-36 (1982).

1981 (1)

D. Granrath, “The role of human visual models in image processing,” Proc. IEEE 69, 552-561 (1981).
[CrossRef]

1980 (3)

D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. Ser. B 207, 187-217 (1980).
[CrossRef]

C. M. Jarque and A. K. Bera, “Efficient tests for normality, homoscedasticity, and serial independence of regression residuals,” Econ. Lett. 6, 255-259 (1980).
[CrossRef]

G. Legge and J. Foley, “Contrast masking in human vision,” J. Opt. Soc. Am. 70, 1458-1470 (1980).
[CrossRef]

1978 (1)

1977 (1)

D. Navon, “Forest before trees: the precedence of global features in visual perception,” Cogn. Psychol. 9, 353-383 (1977).
[CrossRef]

1976 (1)

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

1975 (1)

M. A. Georgeson and G. D. Sullivan, “Contrast constancy: deblurring in human vision by spatial frequency channels,” J. Physiol. 252, 627-656 (1975).

1974 (2)

M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc. 69, 364-367 (1974).
[CrossRef]

J. L. Mannos, “The effects of a visual fidelity criterion on the encoding of images,” IEEE Trans. Inf. Theory 20, 525-536 (1974).
[CrossRef]

1972 (1)

T. Stockham, “Image processing in the context of a visual model,” Proc. IEEE 60, 828-842 (1972).
[CrossRef]

1964 (1)

J. S. Bruner and M. C. Potter, “Interference in visual recognition,” Science 144, 424-425 (1964).
[CrossRef]

1957 (1)

E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481 (1957).

1952 (1)

R. A. Bradley and M. E. Terry, “The rank analysis of incomplete block designs I: The method of paired comparisons,” Biometrika 39, 324-345 (1952).

1950 (1)

R. W. Hamming, “Error detecting and error correcting codes,” Bell Syst. Tech. J. 29, 147-160 (1950).

Avcıbaş, I.

I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233 (2002).
[CrossRef]

Barba, D.

M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256 (2008).
[CrossRef]

Bera, A. K.

C. M. Jarque and A. K. Bera, “Efficient tests for normality, homoscedasticity, and serial independence of regression residuals,” Econ. Lett. 6, 255-259 (1980).
[CrossRef]

Biederman, I.

I. Biederman and G. Ju, “Surface versus edge-based determinants of visual recognition,” Cogn. Psychol. 20, 38-64 (1988).
[CrossRef]

Bovik, A. C.

H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451 (2006).
[CrossRef]

H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15, 430-444 (2006).
[CrossRef]

H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128 (2005).
[CrossRef]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems, and Computers (IEEE, 2003), Vol. 2, pp. 1398-1402.

Boyes-Braem, P.

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Bradley, R. A.

R. A. Bradley and M. E. Terry, “The rank analysis of incomplete block designs I: The method of paired comparisons,” Biometrika 39, 324-345 (1952).

Brady, N.

N. Brady and D. J. Field, “What's constant in contrast constancy? The effects of scaling on the perceived contrast of bandpass patterns,” Vis. Res. 35, 739-756 (1995).
[CrossRef]

Brill, M. H.

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

Brown, M. B.

M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc. 69, 364-367 (1974).
[CrossRef]

Brox, T.

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Bruce, V.

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video: evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

Bruner, J. S.

J. S. Bruner and M. C. Potter, “Interference in visual recognition,” Science 144, 424-425 (1964).
[CrossRef]

Burton, A. M.

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video: evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

Canny, J.

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679-698 (1986).
[CrossRef]

Carnec, M.

M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256 (2008).
[CrossRef]

Chandler, D.

D. Chandler, “The CSIQ database,” http://vision.okstate.edu/index.php?loc=csiq.

Chandler, D. M.

E. C. Larson and D. M. Chandler, “The most apparent distortion: a dual strategy for full reference image quality,” Proc. SPIE 7242, 72420S (2009).
[CrossRef]

D. M. Chandler and S. S. Hemami, “VSNR: a wavelet-based visual signal-to-noise ratio for natural images,” IEEE Trans. Image Process. 16, 2284-2298 (2007).
[CrossRef]

D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
[CrossRef]

D. M. Chandler and S. S. Hemami, “Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions,” J. Opt. Soc. Am. A 20, 1164-1180 (2003).
[CrossRef]

Cochran, W. G.

G. W. Snedecor and W. G. Cochran, Statistical Methods, 8th ed. (Iowa State, 1989).

Collin, C. A.

C. A. Collin, “Spatial-frequency thresholds for object categorisation at basic and subordinate levels,” Perception 35, 41-52 (2006).
[CrossRef]

C. A. Collin and P. A. McMullen, “Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization,” Percept. Psychophys. 67, 354-364 (2005).

Cormack, L.

H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.

Costa, P.

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

Coughlan, J. M.

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

Cowan, M.

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video: evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

Critchlow, D. E.

D. E. Critchlow and M. A. Fligner, “Paired comparisons, triple comparisons, and ranking experiments as generalized linear models, and their implementation on GLIM,” Psychometrika 56, 517-533 (1991).
[CrossRef]

Dakin, S. C.

S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
[CrossRef]

Daly, S. J.

S. J. Daly, “The visible difference predictor: an algorithm for the assessment of image fidelity,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 179-206.

Damera-Venkata, N.

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

Davis, J. P.

J. P. Davis and T. Valentine, “CCTV on trial: matching video images with the defendant in the dock,” Appl. Cogn. Psychol. 23, 482-505 (2009).
[CrossRef]

de Ridder, H.

H. de Ridder and G. M. Majoor, “Numerical category scaling: an efficient method for assessing digital image coding impairments,” Proc. SPIE 1249, 65-77 (1990).
[CrossRef]

De Valois, K. K.

R. L. De Valois and K. K. De Valois, Spatial Vision (Oxford University, 1990).

De Valois, R. L.

R. L. De Valois and K. K. De Valois, Spatial Vision (Oxford University, 1990).

de Veciana, G.

H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128 (2005).
[CrossRef]

De Winter, J.

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289 (2008).
[CrossRef]

Devore, J. L.

J. L. Devore, Probability and Statistics for Engineering and the Sciences, 5th ed. (Duxbury, 2000).

Donoho, D. L.

J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
[CrossRef]

Dougherty, E.

C. Giardina and E. Dougherty, Morphological Methods in Image and Signal Processing (Prentice Hall, 1998).

Driggers, R. G.

R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
[CrossRef]

J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Imaging Systems (Artech House, 2001).

Dumoulin, S. O.

S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
[CrossRef]

Eckstein, B. A.

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

Elad, M.

J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
[CrossRef]

Eskicioglu, A. M.

A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun. 43, 2959-2965 (1995).
[CrossRef]

Espinola, R. L.

P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202 (2008).
[CrossRef]

Evans, B. L.

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

Fatemi, E.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D (Amsterdam) 60, 259-268 (1992).
[CrossRef]

Fawcett, T.

T. Fawcett, “An introduction to ROC analysis,” Pattern Recogn. Lett. 27, 861-874 (2006).
[CrossRef]

Field, D. J.

N. Brady and D. J. Field, “What's constant in contrast constancy? The effects of scaling on the perceived contrast of bandpass patterns,” Vis. Res. 35, 739-756 (1995).
[CrossRef]

D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local ‘association field’,” Vis. Res. 33, 173-193 (1993).
[CrossRef]

D. J. Field, “Relations between the statistics of natural images and the response properties of cortical cells,” J. Opt. Soc. Am. A 4, 2379-2394 (1987).
[CrossRef]

Fieller, E. C.

E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481 (1957).

Finkel, L. H.

T. M. Murphy and L. H. Finkel, “Shape representation by a network of V4-like cells,” Neural Netw. 20, 851-867 (2007).
[CrossRef]

Fisher, P. S.

A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun. 43, 2959-2965 (1995).
[CrossRef]

Fligner, M. A.

D. E. Critchlow and M. A. Fligner, “Paired comparisons, triple comparisons, and ranking experiments as generalized linear models, and their implementation on GLIM,” Psychometrika 56, 517-533 (1991).
[CrossRef]

Foley, J.

G. Legge and J. Foley, “Contrast masking in human vision,” J. Opt. Soc. Am. 70, 1458-1470 (1980).
[CrossRef]

Ford, C. G.

C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
[CrossRef]

Forsythe, A. B.

M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc. 69, 364-367 (1974).
[CrossRef]

Fowlkes, C.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.

Freeman, W. T.

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 1995), Vol. 3, pp. 444-447.

Gaubatz, M. D.

M. D. Gaubatz, D. M. Rouse, and S. S. Hemami, “MeTriX MuX,” http://foulard.ece.cornell.edu/gaubatz/metrix_mux.

Geisler, W. S.

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

Georgeson, M. A.

M. A. Georgeson and G. D. Sullivan, “Contrast constancy: deblurring in human vision by spatial frequency channels,” J. Physiol. 252, 627-656 (1975).

Giardina, C.

C. Giardina and E. Dougherty, Morphological Methods in Image and Signal Processing (Prentice Hall, 1998).

Granrath, D.

D. Granrath, “The role of human visual models in image processing,” Proc. IEEE 69, 552-561 (1981).
[CrossRef]

Gray, W.

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Green, D. M.

D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Peninsula, 1988).

Grill-Spector, K.

K. Grill-Spector, “The neural basis of object perception,” Curr. Opin. Neurobiol. 13, 159-166 (2003).
[CrossRef]

Video Quality Experts Group

Video Quality Experts Group, “Final report from the VQEG on the validation of objective models of multimedia quality assessment, phase I,” version 2.6 (2008), http://www.vqeg.org.

Video Quality Experts Group, “VQEG final report of FR-TV phase II validation test” (2003), http://www.vqeg.org.

Hamming, R. W.

R. W. Hamming, “Error detecting and error correcting codes,” Bell Syst. Tech. J. 29, 147-160 (1950).

Hanley, J. A.

J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology (Oak Brook, Ill.) 143, 29-36 (1982).

Hartley, H. O.

E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481 (1957).

Hayes, A.

D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local ‘association field’,” Vis. Res. 33, 173-193 (1993).
[CrossRef]

Heeger, D.

P. Teo and D. Heeger, “Perceptual image distortion,” Proc. SPIE 2179, 127-141 (1994).
[CrossRef]

Hemami, S.

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

Hemami, S. S.

D. M. Rouse and S. S. Hemami, “Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM,” Proc. SPIE 6806, 680615.1-680615.14 (2008).

D. M. Chandler and S. S. Hemami, “VSNR: a wavelet-based visual signal-to-noise ratio for natural images,” IEEE Trans. Image Process. 16, 2284-2298 (2007).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Quantifying the use of structure in cognitive tasks,” Proc. SPIE 6492, 64921O (2007).
[CrossRef]

D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
[CrossRef]

D. M. Chandler and S. S. Hemami, “Effects of natural images on the detectability of simple and compound wavelet subband quantization distortions,” J. Opt. Soc. Am. A 20, 1164-1180 (2003).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Natural image utility assessment using image contours,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2009), pp. 2217-2220.

D. M. Rouse and S. S. Hemami, “Understanding and simplifying the structural similarity metric,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2008), pp. 1188-1191.

M. D. Gaubatz, D. M. Rouse, and S. S. Hemami, “MeTriX MuX,” http://foulard.ece.cornell.edu/gaubatz/metrix_mux.

Hess, R.

D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local ‘association field’,” Vis. Res. 33, 173-193 (1993).
[CrossRef]

Hess, R. F.

S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
[CrossRef]

Hildreth, E.

D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. Ser. B 207, 187-217 (1980).
[CrossRef]

Hummel, R. A.

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

Irvine, J. M.

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

Jacobs, E.

R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
[CrossRef]

Jacobs, E. L.

P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202 (2008).
[CrossRef]

Jarque, C. M.

C. M. Jarque and A. K. Bera, “Efficient tests for normality, homoscedasticity, and serial independence of regression residuals,” Econ. Lett. 6, 255-259 (1980).
[CrossRef]

Johnson, D.

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Johnson, J.

J. Johnson, “Analysis of image forming systems,” in Image Intensifier Symposium (Fort Belvoir, 1958).

Johnston, J. D.

R. J. Safranek and J. D. Johnston, “A perceptually tuned sub-band image coder with image dependent quantization and post-quantization data compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1989), pp. 1945-1948.

Ju, G.

I. Biederman and G. Ju, “Surface versus edge-based determinants of visual recognition,” Cogn. Psychol. 20, 38-64 (1988).
[CrossRef]

Kayargadde, V.

Keelan, B. W.

B. W. Keelan, Handbook of Image Quality: Characterization and Prediction (CRC, 2002).

Kite, T. D.

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

Klein, S. A.

S. A. Klein, “Image quality and image compression: a psychophysicist's viewpoint,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 73-88.

Konishi, S.

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

Larson, E. C.

E. C. Larson and D. M. Chandler, “The most apparent distortion: a dual strategy for full reference image quality,” Proc. SPIE 7242, 72420S (2009).
[CrossRef]

Le Callet, P.

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256 (2008).
[CrossRef]

Leachtenauer, J. C.

J. C. Leachtenauer, “Resolution requirements and the Johnson criteria revisited,” Proc. SPIE 1-15 (2003).
[CrossRef]

J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Imaging Systems (Artech House, 2001).

Legge, G.

G. Legge and J. Foley, “Contrast masking in human vision,” J. Opt. Soc. Am. 70, 1458-1470 (1980).
[CrossRef]

Lim, K. H.

D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
[CrossRef]

Loffler, G.

G. Loffler, “Perception of contours and shapes: low and intermediate stage mechanisms,” Vis. Res. 48, 2106-2127 (2008).
[CrossRef]

Lubin, J.

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

J. Lubin, “The use of psychophysical data and models in the analysis of display system performance,” in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 163-178.

Ma, W.

W. Ma and B. S. Manjunath, “Edgeflow: a technique for boundary detection and segmentation,” IEEE Trans. Image Process. 9, 1375-1388 (2000).
[CrossRef]

Majoor, G. M.

H. de Ridder and G. M. Majoor, “Numerical category scaling: an efficient method for assessing digital image coding impairments,” Proc. SPIE 1249, 65-77 (1990).
[CrossRef]

Malik, J.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.

Mallat, S.

S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 710-732 (1992).
[CrossRef]

Manjunath, B. S.

W. Ma and B. S. Manjunath, “Edgeflow: a technique for boundary detection and segmentation,” IEEE Trans. Image Process. 9, 1375-1388 (2000).
[CrossRef]

Mannos, J. L.

J. L. Mannos, “The effects of a visual fidelity criterion on the encoding of images,” IEEE Trans. Inf. Theory 20, 525-536 (1974).
[CrossRef]

Marcellin, M. W.

D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards, and Practice (Kluwer Academic, 2002).

Marr, D.

D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. Ser. B 207, 187-217 (1980).
[CrossRef]

Martens, J.-B.

Martin, D.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, “A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics,” in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.

McFarland, M. A.

C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
[CrossRef]

McMullen, P. A.

C. A. Collin and P. A. McMullen, “Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization,” Percept. Psychophys. 67, 354-364 (2005).

McNeil, B. J.

J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology (Oak Brook, Ill.) 143, 29-36 (1982).

Mervis, C.

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Mitchell, J. L.

W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard (Van Nostrand Reinhold, 1993).

Mrazek, P.

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Murphy, T. M.

T. M. Murphy and L. H. Finkel, “Shape representation by a network of V4-like cells,” Neural Netw. 20, 851-867 (2007).
[CrossRef]

Navon, D.

D. Navon, “Forest before trees: the precedence of global features in visual perception,” Cogn. Psychol. 9, 353-383 (1977).
[CrossRef]

O'Shea, P. D.

P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202 (2008).
[CrossRef]

Osher, S.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D (Amsterdam) 60, 259-268 (1992).
[CrossRef]

Panis, S.

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289 (2008).
[CrossRef]

Pappas, T. N.

T. N. Pappas and R. J. Safranek, “Perceptual criteria for image quality evaluation,” in Handbook of Image and Video Processing, A. C. Bovik, ed. (Academic, 2000).

Párraga, C. A.

C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
[CrossRef]

Pearlman, W. A.

Pearson, E. S.

E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481 (1957).

Pearson, J.

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

Pennebaker, W. B.

W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard (Van Nostrand Reinhold, 1993).

Pepion, R.

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

Peters, R. J.

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

Petersen, J. K.

J. K. Petersen, Understanding Surveillance Technologies (CRC, 2001).

Polat, U.

U. Polat and D. Sagi, “The architecture of perceptual spatial interactions,” Vis. Res. 34, 73-78 (1994).
[CrossRef]

U. Polat and D. Sagi, “Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments,” Vis. Res. 33, 993-999 (1993).
[CrossRef]

Potter, M. C.

J. S. Bruner and M. C. Potter, “Interference in visual recognition,” Science 144, 424-425 (1964).
[CrossRef]

Poynton, C.

C. Poynton, “The rehabilitation of gamma,” Proc. SPIE 3299, 232-249 (1998).
[CrossRef]

Pratt, W. K.

W. K. Pratt, Digital Image Processing: PIKS Inside, 3rd ed. (Wiley, 2001).

Ritzel, R.

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

Rosch, E.

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Rosell, F. A.

F. A. Rosell and R. H. Willson, “Recent psychophysical experiments and the display signal-to-noise ratio concept,” in Perception of Displayed Information, L. Biberman, ed. (Plenum, 1973), pp. 167-232.

Roufs, J. A. J.

J. A. J. Roufs, “Perceptual image quality: concept and measurement,” Philips J. Res. 47, 35-62 (1992).

Rouse, D.

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

Rouse, D. M.

D. M. Rouse and S. S. Hemami, “Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM,” Proc. SPIE 6806, 680615.1-680615.14 (2008).

D. M. Rouse and S. S. Hemami, “Quantifying the use of structure in cognitive tasks,” Proc. SPIE 6492, 64921O (2007).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Natural image utility assessment using image contours,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2009), pp. 2217-2220.

D. M. Rouse and S. S. Hemami, “Understanding and simplifying the structural similarity metric,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2008), pp. 1188-1191.

M. D. Gaubatz, D. M. Rouse, and S. S. Hemami, “MeTriX MuX,” http://foulard.ece.cornell.edu/gaubatz/metrix_mux.

Rudin, L. I.

L. I. Rudin, S. Osher, and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D (Amsterdam) 60, 259-268 (1992).
[CrossRef]

Sabir, M. F.

H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451 (2006).
[CrossRef]

Safranek, R. J.

T. N. Pappas and R. J. Safranek, “Perceptual criteria for image quality evaluation,” in Handbook of Image and Video Processing, A.C.Bovik, ed. (Academic, 2000).

R. J. Safranek and J. D. Johnston, “A perceptually tuned sub-band image coder with image dependent quantization and post-quantization data compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1989), pp. 1945-1948.

Sagi, D.

U. Polat and D. Sagi, “The architecture of perceptual spatial interactions,” Vis. Res. 34, 73-78 (1994).
[CrossRef]

U. Polat and D. Sagi, “Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments,” Vis. Res. 33, 993-999 (1993).
[CrossRef]

Sankur, B.

I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233 (2002).
[CrossRef]

Sayood, K.

I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233 (2002).
[CrossRef]

Sheikh, H. R.

H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15, 430-444 (2006).
[CrossRef]

H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451 (2006).
[CrossRef]

H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128(2005).
[CrossRef]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.

Simoncelli, E. P.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
[CrossRef]

M. J. Wainwright and E. P. Simoncelli, "Scale mixtures of Gaussians and the statistics of natural images," in Advances in Neural Information Processing Systems, S. A. Solla, T. K. Leen, and K.-R. Müller, eds. (MIT, 2000), pp. 855-861.

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 1995), Vol. 3, pp. 444-447.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems, and Computers (IEEE, 2003), Vol. 2, pp. 1398-1402.

Snedecor, G. W.

G. W. Snedecor and W. G. Cochran, Statistical Methods, 8th ed. (Iowa State, 1989).

Solomon, J. A.

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

A. B. Watson and J. A. Solomon, “Model of visual contrast gain control and pattern masking,” J. Opt. Soc. Am. A 14, 2379-2391 (1997).
[CrossRef]

Standardization, International Organization for

International Organization for Standardization, “Information technology--digital compression and coding of continuous-tone still images--requirements and guidelines,” ITU-T T.81 (International Telecommunication Union, 1992).

Stange, I. W.

C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
[CrossRef]

Starck, J.-L.

J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
[CrossRef]

Steidl, G.

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Stockham, T.

T. Stockham, “Image processing in the context of a visual model,” Proc. IEEE 60, 828-842 (1972).
[CrossRef]

Strohmeier, D.

D. Strohmeier and G. Tech, "Sharp, bright, three-dimensional: open profiling of quality for mobile 3DTV coding methods," Proc. SPIE 7542, 75420T (2010).
[CrossRef]

Sullivan, G. D.

M. A. Georgeson and G. D. Sullivan, "Contrast constancy: deblurring in human vision by spatial frequency channels," J. Physiol. 252, 627-656 (1975).

Swets, J. A.

D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Peninsula, 1988).

Tal, D.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.

Taubman, D. S.

D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards, and Practice (Kluwer Academic, 2002).

Tech, G.

D. Strohmeier and G. Tech, "Sharp, bright, three-dimensional: open profiling of quality for mobile 3DTV coding methods," Proc. SPIE 7542, 75420T (2010).
[CrossRef]

Teo, P.

P. Teo and D. Heeger, “Perceptual image distortion,” Proc. SPIE 2179, 127-141 (1994).
[CrossRef]

Terry, M. E.

R. A. Bradley and M. E. Terry, “The rank analysis of incomplete block designs I: The method of paired comparisons,” Biometrika 39, 324-345 (1952).

Tolhurst, D. J.

C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
[CrossRef]

Torralba, A.

A. Torralba, “How many pixels make an image?,” Vis. Neurosci. 26, 123-131 (2009).
[CrossRef]

Troscianko, T.

C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
[CrossRef]

Ullman, S.

S. Ullman, High-Level Vision: Object Recognition and Visual Cognition (MIT, 1996).

Valentine, T.

J. P. Davis and T. Valentine, “CCTV on trial: matching video images with the defendant in the dock,” Appl. Cogn. Psychol. 23, 482-505 (2009).
[CrossRef]

van Meeteren, A.

Vandekerckhove, J.

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289(2008).
[CrossRef]

Villasenor, J.

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

Vollmerhausen, R. H.

R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
[CrossRef]

Wagemans, J.

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289(2008).
[CrossRef]

Wainwright, M. J.

M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
[CrossRef]

M. J. Wainwright and E. P. Simoncelli, "Scale mixtures of Gaussians and the statistics of natural images," in Advances in Neural Information Processing Systems, S. A. Solla, T. K. Leen, and K.-R. Müller, eds. (MIT, 2000), pp. 855-861.

Wang, Z.

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems, and Computers (IEEE, 2003), Vol. 2, pp. 1398-1402.

H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.

Watson, A. B.

A. B. Watson and J. A. Solomon, “Model of visual contrast gain control and pattern masking,” J. Opt. Soc. Am. A 14, 2379-2391 (1997).
[CrossRef]

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

A. B. Watson, “DCT quantization matrices visually optimized for individual images,” Proc. SPIE 1913, 202-216 (1993).
[CrossRef]

Weickert, J.

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Welk, M.

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Willsky, A. S.

M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
[CrossRef]

Willson, R. H.

F. A. Rosell and R. H. Willson, "Recent psychophysical experiments and the display signal-to-noise ratio concept," in Perception of Displayed Information, L. Biberman, ed. (Plenum, 1973), pp. 167-232.

Wilson, S.

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video; evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

Wolf, S.

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

Yang, G. Y.

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

Yuille, A. L.

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

Zhong, S.

S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 710-732 (1992).
[CrossRef]

Zhu, S. C.

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

Appl. Cogn. Psychol. (1)

J. P. Davis and T. Valentine, “CCTV on trial: matching video images with the defendant in the dock,” Appl. Cogn. Psychol. 23, 482-505 (2009).
[CrossRef]

Appl. Comput. Harmon. Anal. (1)

M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky, “Random cascades on wavelet trees and their use in analyzing and modeling natural images,” Appl. Comput. Harmon. Anal. 11, 89-123 (2001).
[CrossRef]

Bell Syst. Tech. J. (1)

R. W. Hamming, "Error detecting and error correcting codes," Bell Syst. Tech. J. 29, 147-160 (1950).

Biometrika (2)

R. A. Bradley and M. E. Terry, “The rank analysis of incomplete block designs I: The method of paired comparisons,” Biometrika 39, 324-345 (1952).

E. C. Fieller, H. O. Hartley, and E. S. Pearson, “Tests for rank correlation coefficients. I,” Biometrika 44, 470-481(1957).

Cogn. Psychol. (3)

D. Navon, “Forest before trees: the precedence of global features in visual perception,” Cogn. Psychol. 9, 353-383(1977).
[CrossRef]

I. Biederman and G. Ju, "Surface versus edge-based determinants of visual recognition," Cogn. Psychol. 20, 38-64 (1988).
[CrossRef]

E. Rosch, C. Mervis, W. Gray, D. Johnson, and P. Boyes-Braem, “Basic objects in natural categories,” Cogn. Psychol. 8, 382-439 (1976).
[CrossRef]

Curr. Opin. Neurobiol. (1)

K. Grill-Spector, “The neural basis of object perception,” Curr. Opin. Neurobiol. 13, 159-166 (2003).
[CrossRef]

Econ. Lett. (1)

C. M. Jarque and A. K. Bera, “Efficient tests for normality, homoscedasticity, and serial independence of regression residuals,” Econ. Lett. 6, 255-259 (1980).
[CrossRef]

IEEE Trans. Commun. (1)

A. M. Eskicioglu and P. S. Fisher, “Image quality measures and their performance,” IEEE Trans. Commun. 43, 2959-2965(1995).
[CrossRef]

IEEE Trans. Image Process. (9)

H. R. Sheikh, A. C. Bovik, and G. de Veciana, “An information fidelity criterion for image quality assessment using natural scene statistics,” IEEE Trans. Image Process. 14, 2117-2128(2005).
[CrossRef]

W. Ma and B. S. Manjunath, “Edgeflow: a technique for boundary detection and segmentation,” IEEE Trans. Image Process. 9, 1375-1388 (2000).
[CrossRef]

A. B. Watson, G. Y. Yang, J. A. Solomon, and J. Villasenor, “Visibility of wavelet quantization noise,” IEEE Trans. Image Process. 6, 1164-1175 (1997).
[CrossRef]

N. Damera-Venkata, T. D. Kite, W. S. Geisler, B. L. Evans, and A. C. Bovik, “Image quality assessment based on a degradation model,” IEEE Trans. Image Process. 9, 636-650 (2000).
[CrossRef]

D. M. Chandler and S. S. Hemami, “VSNR: a wavelet-based visual signal-to-noise ratio for natural images,” IEEE Trans. Image Process. 16, 2284-2298 (2007).
[CrossRef]

H. R. Sheikh, M. F. Sabir, and A. C. Bovik, “A statistical evaluation of recent full reference image quality assessment algorithms,” IEEE Trans. Image Process. 15, 3440-3451(2006).
[CrossRef]

J.-L. Starck, M. Elad, and D. L. Donoho, “Image decomposition via the combination of sparse representations and a variational approach,” IEEE Trans. Image Process. 14, 1570-1582 (2005).
[CrossRef]

Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600-612 (2004).
[CrossRef]

H. R. Sheikh and A. C. Bovik, “Image information and visual quality,” IEEE Trans. Image Process. 15, 430-444(2006).
[CrossRef]

IEEE Trans. Inf. Theory (1)

J. L. Mannos and D. J. Sakrison, "The effects of a visual fidelity criterion on the encoding of images," IEEE Trans. Inf. Theory 20, 525-536 (1974).
[CrossRef]

IEEE Trans. Pattern Anal. Mach. Intell. (3)

S. Mallat and S. Zhong, “Characterization of signals from multiscale edges,” IEEE Trans. Pattern Anal. Mach. Intell. 14, 710-732 (1992).
[CrossRef]

J. Canny, “A computational approach to edge detection,” IEEE Trans. Pattern Anal. Mach. Intell. PAMI-8, 679-698 (1986).
[CrossRef]

S. Konishi, A. L. Yuille, J. M. Coughlan, and S. C. Zhu, “Statistical edge detection: learning and evaluating edge cues,” IEEE Trans. Pattern Anal. Mach. Intell. 25, 57-74 (2003).
[CrossRef]

J. Am. Stat. Assoc. (1)

M. B. Brown and A. B. Forsythe, “Robust tests for the equality of variances,” J. Am. Stat. Assoc. 69, 364-367 (1974).
[CrossRef]

J. Electron. Imaging (1)

I. Avcıbaş, B. Sankur, and K. Sayood, “Statistical evaluation of image quality measures,” J. Electron. Imaging 11, 206-233(2002).
[CrossRef]

J. Opt. Soc. Am. (2)

J. Opt. Soc. Am. A (5)

J. Physiol. (1)

M. A. Georgeson and G. D. Sullivan, "Contrast constancy: deblurring in human vision by spatial frequency channels," J. Physiol. 252, 627-656 (1975).

Neural Netw. (1)

T. M. Murphy and L. H. Finkel, “Shape representation by a network of V4-like cells,” Neural Netw. 20, 851-867 (2007).
[CrossRef]

NeuroImage (1)

S. O. Dumoulin, S. C. Dakin, and R. F. Hess, “Sparsely distributed contours dominate extra-striate responses to complex scenes,” NeuroImage 42, 890-901 (2008).
[CrossRef]

Opt. Eng. (3)

R. H. Vollmerhausen, E. Jacobs, and R. G. Driggers, “New metric for predicting target acquisition performance,” Opt. Eng. 43, 2806-2818 (2004).
[CrossRef]

J. M. Irvine, B. A. Eckstein, R. A. Hummel, R. J. Peters, and R. Ritzel, “Evaluation of the tactical utility of compressed imagery,” Opt. Eng. 41, 1262-1273 (2002).
[CrossRef]

P. D. O'Shea, E. L. Jacobs, and R. L. Espinola, “Effects of image compression on sensor performance,” Opt. Eng. 47, 013202(2008).
[CrossRef]

Pattern Recogn. Lett. (1)

T. Fawcett, “An introduction to ROC analysis,” Pattern Recogn. Lett. 27, 861-874 (2006).
[CrossRef]

Percept. Psychophys. (1)

C. A. Collin and P. A. McMullen, “Subordinate-level categorization relies on high spatial frequencies to a greater degree than basic-level categorization,” Percept. Psychophys. 67, 354-364 (2005).

Perception (2)

C. A. Collin, “Spatial-frequency thresholds for object categorisation at basic and subordinate levels,” Perception 35, 41-52(2006).
[CrossRef]

S. Panis, J. De Winter, J. Vandekerckhove, and J. Wagemans, “Identification of everyday objects on the basis of fragmented outline versions,” Perception 37, 271-289(2008).
[CrossRef]

Philips J. Res. (1)

J. A. J. Roufs, “Perceptual image quality: concept and measurement,” Philips J. Res. 47, 35-62 (1992).

Physica D (Amsterdam) (1)

L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D (Amsterdam) 60, 259-268 (1992).
[CrossRef]

Proc. IEEE (2)

D. Granrath, “The role of human visual models in image processing,” Proc. IEEE 69, 552-561 (1981).
[CrossRef]

T. Stockham, “Image processing in the context of a visual model,” Proc. IEEE 60, 828-842 (1972).
[CrossRef]

Proc. R. Soc. Lond. Ser. B (1)

D. Marr and E. Hildreth, “Theory of edge detection,” Proc. R. Soc. Lond. Ser. B 207, 187-217 (1980).
[CrossRef]

Proc. SPIE (13)

E. C. Larson and D. M. Chandler, “The most apparent distortion: a dual strategy for full reference image quality,” Proc. SPIE 7242, 72420S (2009).
[CrossRef]

J. C. Leachtenauer, “Resolution requirements and the Johnson criteria revisited,” Proc. SPIE 1-15 (2003).
[CrossRef]

A. B. Watson, “DCT quantization matrices visually optimized for individual images,” Proc. SPIE 1913, 202-216 (1993).
[CrossRef]

P. Teo and D. Heeger, “Perceptual image distortion,” Proc. SPIE 2179, 127-141 (1994).
[CrossRef]

D. Rouse, R. Pepion, S. Hemami, and P. Le Callet, “Image utility assessment and a relationship with image quality assessment,” Proc. SPIE 7240 (2009).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Quantifying the use of structure in cognitive tasks,” Proc. SPIE 6492, 64921O (2007).
[CrossRef]

H. de Ridder and G. M. Majoor, “Numerical category scaling: an efficient method for assessing digital image coding impairments,” Proc. SPIE 1249, 65-77 (1990).
[CrossRef]

D. M. Rouse and S. S. Hemami, “Analyzing the role of visual structure in the recognition of natural image content with multi-scale SSIM,” Proc. SPIE 6806, 680615.1-680615.14 (2008).

D. Strohmeier and G. Tech, "Sharp, bright, three-dimensional: open profiling of quality for mobile 3DTV coding methods," Proc. SPIE 7542, 75420T (2010).
[CrossRef]

C. Poynton, “The rehabilitation of gamma,” Proc. SPIE 3299, 232-249 (1998).
[CrossRef]

D. M. Chandler, K. H. Lim, and S. S. Hemami, “Effects of spatial correlations and global precedence on the visual fidelity of distorted images,” Proc. SPIE 6057, 60570F (2006).
[CrossRef]

C. G. Ford, M. A. McFarland, and I. W. Stange, “Subjective video quality assessment methods for recognition tasks,” Proc. SPIE 7240, 72400Z (2009).
[CrossRef]

D. Rouse, R. Pepion, P. Le Callet, and S. Hemami, “Tradeoffs in subjective testing methods for image video quality assessment,” Proc. SPIE 7527, 75270F (2010).
[CrossRef]

Psychol. Sci. (1)

A. M. Burton, S. Wilson, M. Cowan, and V. Bruce, “Face recognition in poor-quality video; evidence from security surveillance,” Psychol. Sci. 10, 243-248 (1999).
[CrossRef]

Psychometrika (1)

D. E. Critchlow and M. A. Fligner, “Paired comparisons, triple comparisons, and ranking experiments as generalized linear models, and their implementation on GLIM,” Psychometrika 56, 517-533 (1991).
[CrossRef]

Radiology (Oak Brook, Ill.) (1)

J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology (Oak Brook, Ill.) 143, 29-36 (1982).

Science (1)

J. S. Bruner and M. C. Potter, “Interference in visual recognition,” Science 144, 424-425 (1964).
[CrossRef]

SIAM J. Numer. Anal. (1)

G. Steidl, J. Weickert, T. Brox, P. Mrazek, and M. Welk, “On the equivalence of soft wavelet shrinkage, total variation diffusion, total variation regularization, and SIDEs,” SIAM J. Numer. Anal. 42, 686-713 (2004).
[CrossRef]

Signal Process., Image Commun. (2)

M. Carnec, P. Le Callet, and D. Barba, “Objective quality assessment of color images based on a generic perceptual reduced reference,” Signal Process., Image Commun. 23, 239-256(2008).
[CrossRef]

M. H. Brill, J. Lubin, P. Costa, S. Wolf, and J. Pearson, “Accuracy and cross-calibration of video quality metrics: new methods from ATIS/T1A1,” Signal Process., Image Commun. 19, 101-107 (2004).
[CrossRef]

Vis. Neurosci. (1)

A. Torralba, “How many pixels make an image?,” Vis. Neurosci. 26, 123-131 (2009).
[CrossRef]

Vis. Res. (6)

U. Polat and D. Sagi, “Lateral interactions between spatial channels: suppression and facilitation revealed by lateral masking experiments,” Vis. Res. 33, 993-999 (1993).
[CrossRef]

U. Polat and D. Sagi, “The architecture of perceptual spatial interactions,” Vis. Res. 34, 73-78 (1994).
[CrossRef]

D. J. Field, A. Hayes, and R. Hess, “Contour integration by the human visual system: evidence for a local “association field”,” Vis. Res. 33, 173-193 (1993).
[CrossRef]

C. A. Párraga, T. Troscianko, and D. J. Tolhurst, “The effects of amplitude-spectrum statistics on foveal and peripheral discrimination of changes in natural images, and a multi-resolution model,” Vis. Res. 45, 3145-3168 (2005).
[CrossRef]

N. Brady and D. J. Field, “What's constant in contrast constancy? The effects of scaling on the perceived contrast of bandpass patterns,” Vis. Res. 35, 739-756 (1995).
[CrossRef]

G. Loffler, “Perception of contours and shapes: low and intermediate stage mechanisms,” Vis. Res. 48, 2106-2127(2008).
[CrossRef]

Other (54)

International Telecommunication Union, "Subjective video quality assessment methods for multimedia applications," ITU-T P.910 (International Telecommunication Union, 2008).

Numerical category scaling, adjective category scale, and categorical sort are alternative names for the ACR test method. The subjective assessment methodology for video quality (SAMVIQ) generally obtains more accurate perceived quality scores and avoids many of the problems that arise when observers refrain from using the ends of the quality scale. Both ACR and SAMVIQ yield very similar perceived quality scores for our collection of distorted images.

"Multimedia group test plan" (2008), draft version 1.21, http://www.vqeg.org.

Prior work in the context of perceived quality often denotes a perceived quality score as a mean opinion score.

The perceived quality scores of unrecognizable images (those with perceived utility scores less than −15) range from 1 to 1.4, with an average of 1.07, a standard deviation of 0.089, and a median of 1.04.

G. W. Snedecor and W. G. Cochran, Statistical Methods, 8th ed. (Iowa State, 1989).

J. L. Devore, Probability and Statistics for Engineering and the Sciences, 5th ed. (Duxbury, 2000).

Only six BLOCK distorted images have perceived utility scores greater than −15, so results corresponding to the BLOCK distorted images provide little insight into the relationship between quality and utility. Furthermore, these images have perceived quality scores in the range [1,1.3] (i.e., “bad” quality) and perceived utility scores in the range [−13,4] (i.e., effectively useless).

Values of Conf(STS(γ)>STS+HPF(γ)) less than 0.025 and greater than 0.975 indicate that the subjective scores for TS and TS+HPF distorted images with equal γ are statistically different at the 95% confidence level (i.e., a two-sided z test). Values of Conf(STS(γ)>STS+HPF(γ)) less than 0.05 indicate that the subjective score for the TS distorted image is statistically smaller than the subjective score for a TS+HPF distorted image formed from the same reference image using the same γ at the 95% confidence level (i.e., a one-sided z test). Similarly, values of Conf(STS(γ)>STS+HPF(γ)) greater than 0.95 indicate that the subjective score for the TS distorted image is statistically greater than the subjective score for a TS+HPF distorted image with the same γ.
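The threshold logic in the note above can be made concrete with a short sketch. This is purely illustrative: the function name and the exact message strings are our own, not code from the paper; only the 0.025/0.975 (two-sided) and 0.05/0.95 (one-sided) cutoffs are taken from the note.

```python
# Illustrative sketch of the confidence-value interpretation described above.
# The function name and message strings are hypothetical; the thresholds are
# the two-sided (0.025/0.975) and one-sided (0.05/0.95) 95% cutoffs from the note.
def interpret_confidence(conf):
    """Return the 95%-level conclusions implied by Conf(S_TS > S_TS+HPF)."""
    results = []
    # Two-sided z test: the two subjective scores differ if conf falls in
    # either extreme tail.
    if conf < 0.025 or conf > 0.975:
        results.append("scores differ (two-sided z test, 95%)")
    # One-sided z tests give the direction of the difference.
    if conf < 0.05:
        results.append("TS score statistically smaller (one-sided, 95%)")
    if conf > 0.95:
        results.append("TS score statistically greater (one-sided, 95%)")
    return results or ["no statistical difference at 95%"]
```

Note that a value such as 0.96 passes the one-sided test but not the two-sided one, which is why the note states both sets of thresholds separately.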

R. J. Safranek and J. D. Johnston, “A perceptually tuned sub-band image coder with image dependent quantization and post-quantization data compression,” in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (IEEE, 1989), pp. 1945-1948.

S. J. Daly, "The visible difference predictor: an algorithm for the assessment of image fidelity," in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 179-206.

J. Lubin, "The use of psychophysical data and models in the analysis of display system performance," in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 163-178.

S. A. Klein, "Image quality and image compression: a psychophysicist's viewpoint," in Digital Images and Human Vision, A. B. Watson, ed. (MIT, 1993), pp. 73-88.

T. N. Pappas and R. J. Safranek, "Perceptual criteria for image quality evaluation," in Handbook of Image and Video Processing, A. C. Bovik, ed. (Academic, 2000).

The experiments described in this paper augment the experiments described in previous publications by the authors.

W. B. Pennebaker and J. L. Mitchell, JPEG: Still Image Data Compression Standard (Van Nostrand Reinhold, 1993).

“Independent JPEG Group,” http://www.ijg.org.

International Organization for Standardization, “Information technology--digital compression and coding of continuous-tone still images--requirements and guidelines,” ITU-T T.81 (International Telecommunication Union, 1992).

D. S. Taubman and M. W. Marcellin, JPEG2000: Image Compression Fundamentals, Standards, and Practice (Kluwer Academic, 2002).

The National Imagery Interpretability Rating Scale (NIIRS) has been associated with image quality. However, the NIIRS characterizes an image's quality based on the ability of a photo interpreter to detect, recognize, and identify objects in an image. Various versions of the NIIRS have been designed for specific image applications. The NIIRS is more compatible with the definition of utility used in this paper.

H. R. Sheikh, Z. Wang, L. Cormack, and A. C. Bovik, “LIVE image quality assessment database release 2,” http://live.ece.utexas.edu/research/quality.

D. Chandler, “The CSIQ database,” http://vision.okstate.edu/index.php?loc=csiq.

A visually lossless image is visually indistinguishable from a reference image.

E. P. Simoncelli and W. T. Freeman, “The steerable pyramid: a flexible architecture for multi-scale derivative computation,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 1995), Vol. 3, pp. 444-447.

The high-pass residual generated by the steerable pyramid is not used.

M. D. Gaubatz, D. M. Rouse, and S. S. Hemami, “MeTriX MuX,” http://foulard.ece.cornell.edu/gaubatz/metrix_mux.

Video Quality Experts Group, “VQEG final report of FR-TV phase II validation test” (2003), http://www.vqeg.org.

Video Quality Experts Group, "Final report from the VQEG on the validation of objective models of multimedia quality assessment, phase I" (2008), version 2.6, http://www.vqeg.org.

We use “NICE” to generically refer to both the single-scale and multiscale implementations of NICE, and specific implementations of NICE (e.g., NICECanny) will be identified when necessary.

Using the fine-scale steerable pyramid filters to identify image contours for MS-NICE leads to performance statistically similar to that of the single-scale implementation of NICE using the Sobel and Canny edge detectors.

In this paper, “natural images” are formed using imaging devices that sense the natural environment over the visible portion of the electromagnetic spectrum (e.g., digital cameras). Computer-generated images and other types of synthetic images are not considered natural images.

The notation MS-NICES≤2 is used to refer to both MS-NICE1 and MS-NICE2.

D. Martin, C. Fowlkes, D. Tal, and J. Malik, "A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics," in Proceedings of the 8th International Conference on Computer Vision (IEEE, 2001), pp. 416-423.

The local variance comparison used by SSIM corresponds to an analysis of high-frequency content and does not need to be removed.

Z. Wang, E. P. Simoncelli, and A. C. Bovik, “Multi-scale structural similarity for image quality assessment,” in Proceedings of the 37th IEEE Asilomar Conference on Signals, Systems, and Computers (IEEE, 2003), Vol. 2, pp. 1398-1402.

D. M. Rouse and S. S. Hemami, “Natural image utility assessment using image contours,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2009), pp. 2217-2220.

W. K. Pratt, Digital Image Processing: PIKS Inside, 3rd ed. (Wiley, 2001).

C. Giardina and E. Dougherty, Morphological Methods in Image and Signal Processing (Prentice Hall, 1988).

The Hamming distance counts the number of dissimilar elements between two vectors.
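As a minimal illustration of the definition in the note above (the function name is our own, not from a library):

```python
# Minimal sketch of the Hamming distance as described in the note above:
# the count of positions at which two equal-length sequences disagree.
def hamming_distance(u, v):
    """Count the positions where two equal-length sequences differ."""
    if len(u) != len(v):
        raise ValueError("sequences must have equal length")
    return sum(a != b for a, b in zip(u, v))
```

For example, the binary vectors 1011 and 1110 differ in two positions, so their Hamming distance is 2.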

D. M. Rouse and S. S. Hemami, “Understanding and simplifying the structural similarity metric,” in Proceedings of the IEEE International Conference on Image Processing (IEEE, 2008), pp. 1188-1191.

J. K. Petersen, Understanding Surveillance Technologies (CRC, 2001).

R. L. De Valois and K. K. De Valois, Spatial Vision (Oxford University, 1990).

The subscript k for Nk accounts for decimated wavelet decompositions, such as the steerable pyramid, whose channels in coarser image scales have fewer coefficients than channels in finer image scales.

C. Ford, P. Raush, and K. Davis, eds., Video Quality in Public Safety Conference (Institute for Telecommunication Sciences, 2009).

D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Peninsula, 1988).

J. C. Leachtenauer and R. G. Driggers, Surveillance and Reconnaissance Imaging Systems (Artech House, 2001).

J. Johnson, “Analysis of image forming systems,” in Image Intensifier Symposium (Fort Belvoir, 1958).

L. M. Biberman, ed., Perception of Displayed Information (Plenum, 1973).


M. J. Wainwright and E. P. Simoncelli, "Scale mixtures of Gaussians and the statistics of natural images," in Advances in Neural Information Processing Systems, S. A. Solla, T. K. Leen, and K.-R. Müller, eds. (MIT, 2000), pp. 855-861.

B. W. Keelan, Handbook of Image Quality: Characterization and Prediction (CRC, 2002).

F. A. Rosell and R. H. Willson, "Recent psychophysical experiments and the display signal-to-noise ratio concept," in Perception of Displayed Information, L. Biberman, ed. (Plenum, 1973), pp. 167-232.

The Johnson criteria were based on a study with a specific set of objects, and it is possible that different objects would suggest different criteria for object recognition.

S. Ullman, High-Level Vision: Object Recognition and Visual Cognition (MIT, 1996).



Figures (10)

Fig. 1

Original reference airplane image and distorted images illustrating the five distortions described in Subsection 2A. The JPEG and BLOCK distortions are introduced by quantizing coefficients of a block-based DCT. J2K+DCQ distortions result from quantizing coefficients of a discrete wavelet transform according to the DCQ strategy [32]. TS distortions are induced via TV regularization to smooth texture regions with limited disruption to edges. A HPF that removes low-frequency signal information from images with TS distortions produces the TS + HPF distortions. Table 1 contains descriptions of each of the distortions.

Fig. 2

Natural images serving as reference images for the experiments.

Fig. 3

Four images from the airplane/J2K+DCQ sequence used in Experiment 1 (Subsection 2B). J2K+DCQ distorted images are parameterized using the encoding bitrate R in bits per pixel (see Table 1). The encoding bitrate of the visually lossless airplane image specified by the DCQ strategy is R_VL = 1.85 bits/pixel. The perceived utility (U) scores and perceived quality (Q) scores obtained via the subjective experiments are provided for each image.

Fig. 4

Quality is not a suitable proxy for utility. The scatterplot shows the relationship between perceived utility scores and the perceived quality scores for nine reference images. The symbols indicate the reference image corresponding to each subjective score. The RT and the REC are denoted on the axis corresponding to perceived utility scores. The quality adjectives are denoted on the axis corresponding to the perceived quality scores. Standard error bars have been included for both subjective scores. In each figure, the fitted nonlinear mapping from the abscissa to the ordinate is denoted by the solid curve, and the 95% PI for the fitted nonlinear mapping is denoted by the dashed curves. See also Fig. 5.

Fig. 5

Perceived utility versus perceived quality, where the symbols indicate the distortion (cf. Fig. 1) corresponding to each subjective score. See the caption of Fig. 4.

Fig. 6

Perceived quality either decreases or remains the same when low-frequency content is disrupted (i.e., for TS+HPF distortions relative to TS distortions). The figures show the confidence that the perceived quality (Q) score of the TS distortions is greater than the perceived quality score for TS+HPF distortions with equal γ, as a function of the perceived quality score of the TS distortions. See Subsection 4B for additional details regarding the confidence analysis and its interpretation.

Fig. 7

Disruptions to low-frequency content do not affect the perceived utility of most images. The figures show the confidence that the perceived utility (U) score of the TS distortions is greater than the perceived utility score for TS+HPF distortions with equal γ, as a function of the perceived utility score of the TS distortions. Refer to the caption of Fig. 6.

Fig. 8

Example showing that the skier TS distorted image has statistically greater quality than the TS+HPF distorted image with equal γ but statistically lower utility. Removing the low-frequency content from the skier image (i.e., the TS+HPF distorted image) introduces “halos” near edges that enhance the visibility of the skier. See also Figs. 6, 7.

Fig. 9

Differences in perceived quality (Q) do not imply differences in perceived utility (U). In terms of perceived utility, the distorted images in the middle column are statistically equivalent to the distorted images in the left column. However, in terms of perceived quality, the distorted images in the middle column are statistically equivalent to the distorted images in the right column. The images have been cropped from their original versions.

Fig. 10

VIF is more sensitive to distortions at finer image scales (i.e., high spatial frequencies) than to those at coarser image scales (i.e., low spatial frequencies), whereas VIF* is more sensitive to disruptions of coarser scale content than finer scale content. Figures 10a, 10b respectively show the image scale measurements computed by VIF and VIF* for the airplane image with J2K+DCQ (Q = 3.8, U = 77), TS (Q = 4.0, U = 76), and TS+HPF (Q = 3.2, U = 69) distortions. These images have statistically equivalent perceived utility, but the perceived quality of the TS+HPF distorted image is statistically lower than that of the other two distorted images. The pooled image scale measurements for VIF reflect the images' similarity in perceived utility but not their differences in perceived quality. The pooled image scale measurements for VIF* reflect their differences in perceived quality but not their similarity in perceived utility.

Tables (4)


Table 1 Summary of Image Distortions Studied


Table 2 Results Summarizing the Relationship between Perceived Quality and Perceived Utility


Table 3 Statistics Summarizing the Performance of Estimators as Utility Estimators


Table 4 Statistics Summarizing the Performance of Objective Estimators as Quality Estimators

Equations (11)


\[ f(q) = a \log(q) + b \]
\[ z_{\mathrm{stat}} = \frac{S_{\mathrm{TS}}(\gamma) - S_{\mathrm{TS+HPF}}(\gamma)}{\sqrt{\sigma^2_{S_{\mathrm{TS}}(\gamma)} + \sigma^2_{S_{\mathrm{TS+HPF}}(\gamma)}}} \]
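The confidence analysis of Figs. 6, 7 follows from this z statistic: the difference between a TS score and the TS+HPF score at the same γ, normalized by the combined standard errors, then mapped through the standard normal CDF. A minimal sketch; the scores and standard errors in the example are hypothetical:

```python
import math

def z_statistic(s_ts, se_ts, s_tshpf, se_tshpf):
    """z statistic comparing a TS subjective score with the TS+HPF score
    at the same gamma, given each score's standard error."""
    return (s_ts - s_tshpf) / math.sqrt(se_ts**2 + se_tshpf**2)

def confidence(z):
    """One-sided confidence that the TS score exceeds the TS+HPF score:
    the standard normal CDF evaluated at z."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical scores (4.0 vs 3.2) with standard errors of 0.1 each.
z = z_statistic(4.0, 0.1, 3.2, 0.1)
print(round(z, 2))  # → 5.66
```

A confidence near 1 indicates the TS score is statistically greater; near 0.5, the two scores are statistically equivalent.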
\[ C_{\mathrm{rms}}(E) = \frac{1}{\mu_{L(X)}} \left[ \frac{1}{M} \sum_{i=1}^{M} \left( L(E_i + \mu_X) - \mu_{L(E + \mu_X)} \right)^2 \right]^{1/2} , \]
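The RMS contrast of an error image E above can be sketched directly from the definition: luminances of E offset by the reference mean, with the standard deviation normalized by the mean luminance of the reference X. The power-law luminance mapping L used here is an assumption for illustration:

```python
import numpy as np

def rms_contrast(error, reference, L=lambda v: v ** 2.2):
    """RMS contrast of an error image: std. dev. of the luminance of
    (error + mean of reference), normalized by the mean luminance of
    the reference. L is a placeholder gamma-style luminance mapping."""
    lum = L(error + reference.mean())
    return np.sqrt(np.mean((lum - lum.mean()) ** 2)) / L(reference).mean()

ref = np.full((4, 4), 0.5)
print(rms_contrast(np.zeros((4, 4)), ref))  # zero error → zero contrast
```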
\[ \mathrm{NICE} = \frac{\sum_{s=1}^{S} d_H\!\left( B_s^E, \hat{B}_s^E \right)}{\sum_{s=1}^{S} N_{B_s}} , \]
\[ B_s(i) = \begin{cases} 1 & M_s(i) > \beta_s \ \text{and} \ i \in I_s \\ 0 & \text{else.} \end{cases} \]
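With the binary contour maps in hand, NICE reduces to a Hamming distance summed over scales and normalized by the number of reference contour pixels. A minimal sketch assuming the per-scale edge maps have already been extracted (the edge detection and dilation steps that produce them are omitted):

```python
import numpy as np

def nice(ref_maps, test_maps):
    """Sketch of NICE: Hamming distance between reference and test
    binary contour maps, summed over scales, normalized by the total
    number of reference contour pixels. Each argument is a list of
    boolean arrays, one per scale."""
    hamming = sum(int(np.sum(b != bh)) for b, bh in zip(ref_maps, test_maps))
    n_edge = sum(int(np.sum(b)) for b in ref_maps)
    return hamming / n_edge

# Toy single-scale example: maps differ at one pixel, reference has
# two contour pixels.
ref = [np.array([[True, False], [False, True]])]
test = [np.array([[True, True], [False, True]])]
print(nice(ref, test))  # → 0.5
```

Larger NICE values indicate greater contour degradation; identical contour maps give NICE = 0.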
\[ y_n = \frac{t_n^p}{b^q + \sum_{m \in M_n} w_m t_m^q} , \]
\[ \mathrm{VIF} \approx \mathrm{VIF}_{\mathrm{contour}} + \mathrm{VIF}_{\mathrm{texture}} , \]
\[ \mathrm{VIF}^{*} = \frac{\sum_{k=1}^{K} \frac{1}{B_k} \, \mathrm{IFC}(C_k, F_k)}{\sum_{k=1}^{K} \frac{1}{B_k} \, \mathrm{IFC}(C_k, E_k)} , \]
\[ \mathrm{VIF} = \frac{\sum_{k=1}^{K} \mathrm{IFC}(C_k, F_k)}{\sum_{k=1}^{K} \mathrm{IFC}(C_k, E_k)} . \]
\[ \mathrm{IFC}(C_k, F_k) = \sum_{b=1}^{B_k} \log_2 \left( \frac{\left| g_b^2 s_b^2 K_U + (\sigma_{V_b}^2 + \sigma_N^2) I \right|}{\left| (\sigma_{V_b}^2 + \sigma_N^2) I \right|} \right) \]
\[ \mathrm{IFC}(C_k, E_k) = \sum_{b=1}^{B_k} \log_2 \left( \frac{\left| s_b^2 K_U + \sigma_N^2 I \right|}{\left| \sigma_N^2 I \right|} \right) , \]
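The only difference between the VIF and VIF* pooling rules is the 1/B_k weighting of each subband's IFC term, which boosts coarse scales (few coefficients) relative to fine scales (many coefficients). A toy sketch with hypothetical per-subband IFC values and coefficient counts:

```python
def vif(ifc_dist, ifc_ref):
    """Standard VIF pooling: ratio of summed per-subband IFC terms for
    the distorted channel versus the reference channel."""
    return sum(ifc_dist) / sum(ifc_ref)

def vif_star(ifc_dist, ifc_ref, n_coeffs):
    """VIF* pooling: each subband's IFC term is divided by its
    coefficient count B_k before summation, re-weighting coarse
    scales relative to fine scales."""
    num = sum(d / b for d, b in zip(ifc_dist, n_coeffs))
    den = sum(r / b for r, b in zip(ifc_ref, n_coeffs))
    return num / den

# Hypothetical values for K = 2 subbands: a coarse band (B_1 = 1
# coefficient) and a fine band (B_2 = 4 coefficients).
ifc_dist, ifc_ref = [2.0, 1.0], [2.0, 4.0]
print(vif(ifc_dist, ifc_ref))               # → 0.5
print(vif_star(ifc_dist, ifc_ref, [1, 4]))  # → 0.75
```

In this toy example the coarse band is undistorted while the fine band loses most of its information; VIF* scores the image higher than VIF because it down-weights the heavily populated fine scale.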
