Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms have been raised against the traditional error visibility approach: (1) it relies on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms, together with the good performance of structural and information-theory-based metrics, have popularized the idea that the newer approaches are superior to error visibility. In this work we show, either analytically or experimentally, that the above criticisms do not apply to error visibility metrics built on a sufficiently general divisive normalization masking model. Therefore, the traditional divisive normalization metric is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases covering a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
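To make the idea concrete, the following is a minimal sketch of a divisive-normalization error visibility measure, not the specific metric evaluated in the paper. It assumes a simple canonical form in which each transform coefficient is divided by a saturation constant plus a weighted sum of its neighbors' magnitudes (here, a hypothetical Gaussian interaction kernel `H` over coefficient positions); the quality distance is then the Euclidean norm between the normalized representations of the two signals.

```python
import numpy as np

def divisive_normalization(coeffs, b=0.1, sigma=1.0):
    """Divisive normalization of a 1-D coefficient vector:
    each coefficient is divided by a saturation constant b plus a
    Gaussian-weighted sum of its neighbors' magnitudes (masking).
    The kernel and constants here are illustrative assumptions."""
    n = coeffs.size
    idx = np.arange(n)
    # Gaussian interaction kernel between coefficient positions
    H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    H /= H.sum(axis=1, keepdims=True)  # row-normalize the weights
    return coeffs / (b + H @ np.abs(coeffs))

def dn_distance(x, y, **kw):
    """Error-visibility distance: Euclidean norm of the difference
    between the divisively normalized representations of x and y."""
    return np.linalg.norm(divisive_normalization(x, **kw)
                          - divisive_normalization(y, **kw))
```

Because the normalization is an (input-dependent) pointwise division, the resulting distance remains easy to analyze in linear terms around a given signal, which is the interpretability property the abstract refers to.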
© 2010 Optical Society of America