Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms were raised against the traditional error visibility approach: (1) it is based on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms and the good performance of structural and information-theory-based metrics have popularized the idea of their superiority over the error visibility approach. In this work we experimentally or analytically show that the above criticisms do not apply to error visibility metrics that use a general enough divisive normalization masking model. Therefore, the traditional divisive normalization metric [1] is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases including a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
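The divisive normalization masking model discussed above can be illustrated with a minimal sketch. This is a generic, toy implementation (illustrative parameter values and a simple Gaussian interaction kernel over coefficient index, chosen by us), not the specific fitted model of [1]:

```python
import numpy as np

def divisive_normalization(w, beta=0.1, gamma=2.0, sigma=1.5):
    """Normalize each transform coefficient by a pooled measure of the
    activity of its neighbors (illustrative parameters, not fitted)."""
    e = np.abs(w) ** gamma                      # local energy
    idx = np.arange(len(w), dtype=float)
    # Gaussian interaction kernel over coefficient index (toy neighborhood)
    H = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / sigma) ** 2)
    return np.sign(w) * e / (beta + H @ e)      # masked (normalized) response

def dn_metric(w_ref, w_dist, **kw):
    """Perceptual distance: Euclidean norm in the normalized domain."""
    return np.linalg.norm(divisive_normalization(w_ref, **kw)
                          - divisive_normalization(w_dist, **kw))
```

Because masking is absorbed into the normalization, a plain Euclidean distance in the normalized domain already accounts for it, which is one reason such metrics are easy to interpret in linear terms.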
Quality of Metrics on the LIVE Database (F-test): Probability That the Model in the Row Is Better Than the Model in the Column, for the Linear and Several Non-Linear Fits (a)

                 RMSE    (?)   MSSIM   VIF    DN
Linear Fit
  RMSE:           —     0.00   0.08   0.00   0.00
  (?):           1.00    —     1.00   0.00   0.00
  MSSIM:         0.92   0.00    —     0.00   0.00
  VIF:           1.00   1.00   1.00    —     1.00
  DN:            1.00   1.00   1.00   0.00    —
Four-Parameter Sigmoid Fit
  RMSE:           —     0.00   0.00   0.00   0.00
  (?):           1.00    —     0.64   0.00   0.09
  MSSIM:         1.00   0.36    —     0.00   0.04
  VIF:           1.00   1.00   1.00    —     1.00
  DN:            1.00   0.91   0.96   0.00    —
Five-Parameter Sigmoid Fit
  RMSE:           —     0.00   0.00   0.00   0.00
  (?):           1.00    —     0.58   0.00   0.14
  MSSIM:         1.00   0.42    —     0.00   0.10
  VIF:           1.00   1.00   1.00    —     1.00
  DN:            1.00   0.86   0.90   0.00    —
Fourth-Order Polynomial Fit
  RMSE:           —     0.12   1.00   0.00   0.00
  (?):           0.88    —     1.00   0.00   0.00
  MSSIM:         0.00   0.00    —     0.00   0.00
  VIF:           1.00   1.00   1.00    —     1.00
  DN:            1.00   1.00   1.00   0.00    —
(a) Highlighted probabilities indicate that the model in the row is better than the model in the column at the 90% confidence level, for the linear and several non-linear fits. Some of the models have non-Gaussian residuals, so for those the F-test result is not strictly correct.
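The probabilities in the table come from comparing the residual variances of the fitted models. A minimal sketch of such a one-sided F-test (assuming i.i.d. Gaussian residuals, which, as the footnote notes, does not always hold; the function and variable names are ours, and the actual databases and fits are not included here):

```python
import numpy as np
from scipy.stats import f  # F distribution

def prob_row_better(resid_row, resid_col):
    """Probability that the row model fits better (smaller residual
    variance) than the column model, via a one-sided F-test on the
    variance ratio. Assumes i.i.d. Gaussian residuals."""
    F_stat = np.var(resid_col, ddof=1) / np.var(resid_row, ddof=1)
    d_col, d_row = len(resid_col) - 1, len(resid_row) - 1
    # CDF of the F distribution at the observed variance ratio
    return f.cdf(F_stat, d_col, d_row)
```

A probability near 1.00 means the row model's fit is almost certainly tighter; values near 0.50 (as between MSSIM and the second model under the sigmoid fits) mean the test cannot separate the two.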
Table 4. Quality of Metrics on the TID Database (Excluding LIVE-like Distortion) (a)