Multireader multicase (MRMC) variance analysis has become widely utilized to analyze observer studies for which the summary measure is the area under the receiver operating characteristic (ROC) curve. We extend MRMC variance analysis to binary data and also to generic study designs in which every reader may not interpret every case. A subset of the fundamental moments central to MRMC variance analysis of the area under the ROC curve (AUC) is found to be required. Through multiple simulation configurations, we compare our unbiased variance estimates to naïve estimates across a range of study designs, average percent correct, and numbers of readers and cases.
© 2007 Optical Society of America
The study of image quality often involves the use of psychophysical studies to evaluate an imaging system, or perhaps as a validation of model observer predictions for circumstances new to that model observer. Studies involving human readers are also central to the evaluation of new imaging technologies for which there is no alternative to the use of clinical images from actual patients. Just as important as the mean performance of the observer is the uncertainty of the measurement.
Previous publications have presented methods for the analysis of the uncertainty in the summary measure of observer performance using the multireader multicase (MRMC) paradigm, mainly in the context of analyzing the area under the receiver operating characteristic (ROC) curve [, , , , ] and a “fully crossed” study design, where every reader reads every case. The data analyzed in these publications are typically the matrix of ROC scores obtained from each reader for each case.
In this paper we present an unbiased method for estimating the variance in an experiment with multiple readers and multiple cases for which the outcomes are binary and the summary performance measure is a percent correct (PC). We also extend the analysis beyond the fully crossed study design to allow arbitrary study designs, including the “doctor–patient” study design, where each doctor sees his or her own patients.
Some examples of PCs are sensitivity, specificity, and the PC in an M-alternative forced-choice (MAFC) experiment. Sensitivity is the percent of abnormals correctly identified, and specificity is the percent of normals correctly identified. We shall also refer to the abnormals as the signal-present cases (hypothesis 1) and the normals as the signal-absent cases (hypothesis 0).
In an MAFC experiment, the reader must choose which of M alternatives within a trial contains the signal. In the typical two-alternative forced-choice (2AFC) task, a trial is often a pair of images, one signal-absent and one signal-present, displayed side by side or in sequence. The outcome of the choice is binary; the reader is either right or wrong. The rate at which the reader correctly picks the alternative with the signal is the PC.
Regardless of the specific task, readers, and cases, we denote the binary success outcome generically by a variable indexed by the case g and the reader γ. This success outcome is 0 when reader γ incorrectly identifies case g and 1 when the reader is successful.
In a particular study, there is a set of cases and a set of readers. Without replicating readings, we could collect one outcome for every reader–case pair if every reader reads every case (the fully crossed design). For the doctor–patient study design, depicted pictorially on the left of Fig. 1 , some of these data are not collected. The shaded area in Fig. 1 indicates which cases were read by which readers. Since each case is read by only one reader, a significant amount of data are missing compared to the fully crossed design, which would fill the whole matrix. Additionally, we allow the number of cases, or “case load,” read by each reader to be different.
On the right in Fig. 1 we provide a simple example demonstrating the data from a binary-outcome experiment with multiple readers, each reading their own cases. The PC in the last row weighs each reading equally: 100 correct decisions divided by 130 readings is 77%. Now one might assume that the readings are all independent and identically distributed (iid) and estimate the standard error using the sample variance divided by the total number of readings; this equals 3.7. However, since each reader may have a different skill at the task, the readings are not identically distributed and this naïve estimate likely underestimates the true variance.
Instead of calculating the average performance as in Fig. 1, one might average the three reader-specific PCs, yielding an average that is noticeably different from the previous average performance. One might continue and estimate the standard error using the sample variance of the three reader-specific PCs divided by the number of readers, yielding 20.6. This result is more than five times that of the previous underestimate but, in reality, probably overestimates the true variance. This overestimate arises because the reader-specific PCs are noisy realizations of the true PCs.
This simple example highlights two naïve estimates of variance. The first incorrectly treats the readings as identically distributed, and the second incorrectly treats the reader PCs as being measured without error. The variance estimate that we provide appropriately accounts for the readers, cases, and correlations that arise from the actual study design. These variance estimates apply to the average PC when readers are treated equally or when readings are treated equally.
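As a concrete illustration, the two naïve estimates can be computed directly. The per-reader split below is our own invention, chosen only so that the totals match the 100-correct-of-130-readings example; the figure's actual per-reader counts are not reproduced here.

```python
import numpy as np

# Illustrative per-reader counts; totals match the text:
# 100 correct decisions out of 130 readings.
n_read    = np.array([60, 40, 30])   # readings per reader
n_correct = np.array([52, 28, 20])   # successes per reader

# Pooled PC: every reading weighted equally (100/130 ~ 77%).
pc_pooled = n_correct.sum() / n_read.sum()

# Naive estimate 1: treat all 130 readings as iid Bernoulli trials,
# standard error = sqrt(sample variance / number of readings).
outcomes = np.concatenate(
    [np.r_[np.ones(c), np.zeros(n - c)] for c, n in zip(n_correct, n_read)])
se_iid = np.sqrt(outcomes.var(ddof=1) / outcomes.size)

# Reader-averaged PC: every reader weighted equally.
pc_reader = n_correct / n_read
pc_avg = pc_reader.mean()

# Naive estimate 2: sample variance of the reader-specific PCs divided
# by the number of readers; ignores the noise within each reader's PC.
se_readers = np.sqrt(pc_reader.var(ddof=1) / pc_reader.size)
```

With these hypothetical counts, the iid standard error comes out near 3.7% while the reader-based standard error is markedly larger, reproducing the under/over pattern described above.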
In what follows, we make the following assumptions: Readers are iid, cases are also iid, and readers are independent of cases. Additionally, given a reader and a case, an outcome can be deterministic, as when the reader is a mathematical classifier, or an outcome can be a random variable, as might be expected when the reader is a human and unable to reproduce the same decision on subsequent readings (reader jitter). This distinction is unnecessary for the current work; our variance estimate accounts for reader jitter whether it exists or not.
2. THEORY AND METHODS
We define a design matrix D and a success matrix S. Both matrices have one row per case and one column per reader; their elements are indexed by the case i and the reader r. The design matrix holds a one in every position where an outcome was collected and a zero everywhere else. The success matrix holds the observed success outcomes. For each reader, we denote the number of cases read and compute the reader-specific PC as the fraction of those readings that were successful.
When an outcome is not collected, the corresponding design element is zero and the success outcome is technically undefined. In practice, we can set an uncollected success outcome to any number we want, since it will always appear multiplied by the corresponding design element and the product will always be zero. Therefore, to ease the transition to ensemble statistics, we think of the success outcome as defined whether or not it was collected in the study.
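A minimal sketch of these conventions, using a hypothetical doctor–patient design in which each case is assigned to one randomly chosen reader:

```python
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_readers = 8, 3

# Design matrix: D[i, r] = 1 if reader r read case i. Here each case
# is read by exactly one randomly assigned reader (doctor-patient).
D = np.zeros((n_cases, n_readers), dtype=int)
D[np.arange(n_cases), rng.integers(0, n_readers, n_cases)] = 1

# Success matrix: entries where D is zero were never collected; any
# placeholder value is harmless because S only ever appears as D * S.
S = rng.integers(0, 2, size=(n_cases, n_readers))

case_load = D.sum(axis=0)                             # cases read per reader
pc_r = (D * S).sum(axis=0) / np.maximum(case_load, 1) # reader-specific PCs
```

The `np.maximum` guard only protects against a reader with no cases; for readers with cases it leaves the denominator unchanged.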
We shall assume that the design matrix does not depend on the success matrix and vice versa, as such dependencies would certainly bias the study. In this paper we shall consider fixed study designs and random study designs. For a fixed study design, D is specified before data are collected; for a random study design, there is a protocol, or sampling scheme, that determines a distribution for the possible study designs.
The typical endpoint in a study is a reader-averaged PC:
Other choices for weights may be driven by the experience or skill of each reader. In the most general framework the weights are arbitrary, as long as they sum to one.
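The two weighting schemes emphasized in this paper can be written compactly; the reader PCs and case loads below are illustrative values, not study data:

```python
import numpy as np

pc_r      = np.array([0.87, 0.70, 0.67])  # reader-specific PCs (illustrative)
case_load = np.array([60, 40, 30])        # cases read by each reader

# Weights treating each reader equally:
w_reader = np.full(pc_r.size, 1.0 / pc_r.size)

# Weights treating each reading equally (proportional to case load):
w_reading = case_load / case_load.sum()

pc_hat_readers  = w_reader  @ pc_r   # reader-averaged PC, readers equal
pc_hat_readings = w_reading @ pc_r   # reader-averaged PC, readings equal
```

Both weight vectors sum to one, as required; any other nonnegative weights summing to one (e.g., driven by reader experience) fit the same form.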
2B. Population Quantities
2B1. Fixed Study Designs
The mean of the reader-averaged PC for a fixed study design D is straightforward:
Next, carefully accounting for possible correlations across readers and cases (see Appendix A), the population variance of the reader-averaged PC for a fixed study design is given in Eq. (4), with the coefficients defined in Eqs. (A7, A8, A9, A10).
The unique numbering of the coefficients above is driven by how we label the moments. We refer to the moments in Eq. (4) as , , , and to coincide with notation previously derived for the empirical area under the ROC curve (AUC) [, ]. For AUC, there are eight fundamental moments of the success outcomes. The factor of 2 increase in the number of moments comes from partitioning cases into two subsets: signal-absent and signal-present.
The variance can be written concisely as a scalar product between the coefficients and the moments arranged in vectors and ; that is, , where coefficients , , , and are all understood to equal zero. This variance will carry a subscript γ or g when needed to indicate weights treating each reader equally or weights treating each reading equally. The moments themselves are nothing more than second moments ( through ) and a mean squared , as are expected in a variance. Finally, we shall extend this notation to include , the success outcome averaged over reader γ and case g.
The simple form of the variance expression in Eq. (4) hides complexity that comes with all the different possible study designs and weights. It is worthwhile to see how the variance of the reader-averaged PC is related to the variances of the reader-specific PCs. In general,
The covariance for the general study design simplifies for the fully crossed and doctor–patient study designs. The covariance for the fully crossed study design is Eq. (A13) minus the mean squared, or
2B2. Special Cases and Random Study Designs
The vector of coefficients for a fixed study design is made up of complicated sums that simplify for the study designs considered in this paper (see Table 1 ). If we allow the study design to be random (with some distribution), we get the variance of the reader-averaged PC by averaging the coefficients of the fixed study design over the distribution of study designs. This is possible because we assume the design and success matrices are independent. When averaged over the distribution of study designs, the variance is no longer dependent, or conditional, on a fixed study design, and the corresponding subscript should be dropped.
2C. Variance Estimates
2C1. Fixed Study Design
Expressing the fixed-study-design variance of the reader-averaged PC as a linear combination of moments, as described in the previous section, leads to the unbiased moment estimator that we present here: we replace the expected values in the moment definitions [Eqs. (A3, A4, A5, A6)] with sums over the readers and cases. The estimates are as follows:
The weights for each pair of observations are analogous to the weights used for the average performance: Each case (or pair of cases) is given equal weight for each reader, and the readers are given the same (relative) weights as before. In theory, these weights could differ from those used for the average performance; however, there does not seem to be a good reason to make them different. As before, we add the subscript γ or g when necessary to indicate whether the weights equally weigh each reader or each reading.
In situations where two readers have nonoverlapping case samples, the denominator of the corresponding moment estimate can be zero. But at the same time, the numerator will be zero as well. In these situations the contribution to the estimate is taken to be zero. Consequently, for the doctor–patient study design, where readers never read the same cases, the estimate of the same-case, different-reader moment is entirely zero.
When replacing the expected values with sums for estimation there are two things to remember: Avoid biases and count the number of samples that are being summed. The elements of the design matrix are an easy way to count the number of samples that are being summed.
Biases creep in when we replace a squared average with a squared sum. To avoid the bias, replace the squared average with two sums and do not include the index of the first sum in the second sum. For example, the estimate of the same-case, different-reader moment squares the average over readers for a fixed case. When replacing this squared average with sums over r and r′, we do not let r′ equal r. We also normalize the weights in the sum over r′ so that they sum to one. The result can be shown to be unbiased with standard algebraic and probabilistic manipulations.
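For the fully crossed design, the index-skipping construction can be sketched as follows. The labels m1 through m4 are our own (same reader and case; same case, different readers; same reader, different cases; different reader and case); the general-design estimator in the text additionally carries the design matrix and the reader weights:

```python
import numpy as np

def mrmc_moments(S):
    """Unbiased estimates of the four fundamental moments for a fully
    crossed binary success matrix S (cases x readers). A sketch under
    our own naming, equal weights, no missing readings."""
    n_c, n_r = S.shape
    row = S.sum(axis=1)          # per-case success counts
    col = S.sum(axis=0)          # per-reader success counts
    tot = S.sum()
    sq  = (S ** 2).sum()         # equals tot for binary outcomes

    m1 = tot / (n_c * n_r)                                 # E[s^2] = E[s]
    # Same case, two different readers (skip r' == r):
    m2 = (row ** 2 - (S ** 2).sum(axis=1)).sum() / (n_c * n_r * (n_r - 1))
    # Same reader, two different cases (skip i' == i):
    m3 = (col ** 2 - (S ** 2).sum(axis=0)).sum() / (n_r * n_c * (n_c - 1))
    # Different reader and different case (inclusion-exclusion):
    m4 = ((tot ** 2 - (row ** 2).sum() - (col ** 2).sum() + sq)
          / (n_c * (n_c - 1) * n_r * (n_r - 1)))
    return m1, m2, m3, m4
```

Skipping the equal-index terms is exactly what keeps each squared sum an unbiased estimate of the corresponding squared average.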
Unfortunately, our moment-based MRMC variance estimate is not necessarily positive. It is a linear combination of sums of squares in which one coefficient is negative. The possibility of negative estimates is an unfortunate consequence of estimating variances with sums of squares and too few samples. Bayesian and maximum-likelihood estimates could avoid negative estimates, but that approach is beyond the scope of the nonparametric treatment of this paper.
2C2. Random Study Design
The only change needed to account for random study designs is to replace the fixed-design coefficients with estimates of their expected values over the distribution of study designs. One such estimate is just the observed study design itself, which would not be an actual change to the fixed-study-design variance estimator. Other estimates would require priors on the distribution of possible study designs. For this manuscript, we shall investigate the fixed-study-design estimator and consider other estimators at a later date.
2C3. Naïve Estimates
As a basis for comparison, we consider the two naïve estimates described in the Introduction. Neither accounts for the MRMC nature of the data, but both have been used in the literature. The first estimate essentially assumes that all the readings are iid, indirectly assuming that readers all have the same skill and are reading different cases. Given this assumption, the success outcomes are all independent Bernoulli trials with the same probability of success, and the variance of the reader-averaged PC is estimated as
The second estimate uses the sample variance of the reader-specific PCs:
We shall utilize the Monte Carlo (MC) simulation scheme developed by Roe and Metz [] to investigate the variance estimates presented above in a 2AFC experiment. This simulation scheme assumes that a reader generates two scores for each case, where a case represents a signal-absent and signal-present pair of alternatives. If the score of the signal-absent alternative is lower than the score of the signal-present alternative, the success outcome for the case is one; otherwise, it is zero:
The model for the scores is a sum of Gaussian random variables:
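A sketch of such a score model for a 2AFC trial follows. The variance split and the independent Gaussian draws for the two alternatives of a trial are our own simplifications for illustration, not the paper's exact parameterization:

```python
import numpy as np

def simulate_success(n_readers, n_cases, mu=1.5,
                     var_r=0.1, var_c=0.1, var_rc=0.2, rng=None):
    """Roe-Metz-style 2AFC sketch: each score is a sum of Gaussian
    reader, case, reader-by-case, and residual terms, with the
    signal-present alternative shifted by mu. Returns the binary
    success matrix S (cases x readers)."""
    if rng is None:
        rng = np.random.default_rng()
    var_eps = 1.0 - var_r - var_c - var_rc  # residual so totals are 1

    def scores(shift):
        r  = rng.normal(0, np.sqrt(var_r),   (1, n_readers))
        c  = rng.normal(0, np.sqrt(var_c),   (n_cases, 1))
        rc = rng.normal(0, np.sqrt(var_rc),  (n_cases, n_readers))
        e  = rng.normal(0, np.sqrt(var_eps), (n_cases, n_readers))
        return shift + r + c + rc + e

    t0 = scores(0.0)   # signal-absent alternative of each trial
    t1 = scores(mu)    # signal-present alternative
    return (t1 > t0).astype(int)  # success: signal-present scored higher
```

With the defaults, the score difference is roughly Gaussian with mean 1.5 and variance 2, so the expected PC sits in the mid-0.8 range.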
With such a simple description for the scores, we can characterize the distribution of the PC to second order. First, the reader’s skill averaged over all cases is
The first option is to (numerically) calculate the average over the two independent reader components [Eq. (21)], letting τ go to infinity. The second option starts over, eliminating the condition on r in Eq. (20) and noticing that the score difference is simply a Gaussian with variance two centered on its mean. Please note that for population quantities, equals (because ) and equals .
This leaves and as the remaining second-order moments unaccounted for in this problem. Without a familiar probability density function (pdf) for , the only option we found for calculating these moments is through numerical integration. The integral expressions for and are
2D2. Simulation Configurations
The relevant parameters for the simulation are listed in Table 2 . We vary all the simulation parameters in a factorial design, yielding 729 total configurations. For each of these, we run 10,000 MC trials. Compared to the simulation parameters of Roe and Metz [], we consider a broader range of reader variance for the scores, especially on the high end. The range they considered was 1%–10% of the total; our range is 5%–83%.
Another factor that we investigate is how the cases are distributed among the readers. We investigate six study designs with the expected number of cases read by each reader given in Table 2. Table 3 exemplifies the study designs with five readers and an average of 102 cases read by each reader. The first four of the study designs listed are doctor–patient study designs, the next is fully crossed, and the last has a unique hybrid structure that is neither fully crossed nor doctor–patient.
The first doctor–patient study design is flat; every reader reads the same number of cases. For the Poisson doctor–patient study design, the number of cases each reader reads is five cases plus a Poisson random variable whose mean is set so that the expected case load matches. For the uniform distributions, the number is drawn from a wide interval for the broad distribution and a narrower interval for the moderate distribution. All of these distributions force a minimum of five readings per reader.
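Drawing the case loads for these designs might look as follows. The uniform intervals below are hypothetical stand-ins, chosen only to keep the minimum at five readings and the mean at the average case load; the paper's exact intervals are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_readers, avg_load = 5, 102

# Flat: every reader reads the same number of cases.
flat = np.full(n_readers, avg_load)

# Poisson: five cases plus a Poisson draw; mean chosen so the
# expected case load equals avg_load.
poisson = 5 + rng.poisson(avg_load - 5, n_readers)

# Uniform case loads (intervals are hypothetical illustrations):
broad    = rng.integers(5, 2 * avg_load - 5 + 1, n_readers)       # [5, 199]
moderate = rng.integers(avg_load - 30, avg_load + 30 + 1, n_readers)  # [72, 132]
```

Both uniform intervals are symmetric about `avg_load`, so each design has the same expected number of total readings.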
The final study design we consider is motivated by an observer study conducted by investigators at the National Cancer Institute. The observer study used a subset of images from the atypical squamous cells of undetermined significance (ASCUS) low-grade squamous intraepithelial lesion (LSIL) triage study known as ALTS [, ]. In that study a small subset of the cases was read by all the study colposcopists. The remaining cases were each read by three readers. Here, each reader reads the first three cases of the data set; the remaining cases are each read by three randomly selected readers. The size of the data set is chosen so that the total number of readings for this study design matches the total expected for the other study designs. We shall refer to this study design as the hybrid study design.
Finally, we consider both weighting methods mentioned above: equally weighing readers and equally weighing readings.
3. SIMULATION RESULTS AND DISCUSSION
In what follows, we compare our variance estimators to the truth, the population quantities. Our population quantities are calculated from the integral expressions in Eqs. (22, 23, 24) and the MC averages for the coefficients. So the truth still has an element of uncertainty in it; the expected values of the nonlinear coefficients are intractable.
To verify the integral expressions, we compare each population variance (from integration) to the sample variances of 10,000 independent MC performance estimates. A separate point is given for each of the 729 simulation configurations, 6 types of study designs, and both ways to weigh individual reader PCs in Fig. 2 . Across all these simulation configurations, which cover a broad range of variances, the maximum difference found was 6%, and the mean difference was smaller still.
Expected Variance. Before we assess our estimates, it is worthwhile to show the variances expected from all the experiments. Figure 3 shows the population variances for all the high-PC (0.96) simulation configurations compared to the expected values (from MC averaging) of the naïve variance estimates. The expected values of our moment estimators are unbiased and thus equal the population variances. At the bottom of each column of plots, the x axis is labeled according to the size of the simulated experiment. The 27 different components of variance configurations are then explored within each experiment size according to the reader component of variance . This sorting shows that the reader component of variance has a strong impact on the expected variance of the experiment. The size of the experiment also affects the experimental variance, though to a lesser degree. Additionally, the simulation configurations for lower PCs (0.86, 0.70; not shown) are quite similar to those given in Fig. 3 except that they are shifted upward. This behavior mimics the binomial variance, which increases with decreasing performance.
Interestingly, the overall scale across the different study designs is relatively constant for each experiment size. Recall that each study design has the same expected number of readings given the same experiment size. However, we can see that different study designs behave differently across different components-of-variance configurations.
Regarding the impact of reader weights, the two weighting choices lie on top of each other in all the plots except for the broad uniform doctor–patient study design (Fig. 3d). In that plot, the two can differ (notice some dots peeking out from behind the solid curve). What this means is that the variance of the reader-averaged PC does not depend on the reader weights except when the reader case loads are very different.
Finally, in each plot the naïve estimates bracket the true MRMC variances: the reader-sample-variance estimate is biased high (the dotted curve upper bound) and the iid estimate is biased low (the dashed curve lower bound). In the plots, the former can be nine times the true variance, whereas the latter can be as little as 2% of the true variance.
Root-mean-square error. Here we assess the variance estimators with the relative root-mean-square error (RRMSE), or
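Assuming the conventional definition, the RMS deviation of the estimates from the true variance divided by the truth, a minimal helper is:

```python
import numpy as np

def rrmse(estimates, truth):
    """Relative root-mean-square error: RMS deviation of the variance
    estimates from the true (population) variance, relative to the truth."""
    estimates = np.asarray(estimates, dtype=float)
    return np.sqrt(np.mean((estimates - truth) ** 2)) / truth
```

An unbiased but noisy estimator and a precise but biased estimator can have the same RRMSE; the metric folds both error sources into one number.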
Figure 4 plots the RRMSE for the fully crossed study design: Plot A shows the high PC (0.96), and Plot B shows the low PC (0.70). As in the previous plots, the x axis is labeled according to the size of the simulated experiment, while the different variance configurations are explored within each experiment size, sorted by the reader component of variance. Recall that for the fully crossed study design, equally weighing readers is the same as equally weighing readings. Consequently, our MRMC variance estimators for the two weightings are also equal.
We first point out that at high PC (Fig. 4a) and with only three readers, the RRMSE of our MRMC estimators runs above 100% (solid curve). Three readers are not enough for MRMC variance estimation, and as the reader component of variance increases, the estimator gets even noisier. In this regime, the iid naïve estimator appears to be performing fairly well (dashed curve), that is, until we recall how biased it is (Fig. 3e). The bias of the reader-sample-variance estimator, on the other hand, drives its RRMSE to extreme values (dotted curve).
As the size of the experiment grows, the RRMSE of our MRMC estimator decreases, while that of the iid naïve estimator does not. That is to say, our MRMC estimator improves with more data, while the naïve estimator cannot adapt to the overdispersed nature of the data. Nonetheless, even with ten readers, each reading 102 cases on average, our MRMC estimator has too much error when the PC and reader variability are high.
When the PC is lower (Fig. 4b), the estimation problem can be solved with reasonable precision and accuracy. When there are ten readers and 50 cases in the experiment, the RRMSE of our MRMC estimator ranges between 20% and 40%.
For the broad uniform study design (Figs. 5a, 5b ), the overall story is similar, but we now see a difference from the reader weights. In experiments with little data and high PC, the error estimating the variance under one weighting (dashed–dotted curve) is significantly larger than under the other (solid curve). However, for the experiments with adequate readers (ten) and moderate PC, where the RRMSE ranges between 30% and 60%, the difference in errors becomes negligible.
Finally, the RRMSE stories for the other study designs are similar to either that of the fully crossed or the broad uniform study designs. The hybrid study design, with its additional case correlations from cases being read by at least three readers, mimics the fully crossed study design. The other doctor–patient study designs mimic the broad uniform doctor–patient study design, although the differences between the RRMSEs for the two weightings are not as pronounced.
In summary, the reader weights do not play a significant role in the total variance of the average PC except when the case loads are very different, as in the broad uniform study design. Additionally, it takes about ten readers and a moderate PC to reasonably estimate the MRMC variance. In this regime the errors for the two weightings are about the same. Finally, our MRMC estimator improves as more data are collected and performance is moderate; it is a consistent estimator. In contrast, the naïve estimators are not consistent; they do not get closer to the truth with more data.
4. CONCLUSIONS AND FUTURE WORK
We have presented a framework for estimating the variance of a binary-outcome experiment that appropriately accounts for readers and cases as random effects. This framework is based on the larger one developed for estimating the MRMC variance of AUC [, ] obtained according to a fully crossed study design. The MRMC variance of AUC has eight fundamental second-order moments of the success outcomes, whereas for the binary-outcome experiment there are only four fundamental moments. We have also generalized the framework to accommodate any MRMC random or fixed study design. A fully crossed study design is not required, though we have highlighted it and another special study design, the doctor–patient study design.
In addition to quantifying the uncertainty of the MRMC experiment conducted, the framework provided can be used to consider other study designs. For example, a small pilot study can be used to estimate the moments of the success outcomes. Then a larger pivotal study can be considered by simply changing the study design matrix, which will change the coefficients . This larger pivotal study does not even need to be of the same type as the pilot study, as long as the appropriate moments have been estimated.
We have examined our estimator with the MC simulation scheme developed by Roe and Metz []. This simulation was originally developed to investigate the Dorfman–Berbaum–Metz (DBM) linear-random-effects (components-of-variance) model of AUC [] and has since served as a testbed for assessing other MRMC approaches [, , ]. Within our framework, we have also been able to derive integral expressions for numerically calculating the fundamental moments of the success outcomes for the Roe and Metz simulation. The extension of these results to the eight fundamental moments of the MRMC variance of AUC is available upon request from the author and is being drafted for publication. This result ties off a loose end that has been present since the simulation model was developed. For a short discussion showing how the success moments are related to the components of variance, see Appendix B.
The variance estimates presented are useful for the visual perception investigator performing clinical studies or human psychophysics experiments, as well as for the investigator developing models of the human or ideal observer. For the latter, the utility comes to bear when the model observer is estimated from a finite set of training cases. If another set of cases is obtained (same size), another estimate of the observer (same model) could be obtained. These two model-observer estimates can be thought of as samples from a population of readers. In this setting, an MRMC performance experiment can be run where we generate a sample of readers (trained on independent sets of cases) and a sample of testing cases (cases that are independent of the ones used for training any observer). Performing an MRMC variance analysis on this experiment will allow the investigator to account for the variability from training the model with a finite set of training samples and from testing the model with a finite set of test cases. Such an accounting is essential to model development and is starting to be appreciated in the field of computer-aided diagnosis and detection of disease [].
One direction for future work in this area is to estimate MRMC covariances. The method we presented in this paper generalizes easily to estimating covariances when the readers and cases are paired across two reading conditions or modalities. Simply replace the success matrix with a difference of success matrices and proceed as described for the single-modality MRMC variance analysis. These covariances can be used to quantify the statistical difference between the performance of a set of readers reading the same cases in two modalities, or the difference between two observer models.
Another direction for future work is to take the general study design concepts to AUC []. ROC scores can be pooled just as success outcomes are pooled. The variance analyses and hypothesis tests that follow do not typically account for the fact that the scores from several readers reading different cases are not identically distributed. For AUC, however, not only is the variance analysis wrong, but the pooled AUC itself can be quite different from the average reader AUC [], especially when the readers use the ROC score axis differently.
APPENDIX A: SECOND-MOMENT, FIXED STUDY DESIGN
Here we assume that the design matrix and weights are fixed, and we calculate the second moment of the reader-averaged PC. It is
The squared sum over readers and cases is a quadruple sum that we separate into four parts:
Since the readers and cases are iid, the moments in each line of the expression above do not depend on the particular readers r, r′ or cases i, i′. We define these moments to coincide with notation previously derived for the empirical AUC [, ].
Given that the moments in Eq. (A2) are independent of the readers r, r′ and cases i, i′, we can see that the second moment is simply four moments weighted by four coefficients. The variance utilizes the same four coefficients, while subtracting 1 from the last coefficient to account for subtracting the mean squared from the second moment. Therefore, after some algebraic manipulations, the coefficients are given in Eqs. (A7, A8, A9, A10). Equations (A9, A10) can be rewritten so that the inner sums do not need to skip a term, making the computer implementation more efficient.
The general expression in Eq. (A2) simplifies for the study designs considered in this paper (see Table 1). For the fully crossed study design, every design element equals one, so sums over all i yield the number of cases and sums over all r yield the number of readers. For doctor–patient study designs, readers never read the same cases, so sums over i of products of design elements from two different readers always equal zero.
It is also handy to derive the expected value of
APPENDIX B: COMPONENTS OF VARIANCE
In this section we relate our moment decomposition of the variance given in Eq. (4) to a components-of-variance (CofV) decomposition [, , ]. We begin by considering the distribution of reader skill; some readers are better than others. The skill of a reader is the success outcome for a given reader γ averaged over all cases in the population, or
Likewise, we consider the distribution of case difficulty. The case difficulty is the success outcome for a given case g averaged over all readers in the population, or
Instead of the development above, the DBM model starts by decomposing the performance into three random effects:
At first, the variance of the interaction term is not obvious. The reason is that the variance of the interaction term depends on the study design: it depends on how the readers and cases are sampled and combined in the summary performance statistic. We can actually determine the variance of the interaction term by starting with the total variance and organizing it according to reciprocal powers of the numbers of readers and cases, much as is done in the work of Barrett et al. [, ]. For the fully crossed study design, we have
1. D. D. Dorfman, K. S. Berbaum, and C. E. Metz, “Receiver operating characteristic rating analysis: generalization to the population of readers and patients with the jackknife method,” Invest. Radiol. 27, 723–731 (1992). [CrossRef] [PubMed]
2. S. V. Beiden, R. F. Wagner, and G. Campbell, “Components-of-variance models and multiple-bootstrap experiments: an alternative method for random-effects, receiver operating characteristic analysis,” Acad. Radiol. 7, 341–349 (2000). [CrossRef] [PubMed]
3. N. A. Obuchowski, S. V. Beiden, K. S. Berbaum, S. L. Hillis, H. Ishwaran, H. H. Song, and R. F. Wagner, “Multireader, multicase receiver operating characteristic analysis: an empirical comparison of five methods,” Acad. Radiol. 11, 980–995 (2004). [CrossRef] [PubMed]
5. B. D. Gallas and D. G. Brown, “Reader studies for validation of CAD systems,” submitted to Neural Networks.
6. C. A. Roe and C. E. Metz, “Dorfman–Berbaum–Metz method for statistical analysis of multireader, multimodality receiver operating characteristic (ROC) data: validation with computer simulation,” Acad. Radiol. 4, 298–303 (1997). [CrossRef] [PubMed]
8. J. Jeronimo, L. S. Massad, and M. Schiffman, “Visual appearance of the uterine cervix: correlation with human papillomavirus detection and type,” Am. J. Obstet. Gynecol. 97, 47.e1–47.e8 (2007). [CrossRef]
9. S. L. Hillis and K. S. Berbaum, “Monte Carlo validation of the Dorfman–Berbaum–Metz method using normalized pseudovalues and less data-based model simplification,” Acad. Radiol. 12, 1534–1541 (2005). [CrossRef] [PubMed]
10. S. L. Hillis, N. A. Obuchowski, K. M. Schartz, and K. S. Berbaum, “A comparison of the Dorfman–Berbaum–Metz and Obuchowski–Rockette methods for receiver operating characteristic (ROC) data,” Stat. Med. 24, 1579–1607 (2005). [CrossRef] [PubMed]
12. W. A. Yousef, R. F. Wagner, and M. H. Loew, “Assessing classifiers from two independent data sets using ROC analysis: a nonparametric approach,” IEEE Trans. Pattern Anal. Mach. Intell. 28, 1809–1817 (2006). [CrossRef] [PubMed]
13. M. S. Pepe, The Statistical Evaluation of Medical Tests for Classification and Prediction (Oxford U. Press, 2003).
15. H. H. Barrett, M. A. Kupinski, and E. Clarkson, “Probabilistic Foundations of the MRMC Method,” Proc. SPIE 5749, 21–31 (2005). [CrossRef]