## Abstract

The Bayesian ideal observer is optimal among all observers and sets an absolute upper bound for the performance of any observer in classification tasks [Van Trees, *Detection, Estimation, and Modulation Theory, Part I* (Academic, 1968)]. Therefore, the ideal observer should be used for objective image quality assessment whenever possible. However, computation of ideal-observer performance is difficult in practice because this observer requires a full description of the unknown statistical properties of the high-dimensional, complex data arising in real-life problems. Previously, Markov-chain Monte Carlo (MCMC) methods were developed by Kupinski *et al.* [J. Opt. Soc. Am. A **20**, 430 (2003)] and by Park *et al.* [J. Opt. Soc. Am. A **24**, B136 (2007) and IEEE Trans. Med. Imaging **28**, 657 (2009)] to estimate the performance of the ideal observer and the channelized ideal observer (CIO), respectively, in classification tasks involving non-Gaussian random backgrounds. However, both algorithms suffer from long computation times. We propose a fast MCMC for real-time estimation of the likelihood ratio of the CIO. Our simulation results show that, when efficient channels are used for the CIO, our method has the potential to speed up the estimation of ideal-observer performance in tasks involving complex data.

© 2009 Optical Society of America

## 1. INTRODUCTION

The Bayesian ideal observer provides a quantitative measure of the diagnostic performance of an imaging system, summarized by the area under the receiver operating characteristic curve (AUC) [1]. This observer is optimal among all observers, either human or model, and sets an absolute upper bound for observer performance in classification tasks [1]. For system optimization and design, it is highly desirable to make use of all the information available in the raw data when assessing medical image quality. Therefore, the ideal observer should be used for image quality assessment whenever possible, as suboptimal observers use only part of the information available in the data. However, the Bayesian ideal observer requires a full description of the statistical properties of the data to optimally perform a classification task [1], and such information is often unknown for the complex, realistic backgrounds found in clinical applications. This has been a major barrier to the use of the ideal-observer approach for system optimization in practice. Furthermore, the integrals that need to be calculated for this observer are of huge dimension because the data sets are high-dimensional. For estimating the test statistic of the ideal observer, the likelihood ratio, in binary classification tasks involving non-Gaussian distributed lumpy backgrounds, Kupinski *et al.* [2] developed a Markov-chain Monte Carlo (MCMC) method. Subsequently, other researchers adopted and modified the Kupinski MCMC approach to explore tasks involving other types of images from cardiac single-photon emission computed tomography (SPECT) and breast computed tomography [3, 4]. However, these MCMC methods still have the disadvantage of long computation times due to the high dimensionality of the data. In addition, these methods are limited to the particular patient models investigated in those studies.

In an attempt to reduce the computational load and still approximate ideal-observer performance, Park *et al.* [5] investigated a channelized ideal observer (CIO) in binary classification tasks involving non-Gaussian lumpy backgrounds, where a channel matrix using Laguerre–Gauss (LG) channels was applied to the data in order to reduce its dimension. In that work, they extended the Kupinski MCMC to incorporate the conditional probability of channelized data into the acceptance probability for constructing a Markov chain to estimate the likelihood ratio of the CIO, while using the same proposal density for sampling backgrounds as in the Kupinski MCMC. They found that the CIO with LG channels gave a close approximation to ideal-observer performance. However, a gap remained between the mean performance of the ideal observer and that of the CIO with LG channels. Moreover, LG channels are limited to rotationally symmetric signals and lumpy backgrounds [6].

To overcome this shortcoming of LG channels as efficient channels, in their subsequent work, Park *et al.* [7] investigated the use of singular value decomposition (SVD) of a given linear imaging system in choosing efficient channels for the ideal observer in the same tasks. In that work, singular vectors most strongly associated with the signal-only image were found to be highly efficient for the ideal observer. In addition, singular vectors most strongly associated with the background were also efficient when a large number of channels were used. These SVD channels are not only highly efficient for the ideal observer, but also are not limited to types of backgrounds and signals. However, the SVD approach for finding efficient channels has a limitation in that it requires the system to be linear and the system’s response functions to be known.

In order to develop a more flexible but effective method of choosing efficient channels, Park and her collaborators considered a partial least squares (PLS) method, where the PLS weights are estimated by maximizing the covariance between the data and the truth (either signal-present or signal-absent) [8]. The PLS channels were again found to be highly efficient for the ideal observer in the aforementioned tasks. Furthermore, the PLS approach did not require as many channels as the SVD for the CIO to estimate ideal-observer performance. The PLS approach is flexible in the sense that it requires neither knowledge of the system’s response functions nor linearity of the system.

The extended MCMC developed by Park *et al.* [5] that was used for estimating the CIO provided a computational tool for investigating the aforementioned efficient channels. However, it did not reduce the computational burden of calculating ideal-observer performance, because the extended MCMC still used the proposal density of the Kupinski MCMC for sampling high-dimensional backgrounds. The use of a parameter vector of the lumpy-background model for testing the Kupinski MCMC algorithm facilitated the design of their symmetric proposal density, but it has a practical limitation in that it is often infeasible to find a parameter vector that represents the complex, realistic background images found in clinical applications. Moreover, even though the use of the background parameter vector somewhat alleviated the high-dimensionality problem, it still was not sufficient to reduce the computational cost effectively and make the method practical for real-time computation of ideal-observer performance.

In the present paper, we propose a novel approach to estimate the CIO, which samples from a much-lower-dimensional channelized data space without any parametric background model rather than from either the high-dimensional image or background parameter space. We call this approach a channelized ideal-observer Markov-chain Monte Carlo (CIO-MCMC). The CIO-MCMC method is general in that it can be used for estimating ideal-observer performance in binary classification tasks involving any type of background and signal provided that efficient channels for such types of background and signal are used. In this paper, to show the validity of the method for estimating the CIO in comparison with the ideal observer, we will test the CIO-MCMC in binary classification tasks involving non-Gaussian lumpy backgrounds, where the Kupinski MCMC algorithm provides a way to estimate ideal-observer performance. We will compare the CIO-MCMC method to the Kupinski MCMC for the ideal observer [2] and the extended MCMC for the CIO [5]. We will show that the CIO-MCMC method has the potential to become a practical method for real-time computation of ideal-observer performance in tasks involving complex backgrounds.

## 2. BACKGROUND

#### 2A. Binary Classification and Data Channelization

For binary classification tasks, signal-present and signal-absent hypotheses are considered, given by

$$H_0:\ \mathbf{g}=\mathbf{b}+\mathbf{n},\qquad H_1:\ \mathbf{g}=\mathbf{b}+\mathbf{s}+\mathbf{n},$$

where the vectors **b** and **s** represent the noiseless background and signal images, respectively, **n** represents measurement noise, and **g** represents the resulting data vector. All the vectors are *M*-dimensional.

For channelization of the data, an ${N}_{c}\times M$ channel matrix, **T**, is applied to the data **g**. This matrix consists of ${N}_{c}$ rows of channel vectors and generates an ${N}_{c}$-dimensional channelized data vector **v** via

$$\mathbf{v}=\mathbf{Tg}.$$

In particular, *efficient channels* generated by the methods of SVD and PLS [7, 8] can be used in **T** for the CIO to approximate the ideal observer. For notational convenience, $\mathbf{Tb}={\mathbf{v}}_{b}$ and $\mathbf{Ts}={\mathbf{v}}_{s}$ will be used, respectively, for the channelized noiseless background and signal images. For the channelized noise, $\mathbf{Tn}={\mathbf{v}}_{n}$ will be used.
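As a minimal numerical illustration of this channelization step, the following sketch uses a random orthonormal matrix as a stand-in for actual SVD or PLS channels (the dimensions follow the simulation in Section 4; the matrix itself is an assumption for illustration only):

```python
import numpy as np

# Sketch of data channelization, v = T g. The random orthonormal matrix T
# below is only a stand-in for actual efficient channels (SVD or PLS);
# dimensions follow Section 4 (64 x 64 images, 80 channels).
rng = np.random.default_rng(0)

M = 64 * 64   # image dimension
N_c = 80      # number of channels

# Orthonormalize random columns to mimic an orthonormal channel set.
Q, _ = np.linalg.qr(rng.standard_normal((M, N_c)))
T = Q.T                       # channel matrix, shape (N_c, M)

g = rng.standard_normal(M)    # stand-in data vector
v = T @ g                     # channelized data vector, shape (N_c,)
```

Because the rows of **T** are orthonormal here, i.i.d. noise keeps an identity-proportional covariance after channelization, which is the property exploited later in Subsection 3A.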

#### 2B. Bayesian Ideal and Channelized Ideal Observers

The test statistic of the ideal observer is the likelihood ratio, given by

$$\Lambda\left(\mathbf{g}\right)=\frac{\mathrm{pr}\left(\mathbf{g}|{H}_{1}\right)}{\mathrm{pr}\left(\mathbf{g}|{H}_{0}\right)},$$

where $\mathrm{pr}\left(\mathbf{g}|{H}_{j}\right)$ is the probability density function (PDF) of the data **g** conditional on the hypothesis ${H}_{j}$, $j=0,1$. Similarly, the CIO is the ideal observer that optimally detects the signal seen in the channelized data; its likelihood ratio is given by [5]

$$\Lambda\left(\mathbf{v}\right)=\frac{\mathrm{pr}\left(\mathbf{v}|{H}_{1}\right)}{\mathrm{pr}\left(\mathbf{v}|{H}_{0}\right)}. \qquad (5)$$

This likelihood ratio can be rewritten as an integral over a parameter vector **θ** for the object to be imaged [5]:

$$\Lambda\left(\mathbf{v}\right)=\int \mathrm{d}\mathbf{\theta}\,{\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|\mathbf{b}\left(\mathbf{\theta}\right)\right)\mathrm{pr}\left(\mathbf{\theta}|\mathbf{v},{H}_{0}\right).$$

This integral can be estimated by the extended MCMC developed by Park *et al.* [5]:

$$\widehat{\Lambda}\left(\mathbf{v}\right)=\frac{1}{J-{J}_{0}}\sum_{i={J}_{0}+1}^{J}{\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|\mathbf{b}\left({\mathbf{\theta}}^{\left(i\right)}\right)\right),$$

where the backgrounds are parameterized as $\mathbf{b}=\mathbf{b}\left(\mathbf{\theta}\right)$, the sampled backgrounds $\left\{\mathbf{b}\left({\mathbf{\theta}}^{\left(i\right)}\right)\right\}$ constitute a Markov chain, and ${J}_{0}$ indicates a burn-in to remove the initial unstable samples. The extended MCMC samples backgrounds to construct a Markov chain for estimating the likelihood ratio of the CIO.

The extended MCMC was useful in studying the properties of the CIO in comparison with the ideal observer [5, 7, 8]. However, it did not improve the computational time for estimating ideal-observer performance. In the next section, we will describe our CIO-MCMC method that speeds up the computation of the performance of the CIO and hence ideal-observer performance by use of efficient channels.

## 3. METHODS

#### 3A. Likelihood Ratio for the Channelized Ideal Observer

Our approach to efficient estimation of ideal-observer performance is to design an algorithm that samples much-lower-dimensional channelized backgrounds, ${\mathbf{v}}_{b}$, for estimating the CIO rather than the high-dimensional backgrounds themselves, **b**. Our approach will improve sampling efficiency and hence reduce the computational burden of calculating ideal-observer performance, while the CIO combined with efficient channels still approximates ideal-observer performance. To this end, the CIO likelihood ratio in Eq. (5) can be rewritten as an integral with respect to the channelized background, ${\mathbf{v}}_{b}$ [9]:

$$\Lambda\left(\mathbf{v}\right)=\int \mathrm{d}{\mathbf{v}}_{b}\,{\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}\right)\mathrm{pr}\left({\mathbf{v}}_{b}|\mathbf{v},{H}_{0}\right), \qquad (10)$$

where the channelized-background-known-exactly (CBKE) likelihood ratio is

$${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime}\right)=\frac{\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime},{H}_{1}\right)}{\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime},{H}_{0}\right)}. \qquad (11)$$

In the expression for ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime}\right)$ given in Eq. (11), $\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime},{H}_{j}\right)$ values are calculated by plugging ${\mathbf{v}}_{b}^{\prime}$ into the PDFs, $\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b},{H}_{j}\right)$, of channelized data **v** under the hypothesis ${H}_{j}$ conditional on the channelized background ${\mathbf{v}}_{b}$. These PDFs are governed by the statistics of the channelized noise ${\mathbf{v}}_{n}(\equiv \mathbf{Tn})$, since **n** is the only source of uncertainty given the background **b**; i.e., $\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b},{H}_{j}\right)$ is determined by the PDF of ${\mathbf{v}}_{n}$ under ${H}_{j}$. Therefore, ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\prime}\right)$ can be calculated with knowledge of the statistics of the channelized noise.
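For instance, with i.i.d. Gaussian noise and orthonormal channels, ${\Lambda}_{\mathrm{CBKE}}$ reduces to a ratio of two Gaussian PDFs whose normalization constants cancel. A hypothetical sketch under those assumptions (names and values are illustrative, not the paper's code):

```python
import numpy as np

# Sketch: Lambda_CBKE(v | v_b) = pr(v|v_b,H1) / pr(v|v_b,H0) under i.i.d.
# Gaussian noise of width sigma_n and orthonormal channels, so the
# channelized noise covariance is sigma_n^2 * I and the Gaussian
# normalization constants cancel in the ratio.
def lambda_cbke(v, v_b, v_s, sigma_n):
    log_ratio = (np.sum((v - v_b) ** 2)
                 - np.sum((v - v_b - v_s) ** 2)) / (2.0 * sigma_n ** 2)
    return np.exp(log_ratio)

v_b = np.zeros(4)
v_s = np.ones(4)
# Signal actually present: the ratio favors H1 (value > 1).
print(lambda_cbke(v_b + v_s, v_b, v_s, sigma_n=1.0))
```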

#### 3B. Proposal Density and Acceptance Probability for CIO-MCMC

The likelihood ratio $\Lambda \left(\mathbf{v}\right)$ of the CIO, given in Eq. (10), can be approximated by an MCMC approach [9]:

$$\widehat{\Lambda}\left(\mathbf{v}\right)=\frac{1}{J-{J}_{0}}\sum_{i={J}_{0}+1}^{J}{\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\left(i\right)}\right),$$

where $\left\{{\mathbf{v}}_{b}^{\left(i\right)}\right\}$ is a Markov chain of channelized-background samples and ${J}_{0}$ is the burn-in.

Given the channelized data **v** and the $i\text{th}$ sample ${\mathbf{v}}_{b}^{\left(i\right)}$, for sampling ${\mathbf{v}}_{b}^{(*)}$, the proposal density, $q\left({\mathbf{v}}_{b}^{(*)}|{\mathbf{v}}_{b}^{\left(i\right)}\right)$, is modeled as a multivariate Gaussian with width ${\mathbf{\sigma}}_{c}$, centered at the previously accepted sample, ${\mathbf{v}}_{b}^{\left(i\right)}$. Then the acceptance probability, ${\alpha}_{v}$, used to either accept or reject the sampled vector, ${\mathbf{v}}_{b}^{(*)}$, is given by [10]

$${\alpha}_{v}=\min\left\{1,\ \frac{\mathrm{pr}\left({\mathbf{v}}_{b}^{(*)}|\mathbf{v},{H}_{0}\right)}{\mathrm{pr}\left({\mathbf{v}}_{b}^{\left(i\right)}|\mathbf{v},{H}_{0}\right)}\right\}, \qquad (15)$$

where the proposal densities cancel because the Gaussian proposal is symmetric.

The posterior density of ${\mathbf{v}}_{b}^{(\cdot )}$, $\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}|\mathbf{v},{H}_{0}\right)$, is a constant multiple of $\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{(\cdot )},{H}_{0}\right)\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}\right)$, where $\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}\right)$ is a prior density on ${\mathbf{v}}_{b}^{(\cdot )}$ and $\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{(\cdot )},{H}_{0}\right)$ is the density of **v** conditional on the channelized background ${\mathbf{v}}_{b}^{(\cdot )}$ and ${H}_{0}$. Therefore, another formula for the acceptance probability, ${\alpha}_{v}$, can be written as

$${\alpha}_{v}=\min\left\{1,\ \frac{\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{(*)},{H}_{0}\right)\mathrm{pr}\left({\mathbf{v}}_{b}^{(*)}\right)}{\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\left(i\right)},{H}_{0}\right)\mathrm{pr}\left({\mathbf{v}}_{b}^{\left(i\right)}\right)}\right\}. \qquad (16)$$

#### 3C. Modeling of Posterior or Prior Densities for CIO-MCMC

As discussed above, for estimating the CIO likelihood ratio given in Eq. (5), the Park extended MCMC [5] was used. The extended MCMC uses the same proposal density for sampling backgrounds as the Kupinski MCMC [2], but a different acceptance probability expression that incorporates the conditional probabilities of channelized data on the channelized version of the given background, **b**. We note that this vector, **b**, was the background used for generating the data vector, **g**, for which the likelihood ratios of the ideal observer and the CIO were estimated. By design of the Kupinski MCMC and the Park extension, which followed the definition of the Metropolis–Hastings algorithm, the accepted sample backgrounds for constructing a Markov chain to estimate the CIO likelihood ratio tend to be similar to the given background. In other words, backgrounds drawn by the Kupinski proposal density are likely to be rejected if they are too different from the given background or, equivalently, if they cause too low values of the posterior.

The aforementioned consideration leads to an assumption that the accepted sample vectors, $\left\{{\mathbf{v}}_{b}^{\left(i\right)}\right\}$, used for estimating the CIO likelihood ratio, $\Lambda \left(\mathbf{v}\right)$, with the extended MCMC follow a Gaussian distribution centered at the channelized version of the given background, ${\mathbf{v}}_{b}$. Therefore, marginal distributions of channelized vectors of the background samples accepted by the extended MCMC were modeled to be Gaussian in this work. This assumption is also supported by our observation of marginal distributions of channelized backgrounds obtained by using the extended MCMC. Figure 1 illustrates that the marginal distribution of the channelized versions of the MCMC-sampled backgrounds for a given background is usually a sum of two or three different Gaussian distributions. The figure also indicates that the sampled marginal distribution lies well within the overall marginal distribution of the channelized versions of many different randomly generated backgrounds with the same background parameters (i.e., the same mean number of lumps, $\overline{N}$, in the lumpy-background case [13]).

In order to calculate the acceptance probability, either the posterior or prior density needs to be modeled, since these quantities are unknown. With the aforementioned assumption, we propose to model the posterior density $\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}\right|\mathbf{v},{H}_{j})$ in Eq. (15) to be a multivariate Gaussian with mean ${\mathbf{v}}_{b}$ and covariance ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\mathrm{post}}$, where ${\mathbf{v}}_{b}$ is the channelized version of the noiseless background **b** for the given data **g**. The covariance, ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\mathrm{post}}$, needs to be chosen so that sampling efficiency of the CIO-MCMC is good enough for a Markov chain to converge. We shall discuss this aspect later in Section 4. With this posterior model, the CIO likelihood ratio in Eq. (10) can be estimated via either a general Monte Carlo sampling directly from the posterior or the MCMC sampling from the proposal density. In this work, we focus on the MCMC simulation.
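As a concrete sketch of this scheme, the following toy Metropolis sampler draws channelized-background samples from an assumed Gaussian posterior using a symmetric i.i.d. Gaussian proposal. All dimensions and widths here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

# Toy Metropolis sampler for the CIO-MCMC idea: a symmetric Gaussian
# proposal of width sigma_c, and a Gaussian posterior model for v_b
# centered at the channelized background of the given data. With a
# symmetric proposal, the acceptance ratio reduces to a posterior ratio.
rng = np.random.default_rng(1)

def log_posterior(vb, vb_center, sigma_v):
    # Log of an i.i.d. Gaussian posterior model (up to an additive constant).
    return -np.sum((vb - vb_center) ** 2) / (2.0 * sigma_v ** 2)

def run_chain(vb_center, sigma_v, sigma_c, J):
    vb = vb_center.copy()
    samples = []
    for _ in range(J):
        vb_star = vb + sigma_c * rng.standard_normal(vb.shape)  # propose
        log_alpha = (log_posterior(vb_star, vb_center, sigma_v)
                     - log_posterior(vb, vb_center, sigma_v))
        if np.log(rng.random()) < log_alpha:  # Metropolis accept/reject
            vb = vb_star
        samples.append(vb.copy())
    return np.array(samples)

# Illustrative run: 5 channels, unit posterior width, proposal width 1.9.
chain = run_chain(np.zeros(5), sigma_v=1.0, sigma_c=1.9, J=2000)
```

The accepted samples would then be plugged into ${\Lambda}_{\mathrm{CBKE}}$ and averaged, as described in Subsection 3B.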

Another expression for calculating the acceptance probability, given in Eq. (16), requires knowledge of the prior on the channelized background, which is often unknown, as well as the conditional probability of the channelized data on the channelized background, which is governed by the system’s noise statistics as discussed earlier. Channelized-background samples drawn by the proposal density can be far from the channelized background of the given image, which leads to a low value of the conditional probability and hence an increased chance of rejecting these samples. Therefore, such samples may not be as appropriate for constructing a Markov chain to estimate the CIO likelihood ratio of **v** as other samples that are closer to the given channelized background. However, since the prior also plays a significant role in the acceptance probability, if the prior is not modeled appropriately, bad samples can still be accepted, which may hurt the convergence of the Markov chain. As an attempt to improve sampling efficiency, we can use the aforementioned observation in modeling the unknown prior density. That is, we propose to model the prior as another Gaussian with covariance ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\text{“prior”}}$, centered at the channelized background, ${\mathbf{v}}_{b}$, of the given data, **g**. Note that we write “prior” in double quotation marks to indicate that the prior model used in this work is not a true prior.

#### 3D. Consistency Checks for CIO-MCMC

To check the bias of the estimates of the CIO likelihood ratio calculated by the CIO-MCMC, a number of consistency checks can be considered. One is to plot the moment-generating function of *Λ* under ${H}_{0}$, ${M}_{0}\left(\beta \right)$, given by [11]

$${M}_{0}\left(\beta\right)={\left\langle {\Lambda}^{\beta}\left(\mathbf{v}\right)\right\rangle}_{\mathbf{v}|{H}_{0}},$$

which satisfies ${M}_{0}\left(0\right)={M}_{0}\left(1\right)=1$ and is concave upward in β.

Lower and upper bounds for AUC estimates of the CIO can also be calculated by using the likelihood-generating-function method proposed by Clarkson [12].

## 4. SIMULATION STUDY

#### 4A. Models for Imaging System, Object, Noise, and Channels

The lumpy-background model [13] was used for simulating 64 × 64 randomly varying background objects with the mean number of Gaussian lumps $\overline{N}=5$, lump (spatial) width of 7, and lump magnitude of 1. Since images are 64 × 64, the dimension of each image vector is $M={64}^{2}$. For the signal object, a Gaussian profile with (spatial) width 3 and magnitude 0.2 was used. A continuous-to-discrete linear imaging operator with Gaussian blur functions with (spatial) width 0.5 and magnitude 40 was applied to simulate noiseless signal and background images, **s** and **b**. See [5] for the details of the imaging operator and image simulation. For the noise, **n**, an independent, identically distributed (i.i.d.) multivariate Gaussian model with (amplitude) width ${\sigma}_{n}=20$ was used. See Fig. 2 for the Gaussian signal, a signal-absent lumpy background, and its resulting images by use of the continuous-to-discrete operator and Gaussian and Poisson noise models.
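A simplified sketch of this lumpy-background simulation follows. The lump placement and the exact lump profile here are assumptions following the standard model [13], not the paper's exact code:

```python
import numpy as np

# Sketch of the lumpy-background object model [13]: a sum of Gaussian
# lumps, with the number of lumps Poisson-distributed (mean Nbar = 5) and
# lump centers uniform over a 64 x 64 field of view. Width/magnitude
# values follow Subsection 4A; the exact lump profile is an assumption.
rng = np.random.default_rng(2)

def lumpy_background(nbar=5, dim=64, lump_width=7.0, magnitude=1.0):
    n_lumps = rng.poisson(nbar)
    y, x = np.mgrid[0:dim, 0:dim].astype(float)
    b = np.zeros((dim, dim))
    for _ in range(n_lumps):
        cx, cy = rng.uniform(0, dim, size=2)
        b += magnitude * np.exp(-((x - cx) ** 2 + (y - cy) ** 2)
                                / (2.0 * lump_width ** 2))
    return b

b = lumpy_background()
```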

The lumpy-background images generated by the aforementioned method do not always follow a Gaussian distribution. One way to check is to look at a histogram of an ensemble of lumpy-background images, as shown in Fig. 3. This figure shows two histograms of 5000 lumpy-background images with Gaussian and Poisson noise realizations. Both histograms reveal that the lumpy-background images are non-Gaussian distributed, though to different degrees. The mean, minimum, and maximum of all pixel values used in the histogram for the Gaussian noise case are 12, $-106$, and 223, whereas they are respectively 12, 0, and 192 for the Poisson noise case. The difference in the lengths of the tails from the mean in each histogram reveals how much non-Gaussian behavior each data set has. Therefore, the background statistics in our simulation are not exactly Gaussian for the chosen level of Gaussian noise with respect to the background parameters, even though a higher level of Gaussian noise could cause the background data to be closer to Gaussian distributed.

With use of the Gaussian noise statistics of **n**, the conditional PDFs of channelized data given the channelized background are

$$\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b},{H}_{0}\right)=\frac{\exp\left[-\frac{1}{2{\sigma}_{n}^{2}}{\left(\mathbf{v}-{\mathbf{v}}_{b}\right)}^{t}{\left(\mathbf{T}{\mathbf{T}}^{t}\right)}^{-1}\left(\mathbf{v}-{\mathbf{v}}_{b}\right)\right]}{\sqrt{{\left(2\pi {\sigma}_{n}^{2}\right)}^{{N}_{c}}\left|\mathbf{T}{\mathbf{T}}^{t}\right|}},$$

$$\mathrm{pr}\left(\mathbf{v}|{\mathbf{v}}_{b},{H}_{1}\right)=\frac{\exp\left[-\frac{1}{2{\sigma}_{n}^{2}}{\left(\mathbf{v}-{\mathbf{v}}_{b}-{\mathbf{v}}_{s}\right)}^{t}{\left(\mathbf{T}{\mathbf{T}}^{t}\right)}^{-1}\left(\mathbf{v}-{\mathbf{v}}_{b}-{\mathbf{v}}_{s}\right)\right]}{\sqrt{{\left(2\pi {\sigma}_{n}^{2}\right)}^{{N}_{c}}\left|\mathbf{T}{\mathbf{T}}^{t}\right|}},$$

where **T** is the channel matrix consisting of ${N}_{c}$ rows of *M*-dimensional channel vectors, so that the channelized noise ${\mathbf{v}}_{n}=\mathbf{Tn}$ has covariance ${\sigma}_{n}^{2}\mathbf{T}{\mathbf{T}}^{t}$.

For our choice of efficient channels, the 80 singular vectors most strongly associated with the signal, reproduced from [7], were used. See Fig. 4 for the first 100 of these singular vectors in a 2D profile. These channels were found to be highly efficient for the ideal observer in the same task used in our simulation [7].

#### 4B. Proposal, Posterior, and Prior Densities for CIO-MCMC

For the proposal density, $q\left({\mathbf{v}}_{b}^{(*)}|{\mathbf{v}}_{b}^{\left(i\right)}\right)$, to sample ${\mathbf{v}}_{b}^{(*)}$ given the previously accepted sample ${\mathbf{v}}_{b}^{\left(i\right)}$, we assumed that the proposal was an i.i.d. Gaussian with width ${\sigma}_{c}$ and centered at ${\mathbf{v}}_{b}^{\left(i\right)}$. In particular, ${\sigma}_{c}=19/10$ and ${\sigma}_{c}=1/32$ were chosen, respectively, for the posterior and “prior” model cases of the CIO-MCMC.

For the posterior density, $\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}\right|\mathbf{v},{H}_{j})$, we assumed a Gaussian posterior with covariance ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\mathrm{post}}$ and mean ${\mathbf{v}}_{b}$; i.e., the posterior was centered at the true channelized background ${\mathbf{v}}_{b}$. For calculating the covariance of the Gaussian posterior, we assumed ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\mathrm{post}}={\mathbf{\sigma}}_{v}^{2}{\mathbf{I}}_{{N}_{c}}$, where each component of ${\mathbf{\sigma}}_{v}$ was estimated as 10 times the sample standard deviation of each element of the channelized lumpy background using many different realizations of the noiseless lumpy-background image. In the simulation, 5000 noiseless lumpy backgrounds were used for this calculation. See Fig. 5 for the sample standard deviations of the channelized background used in this simulation.

For the Gaussian “prior” density, $\mathrm{pr}\left({\mathbf{v}}_{b}^{(\cdot )}\right)$, we again assumed that it was centered at the channelized version of the given background, ${\mathbf{v}}_{b}$. For calculating the covariance of this prior, we assumed ${\mathbf{K}}_{{\mathbf{v}}_{b}}^{\text{“prior”}}={\mathbf{\sigma}}_{v}^{2}{\mathbf{I}}_{{N}_{c}}$, where ${\mathbf{\sigma}}_{v}$ contained $1/6$ times the same sample standard deviation of each element of the channelized background. Note that the particular values for ${\sigma}_{c}$ and ${\mathbf{\sigma}}_{v}$ were chosen after investigating a number of different choices by use of the consistency checks discussed in Section 3D.
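The covariance constructions described above can be sketched as follows; random vectors stand in for the 5000 channelized noiseless backgrounds, and only the scaling logic reflects the text:

```python
import numpy as np

# Sketch of Subsection 4B's covariance models: per-channel sample standard
# deviations of the channelized background, scaled by 10 for the posterior
# model and by 1/6 for the "prior" model, giving diagonal covariances
# K = sigma_v^2 * I. The synthetic samples below are illustrative only.
rng = np.random.default_rng(6)
vb_samples = rng.normal(0.0, 3.0, size=(5000, 80))  # stand-in realizations

sigma_hat = vb_samples.std(axis=0, ddof=1)   # per-channel sample std
sigma_post = 10.0 * sigma_hat                # posterior-model widths
sigma_prior = sigma_hat / 6.0                # "prior"-model widths

K_post = np.diag(sigma_post ** 2)            # diagonal covariance matrix
```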

In Table 1 , we summarize a complete list of all the parameter values used for modeling the proposal, posterior, and prior densities. In Section 5, we show that both the posterior and “prior” models enable the CIO-MCMC to well approximate ideal-observer performance. In Appendix B, we discuss how the CIO-MCMC algorithm with our “prior” model actually works as another MCMC method with a different posterior density.

#### 4C. Observer Performance, Variance Analysis, and Consistency Checks

To generate a Markov chain for calculating one likelihood ratio, we used $J=\mathrm{400,000}$ samples with a burn-in of ${J}_{0}=\mathrm{200,000}$ to choose stable samples for both the posterior and the prior model cases. These numbers were determined after many simulations with different values of *J* and ${J}_{0}$. To estimate the AUC of the ideal observer or of the CIO via the CIO-MCMC, the calculation was done for the same set of 200 pairs of signal-absent and signal-present lumpy images. This calculation was repeated five times, using five different random-number seeds to generate five different Markov chains, resulting in five different AUC estimates. The arithmetic mean of the five AUC estimates was used as the AUC of each observer. To estimate the variance of the AUC estimates, a multiple-reader multiple-case variance analysis based on a probabilistic approach [14], in particular the one-shot method by Gallas [15], was employed.
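For reference, the empirical AUC of an observer can be computed from its likelihood-ratio estimates with the Wilcoxon–Mann–Whitney statistic; a sketch with synthetic scores standing in for the signal-absent and signal-present estimates:

```python
import numpy as np

# Empirical AUC via the Wilcoxon-Mann-Whitney statistic: the fraction of
# (signal-absent, signal-present) score pairs ranked in the correct order,
# counting ties as 1/2. The synthetic scores are illustrative stand-ins.
def auc_wilcoxon(scores_h0, scores_h1):
    s0 = np.asarray(scores_h0, float)[:, None]
    s1 = np.asarray(scores_h1, float)[None, :]
    return float(np.mean((s1 > s0) + 0.5 * (s1 == s0)))

rng = np.random.default_rng(3)
h0 = rng.normal(0.0, 1.0, 200)   # signal-absent scores
h1 = rng.normal(1.5, 1.0, 200)   # signal-present scores, shifted upward
auc = auc_wilcoxon(h0, h1)
```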

As discussed in Subsection 3D, we performed the following consistency checks to see whether the CIO likelihood-ratio estimates calculated by the CIO-MCMC satisfy the properties of the likelihood ratio: (1) plotting ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}\right)$ as a function of ${\mathbf{v}}_{b}$ to show the random nature of the samples used for estimating the CIO likelihood ratio, $\widehat{\Lambda}\left(\mathbf{v}\right)$; (2) plotting $\widehat{\Lambda}\left(\mathbf{v}\right)$ as a function of the number of samples, *J*, to show the convergence of $\widehat{\Lambda}\left(\mathbf{v}\right)$; (3) plotting ${M}_{0}\left(\beta \right)$ as a function of β to show the stability of our CIO-MCMC algorithms; (4) plotting ${\langle \Lambda \rangle}_{1}-\mathrm{var}{\left(\Lambda \right)}_{0}$, using 10,000 pairs of signal-present and signal-absent estimates of the likelihood ratio, $\widehat{\Lambda}\left(\mathbf{v}|{H}_{1}\right)$ and $\widehat{\Lambda}\left(\mathbf{v}|{H}_{0}\right)$; and (5) calculating upper and lower bounds on the performance of the CIO.
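Check (3) can be sketched numerically: from signal-absent likelihood-ratio samples, ${M}_{0}(\beta)={\langle\Lambda^{\beta}\rangle}_{0}$ should equal 1 at both $\beta=0$ and $\beta=1$. The synthetic log-normal Λ values below, chosen so that ${\langle\Lambda\rangle}_{0}=1$, are illustrative stand-ins for MCMC output:

```python
import numpy as np

# Sketch of the moment-generating-function check: estimate
# M0(beta) = <Lambda^beta>_0 from signal-absent likelihood ratios and
# verify M0(0) = M0(1) = 1 with upward concavity in between. A log-normal
# with log-mean -sigma^2/2 has mean exactly 1, mimicking a valid
# likelihood ratio under H0.
rng = np.random.default_rng(4)
lam0 = np.exp(rng.normal(-0.5, 1.0, 100_000))  # synthetic Lambda | H0

def m0(beta, lam):
    return float(np.mean(lam ** beta))
```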

#### 4D. Analytical Expressions for the CIO Likelihood Ratio

In our simulation, either the posterior of or prior on the channelized background was assumed to be Gaussian. In addition, the noise was assumed to follow an i.i.d. Gaussian distribution with zero mean and width ${\sigma}_{n}$. Use of these Gaussian models allows for analytical expressions for the CIO likelihood ratio via the convolution theorem. That is, a convolution of two Gaussians with means, ${\mathbf{\mu}}_{1}$, ${\mathbf{\mu}}_{2}$, and variances, ${\mathbf{\sigma}}_{1}^{2}\mathbf{I}$, ${\mathbf{\sigma}}_{2}^{2}\mathbf{I}$, is another Gaussian with mean ${\mathbf{\mu}}_{1}+{\mathbf{\mu}}_{2}$ and variance $({\mathbf{\sigma}}_{1}^{2}+{\mathbf{\sigma}}_{2}^{2})\mathbf{I}$.

Under the assumption of the Gaussian posterior and noise statistics, ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}\right)$ in Eq. (10) can be replaced with Eq. (21), and $\mathrm{pr}\left({\mathbf{v}}_{b}|\mathbf{v},{H}_{0}\right)$ in Eq. (10) with the Gaussian with mean ${\mathbf{v}}_{b}$ and covariance ${\mathbf{\sigma}}_{v}^{2}{\mathbf{I}}_{{N}_{c}}$. Then, the integral given in Eq. (10) can be calculated analytically, resulting in the following expression for the CIO likelihood ratio:

$$\Lambda\left(\mathbf{v}\right)=\prod_{k=1}^{{N}_{c}}\exp\left[\frac{{v}_{s,k}\left({v}_{k}-{v}_{b,k}\right)}{{\sigma}_{n}^{2}}-\frac{{v}_{s,k}^{2}}{2{\sigma}_{n}^{2}}+\frac{{\sigma}_{v,k}^{2}{v}_{s,k}^{2}}{2{\sigma}_{n}^{4}}\right], \qquad (23)$$

where ${v}_{k}$, ${v}_{b,k}$, ${v}_{s,k}$, and ${\sigma}_{v,k}$ denote the *k*th elements of the vectors **v**, ${\mathbf{v}}_{b}$, ${\mathbf{v}}_{s}$, and ${\mathbf{\sigma}}_{v}$. Note that the orthogonality of the SVD channels was used to derive this expression (PLS channels are also applicable, owing to their orthogonality).

Under the assumption of the Gaussian prior and noise statistics, the probability of channelized data **v** under ${H}_{j}$, $j=0,1$, can be estimated via the convolution theorem: $\mathrm{pr}\left(\mathbf{v}|{H}_{0}\right)$ is a Gaussian with mean ${\mathbf{v}}_{b}$ and per-channel variance ${\sigma}_{n}^{2}+{\sigma}_{v,k}^{2}$, and $\mathrm{pr}\left(\mathbf{v}|{H}_{1}\right)$ is a Gaussian with mean ${\mathbf{v}}_{b}+{\mathbf{v}}_{s}$ and the same variance. The resulting CIO likelihood ratio is

$$\Lambda\left(\mathbf{v}\right)=\prod_{k=1}^{{N}_{c}}\exp\left[\frac{{v}_{s,k}\left(2\left({v}_{k}-{v}_{b,k}\right)-{v}_{s,k}\right)}{2\left({\sigma}_{n}^{2}+{\sigma}_{v,k}^{2}\right)}\right]. \qquad (24)$$

Table 2 summarizes these analytical expressions for the CIO likelihood ratio. These analytical expressions were used to compare against the CIO-MCMC method. They were also used to explore the relationship between the Gaussian widths and the AUC of the CIO. In our simulation, for computing the CIO likelihood ratio via these analytical expressions, another 1000 pairs of signal-present and signal-absent lumpy-background images, which were independent of the 200 pairs used for the CIO-MCMC and ideal observer, were used.
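As an illustration of the prior-model case, the convolution-theorem argument above makes $\Lambda\left(\mathbf{v}\right)$ a ratio of two Gaussians with per-channel variance ${\sigma}_{n}^{2}+{\sigma}_{v,k}^{2}$; a sketch under that assumption (not the paper's code):

```python
import numpy as np

# Sketch of the prior-model analytical CIO likelihood ratio: by the
# convolution theorem, pr(v|H0) and pr(v|H1) are Gaussians centered at
# v_b and v_b + v_s with per-channel variance sigma_n^2 + sigma_v^2, so
# Lambda(v) is their ratio (normalizations cancel). Illustrative only.
def lambda_prior_model(v, v_b, v_s, sigma_n, sigma_v):
    var = sigma_n ** 2 + np.asarray(sigma_v, float) ** 2
    log_ratio = np.sum(((v - v_b) ** 2 - (v - v_b - v_s) ** 2) / (2.0 * var))
    return np.exp(log_ratio)

v_b = np.zeros(3)
v_s = np.ones(3)
# Halfway between the two Gaussian means, the ratio is exactly 1.
print(lambda_prior_model(v_b + 0.5 * v_s, v_b, v_s, 2.0, 1.0))
```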

## 5. RESULTS AND DISCUSSION

#### 5A. Performance of CIO-MCMC Method

Figures 6 and 7, respectively, show the relationship of the width parameters of the Gaussian posterior and the Gaussian “prior” to the performance of the CIO estimated by the analytical method. While the other consistency checks provided ways to locate and narrow down reasonable ranges of these width parameters, the analytical method, when available, can be used to study how the width parameters affect the CIO performance and hence to confirm that the parameters chosen by the other checks were indeed appropriate. The chosen values of ${C}_{v}=10$ and ${C}_{v}=1/6$ used for the posterior and “prior” CIO-MCMC simulations are consistent with the outcomes shown in these figures. We note that the target performance for the CIO was the performance of the true ideal observer, 0.91 ± 0.02, as shown in Table 3.

Table 3 summarizes the maximum CIO performance achieved by the extended MCMC [5], CIO-MCMC, and analytical expressions given in Eqs. (23, 24) in comparison with ideal-observer performance estimated by the Kupinski MCMC. For the CIO results, 80 SVD channels associated with the signal were used. As indicated in Table 3, both the CIO-MCMC and the analytical CIO provide good approximations to the performance of the ideal observer and the CIO estimated by the extended MCMC and reproduced from [7]. In addition, we computed the AUC of the analytical CIO for the same 200 image pairs that were used for the CIO-MCMC and ideal observer. In this case, the AUCs of the analytical CIO for the posterior and prior models were 0.88 ± 0.03 and 0.92 ± 0.03 in the format of $\overline{\mathrm{AUC}}\pm 2\text{\hspace{0.17em}}\mathrm{STD}$, where $\overline{\mathrm{AUC}}$ and STD represent the sample mean AUC and the standard deviation. These are also good approximations to ideal-observer performance.

Figures 8 and 9 each present four empirical ROC curves generated using the 1000 estimates of the likelihood ratio for the ideal observer and for the CIO estimated by the extended MCMC, the CIO-MCMC, and the analytical method. For Fig. 8, the Gaussian posterior was used in the CIO-MCMC and the analytical method, whereas the Gaussian “prior” was used for Fig. 9.

These figures reveal how well, across all false-positive rates, the CIO estimated by the CIO-MCMC and by the analytical method predicts the ideal observer. It appears that the posterior CIO-MCMC predicts the ideal observer equally well across all false-positive rates, whereas in the range (0.05, 0.25) of false-positive rates the “prior” CIO-MCMC overestimates the ideal observer. The analytical CIO predicts the ideal observer well across all false-positive rates for both the posterior and prior cases.

For the reduction of computation times, the CIO-MCMC proved to be much faster than the Kupinski MCMC and the extended MCMC. For our simulation study, a computer cluster, which consisted of 56 dual-core dual-processor Opterons running at 2 GHz with 8 Gbytes of RAM, was used. To calculate the five AUC estimates of the CIO, using 80 independent cores on this cluster, the CIO-MCMC took only a few minutes, while the Kupinski MCMC and its extension, for the ideal observer and the CIO, respectively, required several hours.

#### 5B. Consistency Checks on the CIO-MCMC Method

Figures 10, 11, 12, 13 present four different consistency checks for the posterior CIO-MCMC case, whereas Figs. 14, 15, 16, 17 are for the “prior” CIO-MCMC case.

Figures 10 and 14 show the values of ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}|{\mathbf{v}}_{b}^{\left(i\right)}\right)$ at each of the $4\times {10}^{5}$ states of a Markov chain for the 80 SVD channel cases used in Table 3. In each figure, the random nature of the ${\Lambda}_{\mathrm{CBKE}}$ values indicates that the samples used for estimating a particular likelihood-ratio estimate, $\widehat{\Lambda}\left(\mathbf{v}\right)$, were drawn randomly. Figures 11 and 15 each show that the estimate $\widehat{\Lambda}\left(\mathbf{v}\right)$ converges well after a certain number of iterations, *J*.

Figures 12 and 16 each show a plot of the moment-generating function, ${M}_{0}\left(\beta \right)$, of the CIO likelihood ratio $\widehat{\Lambda}\left(\mathbf{v}\right)$ for the 80 SVD channel cases used in Table 3. We observed that the estimates of the CIO likelihood ratio satisfy the properties of ${M}_{0}\left(\beta \right)$ fairly well, such as the upward concavity discussed in Subsection 3.D. However, the property ${M}_{0}\left(1\right)=1$ was better satisfied by the “prior” CIO-MCMC than by the posterior CIO-MCMC, although even in the posterior case ${M}_{0}\left(1\right)$ did not blow up to the large values often seen with the other density choices discussed in the next section. Figures 13 and 17 show ${\langle \Lambda \rangle}_{1}-\mathrm{var}{\left(\Lambda \right)}_{0}$ as a function of the number of samples, *J*, used for estimating Λ. These figures reveal that the property ${\langle \Lambda \rangle}_{1}-\mathrm{var}{\left(\Lambda \right)}_{0}=1$ is not well satisfied.
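These checks can be reproduced from likelihood-ratio samples: for an exact likelihood ratio, ${M}_{0}(\beta)={\langle \Lambda^{\beta}\rangle}_{0}$ satisfies ${M}_{0}(0)={M}_{0}(1)=1$, and ${\langle \Lambda \rangle}_{1}-\mathrm{var}(\Lambda)_{0}$ equals ${\langle \Lambda \rangle}_{0}^{2}=1$ because ${\langle \Lambda \rangle}_{1}={\langle \Lambda^{2}\rangle}_{0}$. A sketch (ours) with hypothetical sample arrays:

```python
import numpy as np

def mgf_h0(lr_h0, betas):
    """Estimate M0(beta) = <Lambda^beta>_0 from H0 likelihood-ratio samples."""
    lr = np.asarray(lr_h0, dtype=float)
    return np.array([np.mean(lr ** b) for b in betas])

def identity_check(lr_h0, lr_h1):
    """<Lambda>_1 - var(Lambda)_0; equals <Lambda>_0^2 = 1 for an exact
    likelihood ratio, since <Lambda>_1 = <Lambda^2>_0."""
    return float(np.mean(lr_h1) - np.var(np.asarray(lr_h0, dtype=float)))
```

As a sanity check, the exact likelihood ratio of two unit-variance Gaussians, $\Lambda(x)=\exp(\mu x-\mu^2/2)$, satisfies these properties to within Monte Carlo error.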

Finally, the bounds on the CIO AUC for the posterior and “prior” CIO-MCMC algorithms were (0.59, 0.92) and (0.82, 0.96), respectively, which include the corresponding mean AUCs of 0.91 and 0.92. These bounds were computed from the 1000 likelihood-ratio estimates obtained from the five MCMC runs on the 200 pairs of images used in our simulation.

#### 5C. Other Choices for Posterior and Prior Densities

The “prior” choices suggested in this work may not be optimal, since a true prior density should be wide enough to cover the distributions of any random samples of the channelized background. Therefore, a more theoretically reasonable choice for the prior is to use the overall mean and covariance of the channelized background computed over many different realizations. We considered this option with 5000 different channelized noiseless background images. However, using this “prior” in the CIO-MCMC resulted in inappropriate selection of samples for constructing a Markov chain, which again led to nonconvergence of the chain even with millions of samples. We therefore did not include these results, since the CIO performance estimated with this choice was not close enough to the performance of the ideal observer.

Another choice is a flat prior, in which case the priors in the numerator and the denominator of the acceptance-probability expression given in Eq. (16) cancel. We tried this approach but did not obtain good results, for reasons similar to those discussed above; hence these results are not included in this paper.
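The cancellation can be seen in the generic Metropolis–Hastings acceptance probability: with a flat (constant) prior, the prior factors drop out and acceptance is driven by the likelihood and proposal terms alone. A log-space sketch (ours; the paper's Eq. (16) is the task-specific version):

```python
import numpy as np

def mh_acceptance(log_like_new, log_like_old,
                  log_prior_new=0.0, log_prior_old=0.0,
                  log_prop_rev=0.0, log_prop_fwd=0.0):
    """Metropolis-Hastings acceptance probability, computed in log space.

    With a flat prior, log_prior_new == log_prior_old, so the prior
    terms cancel exactly in the acceptance ratio.
    """
    log_ratio = (log_like_new - log_like_old
                 + log_prior_new - log_prior_old
                 + log_prop_rev - log_prop_fwd)
    return float(np.exp(min(0.0, log_ratio)))
```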

We also tried independent but differently distributed (i.d.d.) Gaussian proposal models combined with either an i.d.d. Gaussian posterior or prior model. However, this approach did not yield a good approximation to the ideal-observer AUC. In particular, samples of ${\Lambda}_{\mathrm{CBKE}}\left(\mathbf{v}\right)$ were biased, and the estimates of *Λ* as a function of the number of samples did not converge. We observed that the AUC of the CIO obtained with this approach decreased as the number of samples *J* increased, indicating that the approach does not provide stable, efficient sampling for the CIO-MCMC.

## 6. CONCLUSIONS

We have proposed a fast Markov-chain Monte Carlo method, the CIO-MCMC, for efficiently estimating ideal-observer performance via a novel strategy of sampling channelized-background vectors rather than background images. In addition, when the system noise statistics can be modeled as Gaussian, our approach provides analytical expressions for the CIO likelihood ratio, which yield good approximations to ideal-observer performance. The analytical CIO, while linear, is a new observer that outperforms the best linear observer, the channelized Hotelling observer, and approximates the ideal observer.

The CIO-MCMC requires far less computer time (a few minutes, as opposed to several hours for the Kupinski MCMC), thereby providing a potentially practical tool for real-time computation of ideal-observer performance. Furthermore, the analytical CIO provides another route to real-time computation of ideal-observer performance when the system noise statistics are Gaussian.

Our approach uses the noise statistics of the given imaging system in estimating the CIO likelihood ratio. Therefore, with knowledge of the noise statistics of an imaging system of interest, the CIO-MCMC has the potential to speed up the computation of ideal-observer performance in classification tasks involving complex or realistic background images in clinical applications.

## 7. FUTURE WORK

We noticed that the widths ${\sigma}_{c}$ and ${\mathbf{\sigma}}_{v}$ of the Gaussian proposal and prior densities used in the CIO-MCMC significantly affected how well the CIO performance estimated by the CIO-MCMC approximates the performance of the ideal observer and of the CIO estimated, respectively, by the Kupinski MCMC and its extension, which sample background images. The consistency checks performed in this work provided reasonable pointers for locating a neighborhood of proposal and posterior width parameters in which the CIO-MCMC yields good approximations to ideal-observer performance. However, for the CIO-MCMC to be used in tasks involving different types of images and non-Gaussian noise, further investigation is needed to find a method of choosing width parameters that yields both good sampling efficiency and good approximations to ideal-observer performance.
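One standard burn-in heuristic for locating such width parameters (a common MCMC tuning technique, not from the paper) is to adjust the proposal width toward a target acceptance rate:

```python
import numpy as np

def tune_width(sigma, acc_rate, target=0.3, gain=0.5):
    """Multiplicative proposal-width update: widen the proposal when the
    observed acceptance rate is above target, narrow it when below.
    The target value 0.3 and gain are illustrative choices."""
    return float(sigma * np.exp(gain * (acc_rate - target)))
```

Repeating this update over successive burn-in batches drives the chain toward the target acceptance rate before the consistency checks above are applied.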

With the density models used in our MCMC simulation, the property ${\langle \Lambda \rangle}_{1}-\mathrm{var}{\left(\Lambda \right)}_{0}=1$ was not well satisfied. This may be because multimodal marginal distributions were modeled with single-mode Gaussian models. Further investigation using different density models is necessary to improve the properties of the Markov chain so that it satisfies this condition as well as the other checks performed in our simulation.

We are also interested in applying our method to classification tasks involving different types of backgrounds, such as those in planar and volumetric x-ray breast imaging, for estimating ideal-observer performance. Lastly, while our investigation using a Gaussian noise model can be useful for various imaging modalities, including x-ray imaging systems, further investigations using other noise models, including Poisson noise, would benefit other imaging applications such as nuclear medicine. Therefore, we are interested in extending our method to noise models other than Gaussian.

## APPENDIX A: Replacing $\mathbf{\theta}$ with ${\mathbf{v}}_{b}$ in the Likelihood Ratio

To show that Eqs. (6) and (10) are equivalent, we use a standard method of transforming density functions, given by Eqs. (A1) and (A2), which relate the densities of **v** and $\mathbf{\theta}$, respectively. Note that $\mathrm{pr}\left(\mathbf{T}\mathbf{b}\left(\mathbf{\theta}\right)\mid \mathbf{v},{H}_{0}\right)=\mathrm{pr}\left(\mathbf{\theta}\mid \mathbf{v},{H}_{0}\right)$, since **T** is a scalar matrix and **b** is fully governed by the vector $\mathbf{\theta}$. Then, by use of Eq. (A1), Eq. (10) leads to Eq. (6) after integration over $\mathbf{\theta}$. The other direction [from Eq. (6) to Eq. (10)] follows similarly by use of Eq. (A2).

## APPENDIX B: “PRIOR” CIO-MCMC as Another Posterior CIO-MCMC

In this appendix, we show that the “prior” CIO-MCMC is equivalent to another MCMC method with a posterior density model. For ease of notation, we define the matrices **A** and **B** via $\mathbf{A}={\sigma}_{n}^{2}\mathbf{T}{\mathbf{T}}^{\u2020}$ and $\mathbf{B}={\mathbf{\sigma}}_{v}^{2}{\mathbf{I}}_{{N}_{c}}$.

For the posterior CIO-MCMC method, the Gaussian posterior model is a Gaussian density on a proposed channelized background ${\mathbf{v}}_{b}^{*}$ conditional on the channelized data **v**. For the “prior” CIO-MCMC method, the product of the Gaussian “prior” on ${\mathbf{v}}_{b}^{*}$ and the density of **v** conditional on ${\mathbf{v}}_{b}^{*}$ is proportional to $\mathrm{exp}\left(-Q/2\right)$, where the constants *C* and ${C}^{\prime}$ appearing in these densities do not depend on ${\mathbf{v}}_{b}^{*}$ and the quadratic quantity *Q* is given by

$$Q={\left(\mathbf{v}-{\mathbf{v}}_{b}^{*}\right)}^{\dagger}{\mathbf{A}}^{-1}\left(\mathbf{v}-{\mathbf{v}}_{b}^{*}\right)+{\left({\mathbf{v}}_{b}^{*}-{\mathbf{v}}_{b}\right)}^{\dagger}{\mathbf{B}}^{-1}\left({\mathbf{v}}_{b}^{*}-{\mathbf{v}}_{b}\right),$$

which expands into

$$Q={\mathbf{v}}_{b}^{*\dagger}\left({\mathbf{A}}^{-1}+{\mathbf{B}}^{-1}\right){\mathbf{v}}_{b}^{*}-{\mathbf{v}}_{b}^{*\dagger}\left({\mathbf{A}}^{-1}\mathbf{v}+{\mathbf{B}}^{-1}{\mathbf{v}}_{b}\right)-\left({\mathbf{v}}^{\dagger}{\mathbf{A}}^{-1}+{\mathbf{v}}_{b}^{\dagger}{\mathbf{B}}^{-1}\right){\mathbf{v}}_{b}^{*}+{\mathbf{v}}^{\dagger}{\mathbf{A}}^{-1}\mathbf{v}+{\mathbf{v}}_{b}^{\dagger}{\mathbf{B}}^{-1}{\mathbf{v}}_{b}.$$

Now consider the similar quadratic expression

$${Q}^{\prime}={\left({\mathbf{v}}_{b}^{*}-\mathbf{x}\right)}^{\dagger}{\mathbf{K}}^{-1}\left({\mathbf{v}}_{b}^{*}-\mathbf{x}\right),\qquad \mathbf{K}={\left({\mathbf{A}}^{-1}+{\mathbf{B}}^{-1}\right)}^{-1},\qquad \mathbf{x}=\mathbf{K}\left({\mathbf{A}}^{-1}\mathbf{v}+{\mathbf{B}}^{-1}{\mathbf{v}}_{b}\right).$$

Expanding ${Q}^{\prime}$ shows that *Q* and ${Q}^{\prime}$ differ only by terms that do not depend on ${\mathbf{v}}_{b}^{*}$. This means that the “prior” CIO-MCMC method is equivalent to an MCMC method using the Gaussian posterior with covariance $\mathbf{K}={({\mathbf{A}}^{-1}+{\mathbf{B}}^{-1})}^{-1}$. The mean of this posterior, **x**, is a matrix-weighted sum of the data **v** and the true background ${\mathbf{v}}_{b}$. This explains why the “prior” CIO-MCMC works fairly similarly to the posterior CIO-MCMC.
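The equivalence can be verified numerically: taking $\mathbf{x}=\mathbf{K}({\mathbf{A}}^{-1}\mathbf{v}+{\mathbf{B}}^{-1}{\mathbf{v}}_{b})$ as the explicit form of the matrix-weighted mean (our reading of the statement above), the quadratics *Q* and ${Q}^{\prime}$ differ by a constant independent of ${\mathbf{v}}_{b}^{*}$. A sketch with hypothetical positive-definite stand-ins for **A** and **B**:

```python
import numpy as np

def log_gauss_quad(z, mean, cov):
    """Quadratic form (z - mean)^T cov^{-1} (z - mean) of a Gaussian exponent."""
    d = z - mean
    return float(d @ np.linalg.solve(cov, d))

def posterior_params(A, B, v, vb):
    """Covariance K and mean x of the equivalent Gaussian posterior:
    K = (A^{-1} + B^{-1})^{-1}, x = K (A^{-1} v + B^{-1} vb)."""
    K = np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
    x = K @ (np.linalg.solve(A, v) + np.linalg.solve(B, vb))
    return K, x
```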

## ACKNOWLEDGMENTS

The authors thank Drs. Laura Thompson and Frank Samuelson at the FDA for their comments on the manuscript. This work is supported in part by the National Institute of Biomedical Imaging and Bioengineering at the National Institutes of Health intramural grant to the Center for Devices and Radiological Health, FDA.

**1.** H. L. Van Trees, *Detection, Estimation, and Modulation Theory, Part I* (Academic, 1968).

**2.** M. A. Kupinski, J. W. Hoppin, E. Clarkson, and H. H. Barrett, “Ideal observer computation using Markov-chain Monte Carlo,” J. Opt. Soc. Am. A **20**, 430–438 (2003).

**3.** X. He, B. S. Caffo, and E. Frey, “Toward realistic and practical ideal observer estimation for the optimization of medical imaging systems,” IEEE Trans. Med. Imaging **27**, 1535–1543 (2008).

**4.** C. K. Abbey and J. M. Boone, “An ideal observer for a model of x-ray imaging in breast parenchymal tissue,” in *Digital Mammography*, E. A. Krupinski, ed., Vol. 5116 of Lecture Notes in Computer Science (Springer-Verlag, 2008), pp. 393–400.

**5.** S. Park, H. H. Barrett, E. Clarkson, M. A. Kupinski, and K. J. Myers, “A channelized-ideal observer using Laguerre–Gauss channels in detection tasks involving non-Gaussian distributed lumpy backgrounds and a Gaussian signal,” J. Opt. Soc. Am. A **24**, B136–B150 (2007).

**6.** B. D. Gallas and H. H. Barrett, “Validating the use of channels to estimate the ideal linear observer,” J. Opt. Soc. Am. A **20**, 1725–1738 (2003).

**7.** S. Park, J. M. Witten, and K. J. Myers, “Singular vectors of a linear imaging system as efficient channels for the Bayesian ideal observer,” IEEE Trans. Med. Imaging **28**, 657–667 (2009).

**8.** J. Witten, S. Park, and K. J. Myers, “Using partial least square to compute efficient channels for the Bayesian ideal observer,” Proc. SPIE **7263**, 72630Q (2009).

**9.** S. Park and E. Clarkson, “Markov-chain Monte Carlo for the performance of a channelized-ideal observer in detection tasks with non-Gaussian lumpy backgrounds,” Proc. SPIE **6917**, 69170T (2008).

**10.** C. P. Robert and G. Casella, *Monte Carlo Statistical Methods*, 2nd ed. (Springer, 2004).

**11.** H. H. Barrett, C. K. Abbey, and E. Clarkson, “Objective assessment of image quality III: ROC metrics, ideal observers, and likelihood-generating functions,” J. Opt. Soc. Am. A **15**, 1520–1535 (1998).

**12.** E. Clarkson, “Bounds on the area under the receiver operating characteristic curve for the ideal observer,” J. Opt. Soc. Am. A **19**, 1963–1968 (2002).

**13.** J. P. Rolland and H. H. Barrett, “Effect of random background inhomogeneity on observer detection performance,” J. Opt. Soc. Am. A **9**, 649–658 (1992).

**14.** E. Clarkson, M. A. Kupinski, and H. H. Barrett, “A probabilistic development of the MRMC method,” Acad. Radiol. **13**, 1410–1421 (2006).

**15.** B. D. Gallas, “One-shot estimate of MRMC variance: AUC,” Acad. Radiol. **13**, 353–362 (2006).