## Abstract

Detection of signals in noisy images is necessary in many applications, including astronomy and medical imaging. The optimal linear observer for performing a detection task, called the Hotelling observer in the medical literature, can be regarded as a generalization of the familiar prewhitening matched filter. Performance on the detection task is limited by randomness in the image data, which stems from randomness in the object, randomness in the imaging system, and randomness in the detector outputs due to photon and readout noise, and the Hotelling observer accounts for all of these effects in an optimal way. If multiple temporal frames of images are acquired, the resulting data set is a spatio-temporal random process, and the Hotelling observer becomes a spatio-temporal linear operator. This paper discusses the theory of the spatio-temporal Hotelling observer and estimation of the required spatio-temporal covariance matrices. It also presents a parallel implementation of the observer on a cluster of Sony PLAYSTATION 3 gaming consoles. As an example, we consider the use of the spatio-temporal Hotelling observer for exoplanet detection.

©2009 Optical Society of America

## 1. Introduction

Signal detection [1, 2] is a fundamental task in image science and a common problem in many fields, from medicine (for example, the detection of tumors [1, 3, 4]) to astronomy (detection of extrasolar planets [5, 6] or near-Earth objects). As a consequence, an extensive theory and numerous algorithms have been developed to address the problem of signal detection from images [1,2,7–13]. Some systems, such as adaptive optics systems [14,15], can deliver sequences of spatially correlated images. Spatio-temporally correlated data also appear in medical applications, such as [16, 17]. In many cases, the sheer volume of spatio-temporal data makes it impractical to apply a detection algorithm directly to the full data set. Reduction in the temporal dimension is usually performed by adding together some or all of the frames. In adaptive optics, for example, images of the same object are taken over time and summed together either on the readout chip or in an external computer. The resulting single-frame image is used to perform the task of interest, but the information loss that results from the summation can reduce the detection performance. In medical applications, by contrast, it is usually not valid to assume that the object is constant over a long exposure time, and indeed the temporal dynamics of the object may be what defines the signal to be detected. Short-exposure images are inherently noisy, but long exposures wash out the signal. Full spatio-temporal processing is required for optimal signal detection.

The performance of a detection algorithm is mathematically quantified using receiver operating characteristic (ROC) analysis [18–20] and the area under the ROC curve (AUC) [19, 21]. With respect to that measure, the likelihood ratio is the optimal detector [21]. However, the likelihood ratio requires knowledge of the probability density functions under the hypotheses signal present and signal absent. Such probability density functions are often unknown or hard to estimate in practical cases. A more viable solution is the Hotelling observer [22], which requires only the knowledge of the data mean vector and covariance matrix. The Hotelling observer is linear, and it is optimal with respect to the class of linear observers [21] and a certain detectability measure to be defined below.

In this paper, the optimal linear (Hotelling) observer is applied to spatio-temporal imagery; for this reason, we refer to it as the spatio-temporal Hotelling observer. By construction, such an observer uses both the spatial and temporal correlations between pixels in a way that is optimal with respect to all linear observers. Methods for the estimation of the mean data vector and covariance matrix are described as well. Computational methods are discussed, and a parallel algorithm is implemented on a cluster of Sony PLAYSTATION 3 game consoles.

## 2. The spatio-temporal Hotelling observer

Let $\{\mathbf{g}^{(1)},\ldots,\mathbf{g}^{(J)}\}$ be a collection of *J* images of the same object *f* (assumed to be independent of time) taken over time. Each $\mathbf{g}^{(j)}$ is, in turn, a collection of *M* pixel intensities $\{g_1^{(j)},\ldots,g_M^{(j)}\}$. The data set $\{\mathbf{g}^{(1)},\ldots,\mathbf{g}^{(J)}\}$ represents the intensities of a total of *MJ* pixels, which we raster-scan and represent in compact form as the $MJ\times 1$ vector $\mathbf{G}$.

The task of interest is binary discrimination: given a noisy data set $\mathbf{G}$, an observer must decide whether the object *f* that produced $\mathbf{G}$ belongs to the “signal-absent” class $\Gamma_0$ or to the “signal-present” class $\Gamma_1$. Equivalently, the observer decides between hypothesis $H_0$, in which the signal is absent, and hypothesis $H_1$, in which the signal is present. In either case, the observer evaluates a real-valued, non-random function *t* on the random data $\mathbf{G}$ and compares $t(\mathbf{G})$ to a threshold $\tau$. If $t(\mathbf{G})>\tau$, hypothesis $H_1$ is assumed; otherwise, hypothesis $H_0$ is concluded. All of the possible outcomes and the associated terminology are summarized in Table 1.

Recall that $\mathbf{G}$ is a random vector, so $t(\mathbf{G})$ is a random variable. For any fixed value of $\tau$, the decision depends on $t(\mathbf{G})$, so it is a random variable as well. We can consider the probability density functions $\mathrm{pr}(t\mid H_0)$ and $\mathrm{pr}(t\mid H_1)$ of $t(\mathbf{G})$ given, respectively, hypothesis $H_0$ or $H_1$. These two densities allow us to formally define the true positive fraction (TPF) and the false positive fraction (FPF) as

$\mathrm{TPF}(\tau)=\int_{\tau}^{\infty}\mathrm{pr}(t\mid H_1)\,dt,\qquad \mathrm{FPF}(\tau)=\int_{\tau}^{\infty}\mathrm{pr}(t\mid H_0)\,dt,$

in which it is made clear that TPF and FPF are functions of *τ*.

If we change the value of *τ*, different values of TPF(*τ*) and FPF(*τ*) are obtained; a plot of TPF(*τ*) versus FPF(*τ*) as *τ* is varied over the real line is called a receiver operating characteristic (ROC) curve [18–20], and the area under the ROC curve (AUC) [19, 21] is a meaningful figure of merit for a binary classification task. The AUC is defined as [21]

$\mathrm{AUC}_{t(\mathbf{G})}=-\int_{-\infty}^{\infty}\mathrm{TPF}(\tau)\,\frac{d\,\mathrm{FPF}(\tau)}{d\tau}\,d\tau.$
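In practice, the TPF, FPF, and AUC are estimated from finite samples of the test statistic by sweeping the threshold over the observed values, as is done in Section 6. A minimal sketch of that computation (synthetic statistics and illustrative names, not the paper's code):

```python
import numpy as np

def empirical_roc(t0, t1):
    """Empirical (FPF, TPF) pairs obtained by sweeping the threshold tau
    over all observed values of the test statistic."""
    taus = np.concatenate(([-np.inf], np.sort(np.concatenate([t0, t1])), [np.inf]))
    tpf = np.array([(t1 > tau).mean() for tau in taus])
    fpf = np.array([(t0 > tau).mean() for tau in taus])
    return fpf, tpf

def empirical_auc(t0, t1):
    """Trapezoidal area under the empirical ROC curve; both fractions
    decrease as tau increases, so each segment contributes |dFPF| * mean TPF."""
    fpf, tpf = empirical_roc(t0, t1)
    return float(np.sum((fpf[:-1] - fpf[1:]) * (tpf[:-1] + tpf[1:]) / 2.0))

rng = np.random.default_rng(0)
t0 = rng.normal(0.0, 1.0, 2000)   # test statistics under H0 (synthetic)
t1 = rng.normal(2.0, 1.0, 2000)   # test statistics under H1 (synthetic)
auc = empirical_auc(t0, t1)
```

For two unit-variance normal populations separated by 2, the AUC is close to 0.92; an observer whose statistics are identically distributed under both hypotheses gives the chance value 0.5.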

Another figure of merit for the same task is the signal-to-noise ratio (SNR) of the test statistic $t(\mathbf{G})$:

$\mathrm{SNR}_{t(\mathbf{G})}=\frac{\langle t(\mathbf{G})\rangle_{\mathbf{G}\mid H_1}-\langle t(\mathbf{G})\rangle_{\mathbf{G}\mid H_0}}{\sqrt{\tfrac{1}{2}\,\mathrm{Var}\{t(\mathbf{G})\mid H_0\}+\tfrac{1}{2}\,\mathrm{Var}\{t(\mathbf{G})\mid H_1\}}},$

in which the notation $\langle t(\mathbf{G})\rangle_{\mathbf{G}\mid H_i}$ denotes the statistical expectation of the random variable $t(\mathbf{G})$ conditioned on hypothesis $H_i$ being true. Similarly, $\mathrm{Var}\{t(\mathbf{G})\mid H_i\}$ is the variance of $t(\mathbf{G})$ under hypothesis $H_i$. In (2), hypotheses $H_0$ and $H_1$ are assumed equiprobable. The quantity $\mathrm{AUC}_{t(\mathbf{G})}$ is a known, monotonic function of $\mathrm{SNR}_{t(\mathbf{G})}$ if $t(\mathbf{G})$ is normally distributed [21].
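For reference, when $t(\mathbf{G})$ is normally distributed under both hypotheses, with the SNR defined in (2), this monotonic relation takes the explicit form [21]

```latex
\mathrm{AUC}_{t(\mathbf{G})} \;=\; \frac{1}{2}
  + \frac{1}{2}\,\mathrm{erf}\!\left(\frac{\mathrm{SNR}_{t(\mathbf{G})}}{2}\right),
```

so, in the normal case, maximizing the SNR over a family of observers also maximizes the AUC.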

Many classical monographs discuss statistical decision theory; among them are [2] and [9]. These texts show that, for binary classification and for any ROC-related figure of merit (including the AUC), the optimal observer is the likelihood ratio [1, 2, 9, 21]:

$\Lambda(\mathbf{G})=\frac{\mathrm{pr}(\mathbf{G}\mid H_1)}{\mathrm{pr}(\mathbf{G}\mid H_0)},$

or, equivalently, its logarithm $\lambda(\mathbf{G})=\ln\Lambda(\mathbf{G})$. However, we note that $\Lambda(\mathbf{G})$ requires knowledge of the multivariate densities $\mathrm{pr}(\mathbf{G}\mid H_i)$, which are usually unknown or difficult to estimate. A more viable alternative can be found by restricting attention to linear observers, i.e., observers of the form $t(\mathbf{G})=\mathbf{W}^{\mathsf{T}}\mathbf{G}$, for an appropriate template vector $\mathbf{W}$ of the same size as $\mathbf{G}$. Here, the symbol $\mathsf{T}$ denotes the transpose of a vector or matrix, so $\mathbf{W}^{\mathsf{T}}\mathbf{G}$ is a scalar product. The optimal template vector can be derived by substituting $t(\mathbf{G})=\mathbf{W}^{\mathsf{T}}\mathbf{G}$ in (2) and maximizing $\mathrm{SNR}_{t}^{2}$ with respect to $\mathbf{W}$. The resulting template vector, which we call the Hotelling template vector [21, 22], is of the form

$\mathbf{W}_{\mathrm{Hot}}=\mathbf{K}_G^{-1}\,\mathbf{S},$

in which $\mathbf{K}_G$ is the covariance matrix of the data vector $\mathbf{G}$ and $\mathbf{S}=\bar{\bar{\bar{\mathbf{G}}}}_1-\bar{\bar{\bar{\mathbf{G}}}}_0$ is the image of the signal. The triple overbar represents an average over object randomness, system randomness, and measurement noise, as discussed below. The linear observer that uses $\mathbf{W}_{\mathrm{Hot}}$ is the Hotelling observer, defined as $t_{\mathrm{Hot}}(\mathbf{G})=\mathbf{W}_{\mathrm{Hot}}^{\mathsf{T}}\mathbf{G}$. The Hotelling observer is also called a prewhitening matched filter [21]; here, the prewhitening operation both corrects for the correlations within a single image and undoes the frame-to-frame correlations. Note that the Hotelling observer $t_{\mathrm{Hot}}(\mathbf{G})$ defined above is applied to spatio-temporal data, as opposed to the classical use of the Hotelling observer on purely spatial data.
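To make these definitions concrete, the following sketch (toy dimensions and synthetic Gaussian data; not the paper's implementation) forms the Hotelling template and the associated test statistic, and evaluates the linear-observer SNR that the template maximizes:

```python
import numpy as np

rng = np.random.default_rng(1)
MJ = 16                          # total number of raster-scanned pixel values (toy size)

# Toy symmetric positive-definite data covariance and a known signal image S
A = rng.normal(size=(MJ, MJ))
K_G = A @ A.T + MJ * np.eye(MJ)
S = rng.normal(size=MJ)

# Hotelling template W_Hot = K_G^{-1} S, computed with a solve rather than an explicit inverse
W_hot = np.linalg.solve(K_G, S)

def t_hot(G):
    """Hotelling test statistic: scalar product of the template with the data."""
    return W_hot @ G

def snr2(w):
    """Squared SNR of the linear observer w^T G, assuming mean difference S
    and a common covariance K_G under both hypotheses."""
    return (w @ S) ** 2 / (w @ K_G @ w)

snr2_hot = snr2(W_hot)           # the maximum over all templates, equal to S^T K_G^{-1} S
```

Any other template (for instance, the all-ones template that simply sums the data) attains a strictly smaller squared SNR.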

## 3. Analysis of the data covariance matrix

In the general case, we have three sources of randomness: detector noise, point-spread-function variability, and object variability. This leads to the decomposition [23]

$\mathbf{K}_G=\bar{\bar{\mathbf{K}}}_G^{\mathrm{noise}}+\bar{\mathbf{K}}_{\bar{G}}^{\mathrm{PSF}}+\mathbf{K}_{\bar{\bar{G}}}^{\mathrm{object}},$

with which the Hotelling template vector is written as

$\mathbf{W}_{\mathrm{Hot}}=\left[\bar{\bar{\mathbf{K}}}_G^{\mathrm{noise}}+\bar{\mathbf{K}}_{\bar{G}}^{\mathrm{PSF}}+\mathbf{K}_{\bar{\bar{G}}}^{\mathrm{object}}\right]^{-1}\mathbf{S}.$

The expression in (4) can be formally derived from the definition of the covariance matrix of $\mathbf{G}$:

$\mathbf{K}_G=\Big\langle\big\langle\langle[\mathbf{G}-\bar{\bar{\bar{\mathbf{G}}}}][\mathbf{G}-\bar{\bar{\bar{\mathbf{G}}}}]^{\mathsf{T}}\rangle_{\mathbf{G}\mid P,f}\big\rangle_{P\mid f}\Big\rangle_f,$

in which $P$ represents the sequence $\{\mathbf{p}^{(1)},\ldots,\mathbf{p}^{(J)}\}$ of point spread functions (PSFs). We will allow the PSFs to be random, as in the adaptive optics problem. The PSFs will be assumed known statistically (PSF-known-statistically, or PKS [21]), and their contribution $\bar{\mathbf{K}}_{\bar{G}}^{\mathrm{PSF}}$ to the data covariance matrix will be estimated by means of simulated data. It is important to note that full knowledge of the statistical properties of the PSFs is not needed. Instead, the Hotelling observer requires knowledge of only the mean signal $\mathbf{S}$ and the data covariance matrix $\mathbf{K}_G$. As long as these two quantities can be estimated with sufficient accuracy, the Hotelling observer will deliver high performance. No moments higher than the second are needed.

Expression (6) contains three averaging (or statistical expectation) steps. The innermost expectation is over the noise for given $P$ and $f$. The resulting quantity is averaged over $P$ given $f$, and, finally, we average over $f$. From (6), by adding and subtracting terms appropriately [23], (4) is derived, provided that:

$\bar{\bar{\mathbf{K}}}_G^{\mathrm{noise}}=\big\langle\langle\mathbf{K}_G^{\mathrm{noise}}\rangle_{P\mid f}\big\rangle_f=\Big\langle\big\langle\langle[\mathbf{G}-\bar{\mathbf{G}}][\mathbf{G}-\bar{\mathbf{G}}]^{\mathsf{T}}\rangle_{\mathbf{G}\mid P,f}\big\rangle_{P\mid f}\Big\rangle_f,$

$\bar{\mathbf{K}}_{\bar{G}}^{\mathrm{PSF}}=\langle\mathbf{K}_{\bar{G}}^{\mathrm{PSF}}\rangle_f=\Big\langle\big\langle[\bar{\mathbf{G}}-\bar{\bar{\mathbf{G}}}][\bar{\mathbf{G}}-\bar{\bar{\mathbf{G}}}]^{\mathsf{T}}\big\rangle_{P\mid f}\Big\rangle_f,$

$\mathbf{K}_{\bar{\bar{G}}}^{\mathrm{object}}=\big\langle[\bar{\bar{\mathbf{G}}}-\bar{\bar{\bar{\mathbf{G}}}}][\bar{\bar{\mathbf{G}}}-\bar{\bar{\bar{\mathbf{G}}}}]^{\mathsf{T}}\big\rangle_f,$

in which $\mathbf{G}$ with a variable number of overbars denotes an average with respect to noise, PSFs, and object. By construction, the random vectors $\mathbf{G}$, $\bar{\mathbf{G}}$, and $\bar{\bar{\mathbf{G}}}$ are uncorrelated [23].

For simplicity, we will assume that the signal we want to detect is known in brightness and location. In this case, we refer to a signal-known-exactly (SKE) problem [21]. The more realistic problem of unknown signal location can be handled by scanning the observer template [24–26]. The SKE hypothesis provides valuable information that the observer can use to improve detection performance (quantified by an increase in AUC) with respect to the detection-and-localization problem of [25]. We also assume that the object background is known exactly (background-known-exactly, or BKE, in the terminology of [21]).

The noise covariance matrix is usually easy to characterize. Indeed, if we assume that only photon (Poisson) and readout (Gaussian) noise are present in the sequence $\mathbf{G}$, then the noise in distinct elements of the detector is uncorrelated, and we can write

$\left[\bar{\bar{\mathbf{K}}}_G^{\mathrm{noise}}\right]_{m,m'}^{(j,j')}=\begin{cases}\sigma_m^2+\bar{\bar{\bar{g}}}_m^{(j)} & \text{if } m=m' \text{ and } j=j',\\ 0 & \text{otherwise},\end{cases}$

where $\sigma_m^2$ is the readout noise variance for the *m*-th pixel of the detector, and

$\bar{\bar{\bar{g}}}_m^{(j)}=\Pr(H_0)\,\bar{\bar{g}}_{m\mid H_0}^{(j)}+\Pr(H_1)\,\bar{\bar{g}}_{m\mid H_1}^{(j)}$

is the variance of the photon noise for the same pixel; because the noise is Poisson, this variance equals $\bar{\bar{\bar{g}}}_m^{(j)}$, the expected number of photons (from objects in the field of view and background) collected at the *m*-th pixel of the *j*-th image. The appropriateness of the Poisson model for the photon noise is justified by invoking the so-called Poisson postulates [21]. The readout noise variance $\sigma_m^2$ at each detector pixel is usually known, as provided by the detector’s manufacturer, or it can be measured. The matrix $\bar{\bar{\mathbf{K}}}_G^{\mathrm{noise}}$ above is diagonal with no zero terms on the diagonal, which guarantees the invertibility of $\mathbf{K}_G$.

## 4. Estimation of the Hotelling template vector

In this section, we describe how the quantities discussed in the previous section can be estimated. We will rely on simulation code and consider $L_1$ realizations of the PSF sequence $P$ and $L_2$ realizations of a random signal. Consider $L_1 L_2$ simulated noiseless data sets $\bar{\mathbf{G}}_1^{(\ell_1,\ell_2)}$, $\ell_1=1,\ldots,L_1$, $\ell_2=1,\ldots,L_2$, for hypothesis $H_1$ and, similarly, $L_1$ noiseless data sets $\bar{\mathbf{G}}_0^{(\ell_1)}$, $\ell_1=1,\ldots,L_1$, for hypothesis $H_0$.

The noise covariance matrix is estimated as:

$\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]_{m,m'}^{(j,j')}=\begin{cases}\sigma_m^2+\hat{\bar{\bar{\bar{g}}}}_m^{(j)} & \text{if } m=m' \text{ and } j=j',\\ 0 & \text{otherwise},\end{cases}$

where we denoted estimated quantities using the hat symbol and we set

$\hat{\bar{\bar{\bar{g}}}}_m^{(j)}=\left[\hat{\bar{\bar{\bar{\mathbf{G}}}}}\right]_m^{(j)}=\frac{1}{L_1}\sum_{\ell_1=1}^{L_1}\left[\bar{\mathbf{G}}_0^{(\ell_1)}\right]_m^{(j)}.$
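Because the estimated noise covariance is diagonal, it never needs to be stored as a full matrix; a vector of per-pixel variances suffices, and its inverse is elementwise. A minimal sketch of this bookkeeping (illustrative names and toy values, not the paper's code):

```python
import numpy as np

M, J = 4, 3                                     # pixels per frame, frames per sequence (toy sizes)
rng = np.random.default_rng(2)

g_mean = rng.uniform(50.0, 200.0, size=(J, M))  # estimated mean photon counts, one row per frame
sigma2_read = np.full(M, 9.0)                   # readout noise variance per pixel (assumed known)

# Diagonal of the noise covariance: readout variance plus Poisson variance (= mean count),
# raster-scanned into a single MJ-vector
noise_var = (sigma2_read[None, :] + g_mean).ravel()

# Elementwise inverse; this is what makes the matrix-inversion route below cheap
noise_var_inv = 1.0 / noise_var
```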

The PSF covariance matrix $\bar{\mathbf{K}}_{\bar{G}}^{\mathrm{PSF}}$ is estimated from simulated data as well:

$\hat{\bar{\mathbf{K}}}_{\bar{G}}^{\mathrm{PSF}}=\frac{1}{L-1}\sum_{\ell=1}^{L}\Delta\bar{\mathbf{G}}_0^{(\ell)}\left[\Delta\bar{\mathbf{G}}_0^{(\ell)}\right]^{\mathsf{T}},$

where

$\Delta\bar{\mathbf{G}}_0^{(\ell)}=\bar{\mathbf{G}}_0^{(\ell)}-\frac{1}{L}\sum_{\ell'=1}^{L}\bar{\mathbf{G}}_0^{(\ell')}.$

Expressions (8) and (9) show that ${\widehat{\overline{\mathbf{K}}}}_{\overline{\mathit{G}}}^{\mathrm{PSF}}$ is the sample estimate of ${\overline{\mathbf{K}}}_{\overline{\mathit{G}}}^{\mathrm{PSF}}$, estimated from the noiseless simulated data.

Finally, if we consider randomness in the signal to be detected or in the background on which it is superimposed, the object term of the covariance matrix can be estimated as:

$\left[\hat{\mathbf{K}}_{\bar{\bar{G}}}^{\mathrm{object}}\right]_{m,m'}^{(j,j')}=\frac{1}{L_2-1}\sum_{\ell_2=1}^{L_2}\left[\Delta\bar{\bar{\mathbf{G}}}_1^{(\ell_2)}\right]_m^{(j)}\left[\Delta\bar{\bar{\mathbf{G}}}_1^{(\ell_2)}\right]_{m'}^{(j')},$

where

$\left[\Delta\bar{\bar{\mathbf{G}}}_1^{(\ell_2)}\right]_m^{(j)}=\frac{1}{L_1}\sum_{\ell_1=1}^{L_1}\left[\Delta\bar{\mathbf{G}}_1^{(\ell_1,\ell_2)}\right]_m^{(j)}.$

It might be tempting to estimate the whole data covariance matrix $\mathbf{K}_G$ in (6) from noisy simulated data, without using the decomposition (4). A necessary (but not sufficient) condition for such an estimate $\hat{\mathbf{K}}_G$ to be nonsingular is that the number $L=L_1 L_2$ of simulated noisy sequences be greater than *MJ*, the order of $\mathbf{K}_G$ itself. If, for example, each image $\mathbf{g}^{(j)}$ is of size 64×64 and there are 25 of them in each sequence $\mathbf{G}$, then $MJ=64^2\cdot 25\approx 10^5$. Simulating such a huge number of image sequences is clearly prohibitive. Instead, the decomposition (4), along with (7), guarantees the invertibility of $\hat{\mathbf{K}}_G$. This can be proved by noting that $\hat{\mathbf{K}}_G$ is symmetric and (strictly) positive definite. Indeed,

$\mathbf{x}^{\mathsf{T}}\hat{\mathbf{K}}_G\,\mathbf{x}=\mathbf{x}^{\mathsf{T}}\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\mathbf{x}+\mathbf{x}^{\mathsf{T}}\left[\hat{\bar{\mathbf{K}}}_{\bar{G}}^{\mathrm{PSF}}+\hat{\mathbf{K}}_{\bar{\bar{G}}}^{\mathrm{object}}\right]\mathbf{x}\ge\mathbf{x}^{\mathsf{T}}\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\mathbf{x}=\sum_{j=1}^{J}\sum_{m=1}^{M}\left(\sigma_m^2+\hat{\bar{\bar{\bar{g}}}}_m^{(j)}\right)\left(x_m^{(j)}\right)^2>0,$

for any vector $\mathbf{x}\neq\mathbf{0}$.
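The dimensionality argument is easy to check numerically: with fewer sample sequences than data dimensions, the sample covariance is rank-deficient and hence singular. A small sketch with toy sizes (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(3)
MJ, L = 50, 20                       # data dimension and number of simulated sequences (toy sizes)

G = rng.normal(size=(L, MJ))         # L raster-scanned noisy sequences, one per row
K_hat = np.cov(G, rowvar=False)      # (MJ x MJ) sample covariance

rank = np.linalg.matrix_rank(K_hat)  # at most L - 1, so K_hat is singular whenever L <= MJ
```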

Finally, the average contribution of the signal to the image data is estimated as:

$\hat{\mathbf{S}}=\frac{1}{L_1 L_2}\sum_{\ell_1=1}^{L_1}\sum_{\ell_2=1}^{L_2}\bar{\mathbf{G}}_1^{(\ell_1,\ell_2)}-\frac{1}{L_1}\sum_{\ell_1=1}^{L_1}\bar{\mathbf{G}}_0^{(\ell_1)}.$

Armed with an estimate of the signal to be detected and estimates of the covariance matrices that appear on the right-hand side of (4), we can formally write an expression for the Hotelling template vector estimate: in analogy with (5), we define

$\hat{\mathbf{W}}_{\mathrm{Hot}}=\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}+\hat{\bar{\mathbf{K}}}_{\bar{G}}^{\mathrm{PSF}}+\hat{\mathbf{K}}_{\bar{\bar{G}}}^{\mathrm{object}}\right]^{-1}\hat{\mathbf{S}}.$

For the SKE/BKE case, (10) reduces to:

$\hat{\mathbf{W}}_{\mathrm{Hot}}=\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}+\hat{\bar{\mathbf{K}}}_{\bar{G}}^{\mathrm{PSF}}\right]^{-1}\hat{\mathbf{S}}.$

Although (10) and (11) make sense because $[\hat{\mathbf{K}}_G]^{-1}$ exists, the matrix we need to invert is huge, so computing the inverse by means of standard algorithms (such as Gaussian elimination) is computationally prohibitive [27]. Even if we could invert $\hat{\mathbf{K}}_G$, storing the inverse would require an enormous amount of disk space. However, we can recognize the particular structure of the PSF covariance matrix [see (8)] and devise an algorithm that takes advantage of it. Indeed, if we introduce the matrix $\mathbf{R}$ whose elements are

$[\mathbf{R}]_{\ell,m}^{(j)}=\frac{1}{\sqrt{L-1}}\left[\Delta\bar{\mathbf{G}}_0^{(\ell)}\right]_m^{(j)},$

then (8) is rewritten as

$\hat{\bar{\mathbf{K}}}_{\bar{G}}^{\mathrm{PSF}}=\mathbf{R}\mathbf{R}^{\mathsf{T}}.$

The $MJ\times L$ matrix $\mathbf{R}$ contains in its *ℓ*-th column the $MJ\times 1$ vector obtained by raster-scanning the pixels in $\Delta\bar{\mathbf{G}}_0^{(\ell)}$. The Woodbury matrix-inversion lemma [28–31] allows us to rewrite the inverse in (10) as a computationally tractable expression. In abstract form, the matrix-inversion lemma can be stated as follows:

$[\mathbf{A}-\mathbf{U}\mathbf{B}\mathbf{V}]^{-1}=\mathbf{A}^{-1}+\mathbf{A}^{-1}\mathbf{U}\,[\mathbf{I}_N-\mathbf{B}\mathbf{V}\mathbf{A}^{-1}\mathbf{U}]^{-1}\,\mathbf{B}\mathbf{V}\mathbf{A}^{-1},$

in which $\mathbf{I}_N$ is the identity matrix of order *N*. If we set $\mathbf{A}=\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}$, $\mathbf{B}=-\mathbf{I}_L$, $\mathbf{U}=\mathbf{R}$, and $\mathbf{V}=\mathbf{R}^{\mathsf{T}}$, then

$[\hat{\mathbf{K}}_G]^{-1}=\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}+\mathbf{R}\mathbf{R}^{\mathsf{T}}\right]^{-1}=\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\left\{\mathbf{I}_{MJ}-\mathbf{R}\mathbf{Q}^{-1}\mathbf{R}^{\mathsf{T}}\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\right\},$

where

$\mathbf{Q}=\mathbf{I}_L+\mathbf{R}^{\mathsf{T}}\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\mathbf{R}.$

The matrix $\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}$ is diagonal, so computing its inverse poses no computational problems. The matrix $\mathbf{Q}$ is of size $L\times L$ and invertible. Note that *L* is usually much smaller than *MJ*, which implies that $\mathbf{Q}^{-1}$ can be calculated in much less time than $[\hat{\mathbf{K}}_G]^{-1}$. Standard Gaussian elimination with pivoting is a fast and numerically stable way to invert $\mathbf{Q}$. Overall, using (12) is a more tractable and stable approach than computing (11) directly.
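As a numerical sanity check of this route, the sketch below (toy sizes and synthetic data; not the cluster implementation) applies the template computation through the Woodbury identity, inverting only the $L\times L$ matrix $\mathbf{Q}$, and compares the result with a direct dense solve:

```python
import numpy as np

rng = np.random.default_rng(4)
MJ, L = 60, 8                            # data dimension and number of simulated sequences

d = rng.uniform(10.0, 100.0, MJ)         # diagonal of the noise covariance (readout + Poisson)
R = rng.normal(size=(MJ, L)) / np.sqrt(L - 1)
S_hat = rng.normal(size=MJ)              # estimated signal image

# Direct route (what we want to avoid): dense solve with the full MJ x MJ covariance
K = np.diag(d) + R @ R.T
W_direct = np.linalg.solve(K, S_hat)

# Woodbury route: only the L x L matrix Q is inverted; the diagonal inverse is elementwise
d_inv = 1.0 / d
Q = np.eye(L) + R.T @ (d_inv[:, None] * R)
W_woodbury = d_inv * (S_hat - R @ np.linalg.solve(Q, R.T @ (d_inv * S_hat)))
```

The two routes agree to machine precision, but the Woodbury route never forms or factors an $MJ\times MJ$ matrix.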

## 5. Implementation

For this research, we took advantage of the computational capabilities available at the Center for Gamma-Ray Imaging, University of Arizona. In particular, we used a Sony PLAYSTATION 3 cluster consisting of 30 units. Each unit was equipped with a Cell Broadband Engine (Cell BE) processor, 256 MB of RAM, and 60 GB of disk space, and ran Fedora 7 Linux, kernel version 2.6.23. IBM’s Cell BE Software Development Kit 3.0 was installed on all of the units and used to generate code suitable for the Cell BE microprocessor. All machines in the cluster were connected by a 1-Gbit/s local area network (LAN). All algorithms were coded in the C programming language, and source files were compiled with the IBM XL C/C++ compiler, version 9.0. Communication between different nodes of the computer cluster was achieved using the Message Passing Interface (MPI) standard. The Cell BE architecture [32–34] has recently received enormous attention from the computer science community. The possibility of using up to nine processing cores and the support for SIMD instructions make the Cell BE processor particularly appealing as a low-cost solution for high-performance scientific computing [35–39].

An algorithm for the computation of the template vector $\hat{\mathbf{W}}_{\mathrm{Hot}}$ according to (10) and (12) was implemented and run on the PLAYSTATION 3 cluster. The algorithm is composed of two different programs: one run as a master process and one run as a slave process.

Pseudocode for the master process is shown below:

*J* ← number of images in each sequence
*L* ← number of simulated sequences
*N* ← number of slave processes
**for all** *n* ∈ {1, …, *N*} **do**
send “*initialize*” to slave process *n*
**end for**
**for all** *ℓ* ∈ {1, …, *L*} **do**
read $\bar{\mathbf{G}}_0^{(\ell)}$ from the disk
**for all** *n* ∈ {1, …, *N*} **do**
send $\bar{\mathbf{G}}_0^{(\ell)}$ to slave process *n*
**end for**
**end for**
**for all** *ℓ* ∈ {1, …, *L*} **do**
read $\bar{\mathbf{G}}_1^{(\ell)}$ from the disk
**for all** *n* ∈ {1, …, *N*} **do**
send $\bar{\mathbf{G}}_1^{(\ell)}$ to slave process *n*
**end for**
**end for**
*j* ← 1
*m* ← 0 {number of tasks completed}
**while** *m* < *J* **do**
**while** *j* ≤ *J* and there is an idle slave process **do**
*n* ← index of an idle slave process
send “*compute* $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)}$” to slave process *n*
*j* ← *j* + 1
**end while**
**if** *m* < *J* **then**
*n* ← slave process that has just computed $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j')}$
receive $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j')}$ from slave process *n*
save $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j')}$ to disk
*m* ← *m* + 1
**end if**
**end while**
**for all** *n* ∈ {1, …, *N*} **do**
send “*end of computation*” to slave process *n*
**end for**

Pseudocode for the slave processes is shown below:

*J* ← number of images in each sequence
*L* ← number of simulated sequences
$\sigma_m^2$ ← readout noise variance, for all *m* ∈ {1, …, *M*}
**for all** *ℓ* ∈ {1, …, *L*} **do**
receive $\bar{\mathbf{G}}_0^{(\ell)}$ from master process
save $\bar{\mathbf{G}}_0^{(\ell)}$ to disk
**end for**
**for all** *ℓ* ∈ {1, …, *L*} **do**
receive $\bar{\mathbf{G}}_1^{(\ell)}$ from master process
save $\bar{\mathbf{G}}_1^{(\ell)}$ to disk
**end for**
$\hat{\bar{\bar{\bar{\mathbf{G}}}}}_0 \leftarrow \frac{1}{L}\sum_{\ell=1}^{L}\bar{\mathbf{G}}_0^{(\ell)}$
$\hat{\bar{\bar{\bar{\mathbf{G}}}}}_1 \leftarrow \frac{1}{L}\sum_{\ell=1}^{L}\bar{\mathbf{G}}_1^{(\ell)}$
$\hat{\mathbf{S}} \leftarrow \hat{\bar{\bar{\bar{\mathbf{G}}}}}_1-\hat{\bar{\bar{\bar{\mathbf{G}}}}}_0$
**for all** *j* ∈ {1, …, *J*} **do**
$[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}]^{(j)} \leftarrow \mathrm{diag}\big(\sigma_1^2+\hat{\bar{\bar{\bar{g}}}}_{0,1}^{(j)},\ \ldots,\ \sigma_M^2+\hat{\bar{\bar{\bar{g}}}}_{0,M}^{(j)}\big)$
**end for**
**for all** *ℓ* ∈ {1, …, *L*} and *j* ∈ {1, …, *J*} **do**
$[\mathbf{R}]_{\ell}^{(j)} \leftarrow \frac{1}{\sqrt{L-1}}\left[\Delta\bar{\mathbf{G}}_0^{(\ell)}\right]^{(j)}$
**end for**
$\mathbf{Q} \leftarrow \mathbf{I}_L+\mathbf{R}^{\mathsf{T}}\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\mathbf{R}$
$\mathbf{Q}^{-1} \leftarrow \mathrm{inverse}(\mathbf{Q})$
**while** message “*end of computation*” not received **do**
receive “*compute* $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)}$” from master process
$[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)} \leftarrow \mathbf{0}_{M\times 1}$
**for all** *j′* ∈ {1, …, *J*} **do**
$\mathbf{T}^{(j,j')} \leftarrow (j,j')\text{-th block of } \left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\left\{\mathbf{I}_{MJ}-\mathbf{R}\mathbf{Q}^{-1}\mathbf{R}^{\mathsf{T}}\left[\hat{\bar{\bar{\mathbf{K}}}}_G^{\mathrm{noise}}\right]^{-1}\right\}$
$[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)} \leftarrow [\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)}+\mathbf{T}^{(j,j')}\,[\hat{\mathbf{S}}]^{(j')}$
**end for**
send $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)}$ to master process
**end while**
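The task-farm structure of the two programs can be sketched, in simplified single-machine form, with worker threads standing in for the MPI slave processes; each submitted task corresponds to one “compute $[\hat{\mathbf{W}}_{\mathrm{Hot}}]^{(j)}$” message. This is an illustrative stand-in, not the Cell BE/MPI implementation:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor, as_completed

J = 6                                  # number of template blocks, one "compute" task per j

def compute_template_block(j):
    """Stand-in for a slave computing [W_Hot]^{(j)}; returns a labeled dummy vector."""
    return j, np.full(4, float(j))

# The executor plays the master: it hands a task to each idle worker and
# collects the blocks in whatever order they complete.
results = {}
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(compute_template_block, j) for j in range(1, J + 1)]
    for fut in as_completed(futures):
        j, block = fut.result()
        results[j] = block             # the master "saves [W_Hot]^{(j)} to disk"
```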

In the implementation, we took advantage of the Cell BE processor and its SIMD capabilities. Floating-point values were stored in double precision, and SIMD instructions were used to compute the blocks of $[\hat{\mathbf{K}}_G]^{-1}$ as they were needed. Splitting the workload among all of the processing cores available on each PLAYSTATION 3 yielded more than a 15-fold reduction in computation time with respect to an implementation that uses only one core and ignores its SIMD capabilities. Had we used single-precision values, the speed-up would have been even larger.

## 6. Simulation results

As an example, we developed the spatio-temporal Hotelling observer for adaptive optics (AO) images [14, 15]. The simulation code we used is AOTools (webpage: http://www.tosc.com/software/software.html). The atmospheric thickness was set to 24 km and, in order to compute the aberrated wavefront, we assumed the frozen-flow hypothesis of atmospheric turbulence. The turbulence was computed according to the modified Hufnagel-Valley $C_n^2$ profile [40], in which the units of $C_n^2$ are meters to the −2/3 power. We assumed $C_n^2(h)$ to be negligible for $h\ge 24$ km. The Fried parameter $r_0$ [41] for the phase screens was 0.30 m at the wavelength λ = 500 nm. Our simulation also took into account effects such as scintillation and anisoplanatism [15]. The wind speed was 1.25 m/s. The telescope we simulated had a circular aperture of diameter 5 m, and the diameter of the central obscuration due to the secondary mirror was 0.50 m. The secondary mirror was supported by three arms. For the estimation of the template vector $\hat{\mathbf{W}}_{\mathrm{Hot}}$ in (10), we simulated *L* = 512 sequences containing *J* = 25 images each, of size 64×64 (pixel size 5.96 µm). The wavefront-sensor apparatus of many AO systems includes a lenslet array; in our simulation, we used a 32×32 array of lenslets of side length 0.02 m. The total power entering the telescope was split equally between the wavefront sensor and the science camera by a 50/50 beamsplitter. The system was assumed ideal, with an efficiency of 1 (i.e., no losses), and the AO loop ran at a speed of 1 kHz. The quality of the AO correction can be quantified by an average Strehl ratio of about 0.67.

We applied the spatio-temporal Hotelling observer to the detection of dim planets orbiting a star (assuming the star and the planet are in the same isoplanatic patch), and we simulated data sets for both hypotheses $H_0$ and $H_1$. The apparent magnitude of the star was *m* = 6, and the difference in apparent magnitude between the planet and the star was Δ*m* = 16.74. More specifically, we simulated *L* = 512 sequences for each hypothesis and, for each sequence, we used *J* = 25 frames containing *M* = 64² pixels each. The simulation was set up to mimic an astronomical observation; for example, we used typical values for the readout noise as found in [42]. The exposure time was 0.1 ms. The data used to estimate the template vector $\hat{\mathbf{W}}_{\mathrm{Hot}}$ were noiseless, and no noise was considered in the wavefront sensor. On the other hand, the data on which the detection task was performed were noisy and were obtained with different phase screens.

We considered the SKE/BKE/PKS case, and we compared the performance of the spatio-temporal Hotelling observer to that of the purely spatial Hotelling observer $t_{\mathrm{Hot}}(\mathbf{g})$ and the purely spatial matched filter [43, 44] observer $t_{\mathrm{mat}}(\mathbf{g})=\mathbf{s}^{\mathsf{T}}\mathbf{g}$, where $\mathbf{s}$ is the signal to be detected. In an effort to mimic current practice in astronomical detection, both $t_{\mathrm{Hot}}(\mathbf{g})$ and $t_{\mathrm{mat}}(\mathbf{g})$ were applied to long-exposure data $\mathbf{g}$ obtained by on-chip integration of many short-exposure frames. The observers were run on simulated noisy data. We generated test noise-free image sequences for both the planet-absent and planet-present hypotheses and degraded them with Poisson photon noise and Gaussian readout noise to generate $n=10{,}000$ noisy sequences of short-exposure images for the planet-absent hypothesis and $n$ noisy sequences of short-exposure images for the planet-present hypothesis. The corresponding long-exposure images were generated as well. These test data were supplied to the observers considered in this comparison, and the corresponding values of the test statistics $t_0^{(1)},\ldots,t_0^{(n)}$ and $t_1^{(1)},\ldots,t_1^{(n)}$ were collected. Binning these values would provide approximate plots of the densities $\mathrm{pr}(t\mid H_0)$ and $\mathrm{pr}(t\mid H_1)$ for a particular observer *t*. For each observer, we estimated the values of the TPF and FPF [see (1)] as:

$\mathrm{TPF}(\tau)=\frac{\left|\{i\ \text{such that}\ t_1^{(i)}>\tau,\ i=1,\ldots,n\}\right|}{n},\qquad \mathrm{FPF}(\tau)=\frac{\left|\{i\ \text{such that}\ t_0^{(i)}>\tau,\ i=1,\ldots,n\}\right|}{n},$

in which the notation $|S|$ stands for the number of elements of the set $S$; we varied the value of $\tau$ to obtain ROC curves. The ROC curves are reported in Fig. 1, and the corresponding values of the AUC, the standard deviation $\sigma$ of the AUC, and the SNR are reported in Table 2. The SNR for the three observers was computed according to (2), in which the conditional means $\langle\cdots\rangle$ and variances $\mathrm{Var}\{\cdots\}$ were replaced by the sample means and sample variances of the $t_0^{(i)}$ and $t_1^{(i)}$. The values of $\sigma$ were computed as described in [45].

The results reported in Fig. 1 and Table 2 confirm the superiority of the spatio-temporal Hotelling observer over the spatial Hotelling observer and the matched filter observer. The results of Fig. 1 complement those reported in [25]. Indeed, in [25], we compared the spatial Hotelling observer with current techniques used in astronomy for point-source detection, and we noted that the spatial Hotelling observer outperforms popular detection algorithms, such as [46]. In this paper, we showed that short-exposure images retain temporal information, which increases detection performance, and that the spatio-temporal Hotelling observer outperforms current long-exposure detection algorithms as well.

## 7. Conclusions

In this paper, statistical decision theory was rigorously applied to the problem of signal detection in spatio-temporal data. Three sources of randomness were initially considered: randomness in the object, randomness in the residual point spread function, and randomness due to measurement noise in the detector array. However, for simplicity, we assumed the signal in the object to be nonrandom and at a known location, and the background constant and known. We noted that, in many applications of interest, a complete description of the statistics of the data is not available; only the first and second moments of the data are required to compute the Hotelling observer. We remarked on the importance of the temporal correlations between pixels in the spatio-temporal data. Indeed, the Hotelling observer was applied to the whole sequence of temporally related images, rather than to their average.

In some cases, such as adaptive optics systems, a complete analytical study of the first two moments of the data might be complicated. Therefore, we proposed to estimate means and covariance matrices from simulated data. We implemented an algorithm for the computation of the spatio-temporal Hotelling observer, and we ran it on a cluster of Sony PLAYSTATION 3 gaming consoles. Our implementation took advantage of the computational capabilities of the Cell Broadband Engine Architecture (Cell BE) processor that equips each Sony PLAYSTATION 3 unit of our cluster. Thanks to the matrix-inversion lemma, the problem of computing the product between the inverse of a large covariance matrix and the signal was recast as a sequence of matrix multiplications involving the inverse of a much smaller matrix.
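The recasting via the matrix-inversion (Woodbury) lemma can be illustrated as follows. Here **D** is a hypothetical diagonal noise covariance and **U** a tall, thin matrix of mean-subtracted samples, so the decomposition **K** = **D** + **UU**^{T} is an assumption standing in for the paper's covariance model; only a small *p* × *p* system must be solved, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# n is the (large) dimension of the stacked data; p the (small) number of
# simulated sample images contributing the low-rank part of the covariance.
n, p = 400, 20

D = np.diag(rng.uniform(1.0, 2.0, n))   # diagonal noise covariance
U = rng.normal(size=(n, p))              # low-rank factor (n x p, p << n)
K = D + U @ U.T                          # full covariance (never inverted below)
s = rng.normal(size=n)                   # signal vector

# Direct solution (infeasible at realistic sizes).
x_direct = np.linalg.solve(K, s)

# Matrix-inversion lemma:
# (D + U U^T)^{-1} s = D^{-1} s - D^{-1} U (I_p + U^T D^{-1} U)^{-1} U^T D^{-1} s
Dinv = 1.0 / np.diag(D)                  # D is diagonal, so inverting it is cheap
Dinv_s = Dinv * s
Dinv_U = Dinv[:, None] * U
small = np.eye(p) + U.T @ Dinv_U         # only a p x p matrix to invert
x_woodbury = Dinv_s - Dinv_U @ np.linalg.solve(small, U.T @ Dinv_s)
```

Both routes give the same vector, but the second one replaces an *n* × *n* inversion with products against an *n* × *p* factor and a *p* × *p* solve.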

Research concerning an analytical expression for the PSF covariance matrix is currently underway for the case of an ideal thin lens with a weak Gaussian phase perturbation in the pupil. This model might be appropriate for high-performance adaptive optics systems, for which the residual perturbation can be shown to be Gaussian and weak. Such a study would be of great importance, as it would eliminate the need for simulation to estimate the PSF covariance matrix. If an analytical expression for the data covariance matrix were available, other methods for the computation of the Hotelling template vector (based, for example, on the Landweber algorithm or on the Neumann series expansion) could be investigated as well [21]. In addition, it would be interesting to find an expression for the probability density function of the data. With such a density, the likelihood ratio could be computed and compared with the Hotelling observer. We expect these two methods to deliver similar detection performance because, for large mean values, Poisson random variables are very well approximated by Gaussian random variables.
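The quality of the Gaussian approximation for large means can be checked directly. The snippet below (with a hypothetical mean count of 1000, not a value from the paper) compares the Poisson probability mass function against a Gaussian density of matched mean and variance over a ±3σ range of counts.

```python
import math

lam = 1000.0  # hypothetical large mean count

def poisson_pmf(k, lam):
    # evaluated in log space to avoid overflow at large k
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def gauss_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Maximum relative error over integer counts within +/- 3 sigma of the mean.
sigma = math.sqrt(lam)
ks = range(int(lam - 3 * sigma), int(lam + 3 * sigma) + 1)
max_rel_err = max(
    abs(poisson_pmf(k, lam) - gauss_pdf(k, lam, lam)) / poisson_pmf(k, lam)
    for k in ks
)
print(f"max relative error within 3 sigma: {max_rel_err:.3f}")
```

The agreement is excellent near the mean and degrades only slowly in the tails, consistent with the expectation that the two observers would perform similarly.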

In this study, we relied on simulation code to estimate the mean and covariance of the data. Differences between the actual mean and covariance of real data and the mean and covariance used in the detection task can arise from two sources: errors in the simulation code, or sampling errors due to the mean and covariance being estimated from a finite number of simulated sample images. In this work and in related previous studies [25, 47], we have investigated the latter point in great detail. The former point, the effect of model errors on detection performance, has not been investigated for the spatio-temporal Hotelling observer (implemented for the first time in this paper), but it has been studied in the medical literature for purely spatial Hotelling observers [21, 48]. The general conclusion is that even a crude model of the covariance affords a demonstrable improvement in detection performance, though more accurate models are of course preferable. This issue will be discussed in a separate paper.
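The sampling-error source can be illustrated with a toy experiment: the relative error of a sample covariance matrix shrinks roughly as 1/√N with the number N of simulated samples. The dimensions and the synthetic covariance below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "images" of dimension 8 with a known synthetic covariance K_true.
dim = 8
A = rng.normal(size=(dim, dim))
K_true = A @ A.T + np.eye(dim)
L = np.linalg.cholesky(K_true)

def sample_cov_error(n_samples):
    """Relative Frobenius error of the sample covariance from n_samples draws."""
    g = L @ rng.normal(size=(dim, n_samples))   # correlated sample vectors
    K_hat = np.cov(g)                            # sample covariance estimate
    return np.linalg.norm(K_hat - K_true) / np.linalg.norm(K_true)

err_small = sample_cov_error(100)
err_large = sample_cov_error(10000)
print(f"N=100: {err_small:.3f}   N=10000: {err_large:.3f}")
```

Increasing the number of simulated samples by a factor of 100 reduces the covariance estimation error by roughly a factor of 10, which is the behavior exploited when deciding how many simulated images to generate.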

## Acknowledgments

The authors would like to acknowledge the Center for Gamma-Ray Imaging at the University of Arizona and NIH grant R37EB000803 for their support.

## References and links

**1. **D. M. Green and J. A. Swets, *Signal Detection Theory and Psychophysics* (JohnWiley and Sons, Inc., New York, NY, 1966).

**2. **H. L. Van Trees, *Detection, Estimation, and Modulation Theory. Part III, Radar-sonar Signal Processing and Gaussian Signals in Noise* (John Wiley and Sons, Inc., New York, NY, 2001).

**3. **K. J. Myers, H. H. Barrett, M. C. Borgstrom, D. D. Patton, and G. W. Seeley, “Effect of Noise Correlation on Detectability of Disk Signals in Medical Imaging,” J. Opt. Soc. Am. A **2**, 1752–1759 (1985).

**4. **S. J. Starr, C. E. Metz, L. B. Lusted, and D. J. Goodenough, “Visual detection and localization of radiographic images,” Radiology **116**, 533–538 (1975).

**5. **D. W. Hughes, “Nonsolar Planets and Their Detection,” Nature **279**, 579 (1979).

**6. **M. Tamura, “Extra-solar Planet Detection,” Viva Origino **30**, 157–161 (2002).

**7. **J. O. Berger, *Statistical Decision Theory: Foundations, Concepts, and Methods* (Springer-Verlag, New York, NY, 1980).

**8. **R. O. Duda, P. E. Hart, and D. G. Stork, *Pattern Classification*, 2nd ed. (John Wiley and Sons, Inc., New York, NY, 2001).

**9. **J. L. Melsa and D. L. Cohn, *Decision and Estimation Theory* (McGraw-Hill, New York, NY, 1978).

**10. **R. N. McDonough and A. D. Whalen, *Detection of Signals in Noise* (Academic Press, San Diego, CA, 1995).

**11. **S. Park, E. Clarkson, M. A. Kupinski, and H. H. Barrett, “Efficiency of the human observer detecting random signals in random backgrounds,” J. Opt. Soc. Am. A **22**, 3–16 (2005).

**12. **M. P. Hobson and C. McLachlan, “A Bayesian Approach to Discrete Object Detection in Astronomical Data Sets,” Mon. Not. R. Astron. Soc. **338**, 765–784 (2003).

**13. **N. J. Kasdin and I. Braems, “Linear and Bayesian Planet Detection Algorithms for the *Terrestrial Planet Finder*,” Astrophys. J. **646**, 1260–1274 (2006).

**14. **H. W. Babcock, “The Possibility of Compensating Astronomical Seeing,” Publ. Astron. Soc. Pac. **65**, 229–236 (1953).

**15. **R. K. Tyson, *Adaptive Optics Engineering Handbook* (Marcel Dekker, New York, NY, 2000).

**16. **R. Akbarpour, S. N. Friedman, J. H. Siewerdsen, J. D. Neary, and I. A. Cunningham, “Signal and noise transfer in spatiotemporal quantum-based imaging systems,” J. Opt. Soc. Am. A **24**, B151–B164 (2007).

**17. **G. K. Yadava, S. Rudin, A. T. Kuhls-Gilchrist, and D. R. Bednarek, “Generalized Objective Performance Assessment of a New High-Sensitivity Microangiographic Fluoroscopic (HSMAF) Imaging System,” in *Medical Imaging 2008: Physics of Medical Imaging*, J. Hsieh and E. Samei, eds., vol. 6913, p. 69130U (Proc. SPIE, 2008).

**18. **H. H. Barrett, C. K. Abbey, and E. Clarkson, “Objective Assessment of Image Quality. III. ROC Metrics, Ideal Observers, and Likelihood-generating Functions,” J. Opt. Soc. Am. A **15**, 1520–1535 (1998).

**19. **J. A. Hanley and B. J. McNeil, “The Meaning and Use of the Area Under a Receiver Operating Characteristic (ROC) Curve,” Radiology **143**, 29–36 (1982).

**20. **S. H. Park, J. M. Goo, and C.-H. Jo, “Receiver Operating Characteristic (ROC) Curve: Practical Review for Radiologists,” Korean J. Radiol. **5**, 11–18 (2004).

**21. **H. H. Barrett and K. J. Myers, *Foundations of Image Science* (Wiley-Interscience, Hoboken, NJ, 2004).

**22. **H. Hotelling, “The Generalization of Student’s Ratio,” Ann. Math. Stat. **2**, 360–378 (1931).

**23. **H. H. Barrett, K. J. Myers, N. Devaney, and J. C. Dainty, “Objective Assessment of Image Quality: IV. Application to Adaptive Optics,” J. Opt. Soc. Am. A **23**, 3080–3105 (2006).

**24. **E. Clarkson, “Estimation receiver operating characteristic curve and ideal observers for combined detection/estimation tasks,” J. Opt. Soc. Am. A **24**, B91–B98 (2007).

**25. **L. Caucci, H. H. Barrett, N. Devaney, and J. J. Rodríguez, “Application of the Hotelling and ideal observers to detection and localization of exoplanets,” J. Opt. Soc. Am. A **24**, B13–B24 (2007).

**26. **D. Burke, N. Devaney, S. Gladysz, H. H. Barrett, M. K. Whitaker, and L. Caucci, “Optimal linear estimation of binary star parameters,” in *Adaptive Optics Systems*,
N. Hubin, C. E. Max, and P. L. Wizinowich, eds., vol. 7015, p. 70152J (Proc. SPIE, 2008).

**27. **H. H. Barrett, K. J. Myers, B. D. Gallas, E. Clarkson, and H. Zhang, “Megalopinakophobia: Its symptoms and cures,” in *Medical Imaging 2001: Physics of Medical Imaging*,
L. E. Antonuk and M. J. Yaffe, eds., vol. 4320, pp. 299–307 (Proc. SPIE, 2001).

**28. **D. J. Tylavsky and G. R. L. Sohie, “Generalization of the Matrix Inversion Lemma,” Proc. IEEE **74**, 1050–1052 (1986).

**29. **M. S. Bartlett, “An Inverse Matrix Adjustment Arising in Discriminant Analysis,” Ann. Math. Stat. **22**, 107–111 (1951).

**30. **J. Sherman and W. J. Morrison, “Adjustment of an Inverse Matrix Corresponding to a Change in One Element of a Given Matrix,” Ann. Math. Stat. **21**, 124–127 (1950).

**31. **H. V. Henderson and S. R. Searle, “On Deriving the Inverse of a Sum of Matrices,” SIAM Rev. **23**, 53–60 (1981).

**32. **J. A. Kahle, M. N. Day, H. P. Hofstee, C. R. Johns, T. R. Maeurer, and D. Shippy, “Introduction to the Cell Multiprocessor,” IBM J. Res. Dev. **49**, 589–604 (2005).

**33. **D. Pham, T. Aipperspach, D. Boerstler, M. Bolliger, R. Chaudhry, D. Cox, P. Harvey, P. Harvey, H. Hofstee, C. Johns, J. Kahle, A. Kameyama, J. Keaty, Y. Masubuchi, M. Pham, J. Pille, S. Posluszny, M. Riley, D. Stasiak, O. Suzuoki, M. Takahashi, J. Warnock, S. Weitzel, D. Wendel, and K. Yazawa, “Overview of the architecture, circuit design, and physical implementation of a first-generation Cell Processor,” IEEE J. Solid-State Circuits **41**, 179–196 (2006).

**34. **P. Hofstee, “Introduction to the Cell Broadband Engine,” Tech. rep., IBM Corporation, Riverton, NJ (2005).

**35. **S. Williams, J. Shalf, L. Oliker, S. Kamil, P. Husbands, and K. Yelick, “The potential of the Cell Processor for scientific computing,” in *Proceedings of the 3rd conference on Computing Frontiers*, pp. 9–20 (ACM, New York, NY, 2006).

**36. **D. A. Bader, V. Agarwal, K. Madduri, and S. Kang, “High performance combinatorial algorithm design on the Cell Broadband Engine processor,” Parallel Comput. **33**, 720–740 (2007).

**37. **C. Benthin, I. Wald, M. Scherbaum, and H. Friedrich, “Ray tracing on the Cell Processor,” in *IEEE Symposium on Interactive Ray Tracing*, pp. 15–23 (2006).

**38. **M. Sakamoto and M. Murase, “Parallel implementation for 3-D CT image reconstruction on Cell Broadband Engine^{™},” in *IEEE International Conference on Multimedia and Expo*, pp. 276–279 (2007).

**39. **M. Kachelrieß, M. Knaup, and O. Bockenbach, “Hyperfast parallel-beam and cone-beam backprojection using the Cell general purpose hardware,” Med. Phys. **34**, 1474–1486 (2007).

**40. **R. R. Parenti and R. J. Sasiela, “Laser-guide-star systems for astronomical applications,” J. Opt. Soc. Am. A **11**, 288–309 (1994).

**41. **D. L. Fried, “Statistics of a Geometric Representation of Wavefront Distortion,” J. Opt. Soc. Am. **55**, 1427–1435 (1965).

**42. **M. P. Fitzgerald and J. R. Graham, “Speckle Statistics in Adaptively Corrected Images,” Astrophys. J. **637**, 541–547 (2006).

**43. **G. L. Turin, “An introduction to matched filters,” IRE Trans. Inf. Theory **6**, 311–329 (1960).

**44. **D. Middleton, *An Introduction to Statistical Communication Theory* (IEEE Press, Piscataway, NJ, 1960).

**45. **B. D. Gallas, “One-shot estimate of MRMC Variance: AUC,” Acad. Radiol. **13**, 353–362 (2006).

**46. **E. Bertin and S. Arnouts, “SExtractor: Software for source extraction,” Astron. Astrophys. Suppl. Ser. **117**, 393–404 (1996).

**47. **L. Caucci, “Point Detection and Hotelling Discriminant: An Application in Adaptive Optics,” Master’s thesis, University of Arizona (2007).

**48. **C. K. Abbey and H. H. Barrett, “Human- and model-observer performance in ramp-spectrum noise: effects of regularization and object variability,” J. Opt. Soc. Am. A **18**, 473–488 (2001).