## Abstract

This paper analyzes the tradeoff between spatial resolution and noise for simple pinhole imaging systems with position-sensitive photon-counting detectors. We consider image recovery algorithms based on density estimation, using kernels derived from apodized inverse filters. This approach allows a continuous-object, continuous-data treatment of the problem. The analysis shows that to minimize the variance of the emission-rate density estimate at a specified reconstructed spatial resolution, the pinhole size should be directly proportional to that spatial resolution. For a Gaussian pinhole, the variance-minimizing full width at half maximum (FWHM) of the pinhole equals the desired object spatial resolution divided by √2. Simulation results confirm this conclusion empirically. The general approach is a potentially useful addition to the collection of tools available for imaging system design.

© 1998 Optical Society of America

## 1. Introduction

The design of imaging systems and image recovery algorithms generally involves tradeoffs between spatial resolution and noise. For example, in a simple pinhole imaging system, a larger pinhole allows more photons to pass through, which reduces the relative uncertainty of the measurements, but at a price of degraded spatial resolution. The problem of specifying system parameters such as pinhole size is therefore frequently encountered in the system design process. This paper considers the image recovery problem as an indirect density estimation problem, and considers the following design criterion: minimize the variance of the object estimate subject to a prespecified object spatial resolution. We show analytically that the variance-minimizing spatial resolution of the imaging system is proportional to the desired spatial resolution of the object estimate, when the kernel of the density estimator is based on an apodized inverse filter. This is an intuitive relationship, but one that has not been previously established theoretically to our knowledge.

There are a variety of methods that have been proposed for “optimal” choice of system parameters in imaging system design. Each such design method has its own merits and limitations, and it is unlikely that any single design method will be universally accepted as the canonical choice. Since imaging systems are often built to serve multiple purposes, system designers can benefit from exploring multiple design criteria. We make no pretense that the criterion analyzed in this paper is always preferable over alternatives, but we believe that it is a potentially useful addition to the collection of tools available for imaging system design.

One very principled approach to imaging system design is to optimize the system for the performance of a certain task or collection of tasks, *e.g*. [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]. In the context of detecting a known Gaussian signal in a stationary nonuniform background, Myers *et al*. [3] found that the optimum aperture size was fairly close to the Gaussian signal width. One can evaluate and optimize task performance with respect to system parameters using either human observers or machine observers. When imaging systems are designed for specific tasks, such as detecting myocardial perfusion defects [11], task performance is a natural metric for design. Often imaging systems must serve multiple purposes, so more generic measures of performance, such as spatial resolution and noise, are useful to complement task-specific performance measures. Manufacturers of medical imaging instruments typically report only spatial resolution and sensitivity, despite their indirect relationships to task performance.

Another approach to analyzing system performance is the Cramer-Rao (CR) bound. The ordinary CR bound is applicable only to unbiased estimators, which limits its utility in imaging problems where bias is typically inevitable. The uniform CR bound [12] is a recent extension of the CR bound that allows for biased estimators. The uniform CR bound provides the minimum achievable variance of an estimator whose bias-gradient length is below a specified threshold. Although the bias gradient is related to spatial resolution in some cases [13], in general it is currently fairly challenging to interpret the tradeoff between variance and bias-gradient length. In particular, we have observed some counter-intuitive results concerning optimal collimator resolution as a function of target image resolution, perhaps in part due to nonlinear and system-dependent relationships between bias-gradient length and spatial resolution [14, 15]. Our intuition is that as one reduces the required reconstructed spatial resolution, the variance-minimizing collimator size should increase.^{1} This intuitive relationship has not always been apparent in our uniform CR bound experiments, which motivated the work described in this paper.

One appeal of the uniform CR bound is that it is estimator independent. However, data from any system must eventually be reconstructed by some estimator, and the class of reasonable estimators is arguably fairly small. So as a part of exploration of system performance, it is sensible to also investigate resolution/noise tradeoffs for broad classes of estimators, albeit without the full generality of the uniform CR bound.

Another difficulty with CR bounds is that they (apparently) require an inherently discrete formulation both for the detector space (which is often but not always natural) and for the image space (which is somewhat unnatural since emission distributions are continuous entities). The discrete formulation leads to large matrix inversion problems, and, as shown in [15], challenges in interpretation due to differences in performance even for neighboring pixels depending on “small” discretization effects.

In this paper, we adopt a completely continuous formulation. The only discretization is at the final step (numerical integration), which is fundamentally different from initially formulating a discrete problem. The treatment is closely related to “indirect” density estimation [16, 17]. A good reference on direct density estimation is [18]. Other publications that are relevant to density estimation and image reconstruction include [19, 20, 21, 22, 23, 24].

Section 2 describes the problem generally. Section 3 describes the density estimation approach and analyzes it statistically. Section 4 focuses on the shift-invariant case, considers a specific kernel for the density estimator based on an apodized inverse filter, and derives analytic results for specific pinhole shapes. Section 5 reports numerical simulations that confirm the analysis.

## 2. Problem

Consider an emitting object with emission-rate density λ(*x̲*) having units of emissions per unit time per unit volume. The emission-rate density λ(*x̲*) is defined over a subset Ω of IR^{d}, where typically *d* = 2 for planar imaging and *d* = 3 for volumetric imaging. We assume that the time-ordered sequence of emissions originates from statistically independent random spatial locations {*X̲*_{1}, *X̲*_{2},…} drawn from a Poisson spatial point process [25]. In particular, the joint probability that the first *n*_{0} emissions originate in any (measurable) regions *B*_{n} ⊆ Ω is given by^{2}:

Spatial locations *x̲* ∈ Ω over which the emission-rate density λ(*x̲*) has relatively greater values are the “hot regions” of the object.

For any emission imaging system, not all emitted photons are detected. Let *s*(*x̲*) denote the *sensitivity function* of the emission system, *i.e*., *s*(*x̲*) is the probability that a photon emitted from location *x̲* is detected (somewhere) by the system. Then when the system detects an emission, the probability (density^{3}) that that emission originated from spatial location *x̲* is given by

where

is the total rate of detected events (with units detected counts per unit time^{4}).

Unfortunately, emission imaging systems never observe the emission locations {*X̲*_{n}} directly. Instead, the *n*th emitted photon is detected by a position-sensitive measurement device, which records a position *V̲*_{n}. (The detector may also record other event attributes such as energy, and our formulation allows for this generality, but for simplicity one can think of *V̲*_{n} as position.)

For a planar emitting object imaged by an ideal planar detector through an ideal pinhole located at the center of the transverse coordinates, the recorded spatial locations would be related to the locations of the emissions through the simple relationship *V̲*_{n} = *mX̲*_{n}, *n* = 1, 2,…, where *m* is the (negative) source magnification factor [26]. In this hypothetical case one could exactly recover the emission location from the measured event positions via the simple relationship ${\underline{X}}_{n}=\frac{1}{m}{\underline{V}}_{n}$. Given the recorded positions of many such photons *V̲*_{1},…, *V̲*_{N}, and therefore the positions of many emissions, one could estimate the density *f*(*x̲*) by a variety of well-known methods for density estimation, such as a simple kernel estimate:

where *k* is a nonnegative 2D kernel function (*e.g*. a Gaussian kernel) that integrates to unity [18]. This type of problem is called “direct” density estimation, since the measurements {*X̲*_{n}} are drawn directly from the density *f*(*x̲*) that we wish to estimate. The “bandwidth” parameter *β* controls the tradeoff between spatial resolution and “noise” (variance of *f̂*). When more detected events are available, one typically uses narrower kernels [18]. The problem of choosing the bandwidth for kernel density estimation for direct observations is well studied, and data-driven methods that are efficient in terms of mean squared error are available [27]. However, in the context of image recovery, the mean squared error metric, which equally weights bias and variance, may not be the appropriate loss function.
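As a concrete illustration of the direct kernel estimate (2), the following sketch evaluates a Gaussian-kernel density estimate at one point; the Gaussian kernel shape, the bandwidth value, and the standard-normal test data are illustrative assumptions rather than choices made in this paper.

```python
import numpy as np

def kernel_density_estimate(x, samples, beta):
    """Direct kernel density estimate: the average over the observed
    emission locations of a kernel of bandwidth beta, as in (2).

    x       : (d,) evaluation point
    samples : (N, d) directly observed locations X_n
    beta    : bandwidth, controlling the resolution/noise tradeoff
    """
    d = samples.shape[1]
    diff = (x - samples) / beta                       # (N, d) scaled offsets
    # Isotropic Gaussian kernel k(u) = (2*pi)^(-d/2) exp(-|u|^2 / 2);
    # dividing by beta^d makes the estimate integrate to unity.
    k = np.exp(-0.5 * np.sum(diff**2, axis=1)) / (2 * np.pi) ** (d / 2)
    return np.mean(k) / beta**d

rng = np.random.default_rng(0)
X = rng.standard_normal((5000, 2))     # hypothetical direct observations
f_hat = kernel_density_estimate(np.zeros(2), X, beta=0.3)
# the true density of this test distribution at the origin is 1/(2*pi)
```

Shrinking *β* sharpens the estimate but raises its variance, which is exactly the tradeoff discussed above.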

An ideal pinhole does not allow very many photons to pass, so in practice one must use a finite-sized pinhole. Furthermore, the position-sensitive detector does not record the *exact* position of the incident photon, but rather a noisy version thereof. In general the recorded positions {*V̲*_{n}} are only indirectly related to the emitted positions {*X̲*_{n}} through some conditional pdf:

which describes the “infinitesimal probability” that an emission at location *x̲* *that is detected* will be recorded at detector position *v̲*. The pdf *f*(*v̲*|*x̲*) includes both the pinhole collimator response function as well as the detector response function. A maximum smoothed likelihood approach to the indirect density estimation problem of estimating *f*(*x̲*) from the measurements {*V̲*_{n}} is considered in [28], although without analyzing the variance or spatial resolution of the estimator.

Note that since *f*(*v̲*|*x̲*) is a conditional pdf, it integrates to unity over *v̲*. We assume that this conditional pdf holds for *all* detected events, *i.e*.

This is a reasonable assumption, except possibly at high count rates when deadtime factors and pulse pile-up effects are significant, *e.g*. [29].

#### 2.1 The estimation problem

Suppose the imaging system records a total of *N* events during a prespecified period of time *t*_{0}. By assumption, *N* is a Poisson random variable with mean

Note that for a pinhole system the sensitivity *s*(*x̲*) of the system increases with pinhole size, and thus so does the expected number of recorded events. For each of these *N* events the system records independent and identically distributed position attributes *V̲*_{1},…, *V̲*_{N} that each have the following marginal pdf

by total probability. This is a list-mode formulation [30, 31].

We would like to estimate λ(*x̲*) from the observed random variables *N* and {*V̲*_{n}${\}}_{n=1}^{N}$. We assume that *t*_{0}, *s*(*x̲*), and *f*(*v̲*|*x̲*) are known, *i.e*., previously determined by some combination of modeling and system measurements. If we can find an estimate *f̂*(*x̲*) of *f*(*x̲*), then we can also easily estimate λ(*x̲*). Combining (1) and (3) we see that

Thus a natural estimator for the emission-rate density λ(*x̲*) is simply

We now turn to the problem of finding a suitable estimator *f̂*(*x̲*). This is called an *indirect* density estimation problem, since the observed measurements {*V̲*_{n}} are only indirectly related to the density *f*(*x̲*) of interest through (4).
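The measurement model above (a Poisson number of emissions, independent thinning by the sensitivity, and conditional blurring by *f*(*v̲*|*x̲*)) is straightforward to simulate. The sketch below assumes a shift-invariant Gaussian blur, a constant sensitivity, and an arbitrary total emission rate; all numerical values are hypothetical.

```python
import numpy as np

def simulate_list_mode(sample_f, s0, t0, blur_std, rate_emit, rng):
    """Generate list-mode data {V_1, ..., V_N} for a toy emission system.

    sample_f  : function(n) -> (n, d) draws from the normalized density f(x)
                (with constant sensitivity, f is proportional to lambda)
    s0        : constant detection probability s(x) = s0
    t0        : scan duration
    blur_std  : std of the assumed Gaussian response f(v|x) = h(v - x)
    rate_emit : total emission rate of the object
    """
    n_emit = rng.poisson(rate_emit * t0)     # Poisson number of emissions
    n_det = rng.binomial(n_emit, s0)         # independent thinning by s0
    X = sample_f(n_det)                      # origins of the detected events
    V = X + blur_std * rng.standard_normal(X.shape)   # recorded positions
    return V

rng = np.random.default_rng(1)
V = simulate_list_mode(lambda n: rng.standard_normal((n, 2)),
                       s0=0.1, t0=10.0, blur_std=0.5,
                       rate_emit=1000.0, rng=rng)
# N = V.shape[0] is Poisson with mean rate_emit * t0 * s0
```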

## 3. Kernel-based indirect density estimator

In this paper, we consider the following class of kernel-based indirect density estimators^{5}:

where *g*_{β}(*x̲*, *v̲*) is a user-defined function that typically partially inverts the blurring caused by the system response *f*(*v̲*|*x̲*). A concrete example of *g*_{β} is given in (22) below. The function *g*_{β} depends on a user-selected parameter *β* that determines the spatial resolution of *f̂*(*x̲*). For *f̂*(*x̲*) to be a valid pdf, it must integrate to unity. Therefore the function *g*_{β}(*x̲*, *v̲*) should integrate to unity over *x̲*:

In direct density estimation, one usually chooses kernel functions *k*(∙) in (2) that are nonnegative, since *f*(*x̲*) must be nonnegative, although it is possible to reduce bias by allowing kernels with negative values [18, p. 66]. In the context of *indirect* density estimation, the function *g*_{β} generally *must* contain negative values in order to partially deconvolve the blur in *f*(*v̲*|*x̲*). In the context of image reconstruction, one can think of the estimator (6) as an event-by-event backprojector^{6}, where the backprojector includes the ramp filter, apodizer, etc. Note that estimators in this class are probably suboptimal since they treat all photons equally. Nevertheless it is a useful class for examining resolution/noise tradeoffs.

Combining (6) with (5), the corresponding estimator for the emission-rate density is

This emission-rate density estimate is a function defined for all *x̲*. There are no pixels or voxels involved, which simplifies the analysis. The effective number of “degrees of freedom” is determined by *N* and by *β*. In the following we examine the statistical properties of the above estimator *$\widehat{\lambda}$*(*x̲*).
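A minimal numerical sketch of this estimator follows, assuming the event-by-event form $\hat{\lambda}(\underline{x})=\frac{1}{t_0\,s(\underline{x})}\sum_{n=1}^{N}g_\beta(\underline{x},\underline{V}_n)$, which is consistent with the mean calculation in Section 3.1. The Gaussian choice of *g*_{β} (rather than an apodized inverse filter) and all numerical values are illustrative assumptions.

```python
import numpy as np

def lambda_hat(x, V, g_beta, t0, s):
    """Event-by-event estimate of the emission-rate density at x:
    each recorded position V_n contributes g_beta(x, V_n), and the
    sum is scaled by 1/(t0 * s(x)).
    """
    return np.sum(g_beta(x, V)) / (t0 * s(x))

beta = 0.4   # resolution parameter of the recovery kernel

def g(x, V):
    """Hypothetical Gaussian recovery kernel g_beta(x, v)."""
    d2 = np.sum((x - V) ** 2, axis=1)
    return np.exp(-0.5 * d2 / beta**2) / (2 * np.pi * beta**2)

rng = np.random.default_rng(2)
V = rng.standard_normal((2000, 2))     # toy recorded event positions
est = lambda_hat(np.zeros(2), V, g, t0=10.0, s=lambda x: 0.1)
```

With a true deconvolving *g*_{β} the kernel would take negative values, as noted above; the nonnegative Gaussian here only illustrates the mechanics.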

#### 3.1 Mean function

The mean function for the estimator $\widehat{\lambda}$(*x̲*) is derived as follows^{7}:

$$\phantom{\rule{2em}{0ex}}={E}_{N}\left[{E}_{V}\left[\hat{\lambda}\left(\underline{x}\right)\mid N\right]\right]$$

$$\phantom{\rule{2em}{0ex}}={E}_{N}\left[\frac{N}{{t}_{0}\,s\left(\underline{x}\right)}{E}_{V}\left[{g}_{\beta}(\underline{x},\underline{V})\right]\right]$$

$$\phantom{\rule{2em}{0ex}}=\frac{r}{s\left(\underline{x}\right)}{E}_{V}\left[{g}_{\beta}(\underline{x},\underline{V})\right]$$

$$\phantom{\rule{2em}{0ex}}=\frac{r}{s\left(\underline{x}\right)}\int {g}_{\beta}(\underline{x},\underline{v})\,{f}_{V}\left(\underline{v}\right)d\underline{v}$$

$$\phantom{\rule{2em}{0ex}}=\frac{r}{s\left(\underline{x}\right)}\int {g}_{\beta}(\underline{x},\underline{v})\left[\int f\left(\underline{v}\mid \underline{x}'\right)f\left(\underline{x}'\right)d\underline{x}'\right]d\underline{v}$$

$$\phantom{\rule{2em}{0ex}}=\frac{r}{s\left(\underline{x}\right)}\int \left[\int {g}_{\beta}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]f\left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{2em}{0ex}}=\int \left[\frac{s\left(\underline{x}'\right)}{s\left(\underline{x}\right)}\int {g}_{\beta}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]\lambda \left(\underline{x}'\right)d\underline{x}'.$$

Thus we have the following *linear* relationship between the estimator mean and the true emission density:

$$\mu \left(\underline{x}\right)=\int \mathrm{psf}(\underline{x},\underline{x}')\,\lambda \left(\underline{x}'\right)d\underline{x}'$$

where

$$\mathrm{psf}(\underline{x},\underline{x}')\triangleq \frac{s\left(\underline{x}'\right)}{s\left(\underline{x}\right)}\int {g}_{\beta}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}$$

is effectively the overall point-spread function (PSF) for the combined image acquisition / reconstruction process. This PSF depends on the system response, which is contained in *f*(*v̲*|*x̲*), as well as the regularization in the reconstruction algorithm, which is contained in *g*_{β}. Equations (9) and (10) are space-varying generalizations of equation (10.32) in Barrett and Swindell’s text [32] for the mean of a filtered Poisson point process. If one uses a *g*_{β} function that has negative values, then the PSF may also have negative values. The reconstructed spatial resolution is controlled by the PSF (10), so for good spatial resolution, *g*_{β} must partially “deconvolve” any blur caused by *f*(*v̲*|*x̲*).

#### 3.2 Second-moment functions

Before computing the autocorrelation function of $\widehat{\lambda}$, we first note that since *N* is Poisson,

$$E\left[{N}^{2}\right]=\mathrm{Var}\left\{N\right\}+{\left(E\left[N\right]\right)}^{2}=E\left[N\right]+{\left(E\left[N\right]\right)}^{2}={t}_{0}r+{\left({t}_{0}r\right)}^{2},$$

so *E*[*N*^{2} − *N*] = (*t*_{0}*r*)^{2}. Then from (7), the autocorrelation function for $\widehat{\lambda}$(*x̲*) is derived as follows:
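This factorial-moment identity, *E*[*N*^{2} − *N*] = (*t*_{0}*r*)^{2}, is easy to check by simulation; the mean count chosen below is an arbitrary assumption.

```python
import numpy as np

# Check that for N ~ Poisson(mu), E[N^2 - N] = mu^2, the identity used
# in the autocorrelation derivation (mu plays the role of t0 * r).
rng = np.random.default_rng(0)
mu = 50.0
N = rng.poisson(mu, size=200_000)
empirical = np.mean(N**2 - N)    # Monte Carlo estimate of E[N^2 - N]
# should be close to mu**2 = 2500
```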

$${R}_{\hat{\lambda}}({\underline{x}}_{1},{\underline{x}}_{2})=E\left[\hat{\lambda}\left({\underline{x}}_{1}\right)\hat{\lambda}\left({\underline{x}}_{2}\right)\right]$$

$$\phantom{\rule{4em}{0ex}}={E}_{N}\left[{E}_{V}\left[\hat{\lambda}\left({\underline{x}}_{1}\right)\hat{\lambda}\left({\underline{x}}_{2}\right)\mid N\right]\right]$$

$$\phantom{\rule{4em}{0ex}}=\frac{1}{{t}_{0}^{2}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{N}\left[{E}_{V}\left[\sum _{n=1}^{N}\sum _{m=1}^{N}{g}_{\beta}({\underline{x}}_{1},{\underline{V}}_{n})\,{g}_{\beta}({\underline{x}}_{2},{\underline{V}}_{m})\mid N\right]\right]$$

$$\phantom{\rule{4em}{0ex}}=\frac{1}{{t}_{0}^{2}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{N}\left[\left({N}^{2}-N\right){E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\right]{E}_{V}\left[{g}_{\beta}({\underline{x}}_{2},\underline{V})\right]+N\,{E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\,{g}_{\beta}({\underline{x}}_{2},\underline{V})\right]\right]$$

$$\phantom{\rule{4em}{0ex}}=\frac{{r}^{2}}{s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\right]{E}_{V}\left[{g}_{\beta}({\underline{x}}_{2},\underline{V})\right]+\frac{r}{{t}_{0}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\,{g}_{\beta}({\underline{x}}_{2},\underline{V})\right]$$

$$\phantom{\rule{4em}{0ex}}=\mu \left({\underline{x}}_{1}\right)\mu \left({\underline{x}}_{2}\right)+\frac{r}{{t}_{0}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\,{g}_{\beta}({\underline{x}}_{2},\underline{V})\right].$$

Therefore the autocovariance function for $\widehat{\lambda}$ is

$${K}_{\hat{\lambda}}({\underline{x}}_{1},{\underline{x}}_{2})=E\left[\hat{\lambda}\left({\underline{x}}_{1}\right)\hat{\lambda}\left({\underline{x}}_{2}\right)\right]-\mu \left({\underline{x}}_{1}\right)\mu \left({\underline{x}}_{2}\right)$$

$$\phantom{\rule{4em}{0ex}}=\frac{r}{{t}_{0}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}{E}_{V}\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\,{g}_{\beta}({\underline{x}}_{2},\underline{V})\right].$$

To simplify, note that

$$E\left[{g}_{\beta}({\underline{x}}_{1},\underline{V})\,{g}_{\beta}({\underline{x}}_{2},\underline{V})\right]=\int {g}_{\beta}({\underline{x}}_{1},\underline{v})\,{g}_{\beta}({\underline{x}}_{2},\underline{v})\,{f}_{V}\left(\underline{v}\right)d\underline{v}$$

$$\phantom{\rule{8em}{0ex}}=\int {g}_{\beta}({\underline{x}}_{1},\underline{v})\,{g}_{\beta}({\underline{x}}_{2},\underline{v})\left[\int f\left(\underline{v}\mid \underline{x}'\right)f\left(\underline{x}'\right)d\underline{x}'\right]d\underline{v}$$

$$\phantom{\rule{8em}{0ex}}=\int \left[\int {g}_{\beta}({\underline{x}}_{1},\underline{v})\,{g}_{\beta}({\underline{x}}_{2},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]f\left(\underline{x}'\right)d\underline{x}',$$

so, using *f*(*x̲*′) = *s*(*x̲*′)λ(*x̲*′)/*r* from (5), the autocovariance function is

$${K}_{\hat{\lambda}}({\underline{x}}_{1},{\underline{x}}_{2})=\frac{1}{{t}_{0}\,s\left({\underline{x}}_{1}\right)s\left({\underline{x}}_{2}\right)}\int \left[\int {g}_{\beta}({\underline{x}}_{1},\underline{v})\,{g}_{\beta}({\underline{x}}_{2},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]s\left(\underline{x}'\right)\lambda \left(\underline{x}'\right)d\underline{x}'.$$

In particular, the variance function is

$${\sigma}^{2}\left(\underline{x}\right)\triangleq \mathrm{Var}\left\{\hat{\lambda}\left(\underline{x}\right)\right\}={K}_{\hat{\lambda}}(\underline{x},\underline{x})=\frac{1}{{t}_{0}\,{s}^{2}\left(\underline{x}\right)}\int \left[\int {g}_{\beta}^{2}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]s\left(\underline{x}'\right)\lambda \left(\underline{x}'\right)d\underline{x}'.$$

(This equation is a space-variant generalization of (10.31) in [32].) Note that the variance depends inversely on the scan time *t*_{0}, which is expected.

For a specific imaging system *f*(*v̲*|*x̲*), object λ(*x̲*), and reconstruction method *g*_{β} of interest, one could compute (9) and (12) for a range of *β* values or pinhole sizes to investigate the resolution/noise tradeoff. The computational tractability of such evaluations will depend on the complexity of *f*(*v̲*|*x̲*) and *g*_{β}. To obtain insight into the tradeoffs, we consider the simpler shift-invariant case in the remainder of this paper.

## 4. Shift-invariant case

Suppose the system is shift invariant, *i.e*. *f*(*v̲*|*x̲*) = *h*(*v̲* − *x̲*), where for example *h* is a normalized pinhole response function. A pinhole that is mechanically scanned over the emitting object is an example of a shift-invariant system^{8}. Such systems have been used for many years [26] and continue to find specialized applications, *e.g*. [33]. Note that since *f*(*v̲*|*x̲*) is a pdf, it must integrate to unity over *v̲*, so we must also have *h* integrate to unity. Suppose also that the reconstruction algorithm is shift invariant, *i.e*. *g*_{β}(*x̲*, *v̲*) = *g*_{β}(*x̲* − *v̲*) (with a slight notation abuse/reuse). Finally, assume that the sensitivity is also space invariant, *i.e*. *s*(*x̲*) = *s*_{0} for some positive constant *s*_{0}. Then the above expressions simplify as follows.

The mean expression (9) becomes:

$$\mu \left(\underline{x}\right)=\int \left[\int {g}_{\beta}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}\right]\lambda \left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{1.5em}{0ex}}=\int \left[\int {g}_{\beta}\left(\underline{x}-\underline{v}\right)h\left(\underline{v}-\underline{x}'\right)d\underline{v}\right]\lambda \left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{1.5em}{0ex}}=\int \left[\int {g}_{\beta}\left(\underline{x}-\underline{x}'-\underline{x}''\right)h\left(\underline{x}''\right)d\underline{x}''\right]\lambda \left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{1.5em}{0ex}}=\int ({g}_{\beta}*h)\left(\underline{x}-\underline{x}'\right)\lambda \left(\underline{x}'\right)d\underline{x}',$$

where *x̲*′′ = *v̲* − *x̲*′ and ∗ denotes *d*-dimensional convolution. Thus we have the following convolution relationship (cf. (10.11) of [32]):

*i.e*., the estimator mean is the convolution of the underlying emission-rate density with the system PSF *h*(∙) and the recovery kernel *g*_{β}(∙). Therefore the spatial resolution is controlled by

$$\mathrm{psf}\left(\underline{x}\right)\triangleq ({g}_{\beta}*h)\left(\underline{x}\right),$$

with corresponding frequency response or overall transfer function

$$\mathrm{PSF}\left(\underline{u}\right)\triangleq {G}_{\beta}\left(\underline{u}\right)H\left(\underline{u}\right),$$

where $F\left(\underline{u}\right)\triangleq \int f\left(\underline{x}\right)\,{e}^{-i2\pi \underline{u}\cdot \underline{x}}\,d\underline{x}$ denotes the *d*-dimensional Fourier transform of *f*(*x̲*).

Similarly, the inner variance term in (12) becomes:

$$\int {g}_{\beta}^{2}(\underline{x},\underline{v})\,f\left(\underline{v}\mid \underline{x}'\right)d\underline{v}=\int {g}_{\beta}^{2}\left(\underline{x}-\underline{v}\right)h\left(\underline{v}-\underline{x}'\right)d\underline{v}$$

$$\phantom{\rule{7em}{0ex}}=\int {g}_{\beta}^{2}\left(\underline{x}-\underline{x}'-\underline{x}''\right)h\left(\underline{x}''\right)d\underline{x}''$$

$$\phantom{\rule{7em}{0ex}}=({g}_{\beta}^{2}*h)\left(\underline{x}-\underline{x}'\right),$$

which is equivalent to the “noise kernel” of (10.35) of [32]. Thus, in the shift-invariant case the variance function (cf. (10.10) of [32]) simplifies to

$${\sigma}^{2}\left(\underline{x}\right)=\frac{1}{{t}_{0}{s}_{0}}\int ({g}_{\beta}^{2}*h)\left(\underline{x}-\underline{x}'\right)\lambda \left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{2em}{0ex}}=\frac{1}{{t}_{0}{s}_{0}}({g}_{\beta}^{2}*h*\lambda )\left(\underline{x}\right).$$

Therefore in the shift-invariant case it is straightforward to compute variances (approximately) using FFTs to calculate the convolutions.
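The FFT evaluation of this variance expression can be sketched in one dimension as follows; the Gaussian shapes chosen for *g*_{β}, *h*, and λ, and all grid and rate parameters, are illustrative assumptions.

```python
import numpy as np

def variance_map_fft(g_beta, h, lam, dx, t0, s0):
    """Evaluate sigma^2(x) = (g_beta^2 * h * lam)(x) / (t0 * s0) on a
    periodic grid via the convolution theorem (accurate when all three
    functions decay well inside the grid).
    """
    G2 = np.fft.fft(g_beta**2)
    H = np.fft.fft(h)
    L = np.fft.fft(lam)
    # each continuous convolution contributes one factor of dx
    return np.real(np.fft.ifft(G2 * H * L)) * dx**2 / (t0 * s0)

n, dx = 512, 0.05
x = (np.arange(n) - n // 2) * dx
gauss = lambda sig: np.exp(-0.5 * (x / sig) ** 2) / (np.sqrt(2 * np.pi) * sig)

# center each function at index 0 so the output is indexed from x = 0
sigma2 = variance_map_fft(np.fft.ifftshift(gauss(0.3)),       # g_beta
                          np.fft.ifftshift(gauss(0.2)),       # h
                          np.fft.ifftshift(100 * gauss(1.0)), # lambda
                          dx, t0=10.0, s0=0.1)
# sigma2[0] is the approximate variance of lambda_hat at x = 0
```

Because Gaussians convolve in closed form, this toy case can be checked analytically: the three factors combine into a single Gaussian whose peak value matches `sigma2[0]`.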

#### 4.1 Spatially smooth objects λ(*x̲*)

If the object is spatially smooth, *i.e*. the scale of the spatial variations of (*h* ∗ λ)(*x̲*) is large relative to the support of ${g}_{\beta}^{2}$(*x̲*), then the variance expression simplifies as follows:

$${t}_{0}{s}_{0}\,{\sigma}^{2}\left(\underline{x}\right)=({g}_{\beta}^{2}*h*\lambda )\left(\underline{x}\right)$$

$$\phantom{\rule{3em}{0ex}}=\int {g}_{\beta}^{2}\left(\underline{x}'\right)\left(h*\lambda \right)\left(\underline{x}-\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{3em}{0ex}}\approx \left(h*\lambda \right)\left(\underline{x}\right)\int {g}_{\beta}^{2}\left(\underline{x}'\right)d\underline{x}'$$

$$\phantom{\rule{3em}{0ex}}=\tilde{\lambda}\left(\underline{x}\right)\int {g}_{\beta}^{2}\left(\underline{x}'\right)d\underline{x}',$$

*where we define $\tilde{\lambda}$ ≜ h ∗ λ (cf. (10.41) of [32]). For small β or for spatially smooth objects this approximation should be fairly accurate^{9}. For pinhole imaging, g_β is a real function, so combining the above approximation with Parseval's theorem yields:*

$$\sigma^{2}\left(\underline{x}\right)\approx \frac{\tilde{\lambda}\left(\underline{x}\right)}{t_{0}s_{0}}\int g_{\beta}^{2}\left(\underline{x}'\right)d\underline{x}'=\frac{\tilde{\lambda}\left(\underline{x}\right)}{t_{0}s_{0}}\int {\left|G_{\beta}\left(\underline{u}\right)\right|}^{2}d\underline{u}.$$

*This is a very tractable approximation to the estimator variance.*

*4.2 Resolution-noise tradeoffs*

*In general, both the sensitivity s₀ and the overall transfer function PSF(u̲) = G_β(u̲)H(u̲) depend on the pinhole size. Therefore the expression (17) does not immediately provide the optimal choice for the pinhole size. In the following we consider a specific class of choices for g_β(∙), and show that the variance-minimizing pinhole size is proportional to the specified reconstructed spatial resolution.*

*The relationships (15) and (17) epitomize the resolution-noise tradeoff. For good spatial resolution in (15), we would like G_β(u̲) ≈ 1/H(u̲), but where H(u̲) is small, such a G_β(u̲) is large, which amplifies the variance term in (17).*

*4.3 Apodized inverse filter*

*Consider a general pinhole with transmissivity function t(x̲) ≥ 0, which we assume is normalized so that ∫t(x̲)dx̲ = 1. Let T(u̲) be the d-dimensional Fourier transform of t(x̲). The design problem is to choose the pinhole size w, where the scaled pinhole response is $h\left(\underline{x}\right)=\frac{1}{{w}^{d}}t\left(\frac{\underline{x}}{w}\right)$, for which H(u̲) = T(wu̲). Define the apodized inverse filter*

$$G_{\beta}\left(\underline{u}\right)\triangleq \frac{A\left(\beta \underline{u}\right)}{H\left(\underline{u}\right)}=\frac{A\left(\beta \underline{u}\right)}{T\left(w\underline{u}\right)},$$

*where A(βu̲) is a user-chosen apodizing function which we assume to be real and symmetric. Without loss of generality, we assume A(u̲) has been defined so that the FWHM of a(x̲) has unit length. From (15), the overall transfer function of this system is*

$$\mathrm{PSF}\left(\underline{u}\right)=G_{\beta}\left(\underline{u}\right)H\left(\underline{u}\right)=A\left(\beta \underline{u}\right),$$

*so the overall PSF is simply psf(x̲) = β^{−d}a(x̲/β). Therefore the FWHM of the overall PSF is precisely β for this estimator, for any pinhole size w. We now show that the variance-minimizing choice for the pinhole width w is directly proportional to β.*

*We assume s₀ = c₀w^p for some constant c₀ independent of w and for some power p > 0. Typically p = d; for example, the sensitivity of a circular pinhole is proportional to its area, which is proportional to w². From (17) the variance is approximately:*

$$\sigma^{2}\left(\underline{x}\right)\approx \frac{\tilde{\lambda}\left(\underline{x}\right)}{t_{0}c_{0}w^{p}}\int {\left|G_{\beta}\left(\underline{u}\right)\right|}^{2}d\underline{u}=\frac{c_{1}}{w^{p}}\int {\left|T\left(w\underline{u}\right)\right|}^{-2}A^{2}\left(\beta \underline{u}\right)d\underline{u}=\frac{c_{1}}{w^{p+d}}\int {\left|T\left(\underline{z}\right)\right|}^{-2}A^{2}\left(\frac{\underline{z}\beta}{w}\right)d\underline{z},$$

*where z̲ ≜ wu̲ and c₁ ≜ $\tilde{\lambda}$(x̲)/(t₀c₀). To find the pinhole width w that minimizes the variance, we zero the partial derivative of the variance σ² with respect to the width w:*

$$0=\frac{-\left(p+d\right)c_{1}}{w^{p+d+1}}\int {\left|T\left(\underline{z}\right)\right|}^{-2}A^{2}\left(\frac{\underline{z}\beta}{w}\right)d\underline{z}+\frac{c_{1}}{w^{p+d}}\int {\left|T\left(\underline{z}\right)\right|}^{-2}2A\left(\frac{\underline{z}\beta}{w}\right)\dot{A}\left(\frac{\underline{z}\beta}{w}\right)\left(\frac{-\underline{z}\beta}{w^{2}}\right)d\underline{z},$$

*or, by defining α = β/w:*

$$0=\int \frac{\frac{p+d}{2}A^{2}\left(\alpha \underline{z}\right)+A\left(\alpha \underline{z}\right)\dot{A}\left(\alpha \underline{z}\right)\alpha \underline{z}}{{\left|T\left(\underline{z}\right)\right|}^{2}}d\underline{z}.$$

*The above equality depends only on the ratio α = β/w. So if there is a root α₀ > 0 that corresponds to a global minimizer of the variance σ², then the variance-minimizing w is proportional to the reconstructed spatial resolution β through the relationship w_min = ${\alpha}_{0}^{-1}$β.*
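*To illustrate this root-finding view, the following sketch (an assumed 1D example: Gaussian transmissivity T and Gaussian apodizer A with the κ used in the Gaussian example of Section 4.5, and p = d = 1) locates α₀ by bisection on the integral condition; it converges near √2, consistent with the Gaussian-pinhole result of Section 4.5.*

```python
import numpy as np

# Hypothetical 1D case: Gaussian T and A, p = d = 1, so (p + d)/2 = 1.
kappa = 2 * np.sqrt(np.log(2) / np.pi)
z = np.linspace(-8.0, 8.0, 40001)
dz = z[1] - z[0]

def condition(alpha):
    """Integral in the optimality condition, as a function of alpha = beta/w."""
    A = np.exp(-np.pi * (alpha * z / kappa) ** 2)
    Adot = -2 * np.pi * (alpha * z) / kappa**2 * A   # derivative of A at alpha*z
    T = np.exp(-np.pi * (z / kappa) ** 2)
    return ((A**2 + A * Adot * alpha * z) / T**2).sum() * dz

# Bisection: the integral is negative for small alpha, positive for large alpha.
lo, hi = 1.2, 2.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if condition(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha0 = 0.5 * (lo + hi)   # root alpha_0, so w_min = beta / alpha_0
```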

*4.4 Relationship to sieves*

*The above apodized inverse filter is closely related to the method of sieves for density estimation [34], in the sense that $\widehat{\lambda}$(x̲) is an unbiased estimate of β^{−d}a(x̲/β) ∗ λ(x̲).*

*4.5 Gaussian pinhole example*

*As a concrete example, consider the Gaussian pinhole illustrated in Fig. 1 for d = 2 dimensional imaging. To simplify notation, define r = ∥x̲∥ and ρ = ∥u̲∥ for circularly symmetric 2D imaging.*

*The exact transmissivity of this aperture is*

$$\tau_{w}\left(r\right)=\begin{cases}e^{-\mu l\left(r/r_{b}\right)^{2}}, & r\le r_{b}\\ e^{-\mu l}, & r\ge r_{b}.\end{cases}$$

*However, if μl is sufficiently large, then we can approximate this transmissivity by*

$$\tau_{w}\left(r\right)\approx e^{-\pi \left(\kappa r/w\right)^{2}},$$

*where w is the FWHM of the pinhole response (i.e. τ_w(w/2) = 1/2) and*

$$\kappa \triangleq 2\sqrt{\ln 2/\pi}\approx 0.939.$$

*The sensitivity of this pinhole is therefore*

$$s_{w}=\int \tau_{w}\left(\left\Vert \underline{x}\right\Vert \right)d\underline{x}={\left(\frac{w}{\kappa}\right)}^{d},$$

*which is proportional to w^d as expected. The normalized transmissivity (for unit pinhole width w = 1) is*

$$t\left(r\right)=\frac{\tau_{1}\left(r\right)}{s_{1}}=\kappa^{d}e^{-\pi \left(\kappa r\right)^{2}},$$

*with corresponding frequency response*

$$T\left(\rho \right)=e^{-\pi \left(\rho /\kappa \right)^{2}}.$$

*We choose a Gaussian apodizing function A(u̲) = e^{−π(ρ/κ)²} so that the PSF corresponding to A(βρ) has FWHM β. The corresponding recovery filter is thus*

$$G_{\beta}\left(\underline{u}\right)=\frac{A\left(\beta \rho \right)}{T\left(w\rho \right)}=\frac{e^{-\pi \left(\beta \rho /\kappa \right)^{2}}}{e^{-\pi \left(w\rho /\kappa \right)^{2}}}=\mathrm{exp}\left(-\pi {\left(\frac{\rho \sqrt{\beta^{2}-w^{2}}}{\kappa}\right)}^{2}\right),$$

*with corresponding space-domain recovery kernel*

$$g_{\beta}\left(\underline{x}\right)=\frac{\kappa^{2}}{\beta^{2}-w^{2}}\mathrm{exp}\left(-\pi {\left(\frac{r\kappa}{\sqrt{\beta^{2}-w^{2}}}\right)}^{2}\right).$$

*Substituting A(∙) into (19), with z̲ = wu̲ as before and ρ = ∥z̲∥ in the integrals below, the variance function is approximately*

$$\sigma^{2}\left(\underline{x}\right)\approx \tilde{\lambda}\left(\underline{x}\right)\frac{\kappa^{d}}{t_{0}w^{2d}}\int e^{2\pi \left(\rho /\kappa \right)^{2}}\mathrm{exp}\left(-2\pi {\left(\frac{\rho \beta}{w\kappa}\right)}^{2}\right)d\underline{z}=\tilde{\lambda}\left(\underline{x}\right)\frac{\kappa^{d}}{t_{0}w^{2d}}\int \mathrm{exp}\left(-\pi \rho^{2}\frac{2}{\kappa^{2}}\left({\left(\frac{\beta}{w}\right)}^{2}-1\right)\right)d\underline{z}$$

$$=\tilde{\lambda}\left(\underline{x}\right)\frac{\kappa^{d}}{t_{0}w^{2d}}{\left[\frac{2}{\kappa^{2}}\left({\left(\frac{\beta}{w}\right)}^{2}-1\right)\right]}^{-d/2}=\tilde{\lambda}\left(\underline{x}\right)\frac{\kappa^{2d}}{2^{d/2}t_{0}}{\left(\beta^{2}w^{2}-w^{4}\right)}^{-d/2},$$

*where we have applied Parseval's theorem in conjunction with the Hankel transform to evaluate the integral. Note that we must have w < β for the integral to be finite, i.e. the pinhole width must be strictly smaller than the desired spatial resolution. Fig. 2 plots the variance versus pinhole width w. Differentiating the variance with respect to w and zeroing yields the relationship*

$$w_{\mathrm{min}}=\frac{\beta}{\sqrt{2}}.$$

*Taking the second derivative confirms that this is the variance-minimizing choice.*

*Therefore, we have shown that for a Gaussian pinhole imaging system and an apodized inverse filter reconstruction method, the variance-minimizing pinhole width is proportional to the desired reconstructed spatial resolution.*
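*This conclusion can be checked numerically by minimizing the closed-form variance directly. A sketch for the d = 2 case (with an arbitrary β; the variance is proportional to (β²w² − w⁴)^{−1} up to constants independent of w):*

```python
import numpy as np

# Gaussian pinhole, d = 2: variance proportional to (beta^2 w^2 - w^4)^(-1).
beta = 3.0
w = np.linspace(0.05, beta - 0.05, 100_000)   # admissible widths, 0 < w < beta
var = 1.0 / (beta**2 * w**2 - w**4)
w_min = w[np.argmin(var)]                     # close to beta / sqrt(2)
```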

*5. Laplacian pinhole example*

*A somewhat more conventional pinhole is the Laplacian ^{10} pinhole shown in Fig. 3. The exact transmissivity of such a pinhole is*

$$\tau_{w}\left(r\right)=\begin{cases}e^{-\mu lr/r_{b}}, & r\le r_{b}\\ e^{-\mu l}, & r\ge r_{b}.\end{cases}$$

*If μl is sufficiently large, we can approximate this transmissivity by*

$$\tau_{w}\left(r\right)\approx e^{-\gamma r/w},$$

*where γ = 2 ln 2 and w is the FWHM of the pinhole response. The sensitivity of this pinhole is*

$$s_{w}=\int \tau_{w}\left(\left\Vert \underline{x}\right\Vert \right)d\underline{x}=2\pi {\left(\frac{w}{\gamma}\right)}^{2},$$

*which is also proportional to w². The normalized transmissivity is*

$$t\left(r\right)=\frac{\tau_{1}\left(r\right)}{s_{1}}=\frac{\gamma^{2}}{2\pi}e^{-\gamma r},$$

*which has corresponding frequency response [35]*

*$$T\left(\rho \right)=\frac{{\gamma}^{3}}{{\left[{\left(2\mathit{\pi \rho}\right)}^{2}+{\gamma}^{2}\right]}^{\frac{3}{2}}}.$$*

*We again choose a Gaussian apodizing function A(ρ) = e^{−π(ρ/κ)²}, where κ was defined in (20), so the PSF again has FWHM β. Substituting into (19), the estimate variance is approximately*

$$\sigma^{2}\left(\underline{x}\right)\approx \frac{\tilde{\lambda}\left(\underline{x}\right)\gamma^{2}}{2\pi t_{0}w^{4}}\int_{0}^{2\pi}\int_{0}^{\infty}\frac{1}{\gamma^{6}}{\left[\left(2\pi \rho \right)^{2}+\gamma^{2}\right]}^{3}\mathrm{exp}\left(-2\pi {\left(\frac{\rho \beta}{w\kappa}\right)}^{2}\right)\rho \,d\rho \,d\theta =\frac{\tilde{\lambda}\left(\underline{x}\right)}{t_{0}w^{4}\gamma^{4}}\int_{0}^{\infty}\rho {\left[\left(2\pi \rho \right)^{2}+\gamma^{2}\right]}^{3}\mathrm{exp}\left(-2\pi {\left(\frac{\rho \beta}{w\kappa}\right)}^{2}\right)d\rho.$$

*After some tedious integration, we arrive at*

$$\sigma^{2}\left(\underline{x}\right)\approx \frac{\tilde{\lambda}\left(\underline{x}\right)}{t_{0}}\frac{\kappa^{4}}{2\beta^{4}}\left[6y^{2}+6y+3+y^{-1}\right]\quad \mathrm{where}\quad y=2\pi {\left(\frac{w\kappa}{\gamma \beta}\right)}^{2}.$$

*The variance-minimizing pinhole size can be found numerically to be w_min ≈ 0.3β.*
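*That numerical minimization can be sketched as follows (using the reconstructed bracketed term above, which depends on w only through the dimensionless y, so its minimizer fixes the ratio w/β; γ and κ as defined earlier):*

```python
import numpy as np

gamma = 2.0 * np.log(2.0)                    # gamma = 2 ln 2
kappa = 2.0 * np.sqrt(np.log(2.0) / np.pi)   # kappa from the Gaussian example

# Variance bracket f(y) = 6y^2 + 6y + 3 + 1/y, with y = 2*pi*(w*kappa/(gamma*beta))^2.
y = np.linspace(1e-3, 2.0, 200_000)
f = 6.0 * y**2 + 6.0 * y + 3.0 + 1.0 / y
y_min = y[np.argmin(f)]

# Convert the minimizing y back to a pinhole width as a fraction of beta.
w_over_beta = (gamma / kappa) * np.sqrt(y_min / (2.0 * np.pi))
```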

*The variance for the Laplacian pinhole is also plotted in Fig. 2. The minimum standard deviation for the Gaussian pinhole is about 2.4 times lower than that of the Laplacian pinhole, presumably because the Gaussian pinhole is better matched to the Gaussian-apodized inverse filter.*

*6. Simulation results*

*Since the analytical development for the variance-minimizing pinhole width involved approximations, we performed Monte Carlo simulations to evaluate the empirical variance of the estimators as a function of pinhole size.*

*We used a 1D object of the form*

$$\lambda \left(\underline{x}\right)\propto 9\delta \left(x-146\right)+\mathrm{rect}\left(\frac{x-208}{64}\right)+2\Lambda \left(\frac{x-64}{44}\right),$$

*where Λ(x) = (1 − |x|) rect(x/2) is the unit triangular function. The desired reconstructed spatial resolution was arbitrarily chosen to be β = 3 mm. The system response f(v̲|x) was a 1D Gaussian pinhole whose FWHM w varied from 0.9 to 2.9 mm. For each pinhole size, we performed 4000 realizations, where the mean number of photons per realization was 100w, i.e. the sensitivity increased linearly with pinhole size (see (21)). An estimate $\widehat{\lambda}$(x̲) was computed for each realization using the apodized inverse filter (18), which in this case corresponds to a Gaussian filter with FWHM $\sqrt{\beta^{2}-w^{2}}$, as shown in (22).*

*Figure 4 shows the sample means of the 4000 realizations for each of the 21 pinhole sizes considered, ranging from 0.9 to 2.9 mm FWHM in 0.1 mm increments. The 21 curves are indistinguishable because the reconstructed spatial resolution is fixed at β = 3 mm FWHM. We also computed the sample standard deviations from the 4000 realizations for each pinhole size. Three of the 21 curves are shown in Fig. 5; showing all 21 would make the plot difficult to interpret. The predicted variance-minimizing pinhole size, 3/√2 ≈ 2.1 mm FWHM, has the lowest empirical variance of the three curves shown. To verify that the theoretically predicted variance-minimizing pinhole size indeed yields the lowest variance, Fig. 6 shows the relative empirical standard deviations for each of the 119 spatial positions x for which λ(x̲) > 0, as a function of pinhole size w. All of the curves have a minimum near the predicted value of 2.1 mm FWHM.*
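*A miniature version of this Monte Carlo experiment can be sketched as follows. This is a simplified, hypothetical setup (a flat object on [−20, 20] mm rather than the test object above, fewer realizations, and only three pinhole widths), but it exhibits the same variance minimum near w = β/√2:*

```python
import numpy as np

rng = np.random.default_rng(0)
F2S = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> standard deviation

def gauss_pdf(x, fwhm):
    s = fwhm * F2S
    return np.exp(-0.5 * (x / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def empirical_var(w, beta=3.0, n_real=1000, x_eval=0.0):
    """Sample variance of the kernel estimate at x_eval over n_real scans."""
    mu_n = 100.0 * w                        # mean count grows linearly with w
    fwhm_g = np.sqrt(beta**2 - w**2)        # recovery-kernel FWHM, as in (22)
    est = np.empty(n_real)
    for i in range(n_real):
        n = rng.poisson(mu_n)
        x = rng.uniform(-20.0, 20.0, n)     # emissions from a flat test object
        v = x + rng.normal(0.0, w * F2S, n)  # Gaussian pinhole blur, FWHM w
        est[i] = gauss_pdf(x_eval - v, fwhm_g).sum() / mu_n
    return est.var()

variances = {w: empirical_var(w) for w in (1.0, 3.0 / np.sqrt(2), 2.9)}
```

*For a smooth object the 1D variance scales as 1/(w√(β² − w²)), so the predicted width β/√2 should beat both the narrower and the wider pinhole.*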

*7. Discussion*

*We have analyzed the performance of a kernel-based indirect density estimation method for image recovery from list-mode measurements. We showed, under a few simplifying assumptions, that the variance-minimizing pinhole width is proportional to the desired reconstructed spatial resolution. The simplifying assumptions include consideration of a shift-invariant imaging system, a spatially smooth emitting object, and a particular kernel based on an apodized inverse filter. Empirical results confirmed that the predicted variance-minimizing pinhole size yielded the lowest variability estimates, even for an object that was far from “spatially smooth.” We conjecture that there should be a monotonic relationship between desired reconstructed spatial resolution and variance-minimizing pinhole width even for broader classes of image recovery methods and more general imaging systems. Exploring this conjecture will be the subject of future work.*

*Although we have focused on pinhole size in this paper, a more general question would be ‘what is the optimal pinhole transmissivity function for a given target reconstructed spatial resolution?’. We conjecture that the density estimation approach described in this paper may be useful for exploring this question.*

*Acknowledgement*

*This work was supported in part by NIH grants CA-60711 and CA-54362 and by the Whitaker Foundation. The author gratefully acknowledges K. M. Brown and W. L. Rogers for helpful discussions.*

*Footnotes*

^{1} This intuition is somewhat consistent with the findings of Myers et al. [3] on the relationship between optimum aperture size and signal size in the context of signal detection, although low image variance need not necessarily be associated with high detection SNR, since correlation properties are also important.

^{2} All integrals over dx̲ are d-dimensional.

^{3} Strictly speaking this is a conditional pdf, conditioned on the event that the emission is detected. We consider only the detected emissions, so for simplicity we omit the notation for conditioning on detection. With such notation, (1) is just a form of Bayes' rule.

^{4} Without loss of generality, one can rescale the time axis exponentially to account for radioactive decay.

^{5} Silverman [18, p. 27] refers to such methods as general weight function estimates in the context of direct density estimation.

^{6} Our purpose here is to analyze such estimators for the goal of system design, not to argue the merits of such estimators over alternatives.

^{7} Equation (8) is closely related to equation (3.6) on p. 36 of [18] for direct density estimation; the remainder of the derivation is distinct to indirect density estimation.

^{8} Neglecting edge effects at the boundaries of the field of view, and assuming that any magnification factor has already been accounted for in the V̲_n's [26].

^{9} As a further approximation, one can assume $\tilde{\lambda}$ ≈ λ if the scale of the spatial variations in λ is large relative to the FWHM of h.

^{10} The transmissivity of the 1D version of this pinhole has the form of the Laplacian pdf ½e^{−|x|}, hence the name, for lack of a better one.

*References*

**1. **B M Tsui, C E Metz, F B Atkins, S J Starr, and R N Beck, “A comparison of optimum detector spatial resolution in nuclear imaging based on statistical theory and on observer performance,” Phys. Med. Biol. **23**, 654–676 (1978). [CrossRef] [PubMed]

**2. **H H Barrett, J N Aarsvold, H B Barber, E B Cargill, R D Fiete, T S Hickernell, T D Milster, K J Myers, D D Patton, R K Rowe, R H Seacat, W E Smith, and J M Woolfenden, “Applications of statistical decision theory in nuclear medicine,” In C N de Graaf and M A Viergever, editors, Proc. Tenth Intl. Conf. on Information Processing in Medical Im. , (Plenum Press, New York, 1987) pp. 151–166.

**3. **K J Myers, J P Rolland, H H Barrett, and R F Wagner, “Aperture optimization for emission imaging: effect of a spatially varying background,” J. Opt. Soc. Am. A **7**, 1279–1293 (1990). [CrossRef] [PubMed]

**4. **H H Barrett, “Objective assessment of image quality: effects of quantum noise and object variability,” J. Opt. Soc. Am. A **7**, 1266–1278 (1990). [CrossRef] [PubMed]

**5. **J P Rolland, H H Barrett, and G W Seeley, “Ideal versus human observer for long-tailed point spread functions: does deconvolution help?” Phys. Med. Biol. **36**, 1091–1109 (1991). [CrossRef] [PubMed]

**6. **H H Barrett, J Yao, J P Rolland, and K J Myers, “Model observers for assessment of image quality,” Proc. Natl. Acad. Sci. **90**, 9758–9765 (1993). [CrossRef] [PubMed]

**7. **C K Abbey and H H Barrett, “Linear iterative reconstruction algorithms: study of observer performance,” In *Information Processing in Medical Imaging*, (Kluwer, Dordrecht, 1995) pp 65–76.

**8. **H H Barrett, J L Denny, R F Wagner, and K J Myers, “Objective assessment of image quality. II. Fisher information, Fourier crosstalk, and figures of merit for task performance,” J. Opt. Soc. Am. A **12**, 834–852 (1995). [CrossRef]

**9. **N E Hartsough, H H Barrett, H B Barber, and J M Woolfenden, “Intraoperative tumor detection: Relative performance of single-element, dual-element, and imaging probes with various collimators,” IEEE Trans. Med. Imaging **14**, 259–265 (1995). [CrossRef] [PubMed]

**10. **T Kanungo, M Y Jaisimha, J Palmer, and R M Haralick, “A methodology for quantitative performance evaluation of detection algorithms,” IEEE Trans. Image Process. **4**, 1667–1674 (1995). [CrossRef] [PubMed]

**11. **E P Ficaro, J A Fessler, P D Shreve, J N Kritzman, P A Rose, and J R Corbett, “Simultaneous transmission/emission myocardial perfusion tomography: Diagnostic accuracy of attenuation-corrected 99m-Tc-Sestamibi SPECT,” Circulation **93**, 463–473 (1996). http://www.eecs.umich.edu/~fessler [CrossRef] [PubMed]

**12. **A O Hero, J A Fessler, and M Usman, “Exploring estimator bias-variance tradeoffs using the uniform CR bound,” IEEE Trans. Signal Process. **44**, 2026–2041 (1996). http://www.eecs.umich.edu/~fessler [CrossRef]

**13. **J A Fessler and A O Hero, “Cramer-Rao lower bounds for biased image reconstruction,” In *Proc. Midwest Symposium on Circuits and Systems*, Vol. 1, (IEEE, New York, 1993) pp 253–256. http://www.eecs.umich.edu/~fessler

**14. **M Usman, “Biased and unbiased Cramer-Rao bounds: computational issues and applications,” PhD thesis, Univ. of Michigan, Ann Arbor, MI, 1994.

**15. **C-Y Ng, “Preliminary studies on the feasibility of addition of vertex view to conventional brain SPECT imaging,” PhD thesis, Univ. of Michigan, Ann Arbor, MI, January 1997.

**16. **F O’Sullivan and Y Pawitan, “Bandwidth selection for indirect density estimation based on corrupted histogram data,” J. Am. Stat. Assoc. **91**(434), 610–626 (1996). [CrossRef]

**17. **P P B Eggermont and V N LaRiccia, “Nonlinearly smoothed EM density estimation with automated smoothing parameter selection for nonparametric deconvolution problems,” J. Am. Stat. Assoc. **92**(440), 1451–1458 (1997). [CrossRef]

**18. **B W Silverman, *Density estimation for statistics and data analysis*, (Chapman and Hall, New York, 1986).

**19. **I M Johnstone, “On singular value decompositions for the Radon Transform and smoothness classes of functions,” Technical Report 310, Dept. of Statistics, Stanford Univ., January 1989.

**20. **I M Johnstone and B W Silverman, “Discretization effects in statistical inverse problems,” Technical Report 310, Dept of Statistics, Stanford Univ., August 1990.

**21. **I M Johnstone and B W Silverman, “Speed of estimation in positron emission tomography,” Ann. Stat. **18**, 251–280 (1990). [CrossRef]

**22. **P J Bickel and Y Ritov, “Estimating linear functionals of a PET image,” IEEE Trans. Med. Imaging **14**, 81–87 (1995). [CrossRef] [PubMed]

**23. **B W Silverman, “Kernel density estimation using the fast Fourier transform,” Appl. Stat. **31**, 93–99 (1982). [CrossRef]

**24. **B W Silverman, “On the estimation of a probability density function by the maximum penalized likelihood method,” Ann. Stat. **10**, 795–810 (1982). [CrossRef]

**25. **D L Snyder and M I Miller, *Random point processes in time and space*, (Springer Verlag, New York, 1991). [CrossRef]

**26. **A Macovski, *Medical imaging systems*, (Prentice-Hall, New Jersey, 1983).

**27. **M C Jones, J S Marron, and S J Sheather, “A brief survey of bandwidth selection for density estimation,” J. Am. Stat. Assoc. **91**(433), 401–407 (1996). [CrossRef]

**28. **P P B Eggermont and V N LaRiccia, “Maximum smoothed likelihood density estimation for inverse problems,” Ann. Stat. **23**, 199–220 (1995). [CrossRef]

**29. **Y-C Tai, A Chatziioannou, M Dahlbom, and E J Hoffman, “Investigation on deadtime characteristics for simultaneous emission-transmission data acquisition in PET,” In *Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf.*, (IEEE, New York, 1997).

**30. **D L Snyder and D G Politte, “Image reconstruction from list-mode data in an emission tomography system having time-of-flight measurements,” IEEE Trans. Nucl. Sci. **20**, 1843–1849 (1983). [CrossRef]

**31. **H H Barrett, T White, and L C Parra, “List-mode likelihood,” J. Opt. Soc. Am. A **14**, 2914–2923 (1997). [CrossRef]

**32. **H H Barrett and W Swindell, *Radiological imaging: the theory of image formation, detection, and processing*, (Academic, New York, 1981).

**33. **V Ochoa, R Mastrippolito, Y Charon, P Laniece, L Pinot, and L Valentin, “TOHR: Prototype design and characterization of an original small animal tomograph,” In *Proc. IEEE Nuc. Sci. Symp. Med. Im. Conf.*, (IEEE, New York, 1997).

**34. **S Geman and C R Hwang, “Nonparametric maximum likelihood estimation by the method of sieves,” Ann. Stat. **10**, 401–414 (1982). [CrossRef]

**35. **R Bracewell, *The Fourier transform and its applications*, (McGraw-Hill, New York, 1978).