Abstract

Uniformly redundant arrays (URA) have autocorrelation functions with perfectly flat sidelobes. The URA combines the high-transmission characteristics of the random array with the flat sidelobe advantage of the nonredundant pinhole arrays. This gives the URA the capability to image low-intensity, low-contrast sources. Furthermore, whereas the inherent noise in random array imaging puts a limit on the obtainable SNR, the URA has no such limit. Computer simulations show that the URA with significant shot and background noise is vastly superior to random array techniques without noise. Implementation permits a detector which is smaller than its random array counterpart.

© 1978 Optical Society of America

I. Introduction

The concept of using a coded aperture was first introduced by Dicke[1] and Ables.[2] In the original formulation the single opening of a simple pinhole camera is replaced by many pinholes (called collectively the aperture) arranged randomly. Figure 1 gives a simple view of how the concept is implemented. Each point on a self-luminous object deposits a shadow of the aperture on the picture. Subsequent processing of the picture yields the reconstructed image which should resemble the original object.

There are two primary motivations for using a coded aperture approach. The original motivation was to obtain an imaging system which maintained the high angular resolution of a small single pinhole but produced images that have a SNR commensurate with the total open area of the aperture. The technique is usually applied to x-ray imaging because most x-ray sources are so weak that a single pinhole camera would have to have a very large opening in order to obtain a reasonable SNR. The large hole precludes good angular resolution.

If there are N pinholes in the aperture, the picture consists of N overlapping images of the object. The coded aperture technique (for a point source) can improve the SNR by roughly √N when compared to the single pinhole camera.[1] Since N might be as large as 10⁵, the goal of an improved SNR is obtainable.

The second primary motivation is to perform tomography as shown by Barrett.[3] Object points at different distances from the aperture will cast shadows of the aperture onto the picture with different over-all sizes. One can reconstruct a particular depth in the object by treating the picture as if it was formed by an aperture scaled to the size of the shadow produced by the depth under consideration. This property of coded apertures is particularly beneficial in medical applications, although uses in industrial inspection can also be easily envisioned.

The recorded picture is not recognizable as the object because the many pinholes cause the picture to consist of many overlapping images. In order to be useful, the picture must be subjected to a reconstruction method which will compensate for the effects of the imaging system. The reconstruction procedure is designed to give the location and intensity of each source in the field of view. Basically this is accomplished by detecting the location and strength of aperture patterns in the picture. The analysis methods developed so far can be categorized as either a deconvolution or a correlation. The following is a heuristic view of these two methods.

If the recorded picture is represented by the function P, the aperture by A and the object by O,

P = (O * A) + N,  (1)
where * is the correlation operator and N is some noise function. In the deconvolution methods, the object is solved for by
Ô = Rℱ⁻¹[ℱ(P)/ℱ(A)] = O + Rℱ⁻¹[ℱ(N)/ℱ(A)],  (2)
where ℱ, ℱ⁻¹, and R are, respectively, the Fourier transform, the inverse Fourier transform, and the reflection operator.

The main problem with deconvolution methods is that ℱ(A) might have small terms. A is usually defined as an array of ones and zeros where the ones have the same pattern in the array as do the pinholes in the aperture. We have empirically determined that roughly 15% of the Fourier transforms of 32 × 32 random arrays have at least one term which is zero. It is possible to avoid these particular arrays, but it appears that it is a general property of large binary random arrays to have some small terms in their Fourier transform. These small terms can cause the noise to dominate the reconstructed object. Although the situation with deconvolution methods has been improved by using Wiener filtering (Woods et al.[4]), the major problem with the method is still the possibility of small terms in ℱ(A) resulting in an unacceptably noisy reconstruction. Therefore, this paper will consider only correlation methods of analysis.
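The prevalence of small terms can be checked numerically. The sketch below (our own illustration, with an assumed 50% hole density and an arbitrary trial count, not the authors' procedure) draws random 32 × 32 binary arrays and counts how many have an essentially zero term in their Fourier transform; the exact fraction depends on the ensemble, but such terms occur regularly.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200
n_small = 0
for _ in range(n_trials):
    A = rng.integers(0, 2, size=(32, 32))   # random 0/1 aperture array
    mags = np.abs(np.fft.fft2(A))
    # count arrays whose transform has an effectively zero term
    if mags.min() < 1e-9 * mags.max():
        n_small += 1
print(f"{100.0 * n_small / n_trials:.0f}% of trial arrays had a zero term")
```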

In the correlation method the reconstructed object is defined to be[5],[6]

Ô = P * G = RO * (A * G) + N * G,  (3)
where G is called the postprocessing array and is chosen such that A * G approximates a delta function. In general, we do not mean G to be the convolutional inverse function (A⁻¹); rather, G will be selected in an ad hoc manner such that A * G has desirable properties. Normally G will be a binary array (as is A). If A * G is a delta function, Ô = O + N * G, and the object has been perfectly reconstructed except for the presence of the noise term. Note that the noise term in Eq. (3) will not have singularities as in the deconvolution method.

The original expectation of obtaining a roughly √N improvement in the SNR has not been realized because A * G in general will not be a delta function. A point on the object will contribute to the reconstructed object the distribution A * G instead of a delta function. Thus, even if there is no background noise and the source is intense enough such that shot noise is not a problem, the SNR for a point source becomes a fixed number regardless of the exposure time. The SNR becomes the ratio of the central peak in A * G to the noise in A * G, that is, the square root of the variance of the sidelobes. This noise is called the inherent noise[5] and puts a limit on the possible SNR improvement. The situation is much worse when the object is not a point source but is extended. In the extended case, the inherent noise from all points in the object contribute noise to each point in the reconstructed object. The result is a low SNR which cannot be improved because the noise is set by the structure in A * G rather than counting statistics or background levels. In fact, the SNR for the coded aperture technique can be smaller than the SNR for a comparative single pinhole camera if the object is extended.

There are a few arrays such that A * G is effectively a delta function (assuming A * G is sampled on the same scale as the size of the pinholes). Nonredundant arrays (NRA) have the property that their autocorrelations (i.e., A * A) consist of a central spike with the sidelobes equal to unity out to some particular lag L and either zero or unity beyond that.[7] A true delta function would have all sidelobes out to infinite lags equal to zero. If all the sidelobes are equal to a constant value (such as unity), the only effect on the reconstructed object is the addition of a removable dc level. However, the sidelobes of the NRA are not all equal to the same value, and thus the reconstructed object will contain inherent noise. This is true except for extremely small objects (see discussion of the system point-spread function in Sec. III).

The reason the sidelobes of an NRA are flat can be seen if one measures all the separations between all possible pairs of holes. Each separation (out to L) will occur once and only once, and thus the separations are nonredundant. Although this property is the origin of the major advantage of the NRA (i.e., no inherent noise) it also puts such a severe restriction on generating an NRA that they never have very many holes. In fact, one of the largest NRAs known has only twenty-four holes and a density (i.e., the ratio of the open area of an aperture to the total area) of only 0.03.[8] The small number of holes means that the SNR increase will not be large even though there is no inherent noise.

There is a class of arrays called pseudonoise arrays[9],[10] from which an A and G can be generated such that A * G is a delta function. In the pseudonoise arrays the number of times that a particular separation (between a pair of holes or ones in the aperture array) occurs is a constant regardless of which separation distance is under consideration, that is, the separations are uniformly redundant. We will label all arrays for which all separations (less than some maximum L) between pairs of holes occur a constant number of times as uniformly redundant arrays (URA). Thus both the NRA and the arrays derived from pseudonoise arrays are uniformly redundant arrays. The purpose of this paper is to describe various ways that the URA can be employed to form an A * G which eliminates the inherent noise while maintaining a high SNR and high angular resolution.

II. Implementation

In order to implement the URAs we must establish the relationship between the object, the aperture, and the picture. The digital reconstruction schemes discussed in this paper will be based on the following model for the coded aperture camera (see Brown[5]). Let p(x,y) be the photons received at position x,y at the detector (film, position sensitive proportional counter, etc.). Define o(ξ,η,b) to be the object intensity distribution as a function of spatial coordinates ξ and η (measured in a plane parallel to the aperture) and the distance b from the plane to the aperture. If a(x,y) is the aperture transmission as a function of spatial coordinates, then assuming no diffraction or noise,

p(x,y) = ∫∫∫ o(ξ,η,b) a(fξ/b + x/m, fη/b + y/m) dξ dη db,  (4)
where f is the distance from the aperture to the detector, m is the magnification factor, (b + f)/b, and the integration is over the extent of the object. For most of this paper we will assume that the object consists of an emitting plane at constant known distance b from the aperture. This removes the need to integrate over b in Eq. (4).

Note that Eq. (4) has the mathematical form of a correlation. If o(ξ,η,b) was defined to be the image formed on the detector by a single pinhole—as done by several authors—the equation would have taken on the form of a convolution. o(ξ,η,b) was defined in the above manner so that by solving for o(ξ,η,b), one obtains the actual distribution of intensity in the source rather than a reflected version.

In order to perform digital analysis of the picture, Eq. (4) must be quantized. Define O(i,j) to be an array whose elements represent the number of photons observed during the exposure time in an area equal to that of a single pinhole from a ΔαΔβ region of the source centered at (iΔα,jΔβ,b). Let Δα = Δβ = c/f rad where each pinhole in the aperture is a c by c square hole. Define A(i,j) to be an array with each element denoting the presence or absence of a pinhole in the aperture. If there is a hole at (i·c, j·c), A(i,j) has the value one, otherwise it is zero. The possible locations for the pinholes are restricted to a grid of discrete points with a spacing equal to c.

Equation (4) can be approximated to have the same form as Eq. (1):

P(k,l) = (O * A) + N = Σ_ij O(i,j)A(i + k, j + l) + N(k,l),  (5)
where P(k,l) should be interpreted as the number of photons received from the object in an m·c by m·c area of the detector centered at (k·m·c, l·m·c) plus some noise N(k,l).

The P array is measured experimentally and since the A array is known, Eq. (5) is used to determine an estimate of the object intensity distribution. In the correlation analysis methods, the reconstructed object is determined from P and A by

Ô(i,j) = P * G = Σ_kl P(k,l)G(k + i, l + j),  (6)
where G will be chosen such that A * G is approximately (or exactly) a delta function.
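As a concrete check of the picture-formation model of Eq. (5), the following sketch (our own illustration, not code from the paper) forms a noiseless picture from a point-source object by direct summation; the point source simply stamps a copy of the aperture pattern into the picture.

```python
import numpy as np

def correlate(X, Y):
    """out(k,l) = sum_ij X(i,j) Y(i+k, j+l), for shifts where Y covers X."""
    xi, xj = X.shape
    yi, yj = Y.shape
    out = np.zeros((yi - xi + 1, yj - xj + 1))
    for k in range(out.shape[0]):
        for l in range(out.shape[1]):
            out[k, l] = np.sum(X * Y[k:k + xi, l:l + xj])
    return out

rng = np.random.default_rng(0)
A = rng.integers(0, 2, size=(9, 9)).astype(float)  # toy 0/1 aperture array
O = np.array([[1.0]])                              # single point source
P = correlate(O, A)                                # Eq. (5) with N = 0
# For a point source the picture is just the aperture pattern itself.
print(np.array_equal(P, A))                        # True
```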

The above is applicable to all coded aperture techniques. We will now employ the above in the implementation of URAs.

The choice of A and G depends on many factors, and often the choice is a compromise dictated by restrictions on the other parameters (b,c,f, the object size, etc.). It will be shown that A and/or G will be a mosaic of basic URA patterns. In general the size and mosaic characteristics will depend on these parameters whereas the details of exactly where each pinhole is located can be chosen independently such that A * G is a delta function. First we will describe how to locate the holes in the aperture and generate the G array, and then we will describe how the parameters b,c,f, etc. influence the over-all design of the imaging system.

The aperture will be a section (whose size we will determine later) of an infinite uniformly redundant array consisting of a mosaic of identical basic arrays. The arrays used in this paper follow from the pseudonoise arrays described by Calabro and Wolf.[9] The basic array will have dimensions r by s where r and s are prime numbers and r − s = 2. Thus A(i,j) = A(I,J), where I = mod_r i and J = mod_s j. Furthermore,

A(I,J) = 0 if I = 0,
       = 1 if J = 0, I ≠ 0,
       = 1 if C_r(I)C_s(J) = +1,
       = 0 otherwise,  (7)
where
C_r(I) = +1 if there exists an integer x, 1 ≤ x < r, such that I = mod_r x²,
       = −1 otherwise.  (8)
A simple method to implement the above equations is to evaluate mod_r x² for all x from 1 to r − 1. The resulting values give the locations I in C_r that contain +1; all other terms in C_r are −1. In the above two equations it is important that I is associated with r, the larger of the two prime numbers, and J with s.
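Equations (7) and (8) can be coded directly. The sketch below (our own illustration; the function name is ours) builds the basic r × s pattern from the quadratic residues mod r and mod s, using a small twin-prime pair in place of the paper's 43 × 41 example.

```python
import numpy as np

def ura_basic_pattern(r, s):
    """Basic r-by-s URA of Eqs. (7)-(8); r, s twin primes with r = s + 2."""
    # mod_r x^2 for x = 1 .. r-1 gives the locations where C_r = +1
    qr_r = {(x * x) % r for x in range(1, r)}
    qr_s = {(x * x) % s for x in range(1, s)}
    A = np.zeros((r, s), dtype=int)
    for I in range(1, r):                  # the I = 0 column stays zero
        for J in range(s):
            # J = 0 line of ones, plus positions where C_r(I)C_s(J) = +1
            if J == 0 or ((I in qr_r) == (J in qr_s)):
                A[I, J] = 1
    return A

A = ura_basic_pattern(7, 5)   # small twin-prime example, r = 7, s = 5
print(A.sum())                # (rs + 1)/2 = 18 open elements
```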

The postprocessing array G will be a section of the function

G(i,j) = 1 if A(i,j) = 1,
       = −1 if A(i,j) = 0,  (9)
which is used because
Σ_ij A(i,j)G(i + k, j + l) = (rs + 1)/2 if mod_r k = 0 and mod_s l = 0,
                           = 0 otherwise.  (10)
That is, the correlation of A with G is a mosaic of delta functions with zero sidelobes. Figure 2 shows a URA array with r = 43 and s = 41.
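The delta-function property of Eq. (10) is easy to verify numerically for a small twin-prime pair. In the sketch below (our own construction, with r = 7, s = 5 standing in for 43 and 41) the cyclic correlation of A with G has a peak of (rs + 1)/2 = 18 and exactly zero sidelobes.

```python
import numpy as np

r, s = 7, 5
qr_r = {(x * x) % r for x in range(1, r)}
qr_s = {(x * x) % s for x in range(1, s)}
# Basic URA pattern of Eqs. (7)-(8)
A = np.array([[1 if I != 0 and (J == 0 or (I in qr_r) == (J in qr_s)) else 0
               for J in range(s)] for I in range(r)])
G = np.where(A == 1, 1, -1)        # Eq. (9)

# corr(k,l) = sum_ij A(i,j) G(i+k, j+l) over one period (cyclic indices)
corr = np.array([[np.sum(A * np.roll(np.roll(G, -k, axis=0), -l, axis=1))
                  for l in range(s)] for k in range(r)])
print(corr[0, 0])                  # 18, the number of holes
```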

In Eq. (10), A and G are assumed to be of infinite extent. In actual practice, space will limit the extent of the detector and aperture, and thus one must be sure that the resulting arrangement maintains the delta function nature of A * G. There are two ways to achieve zero sidelobes in limited space. Figures 3 and 4 show the two arrangements using the assumption that the object is far enough away that m is unity.

The first arrangement to be discussed (Fig. 3) has the very important advantage that the detector need not be bigger than the aperture. In many coded aperture techniques the number of picture elements is roughly the number of aperture elements plus the number of object resolution elements. However, in the arrangement of Fig. 3, the picture need only be as large as a basic aperture pattern. Of course the object will send photons to a larger region of the detector plane, but all the required information is in the one section the size of the basic aperture pattern. Effectively the one section contains a circular correlation of the object with a basic aperture pattern. This advantage is especially important in x-ray imaging because of the difficulties in producing an x-ray detector with many resolution elements.

In Fig. 3 the aperture consists of a mosaic of basic aperture patterns, each r·c by s·c in physical size. The picture array will be an r by s array obtained by sampling the r·c by s·c area directly below the central basic aperture pattern. As shown in Fig. 3, an object point that is within the field of view will contribute to the picture a shadow of a section of the aperture. This section consists of some elements from the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. As long as all object points contribute a complete cyclic version of the basic pattern, the r·c by s·c section of the picture will contain all the information necessary to unfold the object with a system point-spread function equal to a delta function.

The imaging of extended objects places additional geometrical constraints on the imaging system. The condition that all points contribute a complete cycle determines how much mosaicking is necessary. If the object has an angular size of t by t rad, the object array O(i,j) will be a T by T array, where T = t/Δα = t·f/c. Thus, assuming that the camera is pointed toward the center of the object, the mosaicking must provide a border of width T/2 elements (i.e., T·c/2 in physical size) around the central pattern. Such an aperture is shown in Fig. 2.

The size of the object also dictates the minimum size of the basic aperture pattern. In general A * G has more than one peak, one every time mod_r k = 0 and mod_s l = 0 [see Eq. (10)]. These peaks in A * G mean that the reconstructed object will consist of a mosaic of reconstructed objects. Only a single version of the reconstructed object is required, so one must be careful that the different versions do not overlap. In order for the different versions not to overlap, r and s should be greater than T. Thus if the aperture is an r by s basic pattern mosaicked to produce a border of width ≳ r/2, the resulting system point-spread function will be a delta function for objects smaller than ∼r·Δα by s·Δα rad.

With the picture array obtained by sampling the central r·c by s·c area of the picture plane into an r by s array, the reconstructed object is found by

Ô(k,l) = Σ_ij P(i,j)G(i + k, j + l),  (11)
where P(i,j) is considered to be zero if P(i,j) corresponds to a point outside the central r·c by s·c section of the picture, and G is a 3 × 3 mosaic of the basic function defined in Eq. (9). If in practice the picture is sampled on a finer scale than a c × c box, then A and G must also be defined on that finer scale by representing each hole by a square array of ones instead of just a single one.
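Under this cyclic model, decoding the central r·c by s·c section against a mosaic of G is equivalent to a circular correlation with one period of G, and the reconstruction is exact. The sketch below (our own illustration, with the small twin-prime pair r = 7, s = 5 in place of 43 and 41) recovers a toy object perfectly up to the known peak factor of (rs + 1)/2 = 18.

```python
import numpy as np

r, s = 7, 5
qr_r = {(x * x) % r for x in range(1, r)}
qr_s = {(x * x) % s for x in range(1, s)}
# Basic URA pattern of Eqs. (7)-(8) and decoding array of Eq. (9)
A = np.array([[1 if I != 0 and (J == 0 or (I in qr_r) == (J in qr_s)) else 0
               for J in range(s)] for I in range(r)])
G = np.where(A == 1, 1, -1)

def ccorr(X, Y):
    """Cyclic correlation: out(k,l) = sum_ij X(i,j) Y(i+k, j+l), mod (r,s)."""
    return np.array([[np.sum(X * np.roll(np.roll(Y, -k, 0), -l, 1))
                      for l in range(Y.shape[1])] for k in range(Y.shape[0])])

rng = np.random.default_rng(0)
O = rng.integers(0, 5, size=(r, s))   # toy object, one period in extent
P = ccorr(O, A)                       # picture section: circular correlation
O_hat = ccorr(P, G)                   # Eq. (11) with cyclic indices
print(np.array_equal(O_hat, 18 * O))  # exact reconstruction, peak factor 18
```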

Although the above was developed with m = 1, it is simple to modify it for other cases. It is clear from Eq. (4) that the only effect of m is a scaling of the picture. We can compensate for this by an appropriate adjustment of the quantizing aperture in the sampling of the picture. The picture array is found by sampling the central m·r·c by m·s·c area into an r by s array. In actual practice the picture will probably undergo just one sampling but on a fine enough scale that various depths in the object can be unfolded by combining different groups of the finer samples into various r by s picture arrays corresponding to the different m values.

In implementing the arrangement shown in Fig. 3, the following rules should be followed:

  • (1) First one decides what value of m [= (b + f)/b] is desired. This decision is normally based on considerations of the 1/b² attenuation of source strength and whatever tomographical depth resolution is required.
  • (2) Given a detector with a spatial resolution of e, c should be approximately equal to 2e/m.
  • (3) Given a desired angular resolution Δα for the imaging system, f = c/Δα.
  • (4) Given that the object to be imaged has an angular size of t rad, then r > t/Δα; r − s = 2.
  • (5) The size of the detector should be m·r·c by m·s·c.
  • (6) The aperture should be a 2r by 2s section of an infinite uniformly redundant array, A(i,j), centered on a basic pattern (see Fig. 2).
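To make the rules concrete, here is one worked set of numbers (illustrative values of our own choosing, not a design from the paper) that follows rules (1) through (6) in order.

```python
# Rule (1): choose the magnification from the assumed geometry.
b = 3.0                      # object-to-aperture distance (cm), assumed
f = 3.0                      # aperture-to-detector distance (cm), assumed
m = (b + f) / b              # magnification, here 2.0

# Rule (2): pinhole size from an assumed detector resolution e.
e = 0.6                      # detector spatial resolution (cm), assumed
c = 2 * e / m                # pinhole size, 0.6 cm

# Rule (3): the angular resolution follows from f = c / d_alpha.
d_alpha = c / f              # 0.2 rad per resolution element

# Rule (4): twin primes r > t / d_alpha with r - s = 2.
t = 1.0                      # object angular size (rad), assumed
T = t / d_alpha              # object array size, 5 elements
r, s = 7, 5                  # smallest twin primes with r > T

# Rules (5) and (6): detector size and aperture section.
detector = (m * r * c, m * s * c)   # 8.4 cm by 6.0 cm
aperture = (2 * r, 2 * s)           # section of the infinite URA, in cells
print(m, c, d_alpha, detector)
```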

Figure 4 shows a second possible arrangement for the aperture and film. In this arrangement the aperture consists of a single basic r by s aperture pattern while the detector is large enough to contain the entire picture as expressed by Eq. (4). This arrangement is identical to the arrangements used by many authors except that the URA is used instead of a random array. The reconstructed object is obtained using Eq. (11) with the P array (which is now r + T − 1 by s + T − 1) found by sampling the central m·(r + T − 1)·c by m·(s + T − 1)·c area of the detector. The required larger detector for the Fig. 4 arrangement is a severe disadvantage for most systems, and this method is included here for those situations where there is limited space for the aperture but not for the detector. It is also included because of its similarity to previous arrangements.

Rules (1), (2), (3), and (4) for the implementation of Fig. 3 are also valid for implementing Fig. 4. Rules (5) and (6) are replaced by the following:

  • (5) The detector should be m·(r + T)·c by m·(s + T)·c, and
  • (6) The aperture should be an r by s basic aperture pattern.

III. System Point-Spread Functions

Many characteristics of an imaging system can be seen in the system point-spread function (SPSF). The SPSF is defined to be the reconstructed object resulting from imaging a point source. From Eqs. (5) and (6),

SPSF = A * G.  (12)
We will investigate four different systems: three that are implemented as in Fig. 4 and one implemented as in Fig. 3. The Fig. 4 arrangement will be used for a random array aperture with the matched process. The same arrangement will be used for a new decoding method called balanced correlation. This geometry is also incorporated in the NRA system. URAs will employ the geometry of Fig. 3.

In the matched process (also called autocorrelation or reprojection) A is a random array of ones and zeros where the ones in the array have the same pattern as the pinholes in the physical aperture. A typical random array is shown in Fig. 5. It is a 40 × 40 array where the white denotes a hole (and a one in the array) and the black represents opaque material (and a zero in the array). In this example the holes occupy exactly 50% of the aperture.

For the matched process G is identical to A, and thus the SPSF is the autocorrelation of the aperture array. Figure 6(a) is a 1-D slice through a typical SPSF for the matched process. The 2-D SPSF is a spike on top of a pyramid. The ratio of the height of the spike to the height of the pyramid is the ratio of the number of pinholes to the number of possible pinhole positions (the density, 0.5 in our example).
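The spike-on-pyramid shape can be reproduced directly. The sketch below (our own illustration) computes the linear autocorrelation of a random 0/1 array; the central value counts every hole, while the surrounding values fall off toward the edges of the lag range, tracing the pyramid.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40
A = (rng.random((n, n)) < 0.5).astype(float)   # random array, density 0.5

# Linear autocorrelation over all lags: SPSF(k,l) = sum_ij A(i,j) A(i+k, j+l)
Ap = np.pad(A, n - 1)                          # zero-pad so all lags fit
spsf = np.array([[np.sum(A * Ap[k:k + n, l:l + n])
                  for l in range(2 * n - 1)] for k in range(2 * n - 1)])
print(spsf[n - 1, n - 1] == A.sum())           # central spike counts every hole
```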

The inherent noise in the reconstructed image will be the correlation of the pyramid (the SPSF without its central spike) with the object. Thus the fact that the sidelobes are not flat (on the average) means that the reconstructed object will be superpositioned on an object-dependent, high-contrast background. This will cause a severe degradation of the spatial resolution especially if the object is a low-contrast source. Furthermore, the high-frequency fluctuations about the average value in the SPSF will produce high-frequency inherent noise in the reconstructed image making the imaging of point sources difficult.

These two detrimental features (nonflat sidelobes and high-frequency fluctuations) in the SPSF limit the usefulness of the matched process to high-contrast sources containing only a few bright emitting resolution elements.

An improvement over the matched process can be obtained by what we call the balanced correlation method. In the balanced correlation procedure the recorded picture is the same as the recorded picture obtained using the matched process. The improvement is achieved by using the following array for G:

G(i,j) = 1 if A(i,j) = 1,
       = −ρ/(1 − ρ) if A(i,j) = 0,  (13)
where ρ is the density of the aperture array.
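Balancing per Eq. (13) makes the terms of G sum to zero whenever ρ is taken as the realized density of the array, as this small sketch (our own illustration) shows:

```python
import numpy as np

rng = np.random.default_rng(2)
A = (rng.random((40, 40)) < 0.33).astype(int)   # random aperture array
rho = A.mean()                                  # realized density of holes

# Eq. (13): open cells keep weight 1, opaque cells get -rho/(1 - rho)
G = np.where(A == 1, 1.0, -rho / (1.0 - rho))
print(abs(G.sum()) < 1e-9)                      # balanced: terms sum to zero
```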

The balanced correlation method is similar to the mismatch method of Brown.[5],[11] In Brown's mismatch, G(i,j) is −1 if A(i,j) is zero, and thus if ρ = 0.5 the balanced correlation is the same as the Brown mismatch approach. Since the optimum ρ is a function of the ratios of the types of noise that are present, the balanced correlation method will work better than the mismatch when ρ is not 0.5. It should also be pointed out that the Brown subtractive method[5],[6],[11] is just the sum of two mismatch reconstructions and in retrospect is seen to represent an inefficient utilization of space.

Figure 6(b) shows a typical 1-D slice through the SPSF for the balanced correlation method. The values of G representing the pinholes in the aperture have been balanced with the values corresponding to the opaque aperture material as per Eq. (13). Thus G has been defined such that the sum of all the terms in G is zero. Our motivation for balancing the G array is that if the sum of the terms is zero, A * G will have an expected value of zero, thus the pyramid structure in the SPSF has been removed resulting in sidelobes which are (on the average) flat. By removing the pyramid structure in the SPSF, low-contrast sources will be more amenable to imaging. On the other hand, the fluctuations about zero are somewhat larger than in the matched process so the effectiveness for point sources might be reduced.

Even with the balanced correlation technique there might still be object-dependent high-contrast inherent noise. Such noise occurs when one portion of the aperture array contains a higher than average density of holes. A qualitative way to evaluate this is to correlate the G array with an array of all ones. In the matched process this results in a pyramid. In the balanced correlation method there can be trends in the SPSF with sufficient deviation from flat sidelobes to cause high-contrast inherent noise. The trends can be reduced by having the random array consist of a mosaic of 10 × 10 arrays. Each 10 × 10 array is different, but they all have exactly the desired density. This guarantees that the correlation of the G array with the array of all ones is zero at least at every tenth term, and thus the probability of large trends developing is reduced.
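One way to realize this suggestion (a sketch of our own, not the authors' generator) is to fill each 10 × 10 tile with exactly the desired number of holes and shuffle its cells:

```python
import numpy as np

rng = np.random.default_rng(3)

def fixed_density_tile(rng, shape=(10, 10), holes=50):
    """A random 0/1 tile with exactly `holes` ones (density 0.5 here)."""
    tile = np.zeros(shape[0] * shape[1], dtype=int)
    tile[:holes] = 1
    rng.shuffle(tile)
    return tile.reshape(shape)

# 40 x 40 aperture array as a 4 x 4 mosaic of distinct fixed-density tiles
A = np.block([[fixed_density_tile(rng) for _ in range(4)] for _ in range(4)])
print(A.shape, A.sum())   # (40, 40) 800
```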

Coded aperture imaging with nonredundant arrays is a matched process with the random array aperture replaced by the special NRA. A 1-D slice through the SPSF [Fig. 6(c)] shows that out to some particular lag L, the SPSF is flat with no fluctuations or trends. One disadvantage of this system is that if the object's size (T) is larger than L/2, the reconstructed object will contain inherent noise due to those parts of the SPSF at lags greater than L. Since large NRAs[8] typically have an L of about 9, only very small sources can take advantage of the flat sidelobes.

In implementing the NRA, the 40 × 40 random array with 800 holes is replaced by an array with only twenty-four holes, and thus the central spike in the SPSF for the NRA is approximately thirty-three times smaller than the spike for the random array. [Notice that the NRA's SPSF is on a different scale in Fig. 6(c).] Thus, although the flat sidelobes mean that there will be no inherent noise one must expose about 1000 times longer to obtain the same reduction in shot noise.

The URA is implemented differently than the above three methods (Fig. 3 instead of Fig. 4). Its SPSF [Fig. 6(d)] combines the high-transmission characteristics of the random arrays with the flat sidelobe advantage of the nonredundant arrays. The high transmission provides a capability to image very low intensity sources, and the flat sidelobes mean there will be no inherent noise to obscure low-contrast sources. The computer simulations in Sec. IV below will demonstrate that these advantages result in a significant improvement in the performance of the coded-aperture system.

Note that outside the r by s region around the central spike the SPSF begins to have inherent noise due to the fact that A and G are not infinite. This is not a problem as long as r and s are larger than T [see rule (4) above]. Furthermore since the correlation of the G array for the URA with an array of all ones is always +1, there will be no trends in the URA reconstructed object.

IV. Simulations

We have performed computer simulations in order to demonstrate the differences between the various methods of coded aperture imaging. The matched and balanced correlation methods will be simulated with no source or detector noise in order to show that those procedures have inherent noise which dominates the reconstructed object. An absence of noise is equivalent to exposing for a very long time with a perfect detector [N(k,l) equal to zero in Eq. (5)]. The simulation of the URA system will include the noise and signal characteristics of an Anger camera viewing a 1-mCurie source. Even under these conditions, the URA approach will be superior to either random array technique applied under perfectly noiseless conditions.

Figure 7 shows the two simulated x-ray objects used for this study. Figure 7(a) is a high-contrast object in the shape of a man. The man consists of 164 equally intense points in a 40 × 40 array. His integrated intensity is approximately 1 mCurie. If the aperture to object separation is 3 cm and each pinhole is about 0.62 cm square, each resolution element on the man emits about 210 photons/sec through each pinhole (i.e., a c by c area). Figure 7(b) is a combination high- and low-contrast source. The object consists of a central cylinder with various bumps and depressions. Each resolution element of the large background disk has an intensity approximately equal to each element of the man (210 photons/pinhole). In addition there are two bumps (10% and 20% of the background intensity) as well as two depressions (10% and 20%). The sides of the background disk represent a high-contrast object, and the bumps and depressions will simulate a range of contrasts and sizes.

The first simulation generated a noiseless picture [i.e., N(k,l) = 0 in Eq. (5)] using the man from Fig. 7(a) as an object and the random array shown in Fig. 5. It was assumed that the picture was obtained from the arrangement shown in Fig. 4. This picture was analyzed by two methods: the matched and the balanced correlation.

Since the SPSF for the matched process consists of a central spike on top of a noisy pyramid, the pyramids from all points in the object add together to form a high-contrast, object-dependent background. This background (inherent noise) was 100 times larger than the true signal and made it impossible to discern the man. Since one knows that only the center of the P * G array contains the reconstructed image, it is possible to extract that portion out of P * G and scale it such that the borders approximately have the value zero (see Fig. 8). Although the legs of the man are now visible it is still not possible to discern what object was imaged.

Figure 8 is typical of the matched process when the object is extended. The true image is superpositioned on high-contrast inherent noise. Since the inherent noise can be described as the correlation of the object with the pyramid from the SPSF, it can only be removed if one has a priori knowledge of the object. For example, if it was known that the source was symmetric with only one high point, one might try to fit a paraboloid to the edges of the reconstructed image and then subtract out a mean level based on the resulting function. For an object such as the man, no simple function characterizes the inherent noise. We have tried to fit a function to Fig. 8 and although we were able to improve it some, the balanced correlation still gave a much better reconstruction. Woods et al.[4] have referred to this inherent noise as artifacts, and their Fig. 3 is an example of the presence of the artifacts even when the emitting object is small compared to the field of view.

The same encoded picture used to generate Fig. 8 was analyzed using a balanced correlation procedure to produce Fig. 9. As one can see, the man is clearly visible although the inherent noise has caused some distortions. Notice that in the matched process one must subtract off a background after the correlation of P with G. At that time the background is object-dependent and a priori knowledge is required to remove it. On the other hand, by replacing the zero terms of the A array with an appropriate negative term [−ρ/(1 − ρ)] the inherent background associated with each point in the reconstructed object is subtracted off as it is being calculated. The balanced correlation method can remove most of the artifacts discussed by Woods et al.[4] In the case of the man it is clear that the balanced correlation can give a significant improvement in the reconstructed object.

There is still some high-contrast inherent noise in Fig. 9. The ramping from the lower right-hand corner to the upper left-hand corner is due to trends in the SPSF of the balanced correlation. We believe most of the trends could be eliminated if the random array was generated as suggested in Sec. III, that is, by segmenting and forcing each segment to have the correct density.

The URA simulations assumed the imaging arrangement shown in Fig. 3. If the encoded picture contains no noise, the reconstruction is perfect: an exact replica of the object. This means that, given a reasonably good x-ray detector and a long enough exposure, an image of almost any x-ray source can eventually be obtained. The random array imaging, on the other hand, always contains inherent noise, which puts a limit on the quality of the reconstructed object.

In order to estimate how well the URA works when noise is present, we assumed that the man had an integrated intensity of 1 mCi and that the exposure time was 1 sec. This meant that each point on the man emitted 210 photons/pinhole-sec. Poisson statistics on the source were assumed, as well as the background noise typically seen by an Anger camera. The resulting reconstructed object is shown in Fig. 10. The man contains no distortions or trends, and the noise is barely visible. When similar noise is added to the balanced correlation simulation, there is little difference, because in the balanced correlation method the inherent noise is the dominant noise source.
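A minimal sketch of how such noise can be simulated, assuming a toy object, aperture, and background rate of our own choosing (the paper's 1-mCi man, Anger-camera background, and 43 × 41 geometry are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: a small object whose bright points emit
# ~210 photons/pinhole-sec, and a random 0/1 aperture of the same size.
obj = np.zeros((16, 16))
obj[6:9, 6:9] = 210.0
A = (rng.random((16, 16)) < 0.5).astype(float)

# Noiseless encoded picture: cyclic correlation of object with aperture
# [the discrete form of Eq. (5)], computed via FFTs.
P_mean = np.real(np.fft.ifft2(np.conj(np.fft.fft2(obj)) * np.fft.fft2(A)))

# Shot noise: Poisson statistics on the expected counts, plus a flat
# detector background (the rate of 5 counts/pixel is an assumption).
P = rng.poisson(np.clip(P_mean, 0, None)) + rng.poisson(5.0, size=P_mean.shape)
```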

On a more complicated source such as the one in Fig. 7(b), the matched process gives no recognizable structure. In the balanced correlation method one can just see the high-contrast edge of the large background disk. The URA method (with Poisson noise and background noise) clearly shows the 20% bump and the 20% depression (see Fig. 11). There is a hint of the 10% bumps, but the 10% depression (which had the smallest size) is not visible. This demonstrates that the URA is capable of imaging low-contrast objects even when they are superposed on a high-intensity source. Since the noise in Fig. 11 is dominated by the Poisson statistics, a longer exposure time will improve the SNR. For example, if a 10-sec exposure is used rather than a 1-sec one, all bumps and depressions are clearly seen (see Fig. 12).

V. Summary

Coded aperture techniques were originally introduced to obtain an improved SNR for low-intensity sources (particularly x-ray sources) while maintaining high angular resolution. An improved SNR (with the matched process) can be obtained if the emitting object consists of a few bright point sources. However, as the object becomes complex the random array methods no longer give an improved SNR (for example, see Fig. 8).

We have pointed out that the matched process can be improved by just a slight change in the analysis procedure. The balanced correlation method is used with the same recorded picture as the matched process. The balanced correlation method subtracts out the high-contrast inherent background as the reconstructed object is being calculated and thus does not have the object-dependent, high-contrast background characteristic of the matched process. Figure 9 demonstrates the improvement possible by using the balanced correlation method.

The uniformly redundant arrays (URA) offer still further improvements. The URA combines the high-transmission characteristics of the random array with the flat sidelobe advantage of the nonredundant arrays. The high transmission provides a capability to image very low-intensity sources, and the flat sidelobes mean that there will be no inherent noise to obscure low-contrast sources.

The simulations show that the URA with shot and background noise produces a much better reconstructed object than the random arrays with no shot or background noise (see Fig. 10). We have shown that even for very weak sources superposed on a strong background source the URA can discern small contrast changes (see Fig. 11). Furthermore, since there is no limiting SNR set by the inherent noise, a longer exposure time lets one see smaller and smaller contrast changes in the reconstructed object (see Fig. 12).

The URA has another significant advantage over the random array. In the random array implementation the detector should be big enough to contain the entire picture formed by the aperture. For the URA there is an arrangement (Fig. 3) involving a mosaic of basic URA patterns which effectively forms a circular correlation of the object on a section of the picture plane. This means that all the information needed to reconstruct the object is contained in an area equal to the area of the basic aperture pattern. For many applications (in particular, x-ray astronomy) the smaller required detector is a major advantage of the URA.
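The detector-size argument can be checked numerically: every r × s window of the 2r × 2s mosaic is a cyclic shift of the basic pattern, so an r × s detector always records a complete, circularly permuted code. A sketch with a toy pattern size of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(3)

r, s = 5, 3  # toy basic-pattern size (the paper uses r = 43, s = 41)
basic = (rng.random((r, s)) < 0.5).astype(int)
mosaic = np.tile(basic, (2, 2))  # the 2r x 2s aperture of Fig. 3

# Every r x s window of the mosaic is a cyclically shifted copy of the
# basic pattern, so no information is lost by the smaller detector.
for dy in range(r):
    for dx in range(s):
        window = mosaic[dy:dy + r, dx:dx + s]
        assert np.array_equal(window, np.roll(basic, (-dy, -dx), axis=(0, 1)))
print("all windows are cyclic shifts of the basic pattern")
```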

The mosaicking of the URA aperture is not necessary for its implementation, and conversely it would be possible to mosaic a random array as in Fig. 3. It is doubtful, however, whether a mosaicked random array would give a better SPSF. The only major effect would be that the sidelobes in the matched process would be flat on the average. The balanced correlation method already provides approximately flat sidelobes, and the high-frequency inherent noise would still remain. If a deconvolution method is used with a mosaicked random array rather than a correlation method, there is no inherent noise, but there is still the problem of small terms in the Fourier transform of the aperture function [see discussion after Eq. (2)]. The random variations that give rise to small terms in the transform would still be present even if the random array were mosaicked. For example, if the total number of ones in the even-numbered columns (or rows) of the aperture function is the same as the number of ones in the odd-numbered columns (or rows), the transform will have a zero term. Such arrangements occur in approximately 15% of 32 × 32 random arrays. In general, small terms in the transform occur at higher frequencies, and mosaicking the random array will not mitigate their effect.
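The even/odd-column condition can be verified directly: for an even-width array, the DFT term at frequency (0, N/2) equals the count of ones in even-numbered columns minus the count in odd-numbered columns, and so vanishes when the two counts match. A small NumPy check (the 32 × 32 size follows the text; the rejection loop used to find such an array is ours):

```python
import numpy as np

rng = np.random.default_rng(1)

# Draw 32 x 32 random 0/1 arrays until the even- and odd-column counts match.
A = (rng.random((32, 32)) < 0.5).astype(float)
while A[:, 0::2].sum() != A[:, 1::2].sum():
    A = (rng.random((32, 32)) < 0.5).astype(float)

F = np.fft.fft2(A)
# F[0, 16] = sum_{i,j} A(i,j) * (-1)^j
#          = (ones in even columns) - (ones in odd columns) = 0
print(abs(F[0, 16]))  # ~0: a zero term in the transform
```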

Finally, we would like to point out a very interesting property of the URA. When we initially proposed Eq. (9) for the G array to go with the URA, we did so based on our experience with the balanced correlation method. Such balancing is not really necessary with the URA because the sidelobes of its autocorrelation are flat. Thus one could use a matched process, and the only effect would be to add in a removable dc level; the balanced correlation technique simply subtracts that dc level out automatically. The interesting point is that, given Eqs. (7)-(10), A * G is exactly a delta function. This means that G is the convolutional inverse, A⁻¹ (or, more accurately, RA⁻¹, because A * G involves a correlation rather than a convolution), of the aperture. Thus, whereas we might have referred to our method involving a URA as a balanced correlation analysis, it is also a deconvolution method. URAs have the property that their convolutional inverse can be written down by inspection. That inverse function (G) contains no large terms and hence has good noise-handling characteristics.
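The delta-function property is easy to demonstrate at small scale. The sketch below builds a URA from the smallest twin-prime pair, r = 5 and s = 3 (a toy size; the paper uses r = 43, s = 41), following the constructions of Eqs. (7)-(9), and correlates A with G over all cyclic lags:

```python
import numpy as np

r, s = 5, 3  # smallest twin-prime pair (the paper uses r = 43, s = 41)

def C(i, p):
    # Quadratic-residue indicator of Eq. (8): +1 if i is a nonzero square mod p.
    residues = {(x * x) % p for x in range(1, p)}
    return 1 if i % p in residues else -1

# Eq. (7): build the basic r x s uniformly redundant array.
A = np.zeros((r, s), dtype=int)
for I in range(r):
    for J in range(s):
        if I == 0:
            A[I, J] = 0
        elif J == 0 or C(I, r) * C(J, s) == 1:
            A[I, J] = 1

G = 2 * A - 1  # Eq. (9): +1 where the aperture is open, -1 where closed

# Cyclic correlation A * G over all lags -- the SPSF of Eq. (12).
spsf = np.array([[sum(A[I, J] * G[(I + k) % r, (J + l) % s]
                      for I in range(r) for J in range(s))
                  for l in range(s)] for k in range(r)])

print(spsf[0, 0])              # (rs + 1) / 2 = 8 at zero lag
print(np.count_nonzero(spsf))  # 1: every sidelobe is exactly zero
```

G here really is the correlational inverse of A written down by inspection: the SPSF is a single spike on a deterministically flat floor, with no trends and no inherent noise.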

We thank H. Andrews and J. Grindlay for independently pointing out to us the nice properties of pseudonoise arrays. We also acknowledge helpful discussions and encouragement from R. Blake. Invaluable assistance was rendered by J. Trussell in preparation of the final manuscript. This work was done under the auspices of the U.S. Energy Research and Development Administration and the National Aeronautics and Space Administration.

Figures

 figure: Fig. 1

Fig. 1 The basic steps involved in coded aperture imaging are shown above. In an attempt to obtain a higher SNR, a multiple-pinhole aperture is used to form many overlapping images of the object. The resulting recorded picture must be decoded, using either a digital or optical method. The resulting reconstruction is of higher quality than that obtained by using a simple pinhole.

 figure: Fig. 2

Fig. 2 A coded aperture composed of a 2r by 2s section of a mosaic of uniformly redundant arrays, each of size r by s. In this particular case r = 43 and s = 41.

 figure: Fig. 3

Fig. 3 This arrangement for a coded aperture imaging system employs a 2r by 2s aperture composed of a mosaic of basic r by s patterns. Emitting points in the source produce shadows of cyclic versions of the basic aperture pattern upon the detector, which need be only r by s in size.

 figure: Fig. 4

Fig. 4 This coded aperture arrangement employs only the basic r by s pattern for the aperture and has the disadvantage that the detector must be large enough to contain the image from the full field of view.

 figure: Fig. 5

Fig. 5 A 40 × 40 random array. As shown in Fig. 6, the uniformly redundant array is superior to the random array because of its ideal system point-spread function.

 figure: Fig. 6

Fig. 6 (a) The system point-spread function (SPSF) for the random pinhole aperture and the matched decoding procedure. Figures 6(a) and 6(b) represent the same 40 × 40 aperture. (b) The SPSF for the random pinhole aperture and the balanced correlation decoding procedure. The sidelobes have an expected value of zero, although some trends are possible. (c) The SPSF for the nonredundant pinhole aperture used in conjunction with the matched decoding procedure. Note the difference in the vertical scale from the accompanying graphs. The height of the small plateaus is 1. (d) The SPSF of the uniformly redundant array in conjunction with balanced correlation. Note that the sidelobes are deterministically zero out to ±41.

 figure: Fig. 7

Fig. 7 Shown above are the two test objects used in the computer simulations of this paper. Each point in the man emits 210 photons/sec/pinhole. The disks are similar, with the large background disk emitting 210 photons/sec/aperture opening and the smaller ones emitting 10% or 20% more or less than this, as shown.

 figure: Fig. 8

Fig. 8 This figure represents the result of having imaged the man (Fig. 7) through a random pinhole aperture and then having decoded using the matched decoding process. The high background bias, which is signal-dependent, nearly obliterates the man. The simulation was noise-free, hence the bias stems entirely from the nature of the SPSF of Fig. 6(a). In some cases in which the distribution of the object is partially known, an attempt could be made to mitigate the bias effects.

 figure: Fig. 9

Fig. 9 This figure is the result of having imaged the man through a random aperture and then having decoded using the balanced correlation method. The geometry was that of Fig. 4. This was a noise-free simulation and hence represents an upper limit on the obtainable image quality.

 figure: Fig. 10

Fig. 10 This picture demonstrates the result of having used a uniformly redundant array and the geometry of Fig. 3. Quantum statistics on the source as well as background noise were included in the simulation. Even higher quality is obtainable through longer exposure time.

 figure: Fig. 11

Fig. 11 This image represents the results of having encoded and decoded the source of Fig. 7(b) using the uniformly redundant array and the geometry of Fig. 3. The simulation of this 1-sec exposure included quantum fluctuations in the source as well as background noise. Twenty percent variations in the source are easily discernible.

 figure: Fig. 12

Fig. 12 This image is identical to that in Fig. 11 except a 10-sec exposure was simulated. Ten percent variations are now discernible.

Notes

Note Added in Proof. Our use of the term “inherent noise” can be misleading. A more appropriate choice of terminology would be “artifacts”.

References

1. R. H. Dicke, Astrophys. J. 153, L101 (1968).
2. J. G. Ables, Proc. Astron. Soc. Aust. 4, 172 (1968).
3. H. H. Barrett and F. A. Horrigan, Appl. Opt. 12, 2686 (1973).
4. J. W. Woods, M. P. Ekstrom, T. M. Palmieri, and R. E. Twogood, IEEE Trans. Nucl. Sci. NS-22, 379 (1975).
5. C. M. Brown, "Multiplex Imaging and Random Arrays," Ph.D. thesis, U. Chicago (1972).
6. R. L. Blake, A. J. Burek, E. E. Fenimore, and R. Puetter, Rev. Sci. Instrum. 45, 513 (1973).
7. M. J. E. Golay, J. Opt. Soc. Am. 61, 272 (1970).
8. W. K. Klemperer, Astron. Astrophys. Suppl. 15, 449 (1974).
9. D. Calabro and J. K. Wolf, Inform. Control 11, 537 (1968).
10. F. J. MacWilliams and N. J. A. Sloane, Proc. IEEE 64, 1715 (1976).
11. C. M. Brown, J. Appl. Phys. 45, 1806 (1973).



Equations

$$P = (O * A) + N, \tag{1}$$

$$\hat{O} = \mathcal{F}^{-1}\left[\mathcal{F}(P)/\mathcal{F}(A)\right] = O + \mathcal{F}^{-1}\left[\mathcal{F}(N)/\mathcal{F}(A)\right], \tag{2}$$

$$\hat{O} = P * G = RO * (A * G) + N * G, \tag{3}$$

$$p(x,y) = \iiint o(\xi,\eta,b)\, a\!\left(f\xi/b + x_m,\; f\eta/b + y_m\right) d\xi\, d\eta\, db, \tag{4}$$

$$P(k,l) \equiv O * A + N = \sum_{i}\sum_{j} O(i,j)\, A(i+k,\, j+l) + N(k,l), \tag{5}$$

$$\hat{O}(i,j) = P * G = \sum_{k}\sum_{l} P(k,l)\, G(k+i,\, l+j), \tag{6}$$

$$A(I,J) = \begin{cases} 0 & \text{if } I = 0, \\ 1 & \text{if } J = 0,\ I \neq 0, \\ 1 & \text{if } C_r(I)\, C_s(J) = 1, \\ 0 & \text{otherwise,} \end{cases} \tag{7}$$

$$C_r(I) = \begin{cases} 1 & \text{if there exists an integer } x,\ 1 \le x < r, \text{ such that } I = \operatorname{mod}_r x^2, \\ -1 & \text{otherwise,} \end{cases} \tag{8}$$

$$G(i,j) = \begin{cases} 1 & \text{if } A(i,j) = 1, \\ -1 & \text{if } A(i,j) = 0, \end{cases} \tag{9}$$

$$\sum_{i}\sum_{j} A(i,j)\, G(i+k,\, j+l) = \begin{cases} (rs+1)/2 & \text{if } \operatorname{mod}_r k = 0 \text{ and } \operatorname{mod}_s l = 0, \\ 0 & \text{otherwise,} \end{cases} \tag{10}$$

$$\hat{O}(k,l) = \sum_{i}\sum_{j} P(i,j)\, G(i+k,\, j+l), \tag{11}$$

$$\mathrm{SPSF} = A * G, \tag{12}$$

$$G(i,j) = \begin{cases} 1 & \text{if } A(i,j) = 1, \\ -\rho/(1-\rho) & \text{if } A(i,j) = 0. \end{cases} \tag{13}$$
