## Abstract

Fourier transform imaging spectroscopy (FTIS) can be performed with a multi-aperture optical system by making a series of intensity measurements, while introducing optical path differences (OPD’s) between various subapertures, and recovering spectral data by the standard Fourier post-processing technique. The imaging properties for multi-aperture FTIS are investigated by examining the imaging transfer functions for the recovered spectral images. For systems with physically separated subapertures, the imaging transfer functions are shown to vanish necessarily at the DC spatial frequency. Also, it is shown that the spatial frequency coverage of particular systems may be improved substantially by simultaneously introducing multiple OPD’s during the measurements, at the expense of limiting spectral coverage and causing the spectral resolution to vary with spatial frequency.

©2005 Optical Society of America

## 1. Introduction

Multi-aperture systems use a number of relatively small-aperture optics together in such a way that the resolution is comparable to that of a larger single-aperture system. Such systems include segmented-aperture telescopes and multiple-telescope arrays (MTA’s), an example of which is illustrated in Fig. 1. Such resolutions can only be achieved when the optical path lengths through the subapertures (one segment of the aperture or a single telescope in the array) are equal. In a real system, this is accomplished by adjusting path length control elements for each subaperture. In Fig. 1 these elements are shown as “optical trombones,” the length of which may be adjusted by moving a corner mirror. Advantages of multi-aperture systems over comparable monolithic systems include lower weight and volume [1], and reduced cost [2]. Reduced weight and volume are especially important for space-deployed systems. For example, the design for NASA’s James Webb Space Telescope includes a segmented primary that will be folded up during launch [3]. One challenging aspect of using multi-aperture systems is phasing the subapertures. Kendrick *et al*. [4] have demonstrated closed-loop phasing of a nine-aperture system while imaging an extended object using the phase diversity technique [5]. If a multi-aperture system is sparse, additional tradeoffs include longer exposure times [6] and increased need for image post-processing [7].

Fourier transform spectroscopy [8] is a standard method for obtaining spectral data through post-processing of a series of polychromatic intensity measurements. The technique can be employed in an imaging system by relaying the image through a Michelson interferometer [9], which is used to introduce the optical path differences (OPD’s) necessary for performing the spectroscopy [10,11]. One alternative to the Michelson design is double Fourier transform interferometry [12,13], where the spectroscopy and imaging are performed by Fourier transforming temporal and spatial coherence measurements, respectively.

Another alternative for performing Fourier transform imaging spectroscopy (FTIS) with multi-aperture systems is to use the path length control elements associated with each subaperture to introduce the required OPD’s [14]. This technique (patent pending) was demonstrated by Kendrick *et al*. [15], who used a two-telescope system to obtain spectra for an array of point-like objects. Here, we develop a theory for this technique based on the principles of physical optics and partial coherence theory. Section 2 describes a system model and gives an expression for the intensity measurements. Section 3 shows how spectral data can be calculated from the image intensity using the standard Fourier transform technique. Section 4 discusses several aspects of the system related to the fact that the spectral images are typically complex-valued. Section 5 discusses the imaging properties of such systems and shows that spectral images obtained with this technique are missing low spatial frequency content. Also, it is shown that the imaging properties of some systems can be improved significantly by introducing multiple OPD’s simultaneously during data collection. Section 6 deals with the spectral resolution of the instrument. Section 7 presents simulation results that illustrate many of the points made in earlier sections. Section 8 is a concluding summary with some comments on image reconstruction techniques. The appendix details a partial coherence analysis of the optical system.

## 2. Imaging model

While an ideal spectroscopic system is reflective, our modeling is based on the simplified, equivalent thin-lens refractive system shown in Fig. 2. Shown are: (i) an object plane with coordinates (*x*_{o}*,y*_{o}), (ii) a collimating lens of focal length *f*_{o}, (iii) a pupil plane with coordinates (*ξ,η*), containing the various subapertures and associated path-delay elements, (iv) an imaging lens of focal length *f*_{i}, and (v) an image plane with coordinates (*x,y*). The subapertures are grouped together according to the path delays introduced during data collection. In general there are *Q* groups indexed by the integer *q*∈[1,*Q*]. The amplitude transmittance of the pupil and associated delay elements is written as

$${T}_{\mathit{pup}}(\xi ,\eta ,\nu ,\tau )=\sum _{q=1}^{Q}{T}_{q}(\xi ,\eta ,\nu )\mathrm{exp}\left[i2\pi \nu {\gamma}_{q}\tau \right],$$

where *ν* is the optical frequency, *τ* is a time-delay variable, and *T*_{q}(*ξ,η,ν*) and *γ*_{q} are respectively the amplitude transmittance and relative delay rate of the *q*^{th} subaperture group. Each *T*_{q}(*ξ,η,ν*) is written as a function of *ν* to allow for aberrations. The path delay common to the *q*^{th} group is given by *cγ*_{q}*τ*, where *c* is the speed of light (note that this restricts the model to delays that are linear in time). Without loss of generality, the subaperture groups are organized such that *γ*_{1}=0, *γ*_{q+1}>*γ*_{q}, and *γ*_{Q}=1. In this context, a conventional FTIS system based on a Michelson interferometer can be modeled as a system with two identical, overlapping subaperture groups (formed by the beamsplitter in a real system) with a path delay equal to the OPD between the arms of the interferometer.
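This pupil model can be sketched numerically. The snippet below is a minimal sketch of the stated definitions (the function names and the circular-subaperture helper are ours, not from the paper): each group transmittance is multiplied by the linear-in-time delay phase, since a path delay *cγ*_{q}*τ* contributes a phase 2*πνγ*_{q}*τ* at optical frequency *ν*.

```python
import numpy as np

def circ_group(xc, yc, R):
    """Hypothetical binary circular subaperture of radius R centered at (xc, yc)."""
    return lambda xi, eta, nu: (((xi - xc)**2 + (eta - yc)**2) <= R**2).astype(float)

def pupil_transmittance(xi, eta, nu, tau, groups, gammas):
    """T_pup(xi, eta, nu, tau) = sum_q T_q(xi, eta, nu) exp(i 2 pi nu gamma_q tau):
    the path delay c*gamma_q*tau of group q enters as the phase 2 pi nu gamma_q tau."""
    T = np.zeros(np.broadcast(xi, eta).shape, dtype=complex)
    for T_q, gamma_q in zip(groups, gammas):
        T += T_q(xi, eta, nu) * np.exp(2j * np.pi * nu * gamma_q * tau)
    return T
```

A Michelson-based FTIS corresponds to two identical, overlapping groups; a sparse multi-aperture system uses disjoint masks with one γ per group.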

For a spatially incoherent object, the image plane intensity *I*(*x,y,τ*), which is a function of the time-delay variable *τ*, can be written in terms of the object spectral density *S*_{o}(*x*_{o}*,y*_{o}*,ν*) as

where *κ* is a constant, *M=-f*_{i}*/f*_{o} is the system magnification, *x′=Mx*_{o}*, y′=My*_{o}, and *h*(*x,y,ν,τ*) is the monochromatic point spread function (PSF) (intensity impulse response) for the system, which can be written as

$$h(x,y,\nu ,\tau )=\sum _{p=1}^{Q}\sum _{q=1}^{Q}{h}_{p,q}(x,y,\nu )\mathrm{exp}\left[i2\pi \nu \left({\gamma}_{p}-{\gamma}_{q}\right)\tau \right].$$

The terms *h*_{p,q}(*x,y,ν*) are referred to as *spectral point spread functions* (SPSF’s) and are defined as

$${h}_{p,q}(x,y,\nu )={t}_{p}(x,y,\nu ){t}_{q}^{*}(x,y,\nu ),$$

where *t*_{q}(*x,y,ν*) is the coherent impulse response of the *q*^{th} subaperture group, given by

The terms *t*_{q}(*x,y,ν*) can be complex-valued since the subaperture groups are asymmetric about, or offset from, the optical axis in the pupil plane. Note that the path delays introduced for the spectroscopy are included in Eq. (3) as additional phase terms; any other phase terms (aberrations) are included in *T*_{q}(*ξ,η,ν*). The spectroscopy is based on temporal coherence effects, but the role of spatial coherence may not be immediately obvious. For this reason, the Appendix contains a derivation of Eq. (2) based on partial coherence theory.

The normalized monochromatic optical transfer function (OTF) for the system can be written as

$$H({f}_{x},{f}_{y},\nu ,\tau )=\frac{{T}_{\mathit{pup}}\left(-\lambda {f}_{i}{f}_{x},-\lambda {f}_{i}{f}_{y},\nu ,\tau \right)\star {T}_{\mathit{pup}}\left(-\lambda {f}_{i}{f}_{x},-\lambda {f}_{i}{f}_{y},\nu ,\tau \right)}{{\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{\mid {T}_{\mathit{pup}}(\xi ,\eta ,\nu ,\tau )\mid}^{2}d\xi d\eta}$$

$$=\sum _{q=1}^{Q}{H}_{q,q}({f}_{x},{f}_{y},\nu )+\sum _{p=1}^{Q}\sum _{\underset{q\ne p}{q=1}}^{Q}{H}_{p,q}({f}_{x},{f}_{y},\nu )\mathrm{exp}\left[i2\pi \nu \left({\gamma}_{p}-{\gamma}_{q}\right)\tau \right],$$

where the ⋆ symbol indicates a two-dimensional cross-correlation with respect to the spatial-frequency coordinates *f*_{x} and *f*_{y}, the second equality follows from Eqs. (4) and (5), and the terms *H*_{p,q}(*f*_{x}*,f*_{y}*,ν*) are referred to as *spectral optical transfer functions* (SOTF’s), which are defined as the normalized two-dimensional Fourier transform of the corresponding SPSF’s, *i.e*.,

$${H}_{p,q}({f}_{x},{f}_{y},\nu )=\frac{{T}_{p}\left(-\lambda {f}_{i}{f}_{x},-\lambda {f}_{i}{f}_{y},\nu \right)\star {T}_{q}\left(-\lambda {f}_{i}{f}_{x},-\lambda {f}_{i}{f}_{y},\nu \right)}{{\int}_{-\infty}^{\infty}{\int}_{-\infty}^{\infty}{\mid {T}_{\mathit{pup}}(\xi ,\eta ,\nu ,\tau )\mid}^{2}d\xi d\eta}.$$

For the multiple-aperture case, the denominator of this expression is independent of *τ* and equals the area of the entire pupil when the pupil is binary. Note that both the PSF and the OTF consist of a double summation of terms that are modulated with respect to the time-delay variable *τ* and a single summation of unmodulated terms. For a Michelson system, note that the SPSF and SOTF are equivalent to the normal PSF and OTF for incoherent imaging, since the subaperture groups are identical and overlapping.
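The SOTF of Eq. (7) can be sketched as an FFT-based cross-correlation of two group pupils (our own sketch; discrete sampling scale factors are ignored):

```python
import numpy as np

def sotf(T_p, T_q, total_pupil_area):
    """H_{p,q} as the cross-correlation of two subaperture-group pupils over
    spatial-frequency shifts, normalized by the total (binary) pupil area."""
    F_p = np.fft.fft2(T_p)
    F_q = np.fft.fft2(T_q)
    cross = np.fft.ifft2(F_p * np.conj(F_q))   # zero lag at index [0, 0]
    return np.fft.fftshift(cross) / total_pupil_area
```

For physically separated groups, the zero-lag (DC) value is the overlap integral of the two masks, which is zero; this is the missing-DC property discussed in Sec. 5.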

## 3. Spectral data

Spectral information can be obtained from a series of image-plane intensity measurements by the standard Fourier technique: (i) subtracting the fringe bias at each image point and (ii) Fourier transforming the data along the *τ*-dimension to the *ν*′-domain. Starting from Eq. (2) and performing these steps yields the spectral image

$$\times {h}_{p,q}\left(x-x\prime ,y-y\prime ,\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}}\right)dx\prime dy\prime .$$

Transforming this equation to the spatial frequency domain yields

where the spectral-spatial transforms *G*_{i}(*f*_{x}*,f*_{y}*,ν′*) and *G*_{o}(*f*_{x}*,f*_{y}*,ν*) are the two-dimensional spatial Fourier transforms of the spectral image and the object spectral density, respectively. Notice that the spectral image in Eq. (8) is a double summation of the object spectral density convolved with each of the SPSF terms that are modulated in Eq. (3), *i.e*., terms for which *γ*_{p}*-γ*_{q}≠0. Thus, each term contains unique spatial information as it is convolved with a different SPSF. This is evident in Eq. (9) by the fact that the only spatial frequencies in the recovered spectral data are those passed by SOTF terms that are modulated in Eq. (6). Also notice that the spectral dimension of each term in Eqs. (8) and (9) is scaled by the factor 1/(*γ*_{p}*-γ*_{q}). Thus, terms for which |*γ*_{p}*-γ*_{q}|≠1 appear at scaled optical frequencies *ν*′=(*γ*_{p}*-γ*_{q})*ν* in the *ν*′-domain. This will occur only for *Q*≥3; if there are just two groups of subapertures (*Q*=2), then there is just the single value |*γ*_{1}-*γ*_{2}|=1.

When *Q*≥3, it is desirable to map the data in each term of Eqs. (8) and (9) back to the base optical frequencies, thus forming a composite spectral image that contains all of the collected spatial information in each spectral band. Typically, a multi-aperture system is designed such that the OTF does not have any gaps with missing spatial frequencies. This implies that the SOTF terms *H*_{q,p}(*f*_{x}*,f*_{y}*,ν*) will overlap somewhat in the (*f*_{x}*,f*_{y}) plane, and one cannot completely separate the different terms in that plane. However, the terms can be separated with respect to *ν*′ by limiting the spectral bandwidth of the object and choosing the relative delay rates appropriately. To illustrate, suppose the object spectrum is limited to optical frequencies in the range *ν*_{1}≤*ν*≤*ν*_{2} by a spectral filter placed in the system during the measurements. Then spectral data will appear in *S*_{i}(*x,y,ν′*) at multiple intervals in the *ν*′-domain given by *ν*_{1}/(*γ*_{p}*-γ*_{q})≤*ν*′≤*ν*_{2}/(*γ*_{p}*-γ*_{q}) for all unique, non-zero values of *γ*_{p}*-γ*_{q}. The band limits (*ν*_{1} and *ν*_{2}) and the relative delay rates (*γ*_{q}*’s*) can be chosen such that these intervals do not overlap, making the data separable in the *ν*′-domain. For example, for *Q*=3, *γ*_{1}=0, *γ*_{2}=1/3, and *γ*_{3}=1, the spectra are separated in *ν*′ space if *ν*_{2}-*ν*_{1}<*ν*_{2}/3. An example of this is shown in Sec. 7. Assuming this is the case, the data in each term of Eq. (8) can be mapped to the base optical frequencies *ν* to form a composite spectral image

where Δ*γ=γ*_{p}*-γ*_{q} denotes the relative delay-rate difference. Substituting from Eq. (8) yields

$$\times {h}_{p,q}(x-x\prime ,y-y\prime ,\nu )dx\prime dy\prime \phantom{\rule{.5em}{0ex}}\mathrm{for}\phantom{\rule{.2em}{0ex}}{\nu}_{1}\le \nu \le {\nu}_{2}.$$
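The band-separability condition of Sec. 3 can be checked numerically. The helper below (our own, hypothetical) lists the scaled *ν*′-intervals for each unique positive delay-rate difference and tests whether they overlap:

```python
from itertools import combinations

def band_intervals(gammas, nu1, nu2):
    """nu'-intervals [nu1/dg, nu2/dg] for each unique positive difference
    dg = gamma_p - gamma_q of the relative delay rates, sorted by lower edge."""
    diffs = sorted({round(b - a, 12) for a, b in combinations(sorted(gammas), 2)})
    return sorted((nu1 / dg, nu2 / dg) for dg in diffs)

def bands_separable(gammas, nu1, nu2):
    """True if the scaled spectral bands do not overlap in the nu'-domain."""
    ivals = band_intervals(gammas, nu1, nu2)
    return all(hi < lo_next for (_, hi), (lo_next, _) in zip(ivals, ivals[1:]))
```

For the paper’s example (*γ* = 0, 1/3, 1 with a 0.9*ν*_{0} to 1.1*ν*_{0} band), the condition *ν*_{2}−*ν*_{1} < *ν*_{2}/3 holds and the three bands separate.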

## 4. Complex-valued spectral images

The image intensity *I*(*x,y,τ*) is real-valued and nonnegative, since both the object spectral density and the PSF are real and nonnegative. However, the spectral images derived from the FTIS measurements can be complex-valued, since *S*_{i}(*x,y,ν′*) is related to the measurements by a complex Fourier transform. Since the measurements are real-valued, the recovered spectral data has particular symmetry properties. Specifically, the spectral image possesses Hermitian symmetry about zero temporal frequency, *i.e*.,

and the spectral-spatial transform is Hermitian about the origin of the (*f*_{x}*,f*_{y}*,ν′*) domain, i.e.,

Note that the object spectral density *S*_{o}(*x*_{o}*,y*_{o}*,ν*) is a one-sided spectrum, *i.e*., non-zero only for positive frequencies *ν*>0, but the spectral image cube has a two-sided spectrum. Referring to Eqs. (8) and (9), one can see that the spectral data at positive and negative temporal frequencies consists of terms for which *γ*_{p}*-γ*_{q}>0 and *γ*_{p}*-γ*_{q}<0, respectively. Equation (12) states that the spectral image values at negative optical frequencies are the complex conjugate of those at positive frequencies. This can be seen from Eq. (8) and by the fact that *h*_{p,q}(*x,y,ν*)=*h**_{q,p}(*x,y,ν*) [see Eq. (4)]. The Hermitian symmetry in the (*f*_{x}*,f*_{y}*,ν*′) domain expressed by Eq. (13) is apparent in Eq. (9), since the Fourier transform of the real-valued object spectral density is Hermitian about the DC spatial frequency, *i.e*., *G*_{o}(*f*_{x}*,f*_{y}*,ν*)=*G**_{o}(-*f*_{x}*,-f*_{y}*,ν*), and by the fact that *H*_{p,q}(*f*_{x}*,f*_{y}*,ν*)=*H**_{q,p}(-*f*_{x}*,-f*_{y}*,ν*) [see Eq. (7)]. At positive optical frequencies, the spectral image only contains spatial frequencies corresponding to vector separations oriented from subaperture group *p* toward subaperture group *q*, where *γ*_{p}*-γ*_{q}>0. Spatial frequencies corresponding to the oppositely-oriented vector separations appear at negative optical frequencies.

In certain cases, which include Michelson-based systems, the spectral images are real-valued, because the subaperture groups possess a particular symmetry in the pupil plane. In this case, the spatial frequency data, and thus the SOTF terms, must possess Hermitian symmetry about the DC spatial frequency in each spectral band, *i.e*., *H*_{p,q}(*f*_{x}*,f*_{y}*,ν*)=*H**_{p,q}(-*f*_{x}*,-f*_{y}*,ν*). Along with Eq. (7), this condition implies that *H*_{p,q}(*f*_{x}*,f*_{y}*,ν*)=*H*_{q,p}(*f*_{x}*,f*_{y}*,ν*). Note that this relation holds for Michelson-based systems (with common-path aberrations only), since the subaperture groups are identical and overlapping. Also, real-valued spectral images imply that the fringe packets described by *I*(*x,y,τ*) are symmetric with respect to the time-delay variable *τ*, while complex-valued spectral images imply that the fringe packets are asymmetric. In systems like those based on the Michelson interferometer design, where the fringe packets are symmetric, measurements need only be made for either positive or negative time delays. For a general multi-aperture system, however, the fringes will usually be asymmetric, and thus measurements must be made for both positive and negative time delays.

Returning to the more general case, it is important to note that the real and imaginary parts of *S*_{i}(*x,y,ν′*) are linearly related to the object spectral density *S*_{o}(*x,y,ν*). Hence, these quantities are the appropriate ones for image reconstruction. On the other hand, the magnitude and phase of *S*_{i}(*x,y,ν′*) are nonlinearly related to *S*_{o}(*x,y,ν*). Hence, the spatial frequency content of |*S*_{i}(*x,y,ν′*)| or arg{*S*_{i}(*x,y,ν′*)}, unlike that of Re{*S*_{i}(*x,y,ν′*)} or Im{*S*_{i}(*x,y,ν′*)}, is not directly related to the spatial frequency content of *S*_{o}(*x,y,ν*). The point-object simulation of Section 7.1 illustrates how the magnitude and phase of *S*_{i}(*x,y,ν′*) can vary with position in the image plane.

The spatial frequency content of both the real and imaginary parts of a spectral image is directly related to the spatial frequency content of the object spectral density. To show this, note that the real part of a complex-valued spectral image can be written as

and its spatial Fourier transform is given by

where it is emphasized that ${G}_{i}^{\left(\text{Re}\right)}$(*f*_{x}*,f*_{y}*,ν′*) is ordinarily complex-valued. Using Eq. (9) and the fact that *G*_{o}(*f*_{x}*,f*_{y}*,ν*)=*G**_{o}(-*f*_{x}*,-f*_{y}*,ν*) yields

$$\times \frac{1}{2}\left[{H}_{p,q}({f}_{x},{f}_{y},\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}})+{H}_{p,q}^{*}\left(-{f}_{x},-{f}_{y},\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}}\right)\right].$$

Similarly, the spatial transform of the imaginary part of the spectral image, ${G}_{i}^{\left(\text{Im}\right)}$(*f*_{x}*,f*_{y}*,ν′*), can be written as

$$\times \frac{1}{2i}\left[{H}_{p,q}({f}_{x},{f}_{y},\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}})-{H}_{p,q}^{*}\left(-{f}_{x},-{f}_{y},\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}}\right)\right].$$

Thus, the spatial frequency content of the real or imaginary part of the complex-valued spectral image is related to the spatial frequency content of the object spectral density through the SOTF terms. In a system with no aberrations, each SOTF term *H*_{p,q}(*f*_{x}*,f*_{y}*,ν*) is real-valued and nonnegative, and the spatial frequency content of ${G}_{i}^{\left(\text{Re}\right)}$(*f*_{x}*,f*_{y}*,ν′*) and ${G}_{i}^{\left(\text{Im}\right)}$(*f*_{x}*,f*_{y}*,ν′*) is equivalent (to within a multiple of a *π*/2 phase shift) at spatial frequencies where there is no overlap between these terms, according to Eqs. (16) and (17). In regions where the terms do overlap, the phase shifts associated with various terms in the expression for ${G}_{i}^{\left(\text{Im}\right)}$(*f*_{x}*,f*_{y}*,ν′*) may cause the net transfer function to vanish, while the terms in the summation for ${G}_{i}^{\left(\text{Re}\right)}$(*f*_{x}*,f*_{y}*,ν′*) add in phase. In such cases, the real part of the spectral image will contain more information than the imaginary part. The example in Sec. 7.2 illustrates this point.

## 5. Imaging properties

In essence, the SOTF’s are the spatial transfer functions for the spectral images and thus determine the imaging properties of the system. According to Eq. (7), the SOTF’s are calculated as the cross-correlation between subaperture groups, rather than the autocorrelation of the entire aperture as for the OTF of a normal imaging system. Above, it was noted that the SOTF for a Michelson-based FTIS is equivalent to the traditional OTF, since the system can be described as two identical, overlapping subaperture groups. In a multi-aperture system, however, the subaperture groups are physically separated in the pupil plane, and thus the SOTF’s necessarily vanish at the DC spatial frequency and in some neighborhood around it. If the minimum separation in the pupil plane between two subaperture groups is *d*, then the SOTF vanishes for spatial frequencies below the cutoff frequency *f*_{c}*=d*/(*λf*_{i}). For this reason, spectral images from a multi-aperture system are zero-mean, high-pass-filtered versions of the object.

For a given arrangement of subapertures, the spatial frequency content of the spectral images is dependent on the grouping of the subapertures. In some cases, the use of more than two groups can improve the imaging properties by providing additional spatial frequency content. However, having more than two groups implies the use of fractional delay rates, *i.e*., 0<*γ*_{q}<1 for *q*≠1 or *Q*. In such cases, spectral data will appear at scaled optical frequencies (which can be corrected), bandwidth limitations must be imposed on the system, and the relative delay rates must be chosen such that the data is separable in the *ν*′-dimension, as discussed in Sec. 3. An additional trade-off for using fractional delay rates is variable spectral resolution at different spatial frequencies, as will be shown in the next section.

## 6. Spectral resolution

In practice, the image intensity can only be measured over a finite range of time-delay values, *i.e*., -*τ*_{max}≤*τ*≤*τ*_{max}. Taking this into account yields the following expression for the image spectral data instead of Eq. (8):

$$\times 2{\tau}_{max}\mathrm{sinc}\left[2{\tau}_{max}\left({\gamma}_{p}-{\gamma}_{q}\right)\left(\frac{\nu \prime}{{\gamma}_{p}-{\gamma}_{q}}-\nu \right)\right]dx\prime dy\prime d\nu ,$$

where sinc(*ν*)=sin(*πν*)/(*πν*). Notice that the spectral image is now convolved in the spectral dimension with a sinc function that limits the spectral resolution. If the object data is bandlimited in the spectral dimension to the interval *ν*_{1}≤*ν*≤*ν*_{2}, and the data leakage between each of the intervals *ν*_{1}/(*γ*_{p}*-γ*_{q})≤*ν*′≤*ν*_{2}/(*γ*_{p}*-γ*_{q}) is negligible, then the composite spectral image is given approximately by

$$\times 2{\tau}_{max}\left({\gamma}_{p}-{\gamma}_{q}\right)\mathrm{sinc}\left[2{\tau}_{max}\left({\gamma}_{p}-{\gamma}_{q}\right)(\nu -\nu \prime )\right]dx\prime dy\prime d\nu \prime \phantom{\rule{.5em}{0ex}}\mathrm{for}\phantom{\rule{.2em}{0ex}}{\nu}_{1}\le \nu \le {\nu}_{2}.$$

In this equation, it is easy to see that each term in the summation is convolved with a sinc function having a zero-to-first-null width of 1/[2*τ*_{max}(*γ*_{p}*-γ*_{q})]. The spectral resolution of each term decreases with the quantity *γ*_{p}*-γ*_{q}, because the effective time-delay range over which data is collected for each term is scaled by the same factor. By transforming Eq. (19) to the spatial frequency domain, it is easy to see that the spectral resolution varies with spatial frequency for *Q*>2.

## 7. Simulation examples

This section presents two multi-aperture FTIS simulations based on an aberration-free system having three subapertures in the equilateral-triangle arrangement shown in Fig. 4. Each subaperture is circular with radius *R*, and the displacement of each subaperture from the optical axis is *r*=1.5*R*. The coordinates for the center of each subaperture are (*ξ*_{1},*η*_{1})=(0, *r*), (*ξ*_{2},*η*_{2})=(√3*r*/2, -*r*/2), and (*ξ*_{3},*η*_{3})=(-√3*r*/2, -*r*/2), making the closest separation between two subapertures √3*r*-2*R*. The subapertures are grouped individually with the following relative delay rates: *γ*_{1}=0, *γ*_{2}=1/3, and *γ*_{3}=1. In both simulations the spectrum is assumed to be limited to the interval *ν*_{1}≤*ν*≤*ν*_{2}, where *ν*_{1}=0.9*ν*_{0}, *ν*_{2}=1.1*ν*_{0}, and *ν*_{0} is the mean optical frequency.

Fig. 4 shows a single frame of a movie that illustrates the effect of the OPD’s on the pupil function, the PSF, and the OTF of the three-telescope system used for the simulations at the mean optical frequency *ν*_{0} over the range of time delays 0≤*τ*≤3/*ν*_{0}. Fig. 4(a) indicates the magnitude of the relative phase delay modulo 2*π* for each subaperture by grayscale tone (white represents zero phase delay and black represents ±*π* phase delay). Fig. 4(b) shows the monochromatic PSF, where the circle represents the Airy disk radius for a single subaperture at *ν=ν*_{0}. The PSF can be viewed as a set of interference fringes underneath an Airy envelope function, which is the diffraction pattern for a single subaperture. As the time-delay variable changes, the fringes move under the envelope. Fig. 4(c) shows the magnitude of the real part of the OTF. Notice that only spatial frequencies that correspond to vector separations between subapertures are modulated during the movie, and the rate of modulation for various spatial frequencies is proportional to the difference in the relative delay rates of each corresponding pair of subapertures.

Fig. 5 shows the localization of the FTIS signal in three transform domains for the example parameters above. Fig. 5(a) represents the intensity measurements *I*(*x,y,τ*). In this domain, the signal, which is essentially a fringe packet at each image point, occupies the whole domain. Fig. 5(b) represents the spectral image *S*_{i}(*x,y,ν′*). In these examples, the signal is localized to six spectral bands along the *ν*′-dimension. By Hermitian symmetry about the plane *ν*′=0, the signal at negative *ν*′ is the complex conjugate of the signal at positive *ν*′. Of the three spectral bands for *ν*′>0, the one at largest *ν*′ represents spectral image data at the base optical frequencies (from the interaction of subapertures 1 and 3), the middle band represents data scaled to 2/3 of the base optical frequencies (from subapertures 2 and 3), and the third, at smallest *ν*′, represents data that appears at 1/3 of the base optical frequencies (from subapertures 1 and 2). Fig. 5(c) represents the spectral-spatial transform *G*_{i}(*f*_{x}*,f*_{y}*,ν′*). Here, the FTIS signal is further localized to the support of the SOTF terms. Each semi-transparent skewed cone represents the support of a SOTF term. From this figure we can see how the spectral and spatial frequency information can be separated. The sparsity of the data in this domain will also make significant noise filtering possible.

#### 7.1. Point object

The purpose of this point-object example is to provide a physical understanding of the effects that contribute to the magnitude and phase of the recovered spectral data. This simulation is based on an object with the spectral density

where *E* is a constant with units of [W m^{-2} Hz^{-1}], rect(*ν*) vanishes everywhere except for |*ν*|≤1/2, where it equals unity, and *δ*(*x′,y′*) is the two-dimensional Dirac delta function. This represents an on-axis point source with a uniform spectral exitance in the band of interest. The image intensity is obtained by substituting this expression into Eq. (2) and simplifying to yield

where the PSF *h*(*x,y,ν,τ*) is given by Eqs. (3) and (4) with

where jinc(*ρ*)=2*J*_{1}(*πρ*)/(*πρ*), and *J*_{1} is the first-order Bessel function of the first kind. Figs. 6(a) and (b) show the calculated image intensity as a function of the time-delay variable at two points in the image plane: (i) Point *A* with coordinates (*x*_{A}*,y*_{A})=(0,0), and (ii) Point *B* with coordinates (*x*_{B}*,y*_{B})=(0,0.61*λ*_{0}*f*_{i}*/R*), where *λ*_{0}=*c/ν*_{0}. Note that Point *A* corresponds to the geometric image location of the point object, and the distance between the points is equal to the Airy disk radius corresponding to the diffraction pattern of a single subaperture at the mean optical frequency *ν*_{0}. The data in the figure is in units of *I*_{0}, which is the intensity at Point *A* for *τ*=0. The figure shows that the fringe packet at Point *A* is symmetric about *τ*=0, while the fringe packet at Point *B* is asymmetric. Figs. 6(c), (d), and (e) show the intensity contributions at Point *B* due to the interference between each pair of subapertures. In general, each contribution is symmetric about some non-zero time delay, *i.e*., *τ*_{p,q} for the contribution from subaperture groups *p* and *q*. The following expression for *τ*_{p,q} can be obtained by substituting Eqs. (4) and (22) into Eq. (3) and solving for the time delay that yields zero phase for the (*q,p*) term at Point *B*:

Since each contribution has a different shift, the fringe packet at Point *B* is asymmetric. The fringe packet at Point *A* is symmetric, because each contribution is centered about *τ*=0. All points in the image of an extended scene will have a mixture of the characteristics of Points *A* and *B*, especially for sparse-aperture systems, which have PSF’s with sidelobes much larger than those of conventional filled-aperture systems. Fig. 7 shows the spectral data at Points *A* and *B* for positive temporal frequencies in the *ν*′-domain. Notice that three scaled versions of the object spectral data are clearly visible. The data at the base optical frequencies is due to the interference between subapertures 1 and 3, since *γ*_{3}-*γ*_{1}=1; the data scaled to 2/3 of the base frequencies is associated with subapertures 2 and 3, since *γ*_{3}-*γ*_{2}=2/3; and the data closest to the origin, associated with subapertures 1 and 2, is scaled to 1/3 of the base optical frequencies, since *γ*_{2}-*γ*_{1}=1/3. The recovered spectra at Points *A* and *B* are real- and complex-valued, respectively, since the corresponding fringe packets are symmetric and asymmetric, respectively, as shown in Fig. 6. The spectral content at each point is dependent on the SPSF’s. Note that the spectral data at Point *A* is bluer than the actual object spectral density, since higher optical frequencies are focused more tightly onto the geometric image point than lower optical frequencies, as is the case for ordinary imaging of point objects. Also, the magnitude of the spectral data at Point *B* goes to zero at *ν*_{0}/3, 2*ν*_{0}/3, and *ν*_{0}, since Point *B* is located where the SPSF’s (centered about Point *A*) vanish for *ν*_{0}. The ringing artifacts in the spectral data are due to the convolution in the spectral dimension with a sinc function [see Eq. (18)]. These artifacts can be reduced by applying a window function to the intensity data in the *τ*-dimension before taking the Fourier transform.

#### 7.2. Extended object

The second simulation used data from NASA’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) [16] as object data. The dimensions of the object data cube are 128×128 samples in the spatial dimensions and 103 samples in the spectral dimension. While the AVIRIS data is uniformly sampled in wavelength over a specific spectral range, in our simulations the optical frequencies were arbitrarily assigned to each spectral band such that the object data was uniformly sampled in frequency, spanning the range 0.9*ν*_{0}≤*ν*≤1.1*ν*_{0}. Fig. 8 shows a movie of the object data versus *ν*, the relative size of the pupil in the spatial frequency domain at *ν*=1.03*ν*_{0}, and a movie of the simulated image intensity versus *τ*. Note that the movie of the object data goes completely dark in frames that correspond to atmospheric absorption bands in the AVIRIS data. Also note that the fringe modulation in the image intensity movie is largest near *τ*=0, which occurs halfway through the movie. Fig. 9 shows the recovered spectral images for three values of *ν*′. The top row shows the real part of the complex-valued spectral images, and the bottom row indicates the spatial frequency content of each image by showing the magnitude of the spectral-spatial transform in the corresponding spectral bands. The left-hand column shows data that appears at one-third of the base optical frequency, at *ν*′=0.34*ν*_{0} (from subapertures 1 and 2); the middle column shows data that appears at two-thirds of the base optical frequency, at *ν*′=0.68*ν*_{0} (from subapertures 2 and 3); and the right-hand column shows data that appears at the base optical frequency, at *ν*′=1.03*ν*_{0} (from subapertures 1 and 3). These data clearly illustrate the advantage of using fractional delay rates. For example, if subaperture 2 were grouped with subaperture 1, then the spatial frequency content shown in the left-hand column of Fig. 9 would be absent from the data. Fig. 10 shows the real and imaginary parts of the composite spectral image at the base optical frequency 1.03*ν*_{0}. Notice that the spatial frequency content of the real and imaginary parts of the image is equivalent everywhere except in the vicinity of a diagonal line passing through the DC spatial frequency, where the spatial frequency content of the real part of the image adds constructively and that of the imaginary part adds destructively. Also notice that even though the composite spectral image has spatial frequency data in all directions, there is a finite region around DC where the spatial frequency data is missing. As a result, the spectral images are bipolar, zero-mean, high-pass filtered versions of the object spectral density.
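The Fourier recovery step underlying these simulations can be illustrated with a short numerical sketch. The following is a toy single-pixel example (all numbers hypothetical, not the simulation parameters used above): a subaperture pair with delay-rate difference 1/3 produces fringes cos(2*πντ*/3), so its spectral data is recovered at *ν*′=*ν*/3 by the standard Fourier post-processing.

```python
import numpy as np

# Toy single-pixel FTIS sketch (hypothetical numbers): a subaperture pair with
# fractional delay-rate difference dg = 1/3 produces fringes cos(2*pi*nu*dg*tau),
# so its spectral data appears at nu' = nu/3 after Fourier post-processing.
nu0 = 1.0                                            # base optical frequency (arb. units)
nu = np.linspace(0.9, 1.1, 64) * nu0                 # object spectral samples
S = np.exp(-0.5 * ((nu - nu0) / (0.03 * nu0))**2)    # toy spectral density
dg = 1.0 / 3.0                                       # delay-rate difference

dt = 0.1
tau = (np.arange(512) - 256) * dt                    # OPD time delays, centered on tau = 0
# Intensity record: DC bias plus a cosine fringe for each spectral band
I = np.array([np.sum(S * (1.0 + np.cos(2 * np.pi * nu * dg * t))) for t in tau])

# Standard Fourier post-processing of the bias-subtracted fringe record
F = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(I - I.mean())))
f = np.fft.fftshift(np.fft.fftfreq(tau.size, d=dt))
peak = f[f > 0][np.argmax(np.abs(F)[f > 0])]
print(peak)    # near nu0 * dg = 1/3
```

A pair with delay-rate difference 2/3 or 1 would, by the same processing, place its data near 2*ν*_{0}/3 or *ν*_{0}, as in Fig. 9.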

## 8. Discussion

Fourier transform imaging spectroscopy can be performed with a multi-aperture optical system by using existing path-length control elements to introduce the required OPD’s. The theory presented shows that spectral data can be obtained from polychromatic intensity measurements by the standard Fourier technique, but the DC spatial frequency components are missing from the resulting spectral images. This is because the spatial transfer functions for these images, the SOTF’s, are given by the cross-correlations between the pupil functions for groups of subapertures that have different path delays during data collection. Since the subapertures do not normally overlap physically, the SOTF’s vanish in some finite region around the DC spatial frequency, and the spectral images are therefore missing some low-spatial-frequency content. This poses an interesting image reconstruction problem. Linear algorithms, such as the Wiener-Helstrom filter [17], cannot reconstruct the missing low spatial frequencies. However, nonlinear algorithms may be able to reconstruct the missing data based on constraints and specific assumptions about the object. It is unclear whether superresolution algorithms [18], which are typically used to fill in missing high spatial frequencies, can fill in missing low-spatial-frequency data. We have, however, had some success filling in the low spatial frequencies by maximizing a derivative-based sharpness metric [19], which assumes that the object consists of regions that are piecewise uniform in the spatial dimensions, subject to constraints that require the reconstruction to be consistent with the panchromatic fringe-bias data [20].
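The claim that linear restoration cannot recover the missing low frequencies can be seen in a one-dimensional toy sketch (hypothetical transfer function, not an actual SOTF): wherever the transfer function is exactly zero, the Wiener-Helstrom filter output is also exactly zero.

```python
import numpy as np

# Toy 1-D illustration (not the paper's algorithm): a Wiener-Helstrom filter
# conj(H) / (|H|^2 + n) leaves any frequency where H = 0 at exactly zero, so
# missing near-DC content cannot be restored by linear filtering.
N = 256
obj = np.zeros(N)
obj[100:156] = 1.0                             # piecewise-uniform object
O = np.fft.fft(obj)

f = np.fft.fftfreq(N)
H = np.ones(N)
H[np.abs(f) < 0.02] = 0.0                      # SOTF-like hole around DC
img = np.fft.ifft(H * O).real                  # band-passed "spectral image"

# Wiener-Helstrom restoration with a small noise-to-signal parameter
W = np.conj(H) / (np.abs(H)**2 + 1e-3)
rec = np.fft.ifft(W * np.fft.fft(img))
R = np.fft.fft(rec)
print(np.abs(R[np.abs(f) < 0.02]).max())       # still ~0: DC region not recovered
```

The object’s DC component is nonzero, but the restored spectrum remains zero throughout the hole, which is why a nonlinear, constraint-based reconstruction is needed.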

In particular systems, the imaging properties can be improved significantly by introducing multiple OPD’s between the subapertures for each intensity measurement instead of a single OPD. This technique can collect spectral data over a larger area of the spatial frequency plane, but has two significant trade-offs: (i) the spectral bandwidth of the system must be limited, and (ii) the spectral resolution varies with spatial frequency.

## Appendix

Multi-aperture FTIS is based on temporal coherence effects, but the role of spatial coherence effects may not be immediately obvious. For this reason, this appendix presents a derivation of Eq. (2) based on partial coherence theory. The cross-spectral density function is propagated through the system using Fresnel-like transforms and generalized transmission functions. An expression for the image intensity is given for a general partially coherent object, which is then simplified for a spatially incoherent object. The final result shows that spatial coherence effects do not play a role in the measurements.

The cross-spectral density in a plane *z*=constant is defined in Section 4.3.2 of Ref. [21] as

$$W\left({x}_{1},{y}_{1},{x}_{2},{y}_{2},\nu \right)\delta \left(\nu -{\nu}^{\prime}\right)=\left\langle V\left({x}_{1},{y}_{1},\nu \right){V}^{*}\left({x}_{2},{y}_{2},{\nu}^{\prime}\right)\right\rangle ,$$

where *V*(*x,y,ν*) is the generalized temporal Fourier transform of the analytic signal representation of the scalar electric field at the point (*x,y*) in the plane of interest. Note that this definition is the complex conjugate of the quantity in Ref. [21], in order to conform to the convention of Refs. [22,23]. The cross-spectral density obeys two Helmholtz equations and can be propagated from a plane *z*=0 to a plane *z=d*>0 by two applications of Rayleigh’s first diffraction formula (see Sec. 4.4.2 of Ref. [21]). By making the standard paraxial physical-optics approximations, the propagation equation can be written in the following form

$$W^{\left(d\right)}\left({x}_{1},{y}_{1},{x}_{2},{y}_{2},\nu \right)=\frac{1}{{\left(\lambda d\right)}^{2}}\iiiint {W}^{\left(0\right)}\left({x}_{1}^{\prime},{y}_{1}^{\prime},{x}_{2}^{\prime},{y}_{2}^{\prime},\nu \right)$$

$$\times \mathrm{exp}\left\{\frac{i\pi}{\lambda d}\left[{\left({x}_{1}-{x}_{1}^{\prime}\right)}^{2}+{\left({y}_{1}-{y}_{1}^{\prime}\right)}^{2}-{\left({x}_{2}-{x}_{2}^{\prime}\right)}^{2}-{\left({y}_{2}-{y}_{2}^{\prime}\right)}^{2}\right]\right\}\,d{x}_{1}^{\prime}\,d{y}_{1}^{\prime}\,d{x}_{2}^{\prime}\,d{y}_{2}^{\prime},$$

where *W*^{(0)}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν*) and *W*^{(d)}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν*) represent the cross-spectral densities in the planes *z*=0 and *z=d*, respectively, the distance between the planes is assumed to be many optical wavelengths (*d*≫*λ*), and the Fresnel approximation [24] has been used.
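As a numerical sanity check on this propagation law, the sketch below (1-D, direct-sum quadrature, toy parameters) verifies that forming the cross-spectral density from Fresnel-propagated fields agrees with applying the two conjugate quadratic-phase kernels directly to *W*^{(0)}.

```python
import numpy as np

# 1-D toy check: W^(d) built from Fresnel-propagated fields equals the result of
# integrating W^(0) against exp[i*pi/(lam*d)*((x1-x1')^2 - (x2-x2')^2)] with the
# 1/(lam*d) prefactor (the 1-D analogue of the four-fold propagation integral).
lam, d = 0.5e-6, 1.0          # wavelength and propagation distance (toy values)
N, dx = 64, 5e-6
x = (np.arange(N) - N // 2) * dx

rng = np.random.default_rng(1)
V0 = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # field samples in z = 0

k = np.exp(1j * np.pi / (lam * d) * (x[:, None] - x[None, :])**2)
prefac = np.exp(1j * 2 * np.pi * d / lam) / np.sqrt(1j * lam * d)
Vd = prefac * (k @ V0) * dx                                 # Fresnel-propagated field

W0 = np.outer(V0, V0.conj())                 # W^(0)(x1, x2) for this realization
Wd_fields = np.outer(Vd, Vd.conj())          # W^(d) from the propagated fields
Wd_kernel = (k @ W0 @ k.conj().T) * dx**2 / (lam * d)   # direct kernel propagation
print(np.allclose(Wd_fields, Wd_kernel))
```

The unimodular phase of the Fresnel prefactor cancels between the two conjugate kernels, which is why only the 1/(*λd*) factor survives in the propagated cross-spectral density.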

The concept of a generalized pupil function for the scalar optical field can be applied to the cross-spectral density. If *T*(*x,y,ν*) describes the complex amplitude transmission in the plane *z*=0, such that

$${V}_{\mathrm{trans}}\left(x,y,\nu \right)=T\left(x,y,\nu \right){V}_{\mathrm{inc}}\left(x,y,\nu \right),$$

where *V*_{inc}(*x,y,ν*) represents the field incident from the half-space *z*<0 and *V*_{trans}(*x,y,ν*) represents the field transmitted into the half-space *z*>0, then by substitution into Eq. (24), one can write

$${W}_{\mathrm{trans}}^{\left(0\right)}\left({x}_{1},{y}_{1},{x}_{2},{y}_{2},\nu \right)=T\left({x}_{1},{y}_{1},\nu \right){T}^{*}\left({x}_{2},{y}_{2},\nu \right){W}_{\mathrm{inc}}^{\left(0\right)}\left({x}_{1},{y}_{1},{x}_{2},{y}_{2},\nu \right),$$

where *W*^{(0)}_{inc}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν*) and *W*^{(0)}_{trans}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν*) represent the incident and transmitted cross-spectral densities, respectively. The standard transmission function for a lens is given in Section 5.1 of Ref. [24], and the transmission function for the pupil plane *T*_{pup}(*ξ,η,ν,τ*) is given in Eq. (1). Note that the pupil transmission function is written explicitly as a function of the time-delay variable.
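The conjugation pattern introduced by a transmission function can be checked numerically; the toy 1-D sketch below (hypothetical samples) confirms that modulating the field by *T*(*x*) multiplies the cross-spectral density by *T*(*x*_{1})*T*^{*}(*x*_{2}), the same pairing that produces the *t*_{q}*t*_{p}^{*} terms in the image-plane expression.

```python
import numpy as np

# Toy check (hypothetical 1-D samples): passing the field through T(x)
# multiplies the cross-spectral density W(x1, x2) ~ <V(x1) V*(x2)>
# by T(x1) T*(x2).
rng = np.random.default_rng(2)
N = 32
V_inc = rng.standard_normal(N) + 1j * rng.standard_normal(N)        # incident field
T = rng.uniform(0.5, 1.0, N) * np.exp(1j * rng.uniform(0, 2 * np.pi, N))

W_inc = np.outer(V_inc, V_inc.conj())               # incident cross-spectral density
W_trans = np.outer(T * V_inc, (T * V_inc).conj())   # transmitted cross-spectral density
ok = np.allclose(W_trans, np.outer(T, T.conj()) * W_inc)
print(ok)    # True
```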

The cross-spectral density is propagated through the multi-aperture FTIS system shown in Fig. 2 by repeated application of Eqs. (25) and (27). After simplification, the cross-spectral density in the image plane *W*^{(i)}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν,τ*) can be expressed as

$$\times \mathrm{exp}\left[\frac{i\pi}{\lambda {f}_{o}}\left(1-\frac{{d}_{1}}{{f}_{o}}\right)\left(\frac{{x\prime}_{1}^{2}}{{M}^{2}}+\frac{{y\prime}_{1}^{2}}{{M}^{2}}-\frac{{x\prime}_{2}^{2}}{{M}^{2}}-\frac{{y\prime}_{2}^{2}}{{M}^{2}}\right)\right]$$

$$\times \mathrm{exp}\left[\frac{i\pi}{\lambda {f}_{i}}\left(1-\frac{{d}_{2}}{{f}_{i}}\right)\left({{x}_{1}}^{2}+{{y}_{1}}^{2}-{{x}_{2}}^{2}-{{y}_{2}}^{2}\right)\right]$$

$$\times \sum _{q=1}^{Q}\sum _{p=1}^{Q}{t}_{q}\left({x}_{1}-{x}_{1}^{\prime},{y}_{1}-{y}_{1}^{\prime},\nu \right){t}_{p}^{*}\left({x}_{2}-{x}_{2}^{\prime},{y}_{2}-{y}_{2}^{\prime},\nu \right)$$

$$\times \mathrm{exp}\left[i2\pi \nu \left({\gamma}_{q}-{\gamma}_{p}\right)\tau \right],$$

where *W*^{(o)}(*x*_{1},*y*_{1},*x*_{2},*y*_{2},*ν,τ*) is the cross-spectral density in the object plane. The image intensity *I*(*x,y,τ*) is related to the cross-spectral density by [21]

$$I\left(x,y,\tau \right)={\int }_{0}^{\infty }{W}^{\left(i\right)}\left(x,y,x,y,\nu ,\tau \right)\,d\nu .$$

For a spatially incoherent object, the object spectral density can be written as [25]

$${W}^{\left(o\right)}\left({x}_{1}^{\prime},{y}_{1}^{\prime},{x}_{2}^{\prime},{y}_{2}^{\prime},\nu \right)=\kappa \,{S}_{o}\left({x}_{1}^{\prime},{y}_{1}^{\prime},\nu \right)\delta \left({x}_{1}^{\prime}-{x}_{2}^{\prime}\right)\delta \left({y}_{1}^{\prime}-{y}_{2}^{\prime}\right),$$

where *S*_{o}(*x′,y′,ν*) is the spectral density of the object and *κ*=*λ*^{2}/*π* for a perfectly incoherent object. Substituting Eqs. (28) and (30) into Eq. (29) and simplifying yields Eq. (2).
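The final simplification can also be checked numerically. In a discretized 1-D model (toy amplitude PSF, hypothetical numbers), a diagonal object cross-spectral density, the analogue of Eq. (30), makes the image intensity of Eq. (29) reduce to a convolution of the object spectral density with the squared magnitude of the amplitude PSF:

```python
import numpy as np

# Toy 1-D incoherent-imaging check: with a diagonal object cross-spectral
# density W_obj = diag(S), the intensity I = diag(H W_obj H^dagger) equals the
# circular convolution of S with |h|^2, where H is the circulant matrix of the
# amplitude PSF h. Spatial coherence effects drop out, as in the derivation.
N = 64
x = np.arange(N)
h = np.exp(-0.5 * ((x - N // 2) / 2.0)**2) * np.exp(1j * 0.3 * x)     # toy amplitude PSF
H = np.array([[h[(i - j) % N] for j in range(N)] for i in range(N)])  # circulant imaging matrix

S = np.zeros(N)
S[20], S[40] = 1.0, 0.5                      # object spectral density (two point sources)
W_obj = np.diag(S)                           # incoherent object: delta-correlated

I = np.real(np.diag(H @ W_obj @ H.conj().T))                          # intensity on the diagonal
I_conv = np.real(np.fft.ifft(np.fft.fft(np.abs(h)**2) * np.fft.fft(S)))  # |h|^2 * S (circular)
print(np.allclose(I, I_conv))
```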

## Acknowledgment

This work was supported by Lockheed Martin Corporation.

## References and Links

**1. **J. S. Fender, “Synthetic apertures: an overview,” in *Synthetic Aperture Systems*, J. S. Fender, ed., Proc. SPIE **440**, 2–7 (1983).

**2. **S.-J. Chung, D. W. Miller, and O. L. de Weck, “Design and implementation of sparse aperture imaging systems,” in *Highly Innovative Space Telescope Concepts*, H. A. MacEwen, ed., Proc. SPIE **4849**, 181–191 (2002).

**3. **D. Redding, S. Basinger, A. E. Lowman, A. Kissil, P. Bely, R. Burg, and R. Lyon, “Wavefront sensing for a next generation space telescope,” in *Space Telescopes and Instruments V*, P. Y. Bely and J. B. Breckinridge, eds., Proc. SPIE **3356**, 758–772 (1998).

**4. **R. L. Kendrick, A. L. Duncan, and R. Sigler, “Imaging Fizeau interferometer: experimental results,” presented at Frontiers in Optics, Tucson, Arizona, 5–9 Oct. 2003 (post-deadline paper 15).

**5. **R. G. Paxman, T. J. Schulz, and J. R. Fienup, “Joint estimation of object and aberrations by using phase diversity,” J. Opt. Soc. Am. A **9**, 1072–1085 (1992). [CrossRef]

**6. **J. R. Fienup, “MTF and integration time versus fill factor for sparse-aperture imaging systems,” in *Imaging Technologies and Telescopes*, J. W. Bilbro, et al., eds., Proc. SPIE **4091**, 43–47 (2000).

**7. **J. R. Fienup, D. Griffith, L. Harrington, A. M. Kowalczyk, J. J. Miller, and J. A. Mooney, “Comparison of reconstruction algorithms for images from sparse-aperture systems,” in *Image Reconstruction from Incomplete Data II*, P. J. Bones, et al., eds., Proc. SPIE **4792**, 1–8 (2002).

**8. **J. Kauppinen and J. Partanen, *Fourier Transforms in Spectroscopy* (Wiley-VCH, Berlin, 2001). [CrossRef]

**9. **N. J. E. Johnson, “Spectral imaging with the Michelson interferometer,” in *Infrared Imaging Systems Technology*, Proc. SPIE **226**, 2–9 (1980).

**10. **C. L. Bennett, M. Carter, D. Fields, and J. Hernandez, “Imaging Fourier transform spectrometer,” in *Imaging Spectrometry of the Terrestrial Environment*, G. Vane, ed., Proc. SPIE **1937**, 191–200 (1993).

**11. **M. R. Carter, C. L. Bennett, D. J. Fields, and F. D. Lee, “Livermore imaging Fourier transform infrared spectrometer,” in *Imaging Spectrometry*, M. R. Descour, J. M. Mooney, D. L. Perry, and L. R. Illing, eds., Proc. SPIE **2480**, 380–386 (1995).

**12. **K. Itoh and Y. Ohtsuka, “Fourier transform spectral imaging: retrieval of source information from three-dimensional spatial coherence,” J. Opt. Soc. Am. A **3**, 94–100 (1986). [CrossRef]

**13. **J.-M. Mariotti and S. T. Ridgway, “Double Fourier spatio-spectral interferometry: combining high spectral and high spatial resolution in the near infrared,” Astron. Astrophys. **195**, 350–363 (1988).

**14. **M. Frayman and J. A. Jamieson, “Scene imaging and spectroscopy using a spatial spectral interferometer,” in *Amplitude and Intensity Spatial Interferometry*, J. B. Breckinridge, ed., Proc. SPIE **1237**, 585–603 (1990).

**15. **R. L. Kendrick, E. H. Smith, and A. L. Duncan, “Imaging Fourier transform spectrometry with a Fizeau interferometer,” in *Interferometry in Space*, M. Shao, ed., Proc. SPIE **4852**, 657–662 (2003).

**16. **Provided through the courtesy of Jet Propulsion Laboratory, California Institute of Technology, Pasadena, California (http://aviris.jpl.nasa.gov/).

**17. **C. W. Helstrom, “Image restoration by the method of least squares,” J. Opt. Soc. Am. **57**, 297–303 (1967). [CrossRef]

**18. **B. R. Hunt, “Super-resolution of images: algorithms, principles, performance,” Int. J. Imaging Syst. Technol. **6**, 297–304 (1995). [CrossRef]

**19. **S. T. Thurman and J. R. Fienup, “Fourier transform imaging spectroscopy with a multiple-aperture telescope: band-by-band image reconstruction,” in *Optical, Infrared, and Millimeter Space Telescopes*, J. C. Mather, ed., Proc. SPIE 5487-68 (2004).

**20. **S. T. Thurman and J. R. Fienup, “Reconstruction of multispectral image cubes from multiple-telescope array Fourier transform imaging spectrometer,” presented at Frontiers in Optics, Rochester, New York, 10–14 Oct. 2004, paper FTuB3.

**21. **L. Mandel and E. Wolf, *Optical Coherence and Quantum Optics* (Cambridge University Press, Cambridge, 1995).

**22. **M. Born and E. Wolf, *Principles of Optics*, 7th (expanded) ed. (Cambridge University Press, Cambridge, 2002), Sec. 10.2.

**23. **J. W. Goodman, *Statistical Optics* (Wiley, New York, 2000), Sec. 3.5.

**24. **J. W. Goodman, *Introduction to Fourier Optics*, 2nd ed. (McGraw-Hill, New York, 1996).

**25. **M. J. Beran and G. B. Parrent Jr., “The mutual coherence of incoherent radiation,” Nuovo Cimento **27**, 1049–1065 (1963). [CrossRef]