## Abstract

The resolution of a diffraction-limited imaging system is inversely proportional to the aperture size. Instead of using a single large aperture, multiple small apertures can be used to synthesize a large aperture. Such a multi-aperture system is modular, and typically more reliable and less costly. On the other hand, a multi-aperture system requires phasing the sub-apertures to within a fraction of a wavelength. So far in the literature, only the piston, tip, and tilt type of inter-aperture errors have been addressed. In this paper, we present an approach to correct for rotational and translational errors as well.

© 2012 OSA

## 1. Introduction

A conventional imaging system with a circular aperture and no aberrations is said to be diffraction-limited, and its point spread function (PSF) is an Airy function whose width is inversely proportional to the aperture diameter. However, aberrations present in the aperture necessarily spread energy from the PSF peak, resulting in blurred imagery. Resolution improves with increasing aperture diameter, but unfortunately so do the weight, volume, and cost. Instead of using a single large aperture, multiple small apertures can be combined to synthesize a large aperture. This aperture synthesis requires phasing the sub-apertures to within a fraction of a wavelength, which is challenging when the wavelength is on the order of one micron. The multi-aperture imaging system described here uses a coherent detection method to measure the field (both amplitude and phase) within each sub-aperture. In a digital, post-detection process, each measured sub-aperture field is placed into a common pupil plane corresponding to the physical location of the sub-aperture. The composite pupil plane field is then digitally propagated to the image plane. If no relative phase errors exist between the sub-apertures, the sub-apertures are said to be phased. If the sub-apertures are diffraction-limited and phased, an image is formed with improved resolution based on the synthetic array dimensions [1].

#### 1.1. Optical field measurement via spatial heterodyne detection

In order to synthesize a large aperture from multiple small apertures, the complex-valued field radiating off the object must be measured within each sub-aperture. The object is flood illuminated with a coherent laser source. Coherent detection is accomplished at the receiver by optically mixing the field radiating off the object with a local oscillator reference beam [2, 3]. In the spatial heterodyne coherent detection technique, a two-dimensional detector array at a sub-aperture records

$${I}_{k}(x,y)={\left|{a}_{k}(x,y)+r(x,y)\right|}^{2},$$

where ${a}_{k}(x,y)$ is the complex object field captured at the $k$th sub-aperture and $r(x,y)={e}^{-j2\pi ({u}_{0}x+{v}_{0}y)}$ is the tilted reference beam, also known as the local oscillator. The complex-valued field is obtained by taking the Fourier transform of ${I}_{k}(x,y)$. Defining ${\widehat{a}}_{k}(u,v)$ and $\widehat{r}(u,v)$ as the Fourier transforms of the pupil plane field ${a}_{k}(x,y)$ and the local oscillator $r(x,y)$, the Fourier transform of ${I}_{k}(x,y)$ becomes

$$\mathcal{F}\left({I}_{k}(x,y)\right)={R}_{{\widehat{a}}_{k}}(u,v)+{R}_{\widehat{r}}(u,v)+{\widehat{a}}_{k}\left(u-{u}_{0},v-{v}_{0}\right)+{\widehat{a}}_{k}^{\ast}\left(-u-{u}_{0},-v-{v}_{0}\right),$$

where $R(\cdot)$ is the autocorrelation function, and the offset $({u}_{0},{v}_{0})$ is due to the tilted local oscillator in the spatial heterodyne mixing. The ${\widehat{a}}_{k}(u,v)$ term is spatially separated from its conjugate term and the autocorrelation terms by virtue of the spatial offset. Therefore, the desired pupil plane field, ${\widehat{a}}_{k}(u,v)$, is obtained by cropping the region of $\mathcal{F}\left({I}_{k}(x,y)\right)$ around $({u}_{0},{v}_{0})$. In addition to the spatial heterodyne technique, the object field ${a}_{k}(x,y)$ can be measured using other interferometric methods such as phase shifting interferometry or estimated from the object intensity using a phase retrieval algorithm [4, 5].
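As a concrete illustration, the FFT-and-crop recovery described above can be sketched in NumPy. This is a minimal sketch under simplifying assumptions: the pupil field is band-limited, the carrier offset $(u_0, v_0)$ is given in whole FFT bins, and the name `extract_field` is illustrative, not from the paper.

```python
import numpy as np

def extract_field(intensity, u0, v0, half):
    """Recover the complex sub-aperture field from a spatial-heterodyne
    record: Fourier transform the intensity and crop a square of
    half-width `half` around the carrier offset (u0, v0), in FFT bins."""
    F = np.fft.fftshift(np.fft.fft2(intensity))
    cy, cx = intensity.shape[0] // 2, intensity.shape[1] // 2
    return F[cy + v0 - half:cy + v0 + half, cx + u0 - half:cx + u0 + half]

rng = np.random.default_rng(0)
N, B = 256, 8                      # grid size, field half-bandwidth in bins
u0 = v0 = 64                       # carrier offset of the tilted reference

# band-limited complex pupil field a_k(x, y), defined via its spectrum
spec = np.zeros((N, N), complex)
spec[N//2-B:N//2+B, N//2-B:N//2+B] = (rng.standard_normal((2*B, 2*B))
                                      + 1j * rng.standard_normal((2*B, 2*B)))
a = np.fft.ifft2(np.fft.ifftshift(spec))

# the detector records intensity only: |a_k + r|^2 with tilted reference r
X, Y = np.meshgrid(np.arange(N), np.arange(N))
r = np.exp(-2j * np.pi * (u0 * X + v0 * Y) / N)
I = np.abs(a + r) ** 2

# the cross term around (u0, v0) isolates the spectrum of a_k
crop = extract_field(I, u0, v0, B)
```

Because the carrier offset exceeds twice the field bandwidth, the cropped region contains only the desired cross term, free of the autocorrelation and conjugate terms.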

#### 1.2. Aperture synthesis

The field radiating from the object is measured across each of the $K$ sub-aperture pupils. Each measured pupil plane field ${a}_{k}(x,y)$ is placed on a blank aperture field at its corresponding spatial location to form a composite pupil field. Mathematically, the composite pupil field can be written as ${a}_{\mathit{comp}(1:K)}(x,y)={\sum}_{k=1}^{K}{a}_{k}\left(x-{x}_{k},y-{y}_{k}\right)$, where $({x}_{k},{y}_{k})$ is the center location of the $k$th sub-aperture, and $K$ is the number of sub-apertures. (The notation ${a}_{\mathit{comp}(1:K)}(x,y)$ indicates that all sub-apertures from 1 to $K$ are used to form the composite, as opposed to ${a}_{\mathit{comp}(1,k)}(x,y)$, where only the first and the $k$th sub-apertures are used.) The composite pupil field is then digitally propagated to the image plane, where the magnitude squared of the field becomes the intensity image. In Fig. 1, we show realizations $b(u,v)={\left|{\widehat{a}}_{\mathit{comp}(1:K)}(u,v)\right|}^{2}$ obtained with single-aperture ($K=1$) and multi-aperture ($K=3$) configurations. First, notice that a single realization is of poor quality due to speckle noise. When multiple realizations are averaged, the image quality improves, since the statistically independent realizations of speckle noise tend to cancel out. It is known that the speckle contrast is inversely proportional to the square root of the number of realizations averaged [6]. Second, notice that in the three-aperture configuration, the resolution improves along the horizontal axis but not the vertical axis, which is due to the arrangement of the sub-apertures [7]. (A more detailed description of the simulations is given in Section 3.)

The imaging process requires correction of aberrations, including atmospheric turbulence, intra-aperture aberrations such as defocus, astigmatism, coma, and spherical aberration, and inter-aperture aberrations such as piston, tip, and tilt. It has been shown that these aberrations can be modeled as phase distortions in the pupil plane and fixed through phase correction (also known as phasing). A commonly used method is to define a sharpness measure on the object image and determine the optimal weights of Zernike polynomials applied to the phases of the sub-aperture fields ${a}_{k}(x,y)$ [8–11]. The Zernike polynomials, illustrated in Fig. 2, form an orthogonal basis, and have been used extensively for identifying and fixing lens aberrations over circular apertures [12].

So far in the literature, only the piston, tip, and tilt type of inter-aperture errors have been addressed. In this paper, we show how to fix rotational and translational errors in addition to piston/tip/tilt errors. We will show that rotational errors can be fixed through a coordinate transformation, followed by a Fourier-domain phase correction, while translational errors and piston/tip/tilt errors are fixed in the image plane and the pupil plane, respectively.

In Sections 2.1 and 2.2, we will review the Zernike-based intra-aperture and inter-aperture (piston/tip/tilt) corrections, respectively. We will then present our underlying idea for correcting rotational errors in Section 2.3, and translational errors in Section 2.4. The algorithm to fix intra- and inter-aperture aberrations (including piston/tip/tilt, rotation, and translation) in multi-aperture systems is presented in Section 2.5. In Section 3, we will provide experimental results to show the performance of the algorithm. Conclusions are given in Section 4.

## 2. Phase correction of sub-apertures

#### 2.1. Intra-aperture correction

Intra-aperture aberrations, including defocus, astigmatism, coma, and spherical aberration, can be modeled as phase distortions of the pupil field ${a}_{k}(x,y)$, and can be corrected by determining the Zernike polynomial weights that optimize a measure $S(\cdot)$ applied to the constructed image $b(u,v)$. Depending on whether the measure $S(\cdot)$ outputs a low or a high value for sharp images, the optimization is either a minimization or a maximization problem. In this paper, we use the convention of minimizing the measure $S(\cdot)$. As the correction is applied to the phase of each pupil field, the optimal Zernike coefficients ${\hat{w}}_{k,p}$ for the $k$th sub-aperture are

$$\left({\hat{w}}_{k,1},\ldots,{\hat{w}}_{k,P}\right)=\underset{{w}_{k,1},\ldots,{w}_{k,P}}{\mathrm{arg\,min}}\; S\left({\left|\mathcal{F}\left({a}_{k}(x,y)\,{e}^{j{\sum}_{p=1}^{P}{w}_{k,p}{Z}_{p}(x,y)}\right)\right|}^{2}\right),$$

where ${Z}_{1}(x,y),\ldots,{Z}_{P}(x,y)$ are the Zernike polynomials used. When there are multiple realizations, the input to the measure $S(\cdot)$ is the average of all realizations.
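The Zernike-weight search over a sharpness measure can be sketched numerically. This is a minimal single-coefficient example under simplifying assumptions: only the defocus term $Z_2^0$ is fit, a sparse synthetic object stands in for real data, and SciPy's Nelder-Mead optimizer stands in for the paper's optimizer; the names `zernike_defocus` and `power_measure` are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def zernike_defocus(n):
    """Z_2^0 (defocus) = 2*rho^2 - 1, sampled on an n x n unit-disk grid
    (zero outside the disk)."""
    y, x = (np.mgrid[:n, :n] - (n - 1) / 2) / ((n - 1) / 2)
    rho2 = x**2 + y**2
    return np.where(rho2 <= 1.0, 2.0 * rho2 - 1.0, 0.0)

def power_measure(w, pupil, Z):
    """S = sum |b(u,v)|^0.5 of the image formed from the re-phased pupil;
    lower values correspond to sharper images."""
    b = np.abs(np.fft.fft2(pupil * np.exp(1j * w[0] * Z))) ** 2
    return float(np.sum(b ** 0.5))

n = 64
Z = zernike_defocus(n)

# sparse synthetic object -> pupil field, then add a known defocus error
rng = np.random.default_rng(2)
obj = np.zeros((n, n), complex)
obj[rng.integers(0, n, 5), rng.integers(0, n, 5)] = np.exp(2j * np.pi * rng.random(5))
pupil = np.fft.ifft2(obj) * np.exp(1j * 1.5 * Z)   # aberrated measurement

# search the defocus coefficient that minimizes the power measure
res = minimize(power_measure, x0=[0.0], args=(pupil, Z), method="Nelder-Mead")
```

Extending to $P$ coefficients only changes the length of the parameter vector handed to the optimizer.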

#### 2.2. Piston/Tip/Tilt correction

The inter-aperture aberrations piston, tip, and tilt can also be modeled as phase distortions on the pupil plane and typically corrected using the Zernike polynomials in a similar way. In case of multiple sub-apertures, one of the sub-apertures is taken as the reference sub-aperture, and all other sub-apertures are corrected with respect to the reference. Setting the first sub-aperture as the reference, the composite of the first and the $k$th fields is

$${a}_{\mathit{comp}(1,k)}(x,y)={a}_{1}\left(x-{x}_{1},y-{y}_{1}\right)+{a}_{k}\left(x-{x}_{k},y-{y}_{k}\right){e}^{j{\sum}_{p}{w}_{k,p}{Z}_{p}\left(x-{x}_{k},y-{y}_{k}\right)},$$

where the ${Z}_{p}$ include ${Z}_{0}^{0}$, ${Z}_{1}^{-1}$, and ${Z}_{1}^{1}$ from Fig. 2, and ${w}_{k,p}$ are the coefficients of these polynomials. The optimal values of these coefficients are determined by

$${\hat{w}}_{k,p}=\underset{{w}_{k,p}}{\mathrm{arg\,min}}\; S\left({\left|\mathcal{F}\left({a}_{\mathit{comp}(1,k)}(x,y)\right)\right|}^{2}\right).$$

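The reference-based inter-aperture correction can be illustrated with a piston-only toy example. This is a sketch under stated assumptions: two disjoint sub-apertures sample one speckled field, a coarse grid search stands in for the gradient-based optimizer, and the names `correct_piston`, `image`, and `S` are illustrative.

```python
import numpy as np

def image(comp):
    """Image intensity of a composite pupil field."""
    return np.abs(np.fft.fft2(comp)) ** 2

def S(b):
    """Power measure sum |b|^0.5; lower is sharper."""
    return float(np.sum(b ** 0.5))

def correct_piston(a_ref, a_k, n_grid=64):
    """Grid-search the piston phase w (coefficient of Z_0^0) applied to
    the k-th sub-aperture that minimizes S of the two-aperture composite."""
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    scores = [S(image(a_ref + a_k * np.exp(1j * w))) for w in grid]
    return grid[int(np.argmin(scores))]

# two disjoint sub-apertures sampling one speckled pupil field
rng = np.random.default_rng(3)
N = 128
field = np.fft.ifft2(rng.standard_normal((N, N))
                     + 1j * rng.standard_normal((N, N)))
m1 = np.zeros((N, N)); m1[32:64, 48:80] = 1     # reference sub-aperture
m2 = np.zeros((N, N)); m2[64:96, 48:80] = 1     # k-th sub-aperture
a1 = field * m1
a2 = field * m2 * np.exp(-1j * 1.0)             # unknown piston error

w_hat = correct_piston(a1, a2)
```

Tip and tilt add two linear phase ramps over the sub-aperture support, searched the same way with two extra coefficients.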
#### 2.3. Rotation correction

In addition to the piston/tip/tilt errors, it is possible that the sensor at a sub-aperture is rotated by some angle $\theta$ about the optical axis. In other words, the measured pupil plane field at a sub-aperture is ${a}_{k}(x',y')$ instead of ${a}_{k}(x,y)$, where $(x',y')=(x\cos\theta -y\sin\theta ,\; x\sin\theta +y\cos\theta )$. Such a rotational error cannot be fixed with pupil plane phasing using Zernike polynomials, and it would also degrade the accuracy of the piston/tip/tilt correction.

The problem can be solved by transforming the pupil plane field from the Cartesian coordinate system to the polar coordinate system, where the rotation becomes a circular shift along the angular axis. (The polar coordinates are calculated as $(\rho ,\theta )=(\sqrt{{x}^{2}+{y}^{2}},{\mathit{tan}}^{-1}(y/x))$; an example is provided in Fig. 3.) Defining ${a}_{k}^{(\mathit{polar})}(\rho ,\varphi )$ as the coordinate transformed version of ${a}_{k}(x,y)$, the rotated field ${a}_{k}(x',y')$ will have the polar version ${a}_{k}^{(\mathit{polar})}(\rho ,\varphi +\theta )$. Let ${\widehat{a}}_{k}^{(\mathit{polar})}({u}_{\rho},{v}_{\varphi})$ be the Fourier transform of ${a}_{k}^{(\mathit{polar})}(\rho ,\varphi )$; then the shifted version ${a}_{k}^{(\mathit{polar})}(\rho ,\varphi +\theta )$ will have a Fourier transform of ${\widehat{a}}_{k}^{(\mathit{polar})}\left({u}_{\rho},{v}_{\varphi}\right){e}^{-j2\pi \theta {v}_{\varphi}}$. That is, a rotational error in ${a}_{k}(x,y)$ corresponds to a linear phase shift in ${\widehat{a}}_{k}^{(\mathit{polar})}({u}_{\rho},{v}_{\varphi})$. This phase shift can be corrected in a way similar to the piston/tip/tilt errors. Following the same convention, the rotational correction of the $k$th sub-aperture is achieved by determining the optimal coefficient

$${\hat{w}}_{k}=\underset{{w}_{k}}{\mathrm{arg\,min}}\; S\left(b(u,v)\right),\quad \text{with}\quad {a}_{k}\leftarrow {\mathcal{P}}^{-1}\left\{{\mathcal{F}}^{-1}\left\{\mathcal{F}\left\{\mathcal{P}\left\{{a}_{k}\right\}\right\}\,{e}^{j{w}_{k}{Z}_{1}^{1}}\right\}\right\},$$

where the $\mathcal{P}$ operator transforms from Cartesian to polar coordinates, and $\mathcal{P}^{-1}$ is the inverse transformation.

Notice that we use the Zernike polynomial ${Z}_{1}^{1}$, which corresponds to a linear phase shift along the horizontal (that is, the angular) axis. However, one should be careful that the domain of the polynomial is now rectangular, unlike the previous cases, where it is circular. (Therefore, strictly speaking, the term Zernike should be avoided; the polynomial can simply be referred to as a linear phase polynomial.)

Here, one may argue for fixing the rotational error directly in the pupil plane by optimizing over the angle. This approach, however, is problematic as it requires resampling of the pupil field and is not as accurate as the Fourier domain approach. In fact, within the context of incoherent image registration, Fourier domain phase correlation has been demonstrated to achieve subpixel-accurate registration with much less computational complexity compared to spatial domain approaches [13].
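The Fourier-domain mechanism behind the rotation correction — a circular shift along the angular axis realized as a linear phase ramp — can be verified in a few lines. This sketch assumes the field is already in polar format with the angular axis along columns; the name `angular_shift` is illustrative.

```python
import numpy as np

def angular_shift(polar_field, shift_bins):
    """Circularly shift a polar-format field a(rho, phi) along the
    angular (phi) axis by multiplying its 1-D FFT with a linear phase
    ramp, per the Fourier shift theorem."""
    n_phi = polar_field.shape[1]
    v = np.fft.fftfreq(n_phi)                 # angular-frequency axis
    ramp = np.exp(-2j * np.pi * v * shift_bins)
    return np.fft.ifft(np.fft.fft(polar_field, axis=1) * ramp, axis=1)

rng = np.random.default_rng(4)
a_polar = rng.standard_normal((16, 32)) + 1j * rng.standard_normal((16, 32))
shifted = angular_shift(a_polar, 5)   # equals np.roll(a_polar, 5, axis=1)
```

A rotation by $\theta$ maps to `shift_bins` = $\theta/\Delta\varphi$, where $\Delta\varphi$ is the angular sampling interval; in the correction, the ramp coefficient is what the optimizer searches.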

#### 2.4. Shift correction

Now, suppose that the actual position of the $k$th sub-aperture is offset by an amount $(\delta {x}_{k},\delta {y}_{k})$; that is, the measurement of the camera is ${I}_{k}(x+\delta {x}_{k},y+\delta {y}_{k})$. This spatial shift (translational error) will correspond to a phase shift in the image plane:

$$\mathcal{F}\left({I}_{k}(x+\delta {x}_{k},y+\delta {y}_{k})\right)=\mathcal{F}\left({I}_{k}(x,y)\right)\,{e}^{-j2\pi \left(\delta {x}_{k}u+\delta {y}_{k}v\right)}.$$

The local region around $({u}_{0},{v}_{0})$ is then ${\widehat{a}}_{k}(u,v)\,{e}^{-j2\pi \left(\delta {x}_{k}u+\delta {y}_{k}v\right)}\,{e}^{-j2\pi \left(\delta {x}_{k}{u}_{0}+\delta {y}_{k}{v}_{0}\right)}$. That is, the phase distortion in ${\widehat{a}}_{k}(u,v)$ consists of a constant term and two linear terms, one in each direction. This can be handled by the first three Zernike polynomials, ${Z}_{0}^{0}$, ${Z}_{1}^{-1}$, and ${Z}_{1}^{1}$.

#### 2.5. Proposed phasing algorithm

The overall idea is illustrated in Fig. 4. The corrections are done in three domains: the intra-aperture and piston/tip/tilt corrections are done on the pupil plane field; the rotation correction is done on the Fourier transform of the polar transformed pupil plane field; and the shift correction is done on the image plane field. The transformations between these domains are invertible; therefore, the signal can be transformed to a particular domain, corrected for a certain type of aberration, and then transformed to another domain. In all cases, the optimization is done by minimizing the measure *S*(·), which is applied on *b*(*u,v*).

The proposed algorithm to correct for intra- and inter-aperture aberrations is given in Algorithm 1. First, the intra-aperture aberrations are corrected. Then, the inter-aperture piston/tip/tilt, rotation, and translation corrections are done iteratively. In the first iteration, one of the sub-apertures is set as the reference sub-aperture; each of the remaining sub-apertures is corrected with respect to the reference sub-aperture sequentially. For the rest of the iterations, the input to the sharpness measure includes all sub-apertures. When all the sub-apertures are included the resulting image has a finer spatial resolution, which yields a more accurate parameter estimation. In our experiments, two iterations were sufficient, and subsequent iterations did not produce significant visual quality improvement.
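The iterative structure — pairwise against the reference on the first pass, all apertures thereafter — can be sketched as a miniature of the algorithm. This toy version makes strong simplifications: it corrects piston only, uses a grid search instead of a gradient optimizer, and the name `phase_algorithm` is illustrative.

```python
import numpy as np

def image(comp):
    return np.abs(np.fft.fft2(comp)) ** 2

def S(b):
    """Power measure sum |b|^0.5; lower is sharper."""
    return float(np.sum(b ** 0.5))

def phase_algorithm(subs, n_iter=2, n_grid=64):
    """Miniature of the proposed algorithm, restricted to piston-only
    inter-aperture correction. Iteration 1 phases each sub-aperture
    against the reference (index 0) pairwise; later iterations
    re-optimize each piston with all sub-apertures contributing."""
    w = np.zeros(len(subs))
    grid = np.linspace(-np.pi, np.pi, n_grid, endpoint=False)
    for it in range(n_iter):
        for k in range(1, len(subs)):
            best = None
            for wk in grid:
                if it == 0:       # pairwise composite on the first pass
                    comp = subs[0] + subs[k] * np.exp(1j * wk)
                else:             # all apertures on later passes
                    phases = w.copy(); phases[k] = wk
                    comp = sum(s * np.exp(1j * p)
                               for s, p in zip(subs, phases))
                score = S(image(comp))
                if best is None or score < best[0]:
                    best = (score, wk)
            w[k] = best[1]
    return w

rng = np.random.default_rng(6)
N = 64
field = np.fft.ifft2(rng.standard_normal((N, N))
                     + 1j * rng.standard_normal((N, N)))
masks = [np.zeros((N, N)) for _ in range(3)]
masks[0][8:24, 24:40] = 1
masks[1][24:40, 24:40] = 1
masks[2][40:56, 24:40] = 1
errors = [0.0, 2.0, -1.5]                      # unknown piston errors
subs = [field * m * np.exp(1j * e) for m, e in zip(masks, errors)]

w_hat = phase_algorithm(subs)
```

In the full algorithm, the inner update would cycle through the piston/tip/tilt, rotation, and shift corrections in their respective domains before moving to the next sub-aperture.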

## 3. Experimental evaluation

The imaging simulation models an optically diffuse object at range *L*, which is flood illuminated by a coherent laser source of wavelength *λ*. The complex-valued field reflected off the object is modeled with amplitude equal to the square root of the object’s intensity reflectance and phase a uniformly distributed random variable over −*π* to *π*. The complex-valued field in the receiver plane, subject to the paraxial approximation, is given by the Fresnel diffraction integral [14]. Analytic evaluation of this Fresnel diffraction integral is difficult for all but a few very simple object geometries. Therefore, the Fresnel diffraction integral was numerically evaluated using the angular spectrum propagation method [15]. However, the angular spectrum method of wave propagation is limited by discrete Fourier transform wraparound effects that occur when the wavefront spreads in the transverse dimensions and reflects at the computational grid boundaries. This wraparound effect is the most onerous limitation for wavefront propagations from diffuse objects, which have inherently large divergence. In order to mitigate this wraparound effect, the wave propagation is performed in multiple partial propagations. After each partial propagation, the wavefront energy reflected at the computational grid boundary is absorbed by an annular attenuating function. The central, on-axis region of the propagating wavefront is unattenuated. The partial propagation distance is limited such that the spreading wavefront does not reflect into the central, on-axis region.
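The split-propagation scheme can be sketched as follows. This is a simplified sketch, not the paper's simulation code: it uses the paraxial (Fresnel) transfer function, a super-Gaussian edge attenuator as one possible annular function, and a small grid; the names `angular_spectrum_step`, `absorb_edges`, and `propagate` are illustrative.

```python
import numpy as np

def angular_spectrum_step(field, wavelength, dx, dz):
    """One angular-spectrum propagation step over distance dz, using the
    paraxial (Fresnel) free-space transfer function."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def absorb_edges(field, inner_frac=0.7):
    """Annular attenuator: unity inside inner_frac of the half-width,
    falling smoothly (super-Gaussian) to near zero at the grid edge."""
    n = field.shape[0]
    r = np.hypot(*np.meshgrid(*2 * [np.linspace(-1, 1, n)]))
    w = np.exp(-8.0 * (np.maximum(r - inner_frac, 0) / (1 - inner_frac)) ** 4)
    return field * w

def propagate(field, wavelength, dx, z_total, n_steps):
    """Split the propagation into partial steps with edge absorption to
    mitigate FFT wraparound from diverging diffuse-object wavefronts."""
    for _ in range(n_steps):
        field = angular_spectrum_step(field, wavelength, dx, z_total / n_steps)
        field = absorb_edges(field)
    return field

# toy run with the paper's parameters on a small grid
out = propagate(np.ones((64, 64), complex), 1.55e-6, 182e-6, 100.0, 10)
```

Each propagation step is unitary and the attenuator can only remove energy, so the total energy is non-increasing, which is exactly the intended absorbing-boundary behavior.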

The object plane and receiver pupil planes in the simulation consisted of *N* = 2048 × 2048 computational grids with identical 182*μ*m sample spacings in both planes. The optical wavelength, *λ*, is 1.55*μ*m, and the range, *L*, from the receive pupil plane to the object is 100 meters. The numerical propagation consisted of 10 partial propagations of 10 meters each to avoid the wraparound effects described above.

The optical field in the receiver pupil plane was measured in three sub-apertures with 48mm diameters and 70mm center-to-center spacings. In Fig. 5, we display the focused image after averaging 60 realizations in the three-aperture configuration. This is the best possible image that can be achieved when there are no aberrations.

To simulate inter-aperture aberrations, we added random piston/tip/tilt and rotation errors to each sub-aperture. Figures 6(a), 6(b), and 6(c) show the average realizations in each sub-aperture. The first step of the algorithm is the intra-aperture correction, performed as described in Section 2.1. The Zernike polynomials used are ${Z}_{2}^{-2}$, ${Z}_{2}^{0}$, ${Z}_{2}^{2}$, and ${Z}_{4}^{2}$. The optimization is done using the *fminunc* function of MATLAB, minimizing the well-known power measure ${\sum}_{u,v}{\left|b(u,v)\right|}^{0.5}$ [10]. The output of this step is given in Figs. 6(d), 6(e), and 6(f). As shown in Fig. 6(g), the composite at this point suffers from piston/tip/tilt and rotation errors. Next, we perform the inter-aperture corrections as described in Algorithm 1, using the same optimization technique and sharpness measure. Figure 6(h) shows the result with piston/tip/tilt correction only. Figure 6(i) shows the result when both piston/tip/tilt and rotation/shift corrections are done. Figure 7 shows selected zoomed regions to highlight the effectiveness of the algorithm. (Compare the recovered result in Fig. 7(d) with the best possible result given in Fig. 5.)

## 4. Conclusions

In this paper, we present a method to correct for rotational and translational errors, in addition to piston/tip/tilt errors, in multi-aperture coherent imaging. For rotational correction, the pupil field is transformed from Cartesian to polar coordinates, where a rotation becomes a circular shift, which is then fixed in the Fourier domain by a linear phase correction. For translational errors, the correction is done on the phase of the image plane field. As the piston/tip/tilt, rotational, and translational corrections are done in different domains, a sequential algorithm is adopted, where one domain correction is done at a time. The corrections can be done iteratively; in our experiments, two iterations (the first iteration with two sub-apertures as inputs, the second iteration with all sub-apertures as inputs) produced satisfactory results. It is also possible to apply the idea to dynamic scenes, where frame-by-frame error correction has to be done. Finally, a straightforward extension of the method can address pupil magnification errors as well: instead of transforming the pupil data from Cartesian to polar coordinates, we may transform it to log-polar coordinates, where a magnification in the pupil data corresponds to a shift along the log axis. Such a shift becomes a phase shift when the Fourier transform is taken, and can be fixed through linear phase correction.
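The log-polar property invoked for the magnification extension — that scaling the radial coordinate is a shift along the log axis — can be checked in a few lines. This sketch samples an arbitrary radial profile on an exponentially spaced grid; the names `logpolar_radii` and the profile `f` are illustrative.

```python
import numpy as np

def logpolar_radii(n, r_min, r_max):
    """Exponentially spaced radii r = exp(s) on a uniform log grid s;
    magnifying the field by exp(h) shifts it by one bin along s."""
    s = np.linspace(np.log(r_min), np.log(r_max), n)
    return np.exp(s), s[1] - s[0]

r, h = logpolar_radii(64, 0.1, 1.0)
f = lambda rho: np.sin(7 * rho) / (1 + rho)    # arbitrary radial profile
mag = np.exp(h)                                # magnification of one bin
samples_mag = f(mag * r[:-1])                  # magnified field, log-sampled
samples_shift = f(r[1:])                       # original, shifted one bin
```

As with the rotation correction, such a shift becomes a linear phase ramp after a Fourier transform along the log-radius axis, so the same linear phase correction machinery applies.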

## References and links

**1. **N. J. Miller, M. P. Dierking, and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. **46**(23), 5933–5943 (2007). [CrossRef] [PubMed]

**2. **E. N. Leith and J. Upatnieks, “Wavefront reconstruction and communication theory,” J. Opt. Soc. Am. **52**(10), 1123–1128 (1962). [CrossRef]

**3. **J. C. Marron, R. L. Kendrick, N. Seldomridge, T. D. Grow, and T. A. Hoft, “Atmospheric turbulence correction using digital holographic detection: experimental results,” Opt. Express **17**(14), 11638–11651 (2009). [CrossRef] [PubMed]

**4. **D. Malacara, *Optical Shop Testing* (Wiley, 2007). [CrossRef]

**5. **J. R. Fienup, “Lensless coherent imaging by phase retrieval with an illumination pattern constraint,” Opt. Express **14**(2), 498–508 (2006). [CrossRef] [PubMed]

**6. **J. W. Goodman, *Speckle Phenomena in Optics: Theory and Applications* (Roberts and Company, 2006).

**7. **D. Rabb, D. Jameson, A. Stokes, and J. Stafford, “Distributed aperture synthesis,” Opt. Express **18**(10), 10334–10342 (2010). [CrossRef] [PubMed]

**8. **R. A. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” J. Opt. Soc. Am. **64**(9), 1200–1210 (1974). [CrossRef]

**9. **R. G. Paxman and J. C. Marron, “Aberration correction of speckled imagery with an image sharpness criterion,” Proc. SPIE **976**, 37–47 (1988).

**10. **J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A **20**(4), 609–620 (2003). [CrossRef]

**11. **S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A **25**(4), 983–994 (2008). [CrossRef]

**12. **R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. **66**(3), 207–211 (1976). [CrossRef]

**13. **B. S. Reddy and B. N. Chatterji, “An FFT-based technique for translation, rotation, and scale-invariant image registration,” IEEE Trans. Image Process. **5**(8), 1266–1271 (1996). [CrossRef] [PubMed]

**14. **J. W. Goodman, *Introduction to Fourier Optics* (Roberts and Company, 2004).

**15. **J. D. Schmidt, *Numerical Simulation of Optical Wave Propagation* (SPIE, 2010).