
Multi-transmitter aperture synthesis

Open Access

Abstract

Multi-transmitter aperture synthesis is a method in which multiple transmitters are used to improve the resolution and contrast of distributed aperture systems. Such a system uses multiple transmitter locations to interrogate a target from multiple look angles, thus increasing the angular spectrum content captured by the receiver aperture array. Furthermore, such a system can improve the contrast of sparsely populated receiver arrays by using multiple transmitter locations to capture field data in the regions between sub-apertures. This paper discusses the theory behind multi-transmitter aperture synthesis and provides experimental verification that imagery captured using multiple transmitters provides increased resolution.

©2010 Optical Society of America

1. Introduction

Coherent aperture synthesis offers a means of capturing high resolution imagery while limiting the size and weight of an imaging system. Such systems often use holographic methods to capture the optical wavefronts across a number of small telescopes and synthesize the captured fields into a larger digital pupil plane [1]. The synthesized imagery will have improved resolution relative to the imagery formed by the individual sub-apertures while the blurring effects of atmospheric turbulence can be mitigated [2]. Furthermore, two-wavelength interferometric techniques can be applied to create 3D imagery with improved resolution [3].

Miller et al. showed that the transverse resolution of distributed aperture systems is determined by the array geometry and that mid and high-frequency contrast tends to fall off as an array is made increasingly sparse [4]. To recover contrast in sparse arrays, Stokes et al. proposed and demonstrated a method of optimum sub-aperture scaling that reintroduces redundancy in the array autocorrelation and boosts mid frequencies [5].

Aperture synthesis has been demonstrated for arrays of sub-apertures which are sparsely packed [6]. These arrays require speckle averaging to correctly phase the independently captured field values into a single digital pupil field. As a result, methods that sample the digital pupil plane more redundantly allow for improved image processing techniques, such as cross-correlating overlapped speckle fields.

Holographic Aperture Ladar (HAL) has been demonstrated as a means of capturing an increased aperture baseline by moving a transceiver system relative to a target [7]. HAL combines the temporal measurements made in Synthetic Aperture Ladar (SAL) with the spatial measurements described above, thus allowing for literal 2D imaging with the improved cross-range resolution commensurate with traditional SAL [8]. As with SAL, Holographic Aperture Ladar requires relative motion between the sensor platform and the target in order to achieve improved resolution and image contrast, an improvement often described as aperture gain. An illustration of the HAL concept is shown in Fig. 1.

Fig. 1. The HAL concept combines synthetic aperture ladar with digital holography. Translation of a linear array through positions 1 to 4 allows a very large synthetic aperture to be formed. A single transmitter is shown as a white circle which translates along with the receiver array shown by the grey circles.

The white circles in Fig. 1 denote each transmitter realization, while grey circles denote the holographic receivers. The dotted lines represent previous transceiver locations. Digital algorithms are used to convert the spatially and temporally captured optical wavefront values into a single digital pupil plane. A 2-dimensional FFT can then be used to create a high-resolution image. The final image resolution is inversely proportional to the synthetic aperture diameter in one dimension and to the effective array diameter in the other dimension. The depiction in Fig. 1 is analogous to bi-static SAR [9], with the substitution of a transmitter at an optical wavelength and a receiver comprised of holographic apertures capturing a large number of independent speckle returns from the illuminated object. As in bi-static SAR, aperture gain can be achieved through both receiver motion and transmitter motion.
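As a point of reference for this processing chain, the sketch below (Python with NumPy, not taken from the original work) shows the basic image-formation step: once the captured wavefront values have been phased into a single digital pupil-plane array, one two-dimensional FFT yields the intensity image, whose resolution scales inversely with the pupil extent. The circular test pupil and its size are arbitrary illustrative choices.

```python
import numpy as np

def image_from_pupil(pupil_field):
    """Form an intensity image from a complex digital pupil-plane field via a 2-D FFT."""
    image_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil_field)))
    return np.abs(image_field) ** 2

# Illustrative test case: a uniform circular pupil produces an Airy-like pattern
# whose width shrinks as the pupil diameter grows (the resolution "aperture gain").
N = 512
y, x = np.indices((N, N)) - N // 2
pupil = (np.hypot(x, y) < 40).astype(complex)   # hypothetical 80-pixel-diameter pupil
image = image_from_pupil(pupil)
```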

Here it is demonstrated that gains in resolution and contrast can also be achieved by a static system through the use of multiple transmitters in addition to multiple receivers. Multi-transmitter, multi-aperture synthesis describes a process in which arrays of temporally multiplexed transmitters are used as a substitute for motion between the transceiver system and the object. A multi-transmitter system illuminates the target with one transmitter at a time while capturing the resulting backscattered field for each illumination angle, as shown in Fig. 2. Changes in transmitter location result in a tilt phase term in the target-plane illumination field and thus a translation of the backscattered field in the receiver plane, as shown in Fig. 2. As a result, a single set of receiver apertures can be used to capture multiple sets of backscattered field data.

Fig. 2. A multi-transmitter, multi-aperture imaging system will (a) illuminate the target with a pulse from a single transmitter element and capture the backscatter. The subsequent transmitter pulse will (b) illuminate the target from a different location, and the receiver array will capture a shifted version of the reflected wavefront.

In addition to increasing resolution, the use of multiple transmitters provides a method to improve the contrast of imagery formed by sparsely packed arrays by effectively increasing the number of sub-apertures. This work describes the theory behind a coherent multi-transmitter, multi-aperture imaging system and presents experimental results from such a system.

2. Theory

Figure 3 depicts an ideal multi-transmitter system in which the receiver and transmitter arrays are located in the x-y plane and stare at the target plane. The coherent receivers are assumed to be ideal such that they can capture the incident optical wavefront. In the derivation to follow, we will focus on how multiple transmitters can be utilized to provide increased aperture gain across the receiver array.

Fig. 3. Theoretical layout of an imaging system which utilizes multiple transmitters with an active distributed aperture system.

We begin with the standard assumptions of far-field and vacuum propagation conditions, noting that a turbulent path would limit the extent of the usable transmitter locations. Accordingly, the field illuminating the target plane can be described by applying a Fraunhofer diffraction integral to the transmitted field. A staring point transmitter at an arbitrary location (x_n, y_n) yields a target-plane field U_T(x_T, y_T) given by

$$U_T(x_T,y_T)=\frac{e^{jkz}\,e^{j\frac{k}{2z}\left(x_T^{2}+y_T^{2}\right)}}{j\lambda z}\,\mathcal{F}\!\left\{U(x-x_n,\,y-y_n)\right\},\tag{1}$$

where z is the propagation distance between the transmitter and target planes, λ is the wavelength of the source, $\mathcal{F}\{\cdot\}$ represents the Fourier transform operator, and the transmitted optical field is described by U(x−x_n, y−y_n). Note that the transmitted field is written as a function of the transmitter location coordinates (x_n, y_n) and is defined to be identical for all transmitter locations. By rearranging the terms in Eq. (1), the target-plane field can be written as

$$U_T(x_T,y_T)=\tilde{g}(x_T,y_T)\,\mathcal{F}\!\left\{U(x,y)\ast\delta(x-x_n,\,y-y_n)\right\},\tag{2}$$

where $\tilde{g}(x_T,y_T)$ denotes the phase terms in Eq. (1), $\ast$ denotes convolution, and $\delta(\cdot)$ is the Dirac delta function. Furthermore, the transmitter-location-dependent tilt term can be factored out of Eq. (2), and the diffracted field can then be rewritten as

$$U_T(x_T,y_T)=e^{j\frac{2\pi}{\lambda z}\left(x_T x_n+y_T y_n\right)}\,U_T'(x_T,y_T),\tag{3}$$

where U_T'(x_T, y_T) is the general field given by far-field diffraction from the identical transmitters:

$$U_T'(x_T,y_T)=\tilde{g}(x_T,y_T)\,\mathcal{F}\!\left\{U(x,y)\right\}.\tag{4}$$

The reflected field in the target plane, U_refl(x_T, y_T), is written as the product of the illuminating field and the reflection coefficient of the scene:

$$U_{\mathrm{refl}}(x_T,y_T)=e^{j\frac{2\pi}{\lambda z}\left(x_T x_n+y_T y_n\right)}\,U_T'(x_T,y_T)\,r(x_T,y_T),\tag{5}$$

where r(x_T, y_T) describes the reflection coefficient of the target scene. The field measured at the receiver plane, U_R(x,y), is found by multiplying the receiver pupil function by the Fraunhofer diffraction pattern of the reflected field:

$$U_R(x,y)=P(x,y)\,\frac{e^{jkz}\,e^{j\frac{k}{2z}\left(x^{2}+y^{2}\right)}}{j\lambda z}\,\mathcal{F}\!\left\{e^{j\frac{2\pi}{\lambda z}\left(x_T x_n+y_T y_n\right)}\,U_T'(x_T,y_T)\,r(x_T,y_T)\right\},\tag{6}$$

where P(x,y) is the pupil function of the receiver array. The expression for the received field can be simplified through use of the convolution theorem. The resulting expression is

$$U_R(x,y)=P(x,y)\,\frac{e^{jkz}\,e^{j\frac{k}{2z}\left(x^{2}+y^{2}\right)}}{j\lambda z}\,U_r(x-x_n,\,y-y_n),\tag{7}$$

where

$$U_r(x,y)=\mathcal{F}\!\left\{U_T'(x_T,y_T)\,r(x_T,y_T)\right\}.\tag{8}$$

Equation (7) shows that a fixed receiver array P(x,y) can be used to receive a large area of the backscattered wavefront by illuminating the target from multiple transmitter positions. The large collection area is in contrast to the relatively small receiver area of an individual sub-aperture. In short, a translated transmitter creates a tilt phase in the target plane which itself results in a shifted field across the static receiver array. This process is equivalent to translating the sub-apertures and leads to the aforementioned aperture gain across the receiver array.
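To make the shift property behind Eq. (7) concrete, the following sketch (Python with NumPy; the grid size and transmitter offsets are illustrative assumptions, not values from the experiment) numerically verifies that a transmitter-dependent tilt phase across a rough target produces a translated copy of the same speckle field in the far field, as dictated by the discrete Fourier shift theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
# Optically rough target: uniform amplitude, uniformly random phase.
r = np.exp(1j * 2 * np.pi * rng.random((N, N)))

ny, nx = np.indices((N, N))
mx, my = 12, -5                        # hypothetical transmitter offset, in pupil-plane pixels
tilt = np.exp(1j * 2 * np.pi * (mx * nx + my * ny) / N)

U0 = np.fft.fft2(r)                    # receiver-plane speckle field, transmitter at origin
Um = np.fft.fft2(tilt * r)             # receiver-plane speckle field, offset transmitter

# DFT shift theorem: the offset transmitter yields a (circularly) shifted copy
# of the same speckle field across the receiver plane.
shifted = np.roll(U0, shift=(my, mx), axis=(0, 1))
print(np.max(np.abs(Um - shifted)) / np.max(np.abs(U0)))   # ~machine precision
```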

3. Experiment

An experiment was designed to validate the multi-transmitter theory described above. The experiment demonstrates that using multiple transmitters can provide aperture gain as well as improved mid-frequency contrast due to increased redundancy in the sampled fields.

3.1 Hardware

The laboratory hardware used in this experiment is based on digital holography, with an afocal telescope used to improve sub-aperture resolution and the system’s photon collecting ability. A fiber-coupled HeNe laser is used as the coherent source, with 95% of the power split to the transmitter and 5% used to provide an LO field. The receiver setup is illustrated in Fig. 4; the laser and transmitter are not shown.

Fig. 4. A diagram of the digital holography receiver setup used to capture the incident wavefront U_t(x,y).

In Fig. 4, the field incident on the afocal telescope, U_t(x,y), is imaged onto a Lumenera 120m camera and mixed with a tilted plane-wave LO field described by U_LO(x,y). The resulting fringes are captured by the camera and digitally processed using standard digital holographic techniques so that the field U_t(x,y) can be recovered [6]. Three receivers with diameters of 4.8 cm are arranged horizontally with a center-to-center spacing of 5.8 cm, giving an effective array width of 16.4 cm.
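As a rough illustration of this field-recovery step (not the authors’ code; the carrier frequency and filter radius are hypothetical parameters that in practice would come from the actual LO tilt), an off-axis hologram can be demodulated by isolating the sideband produced by the tilted LO in the spatial-frequency domain:

```python
import numpy as np

def recover_field(hologram, carrier, radius):
    """Recover a complex field estimate from an off-axis hologram.

    The tilted LO places the desired cross term on a spatial-frequency carrier;
    windowing that sideband, recentering it, and inverse transforming returns a
    filtered copy of the incident field.  'carrier' (pixel offset of the sideband
    from DC) and 'radius' are assumed, uncalibrated values.
    """
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ky, kx = np.indices(H.shape)
    cy, cx = np.array(H.shape) // 2
    mask = np.hypot(ky - cy - carrier[0], kx - cx - carrier[1]) < radius
    sideband = np.roll(H * mask, shift=(-carrier[0], -carrier[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(sideband))
```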

In lieu of constructing a multiple-transmitter array, a single transmitter is moved between shots. The transmitter locations are described in Table 1 by relative coordinates (Δx, Δy), with the initial transmitter aperture location defined to be location (0,0). The transmitter shifts were set equal to half of the aperture-to-aperture separation of the fixed array, thus maximizing the overlap between subsequent captures of the received field.

Table 1. Relative transmitter locations.

In Fig. 5, a diagram of the system apertures (dashed lines) and transmit locations (solid circles) is given. The three optical apertures are formed from three copies of the receiver shown in Fig. 4, with the drawing in Fig. 5 lying in the plane of the field U_t(x,y). The transmitter is moved to each of the locations, and the coherent pupil-plane information is gathered for each transmitter location separately. For each realization, the aperture function P(x,y) and the transmitter location (x_n, y_n) are used to place the reconstructed field in the combined coherent pupil plane.

Fig. 5. A diagram of the optical apertures and transmitter locations in the pupil plane.

The target consists of a U.S. Quarter Dollar placed at a distance of 10 meters from the imaging system. The quarter is worn and therefore presents a highly reflective target with both specular and diffuse reflections.

A range of 10 meters violates the far-field assumption used in the derivation of Section 2; however, the processing steps defined in the following section remove not only aberrations but also the quadratic Fresnel phase terms. This allows the resultant aperture gain to be achieved while operating in the Fresnel region.
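For scale (a hedged, order-of-magnitude check assuming the HeNe wavelength λ ≈ 632.8 nm and taking the ≈ 0.19 m combined pupil as the limiting dimension), the usual far-field criterion gives

$$z \gg \frac{2D^{2}}{\lambda} \approx \frac{2\,(0.193\ \text{m})^{2}}{632.8\ \text{nm}} \approx 1.2\times10^{5}\ \text{m},$$

so the 10 m range is indeed deep in the Fresnel region, and the quadratic phase removal noted above is required.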

3.2 Processing and results

The field values are captured across the three horizontal apertures for each of the twelve transmitter locations listed in Table 1. The hardware described above allows the optical wavefront to be captured; algorithms are then applied to phase together the captured field values and create a larger synthetic pupil.

The first step in synthesizing a larger aperture involves eliminating static aberrations within each of the pupils by pre-recording the response to a flat-field input. These static aberrations are removed by subtracting the pre-recorded phase from the phase of each captured field.
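A minimal sketch of this calibration step (Python with NumPy; the array names are placeholders, not the authors’ variables) is simply a phase subtraction:

```python
import numpy as np

def remove_static_aberration(field, flat_field):
    """Remove fixed receiver aberrations by subtracting the pre-recorded flat-field phase."""
    return field * np.exp(-1j * np.angle(flat_field))
```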

The second step involves sharpening the images formed by the individual telescopes by applying Zernike polynomials and maximizing a sharpness metric [2, 6]. The sharpness metric SA used here is given by

$$S_A=\iint dx\,dy\,\left|\frac{1}{N}\sum_{n=1}^{N}I_n(x,y)\right|^{\gamma},\tag{9}$$
where N is the total number of images captured, I_n(x,y) is the nth image, and γ is the metric exponent. This method allows the individual sub-aperture images to be sharpened based on the incoherent, speckle-averaged image [6]. Here the number of captured images is 36, which reduces the speckle noise floor of the image being sharpened. It should be noted that, because the effective receiver apertures overlap, the noise reduction is less than the factor of 6 predicted for fully independent speckle averaging.
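A hedged sketch of this sharpening step is given below (Python with NumPy/SciPy). It is not the authors’ implementation: a small set of simple quadratic phase terms stands in for the Zernike polynomials, the exponent γ = 2 is an illustrative choice for which maximizing S_A favors a sharper image, and a generic derivative-free optimizer is used.

```python
import numpy as np
from scipy.optimize import minimize

def sharpness(pupil_fields, phase, gamma=2.0):
    """Eq. (9): integrate the speckle-averaged image raised to the exponent gamma."""
    images = [np.abs(np.fft.fft2(f * np.exp(1j * phase))) ** 2 for f in pupil_fields]
    return np.sum(np.mean(images, axis=0) ** gamma)

def sharpen(pupil_fields):
    """Find one correction phase shared by all speckle realizations of an aperture."""
    N = pupil_fields[0].shape[0]
    y, x = (np.indices((N, N)) - N // 2) / (N / 2)        # normalized pupil coordinates
    basis = np.stack([2 * (x**2 + y**2) - 1,              # defocus-like term
                      x**2 - y**2,                        # astigmatism-like term
                      2 * x * y])                         # oblique-astigmatism-like term
    cost = lambda c: -sharpness(pupil_fields, np.tensordot(c, basis, axes=1))
    result = minimize(cost, np.zeros(len(basis)), method="Nelder-Mead")
    correction = np.exp(1j * np.tensordot(result.x, basis, axes=1))
    return [f * correction for f in pupil_fields]
```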

The third step is another calibration step. Once the individual apertures have been sharpened, the apertures are then placed into a combined aperture plane based on their position within the physical array of apertures. This is repeated for each of the twelve sets of data and adjusted for the transmitter location.
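A minimal sketch of this placement (Python with NumPy; the pixel offsets, canvas size, and names are placeholders that would be derived from the known array geometry and transmitter locations, not measured values) could look like:

```python
import numpy as np

def place_in_pupil(canvas, sub_field, offset_px):
    """Add a sub-aperture field into the combined pupil plane at a pixel offset.

    Overlapping regions simply add here; the resulting double counting is removed
    later by the amplitude correction described in the fourth step.
    """
    row, col = offset_px
    h, w = sub_field.shape
    canvas[row:row + h, col:col + w] += sub_field
    return canvas
```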

In the fourth step, the recovered field values are registered within a digital pupil plane using an optimization routine based on the cross-correlation of overlapped speckle patterns. To achieve sub-pixel registration accuracy in the pupil plane, the routine optimizes a phase tilt term in the focal plane. Once a pair of apertures has been overlapped, the next set of field data is registered with the same routine, and this process is repeated until all of the apertures have been registered. The registered pupil planes are shown in Fig. 6(a). This figure shows the regions which overlap between adjacent speckle fields and which are used to register the aperture locations. An amplitude correction is applied so that the overlapped fields are not double, triple, or even quadruple counted. The resulting, appropriately weighted, speckle field amplitude is shown in Fig. 6(b).
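The sketch below (Python with NumPy, offered only as an illustration) covers the coarse, integer-pixel part of this registration, estimating the offset between two overlapping speckle fields from the peak of their FFT-based cross-correlation; the sub-pixel refinement via a focal-plane tilt search and the overlap amplitude weighting are not reproduced here.

```python
import numpy as np

def estimate_shift(ref, new):
    """Return (dy, dx) such that np.roll(new, (dy, dx), axis=(0, 1)) best aligns with ref."""
    xcorr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(new)))
    peak = np.unravel_index(np.argmax(np.abs(xcorr)), xcorr.shape)
    # Map circular peak coordinates to signed shifts.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, xcorr.shape))
```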

Fig. 6. Registered pupil-field amplitudes (a) have increased amplitude where the registered field values overlap and have been added. An amplitude correction is applied to remove the aperture-registration artifacts, resulting in a corrected field amplitude (b).

Experimental results are presented in Fig. 7 for several stages of the processing methods described above. Figure 7(a) is the result of averaging across the imagery formed by each of the 36 pupil fields. These images have not been sharpened, and the resulting image is poor. The next image is also averaged across the 36 sub-aperture realizations, and the resulting image, Fig. 7(b), is at or near the diffraction limit of a single aperture. Figure 7(c) shows the result of synthesizing the three physical apertures for one of the transmitter locations and then incoherently averaging over each of the 12 transmit locations, effectively using the transmit locations to create new speckle realizations. The final image, Fig. 7(d), is the result of synthesizing the 36 realizations rather than simply averaging across them. As a result, the image has higher relative speckle noise than Figs. 7(a) and 7(b); however, the resolution is much higher. The increased resolution corresponds to the synthesized aperture (19.3 cm diameter), which has an aperture gain of 4 over a single sub-aperture (4.8 cm diameter). Note that image contrast has been altered individually for each of Figs. 7(a) through 7(d) to reduce the effect of very bright speckles and glints.
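For context (a hedged estimate assuming the HeNe wavelength λ ≈ 632.8 nm and the simple scaling δ ≈ λz/D), the corresponding target-plane resolutions are roughly

$$\delta_{\text{sub}}\approx\frac{\lambda z}{D_{\text{sub}}}=\frac{(632.8\ \text{nm})(10\ \text{m})}{4.8\ \text{cm}}\approx 132\ \mu\text{m},\qquad \delta_{\text{syn}}\approx\frac{(632.8\ \text{nm})(10\ \text{m})}{19.3\ \text{cm}}\approx 33\ \mu\text{m},$$

consistent with the stated aperture gain of 4.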

Fig. 7. An image formed from (a) 36 averages of an unsharpened, single-aperture image of the quarter and (b) 36 averages of a sharpened single aperture. The application of image sharpening algorithms on the field captured at a single aperture allows for aberrations to be corrected. An image formed from (c) 12 averages of a synthesized aperture from the three physical apertures and a single transmitter, and lastly an image (d) created from the set of 36 field values synthesized into a single, large field.

4. Conclusions

Multi-transmitter aperture synthesis promises to provide high-resolution imagery without the need for relative motion between target and sensor. As a result, this system may find utility in situations where the sensor is static, or as a means to provide resolution orthogonal to the direction of travel in a HAL imaging system. A system which utilizes a sparsely packed array of sub-apertures can also employ multiple transmitters to facilitate phasing of the sub-apertures, as well as to reduce speckle noise and increase mid-frequency contrast.

An overview of multi-transmitter theory was presented which described how multiple transmitter apertures can be used to shift the received speckle field across a set of receiver apertures. An experiment was designed and conducted which demonstrated that multi-transmitter synthesis can be used to create high-resolution imagery. Image processing algorithms such as sharpening routines were applied to maximize the resolving power at the sub-aperture and synthesized-aperture levels, and speckle field correlation was used to correctly register the backscattered fields. The resulting imagery shows that much finer target details were captured than with sub-aperture-limited image formation.

References and links

1. J. C. Marron and R. L. Kendrick, “Distributed aperture active imaging,” Proc. SPIE 6550, 65500A (2007).

2. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25(4), 983–994 (2008).

3. J. C. Marron and R. L. Kendrick, “Multi-aperture 3D imaging systems,” in Proceedings of the 2008 IEEE Aerospace Conference (IEEE, 2008).

4. N. J. Miller, M. P. Dierking, and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. 46(23), 5933–5943 (2007).

5. A. J. Stokes, B. D. Duncan, and M. P. Dierking, “Improving mid-frequency contrast in sparse aperture optical imaging systems based upon the Golay-9 array,” Opt. Express 18(5), 4417–4427 (2010).

6. D. J. Rabb, D. F. Jameson, A. J. Stokes, and J. W. Stafford, “Distributed aperture synthesis,” Opt. Express 18(10), 10334–10342 (2010).

7. J. W. Stafford, B. D. Duncan, and M. P. Dierking, “Experimental demonstration of a stripmap holographic aperture ladar system,” Appl. Opt. 49(12), 2262–2270 (2010).

8. B. D. Duncan and M. P. Dierking, “Holographic aperture ladar,” Appl. Opt. 48(6), 1168–1177 (2009).

9. C. V. Jakowatz, D. E. Wahl, P. H. Eichel, D. C. Ghiglia, and P. A. Thompson, Spotlight-Mode Synthetic Aperture Radar: A Signal Processing Approach (Springer, 1996), Chap. 2.
