High pixel count apertures for digital holography may be synthesized by scanning smaller aperture detector arrays. Characterizing and compensating for registration errors in the detector array position and pitch, and for phase instability between the reference and object fields, are major challenges in scanned systems. We use a secondary sensor to monitor phase, and image-based registration parameter estimators, to demonstrate near diffraction-limited resolution from a 63.4 mm aperture synthesized by scanning a 5.28 mm subaperture over 144 transverse positions. We demonstrate 60 μm resolution at 2 m range.
© 2011 OSA
1. Introduction

Aperture synthesis is used to increase the resolution of coherent sensors [1]. Synthetic aperture holography has been studied using a scanning system in off-axis digital holography. Two effects of the scanning measurement complicate coherent aperture synthesis: subaperture registration errors and the phase instability of the reference field relative to the object field.
A cross-correlation method [2, 3] has been used to estimate the registration errors. The method conducts a similarity test on the measurement overlap. Massig demonstrated improved resolution and reduced speckle size in the reconstructed image at a distance of 0.8 m [4]. In that study, a 12.7 × 12.7 mm synthetic aperture was formed by scanning a 6.4 × 8.3 mm sensor. Binet et al. presented a 27.4 × 1.7 mm aperture synthesis by scanning a sensor with an effective area of 1.7 × 1.7 mm at 1.5 m.
This paper proposes an image-based method of coherent aperture synthesis on scanned images. An image-based metric, the sharpness metric [6], is used to estimate the registration errors. The sharpness metric does not rely on any measurement overlap; the image-based method is therefore insensitive to the change of speckle pattern [7] caused by phase instability. Since no measurement overlap is necessary, every measurement contributes to the improvement of image resolution.
Previous studies have accounted for phase instability by various methods. For example, Mico et al. synthesized nine measurements in a regular octagon geometry, improving resolution by a factor of 3 in digital holographic microscopy [8]. The phase instability was compensated by using phase factors up to second order, representing constant, linear, and quadratic phase errors. Jiang et al. used the same phase factors for a 25.9 × 23.3 mm aperture synthesis in digital Fresnel holography [9].
In this paper, phase instability is mathematically analyzed and compensated. The mathematical model attributes phase instability to the spatial displacement of the point source that generates a wide reference field. This displacement may be caused by experimental instability such as vibration, drift, and temperature fluctuations. By correcting the displaced position of the reference field, the phase instability in the object field is alleviated. This yields a physically meaningful and concise representation of phase instability in aperture synthesis.
The angular spectrum method [10, 11] supports the mathematical modeling of phase instability. The angular spectrum method does not depend on the Fresnel approximation, which rests on the paraxial approximation. Thus, the reference field can be accurately modeled over a large synthetic aperture.
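For illustration, the angular spectrum propagation can be sketched in a few lines. The sketch below is in Python with NumPy (the paper's own processing ran in MATLAB, so this is an assumption of the illustration, not the authors' code); it uses the exact transfer-function form, with no paraxial approximation.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field over a distance z using the
    angular spectrum transfer function (no paraxial approximation).
    Negative z performs backpropagation."""
    ny, nx = field.shape
    fu = np.fft.fftfreq(nx, d=pitch)           # spatial frequencies [1/m]
    fv = np.fft.fftfreq(ny, d=pitch)
    FU, FV = np.meshgrid(fu, fv)
    arg = 1.0 / wavelength**2 - FU**2 - FV**2  # squared axial frequency
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(2j * np.pi * z * kz) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

A round trip with +z then −z returns the propagating components unchanged, which is the adjoint relation used for image formation.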
A secondary camera is designed to monitor the piston phase error, one component of the phase instability. Since the piston phase error is captured in the optical measurement, the number of estimation variables decreases in the computational process. A specific scanning scheme further reduces the number of estimation variables in the computational domain.
A hierarchical strategy is adopted for the computational estimation. The estimation process first solves for the hologram patch errors within each block, and then for the hologram block errors between blocks within the synthetic aperture hologram. This two-step strategy efficiently breaks the large synthesis problem into small subproblems.
The main result is a near diffraction-limited image synthesis achieving 60 μm resolution from a 63.4 mm synthetic aperture at a range of 2 m. The hologram size is equivalent to 14400 × 14400 pixels. This lensless configuration enables a thin imager for a large synthetic aperture.
Depth imaging is also demonstrated by numerically focusing the image-based synthetic aperture hologram. The phase information of the field is recorded and processed [12, 13], so numerical refocusing forms the object image at any desired depth along the axis.
2. Problem formulation
Figure 1 shows a simplified schematic of the measurement process. The object and detector planes are defined with spatial coordinates (x, y) and (u, v), respectively. Also, zd and zr denote the ranges of the object and the reference point from the detector array along the optical axis. A hologram patch is a single measurement of the scanning aperture. A hologram block is composed of hologram patches, and a wide aperture (WA) hologram is a collection of the hologram blocks. The symbol A denotes the set of spatial coordinates of all locations in the area of the synthetic aperture, as shown in Fig. 1(b). The set A is partitioned into I × J subsets, where Aij denotes the (i, j)-th subset of A: ∪ij Aij = A.
An illumination field is incident on a diffuse object confined in a field of view (FOV) and creates a field scattered off the object. The incident field and the object field are denoted by Ei (x, y) and Eo (x, y), respectively. A point source is located below the object to create a reference field R (x, y). The object field Eo (x, y) propagates to the detector plane and creates a field Es (u, v; zd) incident on the focal plane array (FPA). Also, the reference field R (x, y) propagates to the detector plane and creates a propagated reference field R (u, v; zr). The fields Eo (x, y) and Es (u, v; zd) are related by the angular spectrum propagation [10, 11]:

Es (u, v; zd) = ∬ Êo (fx, fy) exp[i2π zd (1/λ² − fx² − fy²)^(1/2)] exp[i2π (fx u + fy v)] dfx dfy,

where Êo (fx, fy) is the 2D Fourier transform of Eo (x, y) and λ is the wavelength.
Ideally, once the interference intensity measurements Iij (u, v; zd, zr) for all i and j are collected, we can register them by placing each at the position given by Aij, as shown in Fig. 1(b), and thereby synthesize the full synthetic aperture intensity. A typical holographic filtering process can then be performed to extract the scattered field Es (u, v; zd), which is backpropagated with an adjoint operator to form a coherent image of the object field Eo (x, y).
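The holographic filtering step can be sketched as follows. This is a minimal Python/NumPy illustration rather than the authors' implementation; the carrier offset and window half-width are hypothetical parameters that would in practice be read off the measured spectrum (the setup described later uses about one quarter of the hologram bandwidth).

```python
import numpy as np

def extract_sideband(hologram, carrier, halfwidth):
    """Isolate the +1-order sideband of an off-axis hologram by Fourier
    filtering.  `carrier` is the (row, col) pixel offset of the sideband
    center in the centered FFT; `halfwidth` sets the square window."""
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    cy, cx = ny // 2 + carrier[0], nx // 2 + carrier[1]
    W = np.zeros_like(F)
    W[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth] = \
        F[cy - halfwidth:cy + halfwidth, cx - halfwidth:cx + halfwidth]
    # Recenter the sideband to remove the carrier frequency.
    W = np.roll(W, (-carrier[0], -carrier[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(W))
```

For a hologram formed by a tilted plane-wave reference and a smooth object field, the returned array approximates the demodulated interference term.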
However, in real experiments, creating a synthetic aperture hologram can involve various errors that will degrade the reconstructed image resolution. These errors are described, modeled, and analyzed in the following subsection.
2.1. Error sources
In synthetic aperture holography, a large diffracted optical field scattered off a 2D object is measured in many hologram patches. Each patch is measured with an FPA by moving it to a designated position, pausing for a few seconds to allow vibrations to settle, and then repeating at the next position. This process takes time and travel in linear proportion to the number of hologram patches, so temporal and spatial changes arise, causing several different errors in the measurements from one hologram patch to another.
While there are many types of errors, we are mostly concerned about the following:
- Piston phase errors: unknown changes in the constant phase of the interference intensity measurements Ii j (u, v; zd, zr),
- Detector registration errors: unknown errors in the exact positions of Ii j (u, v; zd, zr) caused by the inaccuracy of the 2D translation stage that scans the FPA,
- Reference field errors: unknown relative changes in the position of the reference field to the object field, which may be caused by the experimental instability (e.g. vibration and temperature fluctuations),
- Reference field discrepancy: unknown discrepancy in the phase of the reference field caused by the non-ideal generation of the spherical field.
2.2. Mathematical modeling of errors
The ideal interference intensity measurement Iij (u, v; zd, zr) is captured by placing the sensor array at the (i, j)-th position as shown in Fig. 1(b). It consists of the reference and scattered fields, Rij (u, v; zr) and Es,ij (u, v; zd), and can be expressed as

Iij (u, v; zd, zr) = |Rij (u, v; zr) + Es,ij (u, v; zd)|² = |Rij|² + |Es,ij|² + Rij* Es,ij + Rij Es,ij*.

By holographic filtering of the interference term Rij* Es,ij, the (i, j)-th field measurement is obtained.
When considering the errors described above, the error-impacted field measurement D̃ij (u, v; zd, zr) is redefined to include the piston phase, detector registration, and reference field errors.
The inaccurate reference field R̃ij (u, v; zr) and the inaccurate scattered field may be expressed using the point spread function (PSF).
3. Computational methods for aperture synthesis
Fig. 2 shows the flow chart of the estimation processes for WA hologram synthesis. After the piston phase errors are compensated (see subsection 3.1), the error parameter vector is reduced accordingly.
3.1. Piston phase compensation
A secondary camera is used to eliminate the piston phase errors in the measurement. The secondary FPA is set up to monitor the piston phase fluctuations of the WA hologram field, from which the piston phase errors are obtained.
3.2. Hologram block synthesis (hologram patch based process)
Recall that a hologram block is a specified group of hologram patches within the WA hologram. We alleviate the detector registration errors by registering the hologram patches in each block, shifting each patch by a few pixels that are estimated. Since the reference field change is negligible within a hologram block (the associated errors are constant over the block), the detector registration errors are the main degradation in the block synthesis.
The object field is backpropagated using the angular spectrum method [10, 11] from the corrected field measurements, where i = 1, ..., I and j = 1, ..., J, and the (i, j)-th hologram patch is assigned to the (m, n)-th hologram block. The registration errors are then evaluated with the sharpness metric [14].
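The patch-registration idea, sweeping integer pixel shifts and keeping the one whose reconstruction minimizes a generalized sharpness metric, can be sketched as below. This is an illustrative Python toy, not the paper's implementation: the "reconstruction" is a bare 2D FFT standing in for the backpropagated image, and the choice β = 0.5 (an assumption here) makes the metric one to be minimized, matching the minimization described in the text.

```python
import numpy as np

def sharpness(intensity, beta=0.5):
    """Generalized sharpness metric; with beta < 1 it is smallest when
    the image energy is most concentrated, so registration minimizes it."""
    s = intensity / intensity.sum()
    return np.sum(s ** beta)

def best_patch_shift(left, right, search=5, beta=0.5):
    """Sweep integer pixel shifts of the `right` patch and keep the one
    whose stitched-field 'reconstruction' (a bare 2D FFT, standing in
    for backpropagation) minimizes the sharpness metric."""
    best_metric, best_shift = np.inf, 0
    for s in range(-search, search + 1):
        stitched = np.concatenate([left, np.roll(right, s, axis=1)], axis=1)
        m = sharpness(np.abs(np.fft.fft2(stitched)) ** 2, beta)
        if m < best_metric:
            best_metric, best_shift = m, s
    return best_shift
```

For a pure plane-wave field split into two patches, the metric is minimized exactly when the patches are registered, since only then does the FFT collapse to a single peak.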
3.3. Hologram synthesis (hologram block based process)
After the hologram block synthesis, we estimate the detector registration errors and the reference field errors between hologram blocks. Both errors dominate the WA hologram synthesis because the hologram blocks suffer from both phase instability and registration errors.
The block in the m-th row and n-th column of the matrix of blocks is denoted Bmn. The detector registration errors of the hologram blocks are defined for m = 1, ..., M and n = 1, ..., N. The WA hologram field measurement is then expressed by summing the estimated hologram blocks Dmn (u, v; zd, zr).
The estimated scattered field Es (u, v; zd) is obtained by multiplying the estimated WA hologram field measurement by the estimated WA hologram reference field.
To evaluate the errors edb,r = (edb,u, edb,v, et,u, et,v), we again use the sharpness metric [14] on the guiding-feature images.
3.4. Reference field estimation
After the preceding error estimation processes, only the reference field discrepancy remains. To generate a phase estimate, 2D Chebyshev polynomials are used; they need fewer polynomial terms than Zernike polynomials to represent a 2D rectangular aperture. The phase estimate has the form of a truncated 2D Chebyshev expansion.
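Such a 2D Chebyshev phase estimate is straightforward to evaluate; the sketch below uses NumPy's Chebyshev routines. This is an illustration only: the basis ordering, truncation order, and sign convention in the paper are not specified here and are assumptions of the sketch.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_phase(coeffs, shape):
    """Evaluate a 2D Chebyshev phase estimate exp(-i*phi(u, v)) over a
    rectangular aperture of `shape` pixels.  `coeffs[i, j]` multiplies
    T_i(v) * T_j(u); each axis is mapped to the Chebyshev domain [-1, 1]."""
    u = np.linspace(-1.0, 1.0, shape[1])
    v = np.linspace(-1.0, 1.0, shape[0])
    phi = C.chebgrid2d(v, u, coeffs)   # phi[i, j] = phi(v[i], u[j])
    return np.exp(-1j * phi)
```

Multiplying the estimated scattered field by this factor and optimizing the coefficients against an image-sharpness metric is the estimation loop described in the text.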
We multiply the phase estimate by the estimated scattered field Es (u, v; zd), and the phase estimation is performed by minimizing the sharpness metric [14] on the guiding-feature images. The estimated WA object field Eo (x, y) is then obtained by the backpropagation method.
4. Experimental setup

We designed our experiment to demonstrate coherent aperture synthesis in digital holography. The optical setup comprises field generation and detection. In the field generation, three beams were used: two for the object and reference illumination of off-axis holography, and the third for illuminating the guiding features of image-based synthetic aperture holography. A HeNe laser with a wavelength of 633 nm and a power of 20 mW provided the monochromatic source for hologram measurement, as shown in Fig. 3. The laser beam was split by beam splitters (BS1 and BS2) into two object beams and one reference beam.
In the reference arm, two mirrors stepped the reference beam down while maintaining the laser polarization perpendicular to the optical table. Polarization partly determines the degree of optical coherence, so maintaining it is critical for high-visibility hologram measurement. The reference beam was then guided by a mirror and spatially filtered by a 25 μm pinhole and a 0.65 numerical aperture (NA) microscope objective. The high-NA objective generates a wide spherical field at the detector plane. Note that the center of the reference field was vertically 100 mm below the center of the guiding features and axially 2.032 m (within ±2 mm) from the surface of the FPA.
In the guiding-features beam, the light after beam splitter BS2 was filtered and collimated by a microscope objective, a pinhole, and an f/3 lens to illuminate the 2D object features, and was then split into three guiding-feature illumination beams by two beam splitters (BS3 and BS4). In the target-object beam, the light split by BS1 was directed onto the target object via a 2 inch beam splitter (BS5); a lens (L) diverged the beam to illuminate the full target. The generated object and reference fields then interfere, forming a hologram field in the detector plane.
A 1951 USAF resolution target (AF target) was used as the 2D guiding features, quantifying the estimation process in terms of image resolution. A diffuser attached to the back side of each AF target scatters the field more uniformly, producing speckle patterns. Fig. 3 shows three AF targets in total: the two at the sides were used as the guiding features, and the one at the center was used as the test object of the hologram synthesis. Note that the axial position of the guiding features is 2.034 m (within ±2 mm) from the surface of the FPA and 2 mm beyond the reference point source. The individual AF targets were horizontally separated by 30 mm in the same object plane.
A reflective 2D object was added to demonstrate depth imaging in the synthetic aperture holography. A computer CPU chip was placed 35 mm below the AF targets' vertical location and 1.99 m from the surface of the FPA along the optical axis. The logo inscription on the CPU chip was illuminated with a beam 22 mm in diameter.
The theoretical resolution and FOV are determined by the number of pixels and the pixel pitch of the sensor array via the Fraunhofer diffraction formula.
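The reported figures are consistent with the Fraunhofer estimate δ ≈ λz/D scaled by the speckle factor of 3 [11, 16]. The check below assumes a 4.4 μm pixel pitch, inferred from 5.28 mm / 1200 pixels per patch (an assumption of this sketch, not a value stated in the text).

```python
# Back-of-envelope check of the reported figures (HeNe laser, 2 m range).
wavelength = 633e-9            # [m]
z = 2.0                        # object range [m]
pitch = 4.4e-6                 # assumed pixel pitch: 5.28 mm / 1200 px
D_patch = 1200 * pitch         # 5.28 mm subaperture
D_syn = 14400 * pitch          # 63.4 mm synthetic aperture
speckle = 3                    # speckle resolution factor [11, 16]

def resolution(D):
    """Speckle-affected Fraunhofer resolution: speckle * lambda * z / D."""
    return speckle * wavelength * z / D

print(f"subaperture resolution: {resolution(D_patch) * 1e6:.0f} um")
print(f"synthetic  resolution:  {resolution(D_syn) * 1e6:.0f} um")
```

Under these assumptions the synthetic aperture gives roughly 60 μm, matching the abstract, and the single subaperture roughly 0.7 mm.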
In practice, the FOV is reduced to accommodate the separation of the reference and object fields: in the Fourier filtering of off-axis holography, only one fourth of the total hologram bandwidth is used to avoid undesired signals. The numerical backpropagation method also limits the FOV because of the way it is analytically derived. In the angular spectrum method, the effective resolution and FOV are independent of the propagation range z.
4.1. Stereo camera system for piston phase compensation
A stereo camera system is designed to compensate for the piston phase errors of the 144 hologram patches. One static camera is placed 50 mm from the center of the hologram scanning area, as shown in Fig. 1(a) (see also Fig. 3), to record the piston phase fluctuation of the hologram field.
The piston phase fluctuation over time is the dominant phase instability in the scanning measurement. Its effect is that the reconstructed images from the hologram patches can combine destructively, resulting in worse image resolution than the theoretical value.
The stereo camera system relies on the assumption that the hologram field shares common phase fluctuations over the scanning area. The piston phases at two distant locations are then highly correlated, so the static camera can be used to estimate the piston phase fluctuations.
To verify the validity of this assumption, two cameras were fixed and tested. Both cameras simultaneously took 25 image frames, one every two seconds. Using Eq. (13), the piston phase fluctuations were obtained, as shown in Fig. 4. The solid red and dotted blue lines are the relative phase variations of cameras 1 and 2, respectively; the two traces are strongly correlated over the frames.
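A simple way to extract a relative piston phase from the static camera's frames, assuming the monitored field is constant up to a global phase, is the argument of the inner product against a reference frame. This is a sketch of the idea; the paper's Eq. (13) may differ in detail.

```python
import numpy as np

def piston_phase(fields):
    """Relative piston phase of each frame with respect to frame 0,
    from the complex sideband fields recorded by the static camera.
    The argument of the inner product <E_0, E_k> is insensitive to the
    speckle pattern as long as the field is static up to a global phase."""
    ref = fields[0]
    return np.array([np.angle(np.vdot(ref, f)) for f in fields])
```

Applied to the two fixed cameras, strongly correlated phase traces would confirm the common-fluctuation assumption.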
4.2. Reinitialization points scheme for hologram scanning
The reinitialization points scheme uses a few initial measurement points for the individual blocks: 2×2 reinitialization points are set to cover the 2D WA hologram area, as shown in Fig. 1(b). The WA hologram area is equivalent to a 12×12 array of hologram patches without any overlap. Each block of 6×6 hologram patches is raster scanned starting from its reinitialization point. The number of patches per block is chosen so that each block contains only localized detector registration errors.
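For concreteness, the scan layout above can be generated as follows (illustrative Python; the within-block raster order and the patch pitch equal to the 5.28 mm subaperture width are assumptions of the sketch).

```python
def scan_positions(blocks=2, patches=6, pitch=5.28e-3):
    """Patch positions for the reinitialization scheme: a blocks x blocks
    grid of hologram blocks, each raster scanned as patches x patches
    non-overlapping subapertures starting from its own reinitialization
    corner.  Returns (x, y) tuples in meters."""
    pos = []
    for bi in range(blocks):
        for bj in range(blocks):            # one block per reinit point
            for pi in range(patches):
                for pj in range(patches):   # raster scan inside the block
                    pos.append(((bi * patches + pi) * pitch,
                                (bj * patches + pj) * pitch))
    return pos
```

With the defaults this yields the 144 positions of the 12×12 patch grid described above.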
The reinitialization scanning scheme is designed to support the computational methods for hologram synthesis. Since the WA hologram is scanned block by block, the detector registration errors are dominant among the hologram patches within a block; for the WA hologram synthesis, we estimate the detector registration errors and the reference field errors between blocks. In the measurement, we used a 600 mm two-axis translation stage (Newport M-IMS600CC) with a specified mechanical resolution of 1.25 μm and a bi-directional repeatability of 1.0–2.5 μm. However, the guaranteed accuracy is only 15 μm, and the inaccuracy accumulates linearly along the translation axis.
5. Processes and Results
The data processing of the image-based synthetic aperture holography followed the computational methods described in section 3. To process the WA hologram data (14400 × 14400 pixels), a Dell Precision T5500 was used with an Intel Xeon CPU at 2.27 GHz, 48 GB RAM, and the Windows 7 64-bit operating system.
The processing time was dominated by the hologram block synthesis, the WA hologram synthesis, and the reference field estimation. The hologram block synthesis searched for the detector registration errors within a range of 5 pixels, minimizing the sharpness metric; the range was set by the 15 μm guaranteed accuracy of the translation stage. To speed up the block estimation, each row of a hologram block was assumed to have identical detector registration errors, a reasonable assumption since the translation stage achieves its bi-directional repeatability between adjacent row measurements. The detector registration errors were then found by sweeping the possible errors within the range, taking about one hour.
Both the WA hologram synthesis and the reference field estimation used MATLAB's unconstrained multivariate minimization, which performs a quasi-Newton line search that is stably convergent but slow. The WA hologram synthesis required about 4 hours for 5 iterations, and the reference field estimation took about 6 hours for 5 iterations.
In the experiment, two transversely separated AF targets were used as the guiding features to avoid local minima in the estimation. Fig. 5 shows the evolution of the estimation over the 12×12 hologram patches. The raw data image suffers from all the errors, resulting in periodic ghost images and blurs in Fig. 5(a). Compensating for the piston phase errors effectively mitigates the ghost images, although some ghosting and blur remain in Fig. 5(b). Estimating the detector patch errors removes the ghost images in Fig. 5(c). Because of the remaining blur, the two AF targets resolve only the features (group 2, element 5) corresponding to 158 μm resolution.
In the WA hologram synthesis, the detector block errors and reference field errors of the hologram blocks were estimated, resolving the features in group 3 in Fig. 5(d). Finally, the estimation of the reference field discrepancy restores the resolution of the features (group 4, element 1), corresponding to the theoretical resolution of 62.5 μm, in Fig. 5(f). Figs. 5(e) and (f) show the effect of the reference field discrepancy estimation on the zoomed-in images: the estimation resolves the features (group 4, element 1) marked by a red circle. The estimated phase of the reference field is shown in Fig. 5(g). Note that the stated theoretical resolution includes the speckle factor of 3 multiplying the diffraction-limited value [11, 16].
Fig. 6 shows the evolution of image resolution with the number of hologram patches for the estimated guiding features. The image from a 1×1 hologram patch barely resolves any feature (groups 2 and 3) in Figs. 6(a) and (d). The image from 3×3 hologram patches resolves the features (group 2, element 1) corresponding to the theoretical resolution of 250 μm in Figs. 6(b) and (e). The image from 12×12 hologram patches resolves the features (group 4, element 1) corresponding to the theoretical resolution of 62.5 μm in Figs. 6(c) and (f). Here we used the speckle-affected resolution [16].
Another experiment demonstrated the holographic images of three AF targets and one reflective 2D object. A zoom-in movie starts from the full-FOV image at 63.4 × 63.4 mm and ends at a zoomed-in image at 2.1 × 2.1 mm (Fig. 7). The start image is 14400 × 14400 pixels and the end image is 480 × 480 pixels; the images were downsampled to 480 × 480 pixels by bicubic interpolation. The two AF targets at the sides were used as the guiding features to estimate the synthesis errors, and the CPU chip was the object for depth imaging. The two AF targets were transversely separated by 60 mm on the same object plane.
Fig. 8 shows the images estimated by the hologram synthesis. The two AF targets at the sides (Figs. 8(a) and (c)) were used as the guiding features, and the center one (Fig. 8(b)) was the performance test target. Unlike the unestimated images in Figs. 8(a) and (c), the estimated images in Figs. 8(d) and (f) show mitigated ghost images and blurs. In Fig. 8(e), the numbers in group 2 of the center AF target are also readable. Thus the estimation strategy using guiding features is verified to be useful in synthetic aperture holography. Unlike in Fig. 5, the resolution degradation here is caused by the limit of the detector's dynamic range: the increased field signal saturates the detector more easily as the number of objects increases.
Fig. 9 shows the feasibility of depth imaging. The logo inscription of the CPU chip is brought into focus by backpropagating the estimated synthetic aperture hologram. The resolution improvement with the number of hologram patches is presented in Figs. 9(a), (b), and (c): the more hologram patches are synthesized, the smaller the readable letters. Figs. 9(d) and (e) show the effect of the error estimation on the in-focus image; the zoomed-in estimated image in Fig. 9(e) is sharper. Fig. 9(f) shows an incoherent baseline image of the logo inscription.
In Fig. 10, the monitored piston phase errors show a temporal drift over the 144 scanned hologram patches. The hologram block synthesis estimated the horizontal and vertical detector errors for the hologram block with m = 1 and n = 1; the other blocks showed the same detector errors as this block. Table 1 shows the detector registration errors and the reference field errors estimated for the hologram blocks, and Table 2 shows the estimates for the reference field discrepancy.
6. Conclusion

This paper described a method to compensate for scanning effects in image-based synthetic aperture holography. We used this method to restore near diffraction-limited resolution in a 63.4 × 63.4 mm synthetic aperture. This research suggests that high-pixel-count imaging on the gigapixel scale may be achieved with available computational power and memory. Depth imaging was also demonstrated using a reflective object placed 44 mm from the guiding features' plane, which indicates that the hologram synthesis is valid for reconstructing 3D spatial information; synthetic aperture holography can therefore be used for imaging 3D samples. Since the synthetic aperture increases the numerical aperture of the measurement system, both the depth and transverse resolution improve.
This research was supported by DARPA under AFOSR contract FA9550-06-1-0230. The authors thank James Fienup for helpful suggestions.
References and links
1. C. W. Sherwin, P. Ruina, and R. D. Rawcliffe, “Some early developments in synthetic aperture radar systems,” IRE Trans. Mil. Electron. 6, 111–115 (1962). [CrossRef]
2. L. G. Brown, “A survey of image registration techniques,” ACM Comput. Surv. 24, 4 (1992). [CrossRef]
3. L. Romero and F. Calderon, A Tutorial on Parametric Image Registration (I-Tech, 2007).
4. J. H. Massig, “Digital off-axis holography with a synthetic aperture,” Opt. Lett. 27, 2179–2181 (2002). [CrossRef]
6. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20, 609–620 (2003). [CrossRef]
7. J. W. Goodman, Speckle Phenomena in Optics - Theory and Applications (Roberts and Company, 2007).
8. V. Mico, Z. Zalevsky, C. Ferreira, and J. Garca, “Superresolution digital holographic microscopy for three-dimensional samples,” Opt. Express 16, 19260–19270 (2008). [CrossRef]
9. H. Jiang, J. Zhao, J. Di, and C. Qin, “Numerically correcting the joint misplacement of the sub-holograms in spatial synthetic aperture digital Fresnel holography,” Opt. Express 17, 18836–18842 (2009). [CrossRef]
10. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts and Company, 2005).
11. D. J. Brady, Optical Imaging and Spectroscopy (Wiley, 2009). [CrossRef]
12. U. Schnars and W. P. O. Juptner, “Digital recording and numerical reconstruction of holograms,” Meas. Sci. Technol. 13, R85–R101 (2002). [CrossRef]
13. B. Javidi, P. Ferraro, S.-H. Hong, S. De Nicola, A. Finizio, D. Alfieri, and G. Pierattini, “Three-dimensional image fusion by use of multiwavelength digital holography,” Opt. Lett. 30, 144–146 (2005). [CrossRef]
14. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25, 983–994 (2008). [CrossRef]
15. T. M. Kreis, M. Adams, and W. P. O. Jueptner, “Methods of digital holography: a comparison,” Proc. SPIE 3098, 224–233 (1997). [CrossRef]
16. A. Kozma and C. R. Christensen, “Effects of speckle on resolution,” J. Opt. Soc. Am. 66, 1257–1260 (1976). [CrossRef]