## Abstract

The performance of long distance imaging systems is typically degraded by phase errors imparted by atmospheric turbulence. In this paper we apply coherent imaging methods to determine, and remove, these phase errors by digitally processing coherent recordings of the image data. In this manner we are able to remove the effects of atmospheric turbulence without needing a conventional adaptive optical system. Digital holographic detection is used to record the coherent, complex-valued, optical field for a series of atmospheric and object realizations. Correction of atmospheric phase errors is then based on maximizing an image sharpness metric to determine the aberrations present and correct the underlying image. Experimental results that demonstrate image recovery in the presence of turbulence are presented. Results obtained with severe turbulence that gives rise to anisoplanatism are also presented.

© 2009 Optical Society of America

## 1. Introduction

The ability to obtain images of distant objects by Fourier processing of holographic data has been of interest to the research community for many years. Soon after the invention of the ruby laser, researchers demonstrated the feasibility of this imaging method by obtaining images of objects at a distance of several kilometers [1]. Related laboratory experiments were also conducted to demonstrate digital recording and processing of the holographic data [2]. In this paper we consider imaging of distant objects with digital holographic detection using advanced hardware and signal processing algorithms that allow for correction of atmospheric turbulence. The general concept is to record a long-distance hologram by flood-illuminating an object with coherent light and then digitally recording the interference pattern that arises when this light is combined with a coherent reference beam. Digital recording is accomplished using a 2D detector array to measure the intensity of the interference pattern. Image formation is then accomplished by Fourier processing of the recorded data. In the early experiments, researchers would typically ensure coherence of the reference and return beams by using a strong reflector located near the object as the reference source. In this paper we report several advancements made possible by improved lasers, detector arrays, and digital processing hardware and algorithms. For the results presented herein, we use a long-coherence-length laser to create a reference beam that is local to the laser source (local oscillator), obviating the need for a bright reflector near the object. Furthermore, we use a large-format detector array and high-speed FFT processing to obtain fine-resolution images. Another important aspect of the work reported here is that we determine and correct for aberrations caused by atmospheric turbulence by using an image sharpness criterion [3, 4, 5].

## 2. Digital holographic recording

A typical arrangement for a long-distance, digital holographic detection system is shown in Fig. 1. Light from a coherent laser source is divided to produce two beams: one that flood-illuminates the distant object and one that forms the reference beam, which is coherent with the return light from the distant object.

A portion of the light that illuminates the distant object is scattered back toward the sensor. The telescope is typically positioned so that the entrance pupil is imaged onto the detector array. At the detector, the return light from the object is interfered with light from the reference beam and the intensity of the interference pattern is recorded by the detector array. The magnification of the telescope is typically adjusted to establish proper sampling of the interference pattern by the detector array.

Our data collection activities were performed at the Table Mountain Field Site, an outdoor test range north of Boulder, CO [6]. Figure 2 shows an example of a single realization of intensity data recorded by the detector array with the object being a bar chart. The image on the left shows the entire pupil and the image on the right shows a magnified view of a small region; note the spatial carrier frequency, which corresponds to the angular offset between the object and reference beams.

Following intensity detection, the next step in the imaging process is to compute the digital Fourier transform. Figure 3 shows an example data set corresponding to the summation of 16 realizations of the atmospheric phase and object speckle. Figure 3(A) shows the sum of the detected intensity patterns. Figure 3(B) shows the sum of the magnitudes of the digital Fourier transforms of the same 16 realizations. Notice that there are three image terms evident in Fig. 3(B): the center term and two offset terms, one focused and the other defocused. The center term results from the autocorrelation of the image; in this case, due to thresholding and other filtering, the term has some structure that can be ignored. Additionally there is a bright point in the center, but this has been digitally zeroed in Fig. 3(B) so that the other image terms are visible.

To understand the various terms, let us denote the distant object as *f*(*x*) and the reference point as *g*(*x*), with their corresponding Fourier transforms given by *F*(*u*) and *G*(*u*) respectively. The intensity recorded by the detector array can be written as

$$I={\mid F+G\mid}^{2}={\mid F\mid}^{2}+{\mid G\mid}^{2}+F{G}^{*}+{F}^{*}G. \tag{1}$$

It follows that the inverse Fourier transform of this intensity, obtained digitally in our experiments, is given by

$$i(x)=f(x)\otimes f^{*}(-x)+g(x)\otimes g^{*}(-x)+f(x)\otimes g^{*}(-x)+g(x)\otimes f^{*}(-x), \tag{2}$$

where ⊗ denotes the convolution operation. If the reference point is a delta function centered at *x*=*b*, it follows that the inverse Fourier transform of the intensity pattern is given by

$$i(x)=f(x)\otimes f^{*}(-x)+\delta(x)+f(x+b)+f^{*}(b-x). \tag{3}$$

The first term in Eq. (3) is the autocorrelation of the object; this corresponds to the central term in Fig. 3(B) mentioned above. The second term is a delta function at the origin that is zeroed as discussed above. The final two terms are a set of twin images spatially offset from the center by ±*b*. These images are complex-valued and by extracting one of them the complex-valued representation of the object field is obtained. In the experiments reported here we digitally correct for instrumentation and atmospheric phase aberrations. The instrumentation phase aberrations in our system included several waves of defocus. Thus prior to computing the digital Fourier transform of the recorded intensity data, we typically apply a quadratic phase term to the pupil intensity data. This produces a better focused image term in the Fourier transform data shown on the right in Fig. 3(B). Notice, however, that the conjugate image on the left of Fig. 3(B) was more severely defocused by adding the quadratic term. With the defocus removed the image is localized and we are readily able to extract the complex-valued, optical field data from the Fourier transform by simply extracting the desired region. The result of this extraction is shown in Fig. 3(C). Note that the sum of the squared magnitudes of the 16 complex-valued realizations is shown. In this example the object is the 1951 USAF resolution test chart. The realizations result from changes in the atmosphere and changes in the underlying speckle caused by slight object motion. Note that for this image we have not yet corrected for atmospheric turbulence, or fully corrected for instrumentation errors.
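The term structure described above can be illustrated with a small one-dimensional sketch in pure Python (the field values and grid sizes are hypothetical, and an O(N²) DFT stands in for the FFT used in practice): a delta-function reference offset by *b* places an undistorted copy of the object field away from the autocorrelation term, where it can be extracted directly.

```python
import cmath

def dft(x):
    """Unnormalized forward DFT."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * u * n / N) for n in range(N))
            for u in range(N)]

def idft(X):
    """Inverse DFT with 1/N normalization."""
    N = len(X)
    return [sum(X[u] * cmath.exp(2j * cmath.pi * u * n / N) for u in range(N)) / N
            for n in range(N)]

N, b = 64, 16                  # grid size and reference-point offset (illustrative)
f = [0j] * N                   # hypothetical complex object field, small support
f[0], f[1], f[2], f[3] = 2 + 1j, 1 - 1j, 0.5j, 1.5 + 0j

F = dft(f)                                                     # object spectrum
G = [cmath.exp(-2j * cmath.pi * u * b / N) for u in range(N)]  # delta reference at x = b

# Detected intensity |F + G|^2 contains the |F|^2, |G|^2, FG*, and F*G terms.
I = [abs(F[u] + G[u]) ** 2 for u in range(N)]

# Inverse transform: autocorrelation near the origin, a delta at the origin,
# and twin images offset by +/- b; one twin is an undistorted copy of f.
i = idft(I)
recovered = [i[(n - b) % N] for n in range(4)]
```

Because the object support is small and the offset *b* is large enough, the extracted region contains only the desired image term, mirroring the extraction step shown in Fig. 3(C).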

## 3. Aberration correction using image sharpness

In our experiments there are several important phenomena that affect the appearance of images. The first is the effect of turbulence on the outgoing, flood-illumination laser beam. The second is the diffuse scattering of the object that randomizes the phase of the illuminating beam and gives rise to laser speckle. The final effect is the phase aberration from turbulence on the light in propagating from the object to the detection system.

Regarding the first effect, because of the object's randomly-phased scattering (which scrambles the phase of the incident beam), we are only concerned with spatial intensity fluctuations caused by effects such as beam wander and scintillation [7, 8]. These give rise to random spatial intensity fluctuations of the image's underlying intensity that vary for each atmospheric realization. For our experiments we typically collect data over several atmospheric realizations, as well as several laser speckle realizations. Atmospheric realizations are most readily obtained by allowing time to pass so that the atmosphere changes, and laser speckle realizations are most readily obtained by small relative motion between the sensor and object [9]. When we collect several laser speckle realizations with a static atmosphere, we find that the ideal underlying incoherent image that would be obtained by averaging many realizations of the laser speckle intensity is distorted by the scintillation and wander of the flood-illumination beam; furthermore, phase aberrations introduced by the return-path turbulence degrade the impulse response function of the imaging system. Terminology at this point can be confusing because the term speckle is sometimes used both for the laser speckle that results from diffuse scattering and for the granular nature of the impulse response function caused by atmospheric phase distortions, as in the field of stellar speckle interferometry [10]. To avoid confusion in this treatment, we restrict our use of the term speckle to describe laser speckle that results from diffuse scattering from an object that has rough surfaces.

As stated above, the method used for aberration correction is based on maximizing image sharpness. The application of image sharpness to aberration correction of coherent, speckled images was originally formulated in Ref. 4. The formulation begins by modeling the object as the product of a transmission function *t*(*x,y*) and a random phase factor exp[*iϕ*(*x,y*)]. At this point we take the atmosphere to be static and thus the ensemble of realizations corresponds to realizations of the object's random phase function *ϕ*(*x,y*). Note that *t*(*x,y*) includes both the object's underlying reflectivity and any variation that might be caused by the scintillation and wander of the illuminating beam. We can thus write the image intensity as

$$I(m,n)={\left|\sum_{x,y}h(m,n,x,y)\,t(x,y)\,e^{i\phi(x,y)}\right|}^{2}, \tag{4}$$

where the transfer function *h*(*m,n,x,y*) includes properties of the digital holographic imaging system (including the telescope and reference beam optics) and phase aberrations caused by the return path through the atmosphere. If the object's random phase is delta correlated, we can then write the expected value of the image intensity as [4, 11]

$$\langle I(m,n)\rangle=\sum_{x,y}{\left|h(m,n,x,y)\right|}^{2}\,t^{2}(x,y). \tag{5}$$

Note from this equation that the expected value of the speckled image intensity is equal to the intensity image of the object's underlying transmission function. For notational purposes we define *V*(*m,n*)=⟨*I*(*m,n*)⟩. Following Refs. 4, 11 and 12 we can write the speckled image intensity as the product of the underlying intensity image *V*(*m,n*) and a speckle noise function, or

$$I(m,n)=V(m,n)\,I_{s}(m,n), \tag{6}$$

where the speckle noise *I*_{s}(*m,n*) is a spatially stationary, negative-exponential random variable with a mean value of unity. The relationship in Eq. (6) is referred to as the multiplicative speckle noise model because it shows that the intensity of the speckled image is given by the product of the underlying image intensity and a spatially uniform speckle noise function. Note that this model does have limitations, which are discussed in Ref. 12. From Eq. (6) it follows that the *r*^{th} moment of the speckled image intensity is given by

$$\langle I^{r}(m,n)\rangle=r!\,V^{r}(m,n). \tag{7}$$

Now let us consider image sharpness for the correction of phase errors. The idea of using image sharpness for the correction of atmospheric phase errors with incoherent imaging was originally discussed in Ref. 3. The concept is to compute an image sharpness metric that quantifies the image contrast or high-frequency detail. The value of the sharpness metric is maximized when system aberrations are minimized. One can then use maximization of the sharpness to drive an adaptive optics system and thus correct for atmospheric turbulence. A simpler version of this concept is commonly used in camera systems to automatically correct for focus errors. For correcting more general system errors, or as in Ref. 3 for atmospheric phase errors, one can use image sharpness maximization to determine the coefficients of aberration polynomials such as the Zernike polynomials [13].

The most common image sharpness metric is the sum of the squared image intensity over the image [3]. For the case of digitally sampled speckled imagery, we can write this image sharpness metric as

$$S=\sum_{m,n}I^{2}(m,n), \tag{8}$$

where the summation is performed over the image region of interest. If we now apply the multiplicative speckle noise model we find that

$$S=\sum_{m,n}V^{2}(m,n)\,I_{s}^{2}(m,n), \tag{9}$$

and taking the expected value of the image sharpness metric in Eq. (9), using ⟨*I*_{s}^{2}(*m,n*)⟩=2 for unit-mean negative-exponential speckle, we find that

$$\langle S\rangle=2\sum_{m,n}V^{2}(m,n). \tag{10}$$

This result shows that the expected value of the sharpness of the speckled image is proportional to the sharpness of the underlying incoherent image. We are thus able to focus a speckled image in the same manner that sharpness metrics are used to focus incoherent imaging systems. In practice it also follows that, because the speckle in the multiplicative noise model is spatially stationary, the summation used in computing image sharpness from a single speckle realization has the same mean value as if one were computing the sharpness over an ensemble of speckle realizations. In this case, the standard deviation of the computed sharpness metric becomes important because the sharpness estimate is formed from averaging a finite number of speckles. In Ref. 4 the variance of the sharpness computed from a single speckle realization is shown to be

$$\sigma_{S}^{2}=20\sum_{m,n}V^{4}(m,n) \tag{11}$$

when the sharpness metric is the intensity squared. This result corresponds to the case of delta-correlated speckle noise. In practice, the ability to discern changes in sharpness (the effectiveness of sharpness maximization) depends on having a large value of a detection statistic such as

$$d=\frac{\langle S\rangle-\langle S^{\prime}\rangle}{{\left(\sigma_{S}^{2}+\sigma_{S^{\prime}}^{2}\right)}^{1/2}}, \tag{12}$$

where the primed and unprimed quantities correspond to sharpness computations performed before and after the aberration correction is applied. Note that the value of this quantity depends on the underlying incoherent image and thus one can evaluate the potential performance of a sharpness metric without simulating speckled images. Large values of this detection statistic indicate that sharpness maximization is easily attainable. Further discussion of Eq. (12) and its implications is contained in Ref. 4.
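The multiplicative speckle model and the resulting sharpness statistics can be checked numerically. The sketch below uses a hypothetical underlying image *V*, draws unit-mean negative-exponential speckle, and verifies that the mean sharpness of the speckled image approaches twice the sharpness of the underlying image:

```python
import random

random.seed(0)

# Hypothetical underlying incoherent image V(m,n) on an 8x8 grid.
V = [[1.0 + 0.5 * ((m + n) % 3) for n in range(8)] for m in range(8)]

def speckled(V):
    """One realization of the multiplicative model I = V * I_s,
    with I_s unit-mean, negative-exponential speckle noise."""
    return [[v * random.expovariate(1.0) for v in row] for row in V]

def sharpness(I):
    """S = sum of I^2 over the image (the intensity-squared metric)."""
    return sum(x * x for row in I for x in row)

trials = 5000
mean_S = sum(sharpness(speckled(V)) for _ in range(trials)) / trials

# <S> should approach 2 * sum V^2, since <I_s^2> = 2 for unit-mean
# negative-exponential speckle.
predicted = 2.0 * sum(v * v for row in V for v in row)
```

The agreement improves as the number of realizations grows, while the per-realization spread reflects the variance discussed above.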

Image sharpness metrics have been applied to a variety of coherent imaging methods with great success. An important point to note, as shown in Ref. 5, is that the optimal sharpness metric for a given imaging scenario is not always the sum of the squared intensities. In fact, a sharpness metric of the form

$$S_{\beta}=\sum_{m,n}I^{\beta}(m,n) \tag{13}$$

can be used, where the best value of the exponent *β* depends on the nature of the image. For images with bright spots or glints, values of *β*>2 are generally more suitable, whereas for images without bright points a value of 1.0<*β*<2.0 is generally more suitable [5].

With the above framework for correcting image aberrations using a sharpness metric, we have conducted an experimental investigation of the performance of this method. Specific experiments include imaging with multiple realizations of atmospheric turbulence and multiple realizations of laser speckle obtained by slight object motion. We have also tested the method under conditions where the atmosphere is anisoplanatic.

The general approach for correcting image aberrations is illustrated in Fig. 4 for a single image realization.

The first step is to begin with the complex-valued pupil data corresponding to an image realization. These data include fixed aberrations from the digital holographic imaging system as well as phase aberrations imparted by the atmospheric turbulence. One then initializes a set of polynomial coefficients; for the work reported here we have used the first 48 Zernike polynomials. An initial computation of image sharpness is made. For these results we typically use a sharpness coefficient of *β*=1.2. An optimization algorithm, such as a quasi-Newton line search, is then used to determine the set of polynomial coefficients that maximizes the sharpness. Finally the output image and phase aberration masks are generated.
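The correction loop just described can be sketched in a simplified one-dimensional form. In this sketch a single quadratic (defocus-like) coefficient stands in for the 48 Zernike modes, and a coarse grid search stands in for the quasi-Newton line search; all field and coefficient values are illustrative:

```python
import cmath

def dft(x):
    """Unnormalized forward DFT (stand-in for the FFT used in practice)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * u * n / N) for n in range(N))
            for u in range(N)]

N = 64
a_true = 40.0          # hypothetical defocus-like aberration coefficient (radians)

# Aberration-free pupil: uniform field, whose image is a single bright point.
pupil = [1.0 + 0j] * N
aberrated = [pupil[n] * cmath.exp(1j * a_true * (n / N - 0.5) ** 2)
             for n in range(N)]

def sharpness(field, a, beta=1.2):
    """S_beta = sum |image|^(2*beta) after a trial quadratic phase correction."""
    corrected = [field[n] * cmath.exp(-1j * a * (n / N - 0.5) ** 2)
                 for n in range(N)]
    return sum(abs(z) ** (2 * beta) for z in dft(corrected))

# Coarse grid search over the trial coefficient; in the paper a quasi-Newton
# line search over the Zernike coefficients plays this role.
best_a = max(range(0, 81), key=lambda a: sharpness(aberrated, a))
```

For *β*>1 the sharpness is maximized when the image energy is most concentrated, so the search recovers the applied aberration coefficient.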

One important consideration in setting up the optimization algorithm is over-sharpening. With over-sharpening, the final image solution is driven to put too much energy into point-like image features. In the limit, the field with maximum sharpness is one with constant phase, which concentrates all of the image energy into a small region. This could result if the algorithm were allowed to drive to a phase correction (image (D) in Fig. 4) that is the conjugate of the phase of the input data (image (A) in Fig. 4), which could occur if we allowed the phase solution to have very fine spatial structure. In our case we limit the spatial structure by only considering phase correction polynomials up to a certain order, which for many of our experiments is the 48^{th} Zernike polynomial. The actual 48^{th} polynomial is shown in Fig. 4(B) and it is apparent that the spatial features of this polynomial are not fine enough to negate the phase features of the input image, which are on the scale of the speckle size in Fig. 4(A).

Another aspect of this process that can drive over-sharpening is using a sharpness coefficient *β* that is too large. For this reason we often use values of *β* in the range 1<*β*<2. In practice one should evaluate the value of *β* with respect to the specific imaging task, as discussed in Ref. 5, and if the processing appears to generate artificial point-like features, the value of *β*, or the number of Zernike polynomials, should be reduced.

When conducting the sharpness maximization it is often useful to employ a normalized sharpness metric, which removes the dependence of the metric on the total image energy. One such normalized metric is of the form

$$S_{N}=\sum_{m,n}{\left[\frac{I(m,n)}{\sum_{m,n}I(m,n)}\right]}^{\beta}.$$
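The point of the normalization can be shown in a few lines. The sketch below uses one plausible normalization, dividing each intensity by the total image energy (an assumption; the paper's exact normalized form is not reproduced), and checks that the resulting metric is invariant to an overall intensity scale:

```python
def normalized_sharpness(I, beta=1.2):
    """Sharpness computed on intensities divided by the total image energy,
    making the metric invariant to an overall intensity scale."""
    total = sum(x for row in I for x in row)
    return sum((x / total) ** beta for row in I for x in row)

# Hypothetical image and a copy scaled by a constant gain factor.
I = [[0.2, 1.0, 3.5], [2.0, 0.1, 0.7]]
I_scaled = [[5.0 * x for x in row] for row in I]
```

Scaling the image by a constant leaves the normalized metric unchanged, so the optimizer cannot raise sharpness simply by changing the overall gain.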

## 4. Experiment description

As stated above, experiments were performed at the Table Mountain Field Site north of Boulder, CO [6]. The outdoor path chosen for the experiments covered a range of *R*=100 meters. The beam height was 1.2 meters along the path and the terrain was uniform grassy desert. Based on the uniformity of the path we operated under the assumption that the level of refractive turbulence, quantified by *C*_{n}^{2}, was constant along the path. In this case the values of Fried's parameter, *r*_{0}, and the isoplanatic angle, *α*_{0}, are defined by algebraic formulas rather than path integrals and are given by

$$r_{0}={\left(0.423\,k^{2}C_{n}^{2}R\right)}^{-3/5}$$

and

$$\alpha_{0}={\left(1.09\,k^{2}C_{n}^{2}R^{8/3}\right)}^{-3/5},$$

respectively [7], where *k*=2*π*/*λ* is the optical wavenumber. The values of these parameters for different configurations provide guidance for experimental features such as the number of Zernike polynomials required and the object patch size over which a correction is valid.
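For a rough numerical feel, the standard constant-*C*_{n}^{2}, plane-wave forms of these expressions can be evaluated for an illustrative turbulence level (the *C*_{n}^{2} value below is an assumption for illustration, and the numerical geometry factors follow textbook conventions [7], which may differ slightly from the exact forms used here):

```python
import math

lam, R = 532e-9, 100.0     # wavelength (m) and path length (m) from the experiment
D = 48.3e-3                # receiver aperture diameter (m)
Cn2 = 5e-13                # illustrative turbulence strength, m^(-2/3) (an assumption)
k = 2.0 * math.pi / lam    # optical wavenumber

# Standard constant-Cn2, plane-wave expressions for Fried's parameter and
# the isoplanatic angle.
r0 = (0.423 * k ** 2 * Cn2 * R) ** (-3.0 / 5.0)
alpha0 = (1.09 * k ** 2 * Cn2 * R ** (8.0 / 3.0)) ** (-3.0 / 5.0)

patch = alpha0 * R         # isoplanatic patch size projected onto the object (m)
```

For this illustrative turbulence level the expressions give *r*_{0} of roughly 8 mm (so *D/r*_{0} near 6) and an isoplanatic patch of a few millimeters at the object, the regime in which multiple patches span the target.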

A schematic for the hardware is shown in Fig. 1. The system is slightly bistatic, with the laser launched a few centimeters from the receiver aperture. The laser used in the experiments was a narrow-linewidth laser operating at *λ*=532 nm. The coherence length was significantly longer than the 200 m roundtrip path length and thus the return light and reference beam interfered coherently. The target was flood-illuminated with a laser beam having a power level of roughly 200 mW.

The entrance pupil of the receiver was circular with a diameter of *D*=48.3 mm. A simple telescope consisting of a set of doublets was used to reimage the entrance pupil onto a two-dimensional CCD detector array. A small portion of the laser transmitter was coupled into a fiber and was used as the reference beam, or local oscillator. A beamsplitter was used to mix the target return and the local oscillator on the 2D detector array creating the digital holographic interference pattern shown, for example, in Fig. 2. Image formation was performed using a 1024×1024 pixel region of the detector array and the pixel spacing of the detector array was 6.7 microns. Integration times for the exposures of the detector array were typically 0.5 msec. This value was chosen to give sufficient signal level, while also corresponding to a quasi-static realization of the atmosphere. The CCD’s quantum efficiency was roughly 0.4.

The object used in the experiments was a 1951 USAF resolution test pattern. For this work we concentrated on groups -2 and -1. The spatial frequencies for group -1, elements 6 and 5, are 0.891 and 0.793 lp/mm, respectively. With the system's diffraction-limited resolution given by *D*/(*λR*)=0.907 lp/mm, it follows that the MTF of our system is near zero for group -1, element 6 and increases somewhat for element 5.
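These numbers follow from the standard 1951 USAF target progression, in which the spatial frequency of group *g*, element *e* is 2^(*g*+(*e*-1)/6) lp/mm, and from the diffraction-limited cutoff *D*/(*λR*):

```python
def usaf_freq(group, element):
    """Spatial frequency (line pairs per mm) of a 1951 USAF target element."""
    return 2.0 ** (group + (element - 1) / 6.0)

D, lam, R = 48.3e-3, 532e-9, 100.0   # aperture (m), wavelength (m), range (m)
cutoff = D / (lam * R) / 1000.0      # diffraction-limited cutoff in lp/mm

f6 = usaf_freq(-1, 6)                # group -1, element 6: about 0.891 lp/mm
f5 = usaf_freq(-1, 5)                # group -1, element 5: about 0.794 lp/mm
```

Element 6 sits essentially at the cutoff (about 0.91 lp/mm), which is why the MTF there is near zero.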

In order to obtain different realizations of the laser speckle patterns we attached a motor to the object and tilted the object between realizations so that the pupil intercepts an independent speckle pattern. In a typical data collection we would acquire 16 independent speckle realizations. The time interval between recording the realizations was usually greater than 1 second and over this time period the turbulence-induced phase functions were decorrelated. A typical data set thus consists of 16 image realizations for which all realizations experience the same phase errors caused by imperfections in the instrumentation such as telescope aberrations or collimation errors in the reference beam. Each realization corresponds to a different realization of the atmospheric phase and a different realization of the underlying laser speckle pattern.

A scintillometer was used to measure the properties of the atmosphere for the data sets. The scintillometer output consists of time-stamped *C*_{n}^{2} and *r*_{0} values. The scintillometer determines the level of turbulence over 60-second time intervals and reports values for each such interval. Figures 5 and 6 contain examples of *C*_{n}^{2} and *r*_{0} values for the Table Mountain test site on March 6, 2008. Note that the turbulence levels were relatively strong during the early afternoon and improved as sunset approached. For the results discussed below we selected three data sets with atmospheric parameters given in Table 1. The corresponding values of *D/r*_{0} were 0.4, 1.6 and 5.9.

## 5. Results

Figure 7 shows the image results from data collected on March 6^{th}, 2008. The level of atmospheric turbulence increases in going from top to bottom.

Sixteen realizations were processed to create each of the images shown. Each realization corresponds to a different realization of the atmosphere, and the object was moved slightly between recordings so that each also corresponds to a different realization of the laser speckle. These images have common static aberrations from the imaging system. The first step in the processing was to estimate the best set of Zernike coefficients that corrects the 16 realizations simultaneously; in this manner we corrected for the common static aberrations. Figures 7(A), (B), and (C) contain these results; these images show the sum of the intensities of the 16 realizations with the realizations spatially registered. It is evident from this figure that the atmospheric aberrations degrade the images substantially for the larger levels of turbulence.

The second step of processing was to determine and apply the set of Zernike coefficients that maximizes the sharpness of each realization individually. These corrections correspond to the aberrations imparted by atmospheric turbulence and vary for each image realization due to changes in the atmospheric turbulence. Figures 7(D), (E) and (F) show the result of correcting both the static aberrations and the dynamic aberrations. Again, the sum of the intensities of the 16 registered realizations is shown. Total processing time for generating each of the images (D), (E) and (F) was approximately 30 seconds using non-optimized algorithms.

The images in separate rows of Fig. 7 were acquired during periods with different levels of atmospheric turbulence, with the turbulence increasing from top to bottom. Table 1 gives the atmospheric parameters for each image. From the slight difference between Fig. 7(A) and (D), it is clear that a single wavefront correction for the static aberrations is sufficient to correct the entire image when *D/r*_{0}<1, which is the case in Fig. 7(A).

As the value of *D/r*_{0} increases, a single fixed correction no longer compensates for the turbulence, as shown in Fig. 7(C). However, when we correct each image realization individually, the quality of the image improves substantially, with Figs. 7(E) and (F) showing marked improvement over the single-screen corrections. Image details approaching the diffraction limit, for example group -1 element 4 in Fig. 7(E), are clearly visible.

As we move to the highest level of turbulence in Fig. 7(F), we see that the image is good, but further correction is possible. For this case *D/r*_{0} is approximately 5.9. Using the equations derived in Ref. 13 we find that the residual rms wavefront error after removing 48 Zernike modes is approximately 0.11 waves. Thus improvement could be obtained by using more Zernike polynomials; however, anisoplanatism (aberration variation as a function of field angle) also degrades the imaging performance in this case.

A nice feature of digital processing using sharpness is that small regions of an image can readily be isolated and corrected by maximizing sharpness. Figure 8 shows the results of experiments in which different regions of the image in Fig. 7(C) were corrected separately. In this manner, we were able to correct for anisoplanatism. The dashed line indicates the area over which the sharpness metric was calculated during the aberration estimation process. In Fig. 8(A), the entire bar target was used to estimate the Zernike coefficients. Figure 8(B) was corrected using only group -2 element 1. It is obvious that while correcting the bars of group -2 element 1, the rest of the image suffers degradation. Decreasing the size of the sharpened area even further, as shown in Fig. 8(C), results in higher contrast for the horizontal bars, but even the nearby vertical bars are degraded. This is a clear indication that there are significant field-dependent effects (anisoplanatism) caused by the atmosphere. In this example the isoplanatic patch on the target is 4.6 mm, which is consistent with the degradation with field angle seen in Fig. 8(C): the horizontal set of bars is mostly corrected while the vertical bars approximately 5 mm to the left are severely degraded. MTF values, (*I*_{max}-*I*_{min})/(*I*_{max}+*I*_{min}), for the bars in the lower right corner are 0.26, 0.43, and 0.61 for images (A), (B), and (C), respectively.

## 6. Summary

We have demonstrated digital correction of aberrations caused by atmospheric turbulence for a variety of atmospheric conditions. To accomplish this we employed digital holographic detection with a local reference beam and recorded complex-valued images of a test chart at a distance of 100 meters. The conditions ranged from *D/r*_{0}=0.4 to *D/r*_{0}=5.9, with isoplanatic patch sizes ranging from covering the entire object to approximately seven isoplanatic patches across the object. In all conditions the aberrations due to atmospheric turbulence were corrected digitally by maximizing an image sharpness criterion. For the case where the isoplanatic patch is the same size as the target, the result is essentially diffraction limited. For the severe turbulence case a reasonable image is recovered but it is not diffraction limited. We also demonstrated that by isolating a small section of an image corresponding to an isoplanatic patch we are able to obtain good atmospheric correction.

## References and links

**1. **J. W. Goodman, D. W. Jackson, M. Lehmann, and J. Knotts, “Experiments in Long-Distance Holographic Imagery,” Appl. Opt. **8**, 1581–1586 (1969).
[CrossRef] [PubMed]

**2. **J. W. Goodman and R.W. Lawrence, “Digital image formation from electronically detected holograms,” Appl. Phys. Lett. **11**, 77–79 (1967).
[CrossRef]

**3. **R. A. Muller and A. Buffington, “Real-time correction of atmospherically degraded telescope images through image sharpening,” J. Opt. Soc. Am. **64**, 1200–1210 (1974).
[CrossRef]

**4. **R. G. Paxman and J. C. Marron, “Aberration Correction of Speckled Imagery With an Image Sharpness Criterion,” In Proc. of the SPIE Conference on Statistical Optics, 976, San Diego, CA, August (1988).

**5. **J. R. Fienup and J. J. Miller, “Aberration Correction by Maximizing Generalized Image Sharpness Metrics,” J. Opt. Soc. Am. A **20**, 609–620 (2003).
[CrossRef]

**6. **The U.S. Department of Commerce Table Mountain Field Site and Radio Quiet Zone. http://www.its.bldrdoc.gov/table_mountain/.

**7. **M. C. Roggemann and B. M. Welsh, *Imaging Through Turbulence* (CRC Press, New York, N.Y., 1996).

**8. **R. R. Beland, “Propagation through atmospheric optical turbulence,” in *The Infrared and Electro-Optical Handbook*, Vol. 2: *Atmospheric Propagation of Radiation*,
F. G. Smith, ed. (SPIE Optical Engineering Press: Bellingham, Wash., 1993), pp. 157–232.

**9. **J. Marron and G. M. Morris, “Image-plane speckle from rotating, rough objects,” J. Opt. Soc. Am. A **2**, 1395–1402 (1985).
[CrossRef]

**10. **J. C. Dainty, “Stellar speckle interferometry,” in *Laser Speckle and Related Phenomena*, 2nd ed.,
J. C. Dainty, ed. Springer-Verlag, Berlin, (1983), pp. 256–320.

**11. **G. April and H. H. Arsenault, “Nonstationary image-plane speckle statistics,” J. Opt. Soc. Am. A **1**, 738–741 (1984).
[CrossRef]

**12. **M. Tur, K. C. Chin, and J. W. Goodman, “When is speckle noise multiplicative?,” Appl. Opt. **21**, 1157–1159 (1982).
[CrossRef] [PubMed]

**13. **R. J. Noll, “Zernike polynomials and atmospheric turbulence,” J. Opt. Soc. Am. **66**, 207–211 (1976).
[CrossRef]