Abstract

Sharpness metric maximization is a method for reconstructing coherent images that have been aberrated by distributed-volume turbulence. This method places one or more corrective phase screens in the digital-propagation path that serve to increase the overall sharpness of the image. Accordingly, this study applies sharpness metric maximization to 3D irradiances obtained via frequency-diverse digital holography. We vary the number of corrective phase screens in the propagation path and sharpen images of a realistic, extended object via multi-plane sharpness metric maximization. The results indicate that image reconstruction is possible when using fewer corrective screens than aberrating screens, but that image quality increases with a greater number of corrective screens.

© 2021 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. INTRODUCTION

Distributed-volume atmospheric turbulence poses unique problems when imaging distant objects. These problems arise from the fact that different points within the field of view experience generally different aberrations after propagation to a pupil plane. This phenomenon, known as anisoplanatism [1], results in different point-spread functions (PSFs) in different areas of the image (i.e., the imaging system is linear, shift variant). In contrast, isoplanatism occurs when all of the aberrations occur close to or within the pupil plane of the imaging system. In this case, each point on the object experiences the same aberrations (i.e., the imaging system is linear, shift invariant). In general, the degree of anisoplanatism can be assessed via the size of the area of interest in the object plane relative to the isoplanatic patch, where more anisoplanatic scenarios contain a greater number of isoplanatic patches across the area of interest.

When correcting coherent imagery in the presence of turbulence-induced aberrations, one can reconstruct the image by introducing a phase correction in one or more planes within the propagation volume. One can achieve this via hardware, for example, with a deformable mirror that imparts a phase correction, or by applying phase corrections in one or more planes within a model of the propagation volume and digitally propagating back to the object. The latter method requires an estimate of the complex-valued field for digital propagation to take place. In either case, however, anisoplanatic aberrations require the phase to be corrected in multiple planes if an image is to be reconstructed over multiple isoplanatic patches. In hardware, one avenue is multiconjugate adaptive optics [2,3]. This paper will instead focus on digital reconstruction in the form of multi-plane sharpness metric maximization (SMM).

In the past, SMM, or image sharpening, has been used with coherent imagery to correct for both isoplanatic and anisoplanatic aberrations. Thurman and Fienup developed image sharpening for isoplanatic phase errors and compared it to digital shearing laser interferometry [4]. They later looked at anisoplanatic phase errors with a single aberrating and correcting screen [5]. Tippie and Fienup extended that work to include multiple aberrating screens and multiple correcting screens [6,7]. Those efforts both made use of digital holography to estimate the field in the pupil plane before reconstructing the image via SMM. Farriss et al. used SMM with frequency-diverse digital holography (also known as 3D holographic laser radar [8,9]) to correct for isoplanatic aberrations [10], and the approach in this study builds on that work by increasing the number of aberrating screens along with the degree of anisoplanatism.

In this study, we also use digital holography to obtain field estimates for the digital propagations that take place in SMM. In general, there are several digital holographic recording geometries [11–13], and we opt to use the off-axis image-plane recording geometry (IPRG) for its straightforwardness when windowing pupil estimates in the Fourier plane [14,15]. Here, we use frequency-diverse digital holography to obtain a stack of complex-valued fields in the pupil, from which one can generate a 2D irradiance image for each frequency as well as a 3D irradiance image of the distant object. One can generate range images of the object from the 3D irradiance by taking the range to be the pixel location of maximum irradiance in the $z$ direction for every transverse $x {-} y$ pixel. Figure 1 shows an example where (a) and (b) show the truth reflectance and truth range map for a simulated truck object, (c) shows a 2D irradiance image of the truck averaged over a narrow bandwidth of illuminating frequencies (described in Sections 2 and 3), and (d) shows a range image of the truck obtained from the 3D irradiance. In the presence of turbulence, we feed the complex-valued pupil data to the SMM algorithm to obtain reconstructed versions of Figs. 1(c) and 1(d). In Fig. 1(d), we present the range image modulo the range ambiguity interval (which we describe in Section 2), hence the range wrap that appears below (in front of) the truck.


Fig. 1. (a) Truth reflectance map, (b) truth range map, (c) 2D frequency-averaged irradiance image taken through vacuum, and (d) range image taken through vacuum.


In practice, we use SMM to increase the sharpness of the 3D irradiance image by optimizing the phase values in multiple planes within the digital-propagation path. There are two shortcomings when performing multi-plane SMM. First, large quadratic phase factors can form in the corrective phase estimates when optimizing over two or more screens (neither of which is coincident with the object plane). These quadratic phases act together as a telescope that demagnifies the resultant image. Solutions with telescoping occur because image sharpness generally increases when the irradiance is more concentrated in a smaller area/volume. These solutions are undesirable, but there are methods to combat them, such as employing a Fourier domain constraint [6,7]. This study uses regularization of the corrective phase screens to mitigate against telescoping, and, in particular, limits the magnitude of the quadratic phase component that forms in the corrective phase screens. Second, when correcting with multiple corrective phase screens, multiple phase solutions can yield images of similar quality. This issue of degenerate solutions is an outstanding area of research that goes beyond the scope of this paper.

In simulation and experiment, it is often useful and efficient to simulate turbulence via one or more 2D phase screens that are placed throughout the propagation path. Given a finite number of screens, the question of where to place the screens throughout the propagation path to best approximate the effects of distributed-volume turbulence becomes an issue. The same issue arises when placing corrective phase screens throughout the propagation path in multi-plane SMM, as well as other approaches [16,17]. In this study, we refer to a study by Paxman et al., which develops a prescription for finding the optimal phase plane configuration for a given turbulence profile [18].

SMM algorithms can struggle with oversharpening. In particular, when treating phases on a point-by-point basis, the SMM algorithms can optimize the corrective phase screens in such a way that most of the light in the image plane concentrates to just a few pixels. From a raw sharpness standpoint, this outcome yields an extremely sharp image. To combat this effect, we recommend that one use some form of bootstrapping where the corrective phase screens are constrained to lower order components for the first few rounds of optimization and gradually allowed to include higher order information. Previous researchers have used Zernike polynomials as a bootstrapping basis [4] as well as the method of sieves [7] (though both of these studies used point-by-point optimization for the final bootstrapping cycle). This paper will follow a more recent approach by Thurman in which one downsamples corrective phase screens prior to optimization [19]. This choice provides a sparser basis for the initial optimization and also constrains the phase screens to include only lower order phase information at first. As the algorithm progresses, we reduce the downsampling factor and allow the corrective phase screens to develop higher order terms.

Given the phase plane configurations provided by Paxman’s method, this study examines multi-plane SMM when using fewer corrective phase screens than simulated aberrating screens and quantifies performance via two metrics. Section 2 describes the simulated imaging system and the SMM algorithm that we used to reconstruct images in the presence of simulated distributed-volume turbulence. Section 3 sets up the simulated trade space as well as the metrics used to quantify performance. Section 4 shows the simulation results and discusses the implications, and Section 5 concludes the paper.

2. ALGORITHM DESCRIPTION

In this section, we describe the overall 3D imaging setup as well as the SMM algorithm that reconstructs the images. Starting with the imaging system, we began our simulation with the truth reflectance and truth range of a scaled truck [cf. Figs. 1(a) and 1(b)] that is approximately 0.5 m in length. We illuminated the truck with a modified point source in the pupil plane that generated a tapered square illumination pattern (via separable Tukey windows in the $x$ and $y$ directions) for each frequency. We sequentially illuminated the truck with 32 equally spaced, discrete frequencies over a 4 GHz bandwidth with a central frequency ${\nu _0}$ of 300 THz (and corresponding wavelength ${\lambda _0} = 1\,\,\unicode{x00B5}{\rm m}$). We obtained the initial field in the object plane via

$$\begin{split}G({x_O},{y_O};\nu) &= T({x_O},{y_O}) \circ U({x_O},{y_O};\nu)\\&\quad \circ [{\cal N}(0,0.5) + i{\cal N}(0,0.5)] \\&\quad\circ \exp [i(4\pi \nu /c)Z({x_O},{y_O})],\end{split}$$
where ${x_O}$ and ${y_O}$ are the transverse coordinates in the object plane, $G({x_O},{y_O};\nu)$ is the field scattering from the object plane for a given frequency $\nu$, $T({x_O},{y_O})$ is the underlying object amplitude reflectance (which is independent of frequency), $U({x_O},{y_O};\nu)$ is the illuminating field at the object, ${\cal N}(\mu ,{\sigma ^2})$ is a Gaussian-distributed random variable with mean $\mu$ and variance ${\sigma ^2}$, $c$ is the speed of light in vacuum, $Z({x_O},{y_O})$ is the truth range of the object, and $\circ$ denotes a Hadamard product. The circular complex Gaussian random numbers in Eq. (1) ensure that rough-surface scattering is properly simulated, and the exponential term applies the proper phase from the object depth for each frequency, $\nu$. Equation (1) holds for optically rough objects from which light reflects diffusely. However, real-world objects often exhibit regions of specular reflection or “glints.” In practice, we do not expect the presence of glints to adversely affect SMM, as glints have been shown to aid in the convergence of sharpening algorithms [5,20]. The dependence of SMM on the underlying object reflectance presents a rich trade space that is beyond the scope of this paper.
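As a concrete illustration, the following Python sketch evaluates Eq. (1) on a small grid. The grid size, reflectance, and range maps here are arbitrary stand-ins (not the truck object of Fig. 1), and the same speckle draw is reused for every frequency, matching the frequency-independent speckle assumption discussed in the following paragraph:

```python
import numpy as np

c = 299_792_458.0  # speed of light in vacuum, m/s
rng = np.random.default_rng(0)

def object_field(T, Z, U_illum, nu, speckle):
    """Eq. (1): field scattering from the object plane at frequency nu.

    T       -- amplitude reflectance (frequency independent)
    Z       -- truth range map of the object, m
    U_illum -- illuminating field at the object
    speckle -- circular complex Gaussian draw, held fixed across frequencies
    """
    return T * U_illum * speckle * np.exp(1j * (4 * np.pi * nu / c) * Z)

M = 64
T = rng.uniform(0.0, 1.0, (M, M))            # stand-in reflectance
Z = rng.uniform(0.0, 1.2, (M, M))            # stand-in range map, m
speckle = (rng.normal(0.0, np.sqrt(0.5), (M, M))
           + 1j * rng.normal(0.0, np.sqrt(0.5), (M, M)))
G = object_field(T, Z, np.ones((M, M)), 300e12, speckle)
```

Note that the depth-dependent exponential has unit modulus, so the field amplitude is set entirely by the reflectance, illumination, and speckle factors.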

Note that Eq. (1) does not provide speckle diversity between different frequencies, as the circular complex Gaussian random numbers are frequency independent. This assumption is valid for the extremely narrow bandwidths relative to the mean frequency that we employed in this study, as changes in phase from rough surface scattering due to changes in frequency were negligible (on the order of tens of milliradians across the full bandwidth). The reader should note that 3D imaging with frequency-diverse digital holography [in the context of Eq. (1)] requires a consistent speckle realization across all illumination frequencies, which poses a problem for rotating/vibrating objects. This paper does not address the issues associated with rotating/vibrating objects in the context of 3D imaging, but researchers have developed modified 3D imaging modalities that are robust to target motion [21].

Once we obtained the simulated fields at the object (one for each frequency), we propagated through multiple Kolmogorov phase screens to the pupil plane via the split-step beam propagation method [22]. In these simulations, the total propagation distance was $z = 2\;{\rm km}$, and we maintained an $M \times M$ grid resolution (where $M = 512$) for all propagation arrays. Additionally, all propagation grids had the same pixel pitch, $\delta = 1.98\;{\rm mm} $. We also satisfied Fresnel scaling, which is otherwise known as critical sampling of the angular spectrum transfer function, such that $z = M{\delta ^2}/{\lambda _0}$ [23]. We generated phase screens using the approach by Lane et al., which uses the Kolmogorov Fourier spectrum with added subharmonics [24]. In the pupil plane, we applied a $D = 50\;{\rm cm} $ diameter circular aperture and thin-lens function to focus the incoming light to a $508 \times 508$ focal-plane array (FPA) that was conjugate to the object plane. Here, the image plane sampling quotient was two, meaning that there were two pixels per diffraction-limited transverse resolution element. For each illuminating frequency, $\nu$, we interfered each field with a tilted reference beam of the same frequency and measured the resultant irradiance of the hologram (with no added measurement noise). In accordance with Ref. [11], we performed a fast Fourier transform (FFT) of each hologram and windowed out the appropriate complex pupil field estimate. This resulted in a stack of pupil fields for the entire illumination bandwidth (one pupil field for each frequency within $\Delta \nu$).
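The sideband-windowing step of off-axis digital holography can be sketched as follows. This is a minimal, noise-free demonstration on a synthetic band-limited field; the grid size, carrier frequency, and bandwidth are illustrative choices rather than the paper's parameters. Because the sideband, autocorrelation, and conjugate terms occupy disjoint regions of the discrete spectrum here, the recovery is exact:

```python
import numpy as np

M = 64
rng = np.random.default_rng(0)

# Synthetic band-limited complex pupil field (radius-6 spectral support)
f_bins = np.fft.fftfreq(M) * M                 # integer frequency bins
FX, FY = np.meshgrid(f_bins, f_bins, indexing="ij")
lowpass = (FX**2 + FY**2) <= 6**2
noise = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
U = np.fft.ifft2(lowpass * np.fft.fft2(noise))

# Tilted plane-wave reference with an off-axis carrier at (20, 20) bins
n = np.arange(M)
X, Y = np.meshgrid(n, n, indexing="ij")
R = np.exp(1j * 2 * np.pi * (20 / M) * (X + Y))

H = np.abs(U + R) ** 2                         # recorded hologram irradiance

# FFT the hologram; the U R* sideband sits at bin (-20, -20).  Roll it to the
# origin and window with the same lowpass mask to recover the complex field.
U_est = np.fft.ifft2(lowpass * np.roll(np.fft.fft2(H), (20, 20), axis=(0, 1)))
```

In practice the carrier tilt and window size are chosen so that the sideband clears the autocorrelation term, exactly as in the IPRG windowing described above.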


Fig. 2. Overview of SMM.


There are two range-specific parameters that are of interest, namely, the range resolution and the range ambiguity interval. The range resolution, ${\delta _z}$, provides the smallest discernible element that one can perceive in the $z$ direction and is given by

$${\delta _z} = \frac{c}{{2\Delta \nu}},$$
where $\Delta \nu$ is the illuminating bandwidth. The range ambiguity interval, $\Delta z$, is given by
$$\Delta z = \frac{c}{{2{\delta _\nu}}},$$
where ${\delta _\nu}$ is the spacing between adjacent illumination frequencies. In practice, the range ambiguity interval is the maximum range in the $z$ direction before which wrapping occurs. This wrapping means that all range images display range modulo the range ambiguity interval, as evidenced by the horizontal range wrap in Fig. 1(d). In this study, ${\delta _z} = 3.75\;{\rm cm} $ and $\Delta z = 1.20\;{\rm m} $. Additionally, the diffraction-limited transverse resolution, determined by the aperture diameter, $D$, was $1.22{\lambda _0}z/D = 4.88\;{\rm mm} $.
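These quantities follow directly from the study's parameters; a quick numerical check (taking the frequency spacing as the bandwidth divided by the 32 frequencies, which reproduces the quoted 1.20 m ambiguity interval):

```python
c = 299_792_458.0        # speed of light, m/s

B = 4e9                  # total illumination bandwidth, Hz
delta_nu = B / 32        # adjacent-frequency spacing (125 MHz)

delta_z = c / (2 * B)         # range resolution, Eq. (2): ~3.75 cm
Delta_z = c / (2 * delta_nu)  # range ambiguity interval, Eq. (3): ~1.20 m

# Diffraction-limited transverse resolution, 1.22 * lambda0 * z / D
res_t = 1.22 * 1e-6 * 2e3 / 0.5   # ~4.88 mm
```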

Once we obtained our stack of pupil field estimates, we fed the pupil information into the SMM algorithm. Following along with Fig. 2, we first propagated our pupil fields back to the object plane through our corrective phase screens. Again, we used the split-step beam propagation method, which makes use of the angular spectrum method for vacuum propagations. On the first iteration, the corrective phase screens were initialized to zero, so the resultant object fields were aberrated. In general, the number and placement of the corrective phase screens differed from the original aberrating screens. In the object plane, we took a Fourier transform along the frequency dimension followed by a modulus squared operation to obtain the 3D irradiance. From there, we calculated the sharpness of the 3D irradiance and the gradients of the sharpness metric with respect to the parameters of each corrective phase screen. Finally, we fed this information into our optimizer of choice, described later, which tweaked the values of the corrective phase screens to increase the image sharpness.

With the overview for SMM laid out, we now discuss the internal mechanisms behind the algorithm. As previously mentioned, this study used a downsampling bootstrapping method, described by Thurman [19], which downsampled corrective phase screens prior to optimization. After we optimized the downsampled phase screens, we used Gaussian interpolation kernels to upsample the optimized screens back to their original resolution for propagation. The Gaussian kernel interpolation served as a de facto form of regularization that smoothed the phase estimates prior to propagation. This step provided more physical phase estimates and prevented the formation of branch points in the phase function, which occasionally developed in Ref. [19], where nearest-neighbor interpolation was used. To condense the mathematical equations to come, we denote a downsampling operation on the $n$th 2D input corrective phase screen ${\hat \phi ^{(n)}}({x_n},{y_n})$ as

$${\boldsymbol {\hat \phi}}_{\rm{down}}^{(n)} = {A_d}{{\boldsymbol {\hat \phi}}^{(n)}},$$
where ${{\boldsymbol {\hat \phi}}^{(n)}}$ is a vector version of ${\hat \phi ^{(n)}}({x_n},{y_n})$, ${\boldsymbol {\hat \phi}}_{\rm{down}}^{(n)}$ is a vector version of $\hat \phi _{\rm{down}}^{(n)}({x_n},{y_n})$ (which is the 2D downsampled corrective phase screen), and ${A_d}$ is a matrix that downsamples the vector ${{\boldsymbol {\hat \phi}}^{(n)}}$ via decimation for which the prescribed physical sample spacing in both $x$ and $y$ directions is $d$. Explicitly stated, ${A_d}$ is a sparse matrix, where all rows contain a one at locations where we desire to sample ${{\boldsymbol {\hat \phi}}^{(n)}}$ and zeros for every other entry. ${A_d}$ has $M_{\rm{opt}}^2$ columns and ${\kappa ^2}$ rows, where ${M_{\rm{opt}}}$ is the number of pixels across an ${M_{\rm{opt}}} \times {M_{\rm{opt}}}$ optimization region over which we optimize the phase (with pixel pitch $\delta$), and $\kappa$ is given by
$$\kappa = {\rm ceil}\left[{\frac{{{M_{\rm{opt}}} - 1}}{{{\rm round}\left({d/\delta} \right)}}} \right] + 1.$$

In Eq. (4), the downsampling process results in a ${\kappa ^2}$-length vector, which we reshape into a $\kappa \times \kappa$ downsampled corrective phase screen over the optimization region (with pixel pitch ${M_{\rm{opt}}}\delta /\kappa$). Upon further examination of Eq. (5), one can see that if the prescribed sample spacing, $d$, exceeds the linear dimension of the optimization region [i.e., if ${\rm round}(d/\delta) \ge {M_{\rm{opt}}} - 1$], then $\kappa = 2$ and we downsample the corrective phase screen to a $2 \times 2$ array. Additionally, for values of $d \lt \delta$, we always set $\kappa = {M_{\rm{opt}}}$, which corresponds to no downsampling.
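Equation (5) and its edge cases can be captured in a few lines (a direct transcription, working in the paper's units):

```python
import math

def kappa(M_opt, d, delta):
    """Samples per axis of the downsampled screen, Eq. (5).

    d     -- prescribed physical sample spacing
    delta -- propagation-grid pixel pitch
    For d < delta we return M_opt (no downsampling), as in the text.
    """
    if d < delta:
        return M_opt
    return math.ceil((M_opt - 1) / round(d / delta)) + 1

delta = 1.98e-3   # pixel pitch from Section 2
```

Note that the coarsest case needs no special handling: whenever ${\rm round}(d/\delta) \ge M_{\rm opt} - 1$, the ceiling term evaluates to one and the formula returns $\kappa = 2$ on its own.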

Similarly, for upsampling, we write

$${\boldsymbol {\hat \phi}}_{\rm{comb}}^{(n)} = A_d^T{\boldsymbol {\hat \phi}}_{{\rm down}}^{(n)},$$
where ${\boldsymbol {\hat \phi}}_{\rm{comb}}^{(n)}$ is an $M_{\rm{opt}}^2$-length vector embedded with the values of ${\boldsymbol {\hat \phi}}_{{\rm down}}^{(n)}$ separated by zeros, and superscript $T$ denotes a transpose. After application of Eq. (6), we reshape ${\boldsymbol {\hat \phi}}_{\rm{comb}}^{(n)}$ into an ${M_{\rm{opt}}} \times {M_{\rm{opt}}}$ array over the optimization region, which we denote with $\hat \phi _{\rm{comb}}^{(n)}({x_n},{y_n})$. This array is nonzero at the original downsampling locations [cf. Eq. (4)]. To obtain a smooth corrective phase screen over the optimization region, we interpolate the array with a Gaussian kernel given by
$${g_d}({x_n},{y_n}) = \frac{1}{{2\pi {d^2}}}\exp \!\left({- \frac{{x_n^2 + y_n^2}}{{2{d^2}}}} \right),$$
which we use to obtain an ${M_{\rm{opt}}} \times {M_{\rm{opt}}}$ corrective phase screen given by
$${\hat \phi ^{(n)}}({x_n},{y_n}) = \hat \phi _{\rm{comb}}^{(n)}({x_n},{y_n})*{g_d}({x_n},{y_n}),$$
where $*$ denotes a 2D convolution. This operation provides us with a smooth corrective phase screen over the ${M_{\rm{opt}}} \times {M_{\rm{opt}}}$ optimization region. In general, the optimization region can be smaller than or equal in size to the entire $M \times M$ propagation grid, though we zero padded prior to propagation if ${M_{\rm{opt}}} \lt M$.
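A minimal sketch of the comb-and-smooth upsampling of Eqs. (6)–(8), working in pixel units and using FFT-based (circular) convolution; this is an illustrative implementation, not the authors' code:

```python
import numpy as np

def upsample(phi_down, M_opt, step, d):
    """Embed the kappa x kappa screen on a zero comb (Eq. 6), then smooth it
    with the Gaussian kernel of Eq. (7) via 2D convolution (Eq. 8).

    step -- comb spacing in pixels, round(d / delta)
    d    -- Gaussian kernel width in pixels
    """
    comb = np.zeros((M_opt, M_opt))
    k = phi_down.shape[0]
    comb[::step, ::step][:k, :k] = phi_down
    x = np.arange(M_opt) - M_opt // 2
    X, Y = np.meshgrid(x, x, indexing="ij")
    g = np.exp(-(X**2 + Y**2) / (2 * d**2)) / (2 * np.pi * d**2)
    # circular convolution is a fair stand-in when the phase tapers at the edges
    return np.real(np.fft.ifft2(np.fft.fft2(comb) * np.fft.fft2(np.fft.ifftshift(g))))

phi_down = np.zeros((8, 8))
phi_down[2, 3] = 1.0                      # single impulse for illustration
screen = upsample(phi_down, 64, 8, 8.0)
```

For the single-impulse input, the smoothed screen peaks at the comb location of the impulse, confirming that the interpolation preserves the downsampled sample positions.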

In addition to the sampling operators defined in Eqs. (4) and (6), we define a propagation operator given by

$${{\cal P}_{{z_1} \to {z_2}}}\{{\cal U};|{z_2} - {z_1}|\} ,$$
where ${\cal P}$ denotes an angular spectrum propagation that propagates an input 2D field, denoted by ${\cal U}$, from ${z_1}$ to ${z_2}$. Here, we take the positive $z$ direction to be pointing from the object to the pupil with the object located at $z = 0$. As such, we denote backwards propagations with
$${\cal P}_{{z_2} \to {z_1}}^\dagger \{{\cal U};|{z_2} - {z_1}|\} ,$$
where ${{\cal P}^\dagger}$ is the adjoint angular spectrum propagator.
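Under the angular spectrum method, ${\cal P}$ and ${\cal P}^\dagger$ differ only in the sign of the transfer-function phase, so the adjoint undoes a forward hop exactly for purely propagating spatial frequencies. A sketch with illustrative parameters (the grid and hop distance here are not the paper's):

```python
import numpy as np

def prop(U, dz, wavelength, delta, sign=+1):
    """Angular-spectrum propagator P (sign=+1) and its adjoint P-dagger (sign=-1)."""
    M = U.shape[0]
    f = np.fft.fftfreq(M, d=delta)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    # kz = 2*pi*sqrt(1/lambda^2 - fx^2 - fy^2); clamp evanescent components
    kz = 2 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / wavelength**2 - FX**2 - FY**2))
    return np.fft.ifft2(np.fft.fft2(U) * np.exp(sign * 1j * kz * dz))

rng = np.random.default_rng(0)
U0 = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
U1 = prop(U0, 200.0, 1e-6, 1.98e-3)            # forward hop, P
U2 = prop(U1, 200.0, 1e-6, 1.98e-3, sign=-1)   # adjoint hop back, P-dagger
```

With the pixel pitch and wavelength above, every grid frequency is propagating, so the round trip recovers the input field to machine precision.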

Our sharpness metric of choice is given by

$$S = {S_1} + {\Psi _q},$$
where $S$ is the total metric, such that
$$\begin{split}{S_1} &= \sum\limits_{{x_O},{y_O},{z_O}} {s_1}({x_O},{y_O};{z_O}) \\&= \sum\limits_{{x_O},{y_O},{z_O}} {\rm sgn}(\beta - 1){I^\beta}({x_O},{y_O};{z_O}),\end{split}$$
and
$${\Psi _q} = - \alpha \sum\limits_{n = 1}^N \sum\limits_{q = 4}^6 |a_q^{(n)}{|^2}.$$

In Eq. (12), $I({x_O},{y_O};{z_O})$ is the reconstructed 3D irradiance and $\beta$ is a sharpness exponent [20]; in Eq. (13), $\alpha$ is a regularization parameter, $a_q^{(n)}$ is the $q$th Zernike coefficient in the $n$th corrective phase screen, and $N$ is the total number of corrective phase screens. ${S_1}$, given in Eq. (12), is the base sharpness term that the algorithm seeks to maximize. ${\Psi _q}$, given in Eq. (13), is a quadratic-phase (specifically, the defocus and astigmatism Zernike polynomials) regularization term that penalizes phase solutions that have large quadratic phases and thus mitigates against the aforementioned telescoping effect. Together, ${S_1}$ and ${\Psi _q}$ sum to the total metric that we maximized.
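A small sketch of the total metric of Eqs. (11)–(13); the toy irradiances illustrate why, for $\beta < 1$, the ${\rm sgn}(\beta - 1)$ factor makes concentrated light score higher (the specific arrays are illustrative only):

```python
import numpy as np

beta, alpha = 0.88, 1e4

def S1(I):
    """Base sharpness term, Eq. (12)."""
    return np.sign(beta - 1) * np.sum(I ** beta)

def Psi_q(a):
    """Quadratic-phase penalty, Eq. (13); a has shape (N, 3) holding the
    defocus and astigmatism coefficients a4..a6 of each corrective screen."""
    return -alpha * np.sum(np.abs(a) ** 2)

def S(I, a):
    """Total metric, Eq. (11)."""
    return S1(I) + Psi_q(a)

E = 1000.0
I_spread = np.full(100, E / 100)              # energy spread over 100 voxels
I_conc = np.zeros(100); I_conc[0] = E         # same energy in one voxel
```

Because $\beta < 1$, $\sum I^\beta$ is smaller for concentrated light, and the sign flip turns that into a larger (less negative) $S_1$.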

Every time we calculated sharpness in the algorithm, we used reverse-mode algorithmic differentiation (RMAD) [25] to numerically compute analytic gradients of $S$ with respect to the phase in each plane. RMAD is particularly useful for optimization problems such as this for two reasons: (1) numerically computing analytic gradients is less computationally burdensome than calculating finite-difference gradients, and the computational cost of each gradient calculation is comparable to evaluating the forward model; and (2) RMAD breaks the differentiation process into bite-sized chunks that can easily be modified when reconstructing with a different propagation geometry, phase screen configuration, or term in the metric. In Appendix A, we lay out the forward and gradient models for 3D multi-plane SMM in a fashion similar to Ref. [10].
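To make the RMAD idea concrete, the sketch below differentiates a single-plane, single-FFT version of the sharpness metric with respect to a corrective phase screen and checks the adjoint-derived gradient against a finite difference. This is a toy forward model (one phase screen, one FFT standing in for propagation), not the paper's multi-plane geometry from Appendix A:

```python
import numpy as np

rng = np.random.default_rng(1)
M, beta = 32, 0.88
U = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))  # pupil field

def sharpness(phi):
    v = U * np.exp(1j * phi)      # apply the corrective phase screen
    g = np.fft.fft2(v)            # stand-in propagation to the image plane
    return np.sign(beta - 1) * np.sum(np.abs(g) ** (2 * beta))

# Reverse-mode pass: each forward step gets a matching adjoint step.
phi = rng.standard_normal((M, M))
v = U * np.exp(1j * phi)
g = np.fft.fft2(v)
I = np.abs(g) ** 2
p = np.sign(beta - 1) * beta * I ** (beta - 1) * g   # Wirtinger dS/dg*
w = np.fft.ifft2(p) * M * M                          # adjoint of unnormalised fft2
grad = 2 * np.imag(w * np.conj(v))                   # dS/dphi, real-valued

# Finite-difference check at one pixel
eps = 1e-5
e = np.zeros((M, M)); e[5, 7] = eps
fd = (sharpness(phi + e) - sharpness(phi - e)) / (2 * eps)
```

As advertised, the adjoint pass costs roughly one extra FFT per forward FFT, whereas a full finite-difference gradient would require $2M^2$ forward evaluations.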

3. SIMULATION SETUP AND EXPLORATION

In this section, we explore the trade space as well as the parameters used in SMM. We always used the scaled truck as the extended, realistic object, and we simulated three turbulence scenarios, shown in Table 1.


Table 1. Turbulence Scenarios

Moving left to right in Table 1, $C_n^2$ gives a measure of the strength of turbulence in the atmosphere [26], where here the $C_n^2$ profile in the $z$ direction is a constant. $\sigma _\chi ^2$ is the log-amplitude variance, which gives a gauge for the strength of scintillation in the pupil plane after propagation through the distributed-volume turbulence [26], and $D/{r_0}$ tells us how many times the path-integrated Fried coherence diameter, ${r_0}$, fits across our pupil. This ratio informs us of the strength of the turbulence-induced blur in the images. Finally, $\Theta /{\theta _0}$ is the ratio of the angular field of view, $\Theta$, to the isoplanatic angle, ${\theta _0}$, which informs us of how many isoplanatic patches fit across the area of interest in the object plane. We simulated each scenario with 10 Kolmogorov phase screens, whose configuration in $z$ was determined by the Paxman method [18].

In these simulations, we did not simulate noise or pixel saturation at the FPA during the digital holographic detection process. This choice resulted in near-pristine complex-valued fields that we fed into the SMM algorithm. In terms of the SMM process itself, we performed empirical trade studies that varied $\beta$ [cf. Eq. (12)] as well as $\alpha$ [cf. Eq. (13)]. The $\beta$ study informed us that the optimal value for $\beta$ on average was 0.88. Note that this study used only a single speckle realization, and a previous study on 2D SMM empirically showed that the optimal value for $\beta$ was 0.5, provided a single speckle realization [4]. However, the difference in average error between $\beta = 0.88$ and $\beta = 0.5$ was minute in the simulations with 3D SMM. The $\alpha$ study suggested that a value of $\alpha = 1 \times {10^4}$ was optimal in producing high quality range images and in mitigating against telescoping, though that particular value depends heavily on the scale of ${S_1}$ and on the value of $\beta$. In general, there is a trade-off with $\alpha$, where larger values of $\alpha$ prevent telescoping and any demagnification in the image, but they sometimes struggle to improve image quality. Smaller values of $\alpha$ sometimes offer higher quality images at the cost of demagnification. For this study, we used $\beta = 0.88$ and $\alpha = 1 \times {10^4}$.


Fig. 3. Example image reconstructions with $\alpha = 0$, $D/{r_0} = 15$, and $N = 10$.


For each of the scenarios listed in Table 1, we performed SMM while decreasing the total number of corrective phase screens, $N$. For each number of corrective phase screens, we used the Paxman method with a uniform $C_n^2$ profile to generate the optimal corrective phase screen configuration for each case. Based on the fact that the field of view of interest was comparable in size to the simulated pupil, the Paxman algorithm generally placed the aberrating and corrective phase screens in a nearly equally spaced configuration each time. For each of the 30 reconstruction scenarios (three turbulence scenarios and 10 corrective phase screen configurations), we simulated 10 independent realizations of turbulence for our SMM algorithm to correct.

When using the downsampling method of bootstrapping described in Ref. [19], we used six cycles of bootstrapping and performed five iterations of limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) optimization per cycle [27]. The bootstrapping process began with a prescribed downsampled spacing, $d$, such that $d/r_0^{(n)} = 8$, where $r_0^{(n)}$ was the assumed ${r_0}$ at the $n$th corrective phase screen, and divided $d/r_0^{(n)}$ by two until it equaled 0.25, for six spacings in total. We determined the assumed value of $r_0^{(n)}$ for the corrective phase screens by calculating the ${r_0}$ at each screen that was required to obtain the true path-integrated $D/{r_0}$ in the pupil plane, given the number and configuration of corrective phase screens and a uniform $C_n^2$ profile. For example, when using fewer corrective phase screens, we reduced the assumed ${r_0}$ associated with each screen to achieve the appropriate $D/{r_0}$. We then set $r_0^{(n)}$ to the assumed ${r_0}$ at each screen by this method. In this process, the algorithm assumed knowledge of both the true $D/{r_0}$ and the fact that the turbulence was uniform over the propagation path. The performance of SMM when the $C_n^2$ profile is unknown remains an active area of research, but we expect the algorithm to converge to adequate solutions in that case, both because our algorithm showed success when using fewer corrective phase screens than aberrating screens and because previous researchers have optimized corrective phase screens without any knowledge of the strength of the turbulence [4–7]. Likewise, we expect our algorithm to perform adequately even if the bootstrapping process uses guesses for $r_0^{(n)}$ that are not informed by the true value of ${r_0}$ at each screen.
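The halving schedule for the prescribed spacing is simple to reproduce, as is the per-screen coherence diameter for equal-strength screens. The `r0_per_screen` relation below is a simplified, plane-wave version that assumes $r_0^{-5/3}$ adds across screens and ignores path weighting; it is our illustration of the idea, not necessarily the authors' exact prescription:

```python
# d / r0^(n) for the six bootstrapping cycles: halved from 8 down to 0.25
schedule = [8 / 2**k for k in range(6)]

def r0_per_screen(r0_total, N):
    """Assumed r0 of each of N equal-strength screens whose combined effect
    matches the path-integrated r0 (assuming r0^(-5/3) adds across screens,
    with no propagation-path weighting)."""
    return r0_total * N ** (3 / 5)
```

Consistent with the text, fewer screens implies a smaller per-screen $r_0^{(n)}$ (each screen must be stronger to reach the same path-integrated $D/r_0$).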

When upsampling the downsampled phase screens for use in propagation [cf. Eq. (6)], we set ${M_{\rm{opt}}} = 256$, so that we optimized the phase screens over a region that is 0.25 times the total area of the propagation arrays. This central region is where most of the light exists in each $z$ plane, so it allowed us to more efficiently optimize the corrective phase screens. Before propagation, however, we always zero padded the ${M_{\rm{opt}}} \times {M_{\rm{opt}}}$ arrays to $M \times M$ arrays.

Figure 3 shows an example reconstruction where $\alpha = 0$ for illustration purposes. Here, the top row shows the 2D frequency-averaged irradiance images, and the bottom row shows the range images. Recall that we obtain the 2D frequency-averaged irradiance from the stack of object fields, and we obtain the range image from the 3D irradiance after performing a Fourier transform along the frequency dimension [cf. Fig. 2]. Explicitly stated, range images relate to the 3D irradiance via

$$R({x_O},{y_O}) = \mathop {{\rm argmax}}\limits_{{z_O}} \{I({x_O},{y_O};{z_O})\} ,$$
where $R({x_O},{y_O})$ is the range image.
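Equation (14) is a per-pixel argmax along the range dimension; in code (with the $z$ axis last and the result scaled by the range-bin size, an illustrative convention):

```python
import numpy as np

def range_image(I, bin_size):
    """Eq. (14): range at each (x, y) pixel is the z location of peak irradiance."""
    return np.argmax(I, axis=-1) * bin_size

# Tiny cube: peak at z-bin 3 everywhere except pixel (0, 1), which peaks at bin 7
I = np.zeros((2, 2, 10))
I[..., 3] = 1.0
I[0, 1, 7] = 2.0
R = range_image(I, 0.0375)   # 3.75 cm bins
```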

The columns in Fig. 3 correspond to original aberrated imagery (left), imagery obtained via SMM (middle), and ideal imagery if SMM arrived at the exact corrective phase screens to undo the original aberrations (right). There are a few things to notice here, one being that the 2D frequency-averaged irradiance images do not resemble the truck object, in general, due to speckle noise. The range images, on the other hand, have many identifiable features. In addition, the corrected range image of the truck (although significantly improved compared to the aberrated image) exhibits some obvious demagnification when compared to the ideal range image. This demagnification occurs because no regularization is present in this example (i.e., $\alpha = 0$); the range image thus exhibits crisp features and edges, but a gross demagnification. Note that SMM cannot reconstruct global tip-tilt aberrations that cause translations in the final images, as translations do not affect our sharpness metric, $S$, at all; thus, there are unavoidable translation discrepancies between the corrected and ideal imagery.

We use the following metrics to gauge performance. First, we define

$$\begin{split}\varepsilon = \mathop {{\rm min}}\limits_{m,x_O^\prime ,y_O^\prime} \left[{\sqrt {\frac{{\sum\limits_{{x_O},{y_O}} W({x_O},{y_O}) \circ {{\left| {{\rm mod}\left\{{R({x_O},{y_O}) - \hat R\left({\frac{{{x_O} - x_O^\prime}}{m},\frac{{{y_O} - y_O^\prime}}{m}} \right) + \Delta z/2,\Delta z} \right\} - \Delta z/2} \right|}^2}}}{{\sum\limits_{{x_O},{y_O}} W({x_O},{y_O}) \circ |R({x_O},{y_O}{{)|}^2}}}}} \right],\end{split}$$
where $R({x_O},{y_O})$ is the ideal range image, $\hat R({x_O},{y_O})$ is the corrected range image, $m$ is a uniform transverse magnification in the ${x_O}$ and ${y_O}$ directions, ${x_O^\prime}$ and ${y_O^\prime}$ are shift parameters, $W({x_O},{y_O})$ is a window function, and the ${\rm mod}\{{\cal R},\Delta z\}$ operation returns its input ${\cal R}$ modulo $\Delta z$. In this case, the modulo operation ensures that range jumps near the range wraps do not contribute to $\varepsilon$. In these simulations, we selected the window function to be a $256 \times 256\;{\rm pixel}$ array of ones embedded in the $M \times M$ range images. The windowing served to null out the range information where the irradiance was near zero, which is not informative. This metric compares ideal range images to corrected range images with translational differences and magnification differences removed. We used $\varepsilon$ to gauge range image quality while also keeping track of the magnification term $m$ to determine how well the quadratic-phase regularization was working to prevent demagnification. For this study, we performed magnifications on the 2D frequency-averaged irradiances followed by discrete Fourier transform sub-pixel registration [28] for $m$ ranging from one to two and calculated $\varepsilon$ to find the optimal magnification and translation to apply to the range images.
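Leaving aside the search over the magnification $m$ and the shifts, the wrapped, windowed, normalized error at the core of Eq. (15) reduces to a few lines. Note how the modulo centering maps differences into $[-\Delta z/2, \Delta z/2)$, so a full range wrap contributes nothing to the error:

```python
import numpy as np

def wrapped_range_error(R, R_hat, W, Delta_z):
    """Core of Eq. (15) without the min over m, x', y' (illustrative only)."""
    diff = np.mod(R - R_hat + Delta_z / 2, Delta_z) - Delta_z / 2
    return np.sqrt(np.sum(W * diff**2) / np.sum(W * R**2))

rng = np.random.default_rng(0)
R = rng.uniform(0.0, 1.2, (16, 16))   # stand-in ideal range image, m
W = np.ones((16, 16))                 # trivial window for this sketch
```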

After obtaining the parameters $m$, $x_O^\prime$, and $y_O^\prime$ that minimized $\varepsilon$, we also used the mean structural similarity index measure (MSSIM) to gauge how similar the ideal range images were to the corrected range images under those optimal parameters. MSSIM provides an objective measure of structural similarity between a signal and a reference image, and the metric is tailored to correlate with image similarity as determined by the human visual system [29]. The metric has a maximum value of one when the two images agree exactly pixel to pixel, and it gauges just how similar the corrected range images are to their ideal counterparts (cf. Figs. 3 and 4).
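As a concrete reference for how MSSIM behaves, the following simplified sketch averages SSIM over non-overlapping blocks; Ref. [29] instead uses Gaussian-weighted sliding windows, so the block windowing here, and the default constants, are illustrative assumptions:

```python
import numpy as np

def mssim(x, y, win=8, L=1.0, k1=0.01, k2=0.03):
    """Simplified MSSIM: SSIM averaged over non-overlapping win-by-win
    blocks. L is the image dynamic range; k1, k2 set the usual SSIM
    stabilizing constants c1 and c2."""
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2
    vals = []
    for i in range(0, x.shape[0] - win + 1, win):
        for j in range(0, x.shape[1] - win + 1, win):
            a = x[i:i + win, j:j + win]
            b = y[i:i + win, j:j + win]
            mu_a, mu_b = a.mean(), b.mean()
            var_a, var_b = a.var(), b.var()
            cov = ((a - mu_a) * (b - mu_b)).mean()
            # Luminance/contrast/structure terms combined per Ref. [29]
            vals.append((2 * mu_a * mu_b + c1) * (2 * cov + c2)
                        / ((mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)))
    return float(np.mean(vals))
```

Identical images yield a value of exactly one, and any structural mismatch drives the value below one.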

Fig. 4. Corrected range images for all turbulence scenarios in Table 1 and several corrective phase screen configurations.

4. RESULTS

Here, we show results for the aforementioned trade space. Figure 4 shows a corrected range image for all three scenarios in Table 1 and for several values of the number of corrective phase screens, $N$, in the reconstruction. The image quality of the truck degrades as $D/{r_0}$ increases and improves as $N$ increases, although for $N = 10$, the reconstructions for all three values of $D/{r_0}$ are relatively good. It is also apparent that range images can be successfully reconstructed when $N$ is less than the number of aberrating screens, particularly for smaller values of $D/{r_0}$.

Quantitatively, Fig. 5 displays the error, $\varepsilon$, versus $N$ in (a), MSSIM versus $N$ in (b), the magnification correction $m$ versus $N$ in (c), and the average algorithm runtime in (d). Figures 5(a)–5(c) display the average value for each turbulence scenario in Table 1, as well as the results for each individual realization with the color matching the corresponding average for each data point. Figure 5(d) shows the timing results for five realizations as opposed to 10, with error bars corresponding to ${\pm}1$ standard deviation.

Fig. 5. (a) Error, $\varepsilon$, versus $N$, (b) MSSIM versus $N$, (c) magnification correction, $m$, versus $N$, and (d) algorithm runtime versus $N$ for all turbulence scenarios in Table 1.

The error trends in Fig. 5(a) agree with the qualitative results in Fig. 4, with the error, $\varepsilon$, decreasing on average for each turbulence strength as $N$ increases. It is also apparent that the average error increases overall as $D/{r_0}$ increases. Note that the error metric, $\varepsilon$, in Fig. 5(a) also indicates that the average error for images corrected with $N = 10$ and $D/{r_0} = 15$ is comparable to the average error when $N = 1$ and $D/{r_0} = 10$; however, the images in Fig. 4 do not bear this out qualitatively. Additionally, the relative change in the average error as $N$ increases is not as severe as a qualitative analysis of Fig. 4 would suggest.

In Fig. 5(b), on the other hand, the average MSSIM correlates more strongly with the quality of the images in Fig. 4. This result makes sense because MSSIM gauges the structural similarity between two images as perceived by the human visual system. The individual realization results for MSSIM are also more tightly clustered around the mean than the error results, which have more outliers (especially in the strongest turbulence scenario). Furthermore, there is a noticeable increase in MSSIM as $N$ increases from nine to 10 for all turbulence strengths, but most noticeably for the strongest turbulence. This result likely arises because we become model matched when $N = 10$, and the corrective phase screens exist in the same locations as the aberrating screens. Additional studies could explore the effects of using more corrective phase screens than aberrating screens (i.e., when $N \gt 10$).

In Fig. 5(c), the magnification correction $m$ is fairly constant for all turbulence scenarios, but it is larger on average for the stronger turbulence scenarios. This outcome suggests that more telescoping occurs in the stronger turbulence reconstructions. It is possible that the optimal value of $\alpha$ is turbulence dependent; in general, if demagnification is undesirable, we could increase $\alpha$ to limit demagnification at the possible cost of image quality. There are some outliers in the stronger turbulence scenarios for which a large magnification ($m \sim 1.2$) was required. It also appears that $m$ increases slightly as $N$ increases, most likely because a greater number of corrective phase screens has a stronger effect on the range images, making them more susceptible to demagnification.

Finally, in Fig. 5(d), the average algorithm runtime appears to be directly proportional to the number of corrective phase screens used in the reconstruction, $N$, with little variation. For the timing results, we used the same 32 CPU cores and 128 GB of RAM for all realizations to ensure an apples-to-apples comparison, and we examined only five realizations to reduce computing time. Since we performed the same number of L-BFGS iterations for each simulation, the algorithm runtime increases proportionally with $N$; this outcome is mainly due to the number of FFT calculations within the algorithm, which is also proportional to $N$. The relative runtime differences for varying $N$ make clear that there is an inherent trade-off to adding more corrective phase screens: the reconstruction quality increases as $N$ increases, but so does the algorithm runtime.

Table 2. Forward and Gradient Models for ${S_1}$

Table 3. Forward and Gradient Models for Split-Step Propagation

5. CONCLUSION

This study extended previous image sharpening work and demonstrated that image reconstruction is possible when using fewer corrective phase screens than aberrating screens. It examined multi-plane SMM on 3D irradiances provided by frequency-diverse digital holography with no added measurement noise. The results indicate that reconstruction was successful for sufficiently large $N$ and for all turbulence scenarios examined. In general, we required a greater number of corrective phase screens for stronger turbulence [cf. Fig. 4]. The error trends indicate that the error decreases on average as $N$ increases and is higher on average for stronger turbulence scenarios [cf. Fig. 5(a)]. The MSSIM results show that MSSIM increases on average as $N$ increases and as the turbulence becomes weaker, with a noticeable uptick when $N = 10$ for all turbulence scenarios, probably because the reconstruction model then matches the simulation model [cf. Fig. 5(b)]. Additionally, the amount of demagnification in the range images is higher on average for the stronger turbulence scenarios and when $N$ is larger [cf. Fig. 5(c)], which suggests that larger values of $\alpha$ might be necessary for stronger turbulence; in general, there is a trade-off between image quality and demagnification with respect to $\alpha$. Finally, the algorithm runtime increases with $N$ [cf. Fig. 5(d)].

Moving forward, there is room for further study of SMM on the 3D irradiance. For example, other regularization terms besides quadratic-phase regularization should be examined and compared in a wide trade study. Both the mean-squared error and MSSIM show the same trends with $N$ and with turbulence strength, but it appears that MSSIM is more sensitive to qualitative changes in the range images and is more consistent across the individual realizations, which justifies its use in future studies. As opposed to running a set number of optimization iterations, future research should explore truncating the optimization process according to a tolerance parameter (e.g., when the gradient is less than a certain threshold). This tolerancing would likely alter the algorithm runtime’s dependence on $N$. Uplink scintillation in the object plane should also be tested in future studies.

APPENDIX A

This section describes the RMAD process we used to generate sharpness gradients for any general propagation geometry and phase screen configuration. In the analysis, we denote estimated variables with hats. Any 2D field that exists in a plane within the propagation volume has a superscript $(n)$ to denote that it coincides with the $n$th corrective phase screen. In addition, the coordinate arguments for these fields reflect the plane in which they exist [e.g., $({x_n},{y_n})$ denote the 2D coordinates for the $n$th corrective phase screen, and $({x_O},{y_O})$ and $({x_P},{y_P})$ denote the 2D coordinates in the object and pupil planes, respectively]. In keeping with the nomenclature in Ref. [25], we also define the derivative of $S$ with respect to some variable $x$ as

$${\overline x} \equiv \frac{{\partial S}}{{\partial x}}.$$

Additionally, we define an operation that vectorizes a 2D array $A$ into a vector as ${\rm vec}\{A\}$ and an operation that reshapes some vector ${\textbf v}$ into a 2D array as ${\rm arr}\{{\textbf v}\}$.
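Assuming a column-major (MATLAB-style) ordering, these two operations are mutually inverse reshapes; the helper names below are our own, introduced only to mirror the notation:

```python
import numpy as np

def vec(A):
    """vec{A}: flatten a 2D array into a vector, column-major."""
    return A.reshape(-1, order="F")

def arr(v, shape):
    """arr{v}: reshape a vector back into a 2D array of the given shape."""
    return v.reshape(shape, order="F")
```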

Further, we denote the 3D object field as $\tilde f({x_O},{y_O};{z_O})$ and the stack of 2D object fields as $f({x_O},{y_O};\nu)$. We denote a stack of 2D fields located at the $n$th plane by ${F^{(n)^\prime}}({x_n},{y_n};\nu)$, where a prime signifies the field immediately after the $n$th phase plane and a lack of prime signifies the field immediately preceding the $n$th phase plane. We denote corrective phase screens in the $n$th phase plane with ${\hat \phi ^{(n)}}({x_n},{y_n})$ and a unit-amplitude complex phasor with phase equal to ${\hat \phi ^{(n)}}({x_n},{y_n})$ by ${\Phi ^{(n)}}({x_n},{y_n})$. Finally, ${{\cal F}_x}$ denotes a Fourier transform over variable $x$, ${\psi _q}({x_n},{y_n})$ is the $q$th Zernike polynomial in the $n$th plane, and superscript $*$ denotes a complex conjugate.

We now lay out the mathematical forward and gradient models for the total metric, $S$. For this purpose, we first determine the gradient of $S$ with respect to each summand on the right-hand side of Eq. (11), which yields

$$\overline {{S_1}} = \overline {{\Psi _q}} = \overline S = 1.$$

We then break the remaining model into bite-sized chunks to obtain $\overline {\hat \phi _{\rm{down}}^{(n)}} ({x_n},{y_n})$ as described in Tables 2–5 via RMAD based on the information from Ref. [25]. The left columns (forward models) are executed from bottom to top, and the right columns (gradient models) are executed from top to bottom for each table. The forward model begins with downsampled corrective phase screens, $\hat \phi _{\rm{down}}^{(n)}({x_n},{y_n})$, and ends with the metric, $S$. In turn, the gradient model begins with $\overline S$ and ends with the derivative of $S$ with respect to downsampled corrective phase screens, $\overline {\hat \phi _{\rm{down}}^{(n)}} ({x_n},{y_n})$. Table 2 describes the model of the base sharpness, ${S_1}$, given a stack of complex-valued fields in the object plane, Table 3 describes the propagation of fields from plane to plane, Table 4 describes the downsampling process of the corrective phase screens, and Table 5 describes the model of the quadratic-phase regularization term, ${\Psi _q}$.
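To make the split-step forward model concrete, the following numpy sketch alternates phase-screen application with a paraxial (Fresnel) transfer-function propagator standing in for the plane-to-plane propagation operator; the grid parameters and function names are illustrative assumptions, not the exact model used here:

```python
import numpy as np

def angular_spectrum(U, wvl, delta, dz):
    """Propagate a field U a distance dz via a Fresnel transfer function
    (wavelength wvl, grid spacing delta)."""
    n = U.shape[0]
    fx = np.fft.fftfreq(n, d=delta)            # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wvl * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(U) * H)

def split_step(U, screens, wvl, delta, dz):
    """Forward model: apply each phase screen, then propagate by dz."""
    for phi in screens:
        U = U * np.exp(1j * phi)               # the Phi^{(n)} phasor
        U = angular_spectrum(U, wvl, delta, dz)
    return U
```

Because every step here is unitary, the propagated field conserves energy, which provides a quick sanity check on an implementation; the corresponding gradient model simply runs the adjoints of these operations in reverse order.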

Table 4. Forward and Gradient Models for Phase Screen Downsampling

Table 5. Forward and Gradient Models for ${\Psi _q}$

Funding

Air Force Research Laboratory (AFRL-2021-0898).

Acknowledgment

The authors of this paper thank Joe Riley for providing the truth reflectance and range maps for the truck, Casey Pellizzari for his insight with respect to the performance metrics used in this paper, Wesley Farriss for many discussions on image sharpening, and Sam Thurman for insightful discussions on bootstrapping with regards to image sharpening. Approved for public release; distribution is unlimited.

Disclosures

The authors declare no conflicts of interest.

Data Availability

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

REFERENCES

1. D. Korff, G. Dryden, and R. Leavitt, “Isoplanicity: the translation invariance of the atmospheric Green’s function,” J. Opt. Soc. Am. 65, 1321–1330 (1975).

2. D. C. Johnston and B. M. Welsh, “Analysis of multiconjugate adaptive optics,” J. Opt. Soc. Am. A 11, 394–408 (1994).

3. F. J. Rigaut, B. L. Ellerbroek, and R. Flicker, “Principles, limitations, and performance of multiconjugate adaptive optics,” Proc. SPIE 4007, 1022–1031 (2000).

4. S. T. Thurman and J. R. Fienup, “Phase-error correction in digital holography,” J. Opt. Soc. Am. A 25, 983–994 (2008).

5. S. T. Thurman and J. R. Fienup, “Correction of anisoplanatic phase errors in digital holography,” J. Opt. Soc. Am. A 25, 995–999 (2008).

6. A. E. Tippie and J. R. Fienup, “Phase-error correction for multiple planes using a sharpness metric,” Opt. Lett. 34, 701–703 (2009).

7. A. E. Tippie and J. R. Fienup, “Multiple-plane anisoplanatic phase correction in a laboratory digital holography experiment,” Opt. Lett. 35, 3291–3293 (2010).

8. J. C. Marron and K. S. Schroeder, “Three-dimensional lensless imaging using laser frequency diversity,” Appl. Opt. 31, 255–262 (1992).

9. J. C. Marron and K. S. Schroeder, “Holographic laser radar,” Opt. Lett. 18, 385–387 (1993).

10. W. E. Farriss, J. R. Fienup, J. W. Stafford, and N. J. Miller, “Sharpness-based correction methods in holographic aperture ladar (HAL),” Proc. SPIE 10772, 107720K (2018).

11. M. F. Spencer, R. A. Raynor, M. T. Banet, and D. K. Marker, “Deep-turbulence wavefront sensing using digital-holographic detection in the off-axis image plane recording geometry,” Opt. Eng. 56, 031213 (2016).

12. M. T. Banet, M. F. Spencer, and R. A. Raynor, “Digital-holographic detection in the off-axis pupil plane recording geometry for deep-turbulence wavefront sensing,” Appl. Opt. 57, 465–475 (2018).

13. D. E. Thornton, M. F. Spencer, and G. P. Perram, “Deep-turbulence wavefront sensing using digital holography in the on-axis phase shifting recording geometry with comparisons to the self-referencing interferometer,” Appl. Opt. 58, A179–A189 (2019).

14. M. T. Banet and M. F. Spencer, “Multiplexed digital holography for atmospheric characterization,” in Imaging and Appl. Opt. (COSI, IS, MATH, pcAOP) (Optical Society of America, 2019).

15. M. T. Banet and M. F. Spencer, “Multiplexed digital holography for simultaneous imaging and wavefront sensing,” Proc. SPIE 11135, 1113503 (2019).

16. C. J. Pellizzari, M. F. Spencer, and C. A. Bouman, “Imaging through distributed-volume aberrations using single-shot digital holography,” J. Opt. Soc. Am. A 36, A20–A33 (2019).

17. C. J. Radosevich, C. J. Pellizzari, S. Horst, and M. F. Spencer, “Imaging through deep turbulence using single-shot digital holography data,” Opt. Express 28, 19390–19401 (2020).

18. R. G. Paxman, B. J. Thelen, and J. J. Miller, “Optimal simulation of volume turbulence with phase screens,” Proc. SPIE 3763, 2–10 (1999).

19. S. T. Thurman, “Phase-error correction in digital holography using single-shot data,” J. Opt. Soc. Am. A 36, D47–D61 (2019).

20. J. R. Fienup and J. J. Miller, “Aberration correction by maximizing generalized sharpness metrics,” J. Opt. Soc. Am. A 20, 609–620 (2003).

21. B. W. Krause, B. G. Tiemann, and P. Gatt, “Motion compensated frequency modulated continuous wave 3D coherent imaging ladar with scannerless architecture,” Appl. Opt. 51, 8745–8761 (2012).

22. J. D. Schmidt, Numerical Simulation of Optical Wave Propagation with Examples in MATLAB (SPIE, 2010).

23. D. G. Voelz, Computational Fourier Optics: A MATLAB Tutorial (SPIE, 2011).

24. R. G. Lane, A. Glindemann, and J. C. Dainty, “Simulation of a Kolmogorov phase screen,” Waves Random Media 2, 209–224 (1992).

25. A. S. Jurling and J. R. Fienup, “Applications of algorithmic differentiation to phase retrieval algorithms,” J. Opt. Soc. Am. A 31, 1348–1359 (2014).

26. L. C. Andrews and R. L. Phillips, Laser Beam Propagation through Random Media, 2nd ed. (SPIE, 2005).

27. D.-J. Kroon, “FMINLBFGS: fast limited memory optimizer,” MathWorks (2021), https://www.mathworks.com/matlabcentral/fileexchange/23245-fminlbfgs-fast-limited-memory-optimizer.

28. M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image registration algorithms,” Opt. Lett. 33, 156–158 (2008).

29. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process. 13, 600–612 (2004).



