Optica Publishing Group

Local resolution characteristics of atmospheric turbulence

Open Access

Abstract

D. L. Fried’s concept of a “Lucky Image” for turbulent image streams can be seen as creating different degrees of localized resolution in images. These localized regions of resolution can be derived from Fried’s equation for the probability of obtaining a Lucky Image. The existence of local resolution variations when imaging through turbulence also implies local variations in the point-spread-functions (PSFs) caused by turbulence. We characterize these local variations by using simple measures on PSFs collected in the presence of atmospheric turbulence. We also compile these variations into an empirical probability density function (PDF) that describes the different resolutions in local regions of turbulent imagery and can be used to characterize specific conditions of turbulence, e.g., the coherence diameter.

© 2022 Optica Publishing Group under the terms of the Optica Open Access Publishing Agreement

1. Introduction

Images acquired through a turbulent atmosphere are not equal in resolution quality. Individuals familiar with adaptive optics are aware that the degree of correction imposed by an active optical element is constantly changing, with a mix of modest corrections and much stronger corrections. These changes in image quality can be understood from Fried’s analysis of the concept of a “Lucky Image” [1]. Fried presented derivations and equations to quantify the probability of images acquired through turbulence having desirable (e.g., diffraction limited) resolution characteristics. The probability computed by Fried has since been validated by data collections [2] and modeling [3], and various optics systems have been created to exploit Lucky Imaging, e.g., for astronomical imagery [4].

Fried’s analysis, and the quantification of the Lucky Image probability, leads to a simple question: If an image is not “Lucky”, meaning not diffraction limited in resolution, then what is the nature of the resolution expected for images with resolution less than diffraction limited? Fried offered additional comments on that prospect, indicating we should expect variations in resolution, but he did not provide further analytical results. The expectation of resolution variations in the image plane, as suggested by Fried, is not the complete description that could be desired. For example, knowing how many of the independent regions noted by Fried have resolution twice as poor as the diffraction limited case could be important information in planning to partition and select individual regions of an image, such as with morphological segmentation or tiled image processing techniques. In other words, more useful than knowing the probability of a Lucky Image would be knowing the overall probabilistic distribution of different resolutions in the image plane. Thus, from the probability of a Lucky Image computed by Fried, we seek to construct a probability density function (PDF) for the various resolutions in the image plane.

Herein, we disclose how this PDF for varying resolution in the image plane can be derived from Fried’s original analysis. Specifically, we discuss:

  • The conversion of the Lucky Image equation derived by Fried into a PDF for the variation of local resolution characteristics.
  • The distribution of resolution variations, as understood in the optical pupil plane.
  • The distribution of resolution variations, as understood in the optical image plane.
  • Demonstration of computing the PDF of local resolution probability from a collection of optical point-spread-functions (PSFs) observed in the presence of atmospheric turbulence.

2. Lucky imaging and pupil plane effects on resolution

Fried derived the probability of a Lucky Image on a first-principles basis. In his analysis, the relevant effects of turbulence are the variations created in the optical pupil phase due to random changes in the propagation medium encountered by a propagating electromagnetic wave. A Karhunen-Loeve expansion of the wavefront in the pupil was numerically integrated (by the Monte Carlo technique) to describe the total distortion in terms of the coefficients of the wavefront expansion. From this, the Lucky Image probability was derived for the case where distortion is less than 1 rad$^2$. The tabulation of these probabilities and a numerical fit of an equation to the probability values are presented in the conclusion of Fried’s paper. The parameterization of the independent variable, on which the Lucky Image probability is defined, is the ratio of ${D / {{r_0}}}$, i.e., the optical pupil diameter divided by the coherence diameter of the turbulence in the propagation path of the wavefront.

Following the derivation of the Lucky Image probability, Fried extended his discussion of Lucky Imaging by commenting on an important implication of his analysis, namely that resolution can vary within the image plane due to turbulence in the optical path:

“It is appropriate to note that the probability we have calculated applies independently to separate isoplanatic patches on the image. This means that in any one image, rather than its being entirely good or entirely poor resolution, there will be distributed over the image field-of-view a set of rather small regions, isoplanatic patches, in which the resolution is good. The rest of the image area will have much poorer resolution.” [1]

From the assertion by Fried that there are a variety of resolutions in the independent patches across the image plane, there grows a desire to quantify the variations in resolution. The way that we do this is by starting with Fried’s equation for the probability of a Lucky Image. The probability of a Lucky Image is given from Fried’s Eq. (47) as:

$$\textrm{Lucky Prob}\textrm{.} = {P_L} \cong 5.6\exp \left[ { - 0.1557{{\left( {\frac{D}{{{r_0}}}} \right)}^2}} \right]\quad \textrm{for}\ {D / {{r_0}}} \ge 3.5$$

Equation (1) was derived as an approximate numerical fit to tabulated values in Fried’s paper. This equation, and the tabulated probability values, yields a simple way to extend Lucky Image probability to a broader understanding of variations in resolution, based on the elementary understanding of what a probability means.

Any probability is the assignment of a numerical measure to the different outcomes that may occur in the total space of outcomes that will characterize a particular random event. For our purposes here, we restrict our attention to the space of outcomes of the different resolutions that may occur in the context of Fried’s suggestion of many different resolutions. Thus, we define the probability, ${P_L}$, as the probability for the outcome being resolution equal to the diffraction limit. Since practical optical imaging systems do not acquire resolution greater than the diffraction limit, the probability of any lesser resolution is the logical complement, in the sense of all the possible outcomes, to the diffraction limited probability of ${P_L}$. The elementary laws of probability then lead to the statement:

$${P_U} = 1 - {P_L}, $$
where ${P_U}$ is now the probability of an “Unlucky Image,” i.e., an image that has resolution less than diffraction limited.
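Equations (1) and (2) are simple to evaluate numerically. The sketch below (in Python rather than the MATLAB used elsewhere in this work; the function names are ours) encodes Fried's fitted expression and its complement:

```python
import numpy as np

def lucky_prob(D_over_r0):
    """Fried's fitted Lucky Image probability, Eq. (1); valid for D/r0 >= 3.5."""
    return 5.6 * np.exp(-0.1557 * D_over_r0**2)

def unlucky_prob(D_over_r0):
    """Unlucky Image probability, Eq. (2): the complement of Eq. (1)."""
    return 1.0 - lucky_prob(D_over_r0)
```

By construction the two probabilities sum to unity for any valid value of ${D / {{r_0}}}$, and the Lucky probability decays rapidly as the aperture-to-coherence ratio grows.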

The tabulated values or the numerically fitted equation in Fried’s paper can be used to compute values of ${P_U}$ by Eq. (2) above. Fried notes that the exponential fit of ${P_L}$ to the tabulated values in his paper is, as noted above, only valid for values of ${D / {{r_0}}}$ greater than 3.5. This limitation is, apparently, due to restricting the form of the fit to a simple exponential expression for the Lucky Image probability. Indeed, Fried noted that the fit was “in good agreement” with a conjecture by Hufnagel that the probability of a Lucky Image should be a “negative exponential function of aperture area.” [1]

However, the restriction of the fit to larger values of ${D / {{r_0}}}$ is artificial. The actual source of the tabulated values that were fitted was a comprehensive numerical integration. Therefore, any mechanism that generates a valid functional representation of the tabulated values has equal validity to Fried’s equation. To make a more accurate representation of the tabulated values in Fried’s paper, we made a piece-wise fit to the tabulated data, as implemented by a standard function available in MATLAB software, i.e., the piece-wise cubic Hermite interpolating polynomial function, “pchip”. The fit was achieved by using each pair of values tabulated in Fried’s paper and is shown as the blue dotted line in Fig. 1. The result of the fit, when subtracted from unity, as required by Eq. (2) above, is plotted in Fig. 1 as the solid red line. These two functions plotted together in Fig. 1 make vivid the complementary aspects of the two probabilities.
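The same piece-wise cubic Hermite scheme is available outside MATLAB; `scipy.interpolate.PchipInterpolator` is the Python equivalent of “pchip”. The knot values below are illustrative placeholders generated from Eq. (1), not Fried's actual tabulated probabilities, which the reader should substitute:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Illustrative knot values only -- Fried's actual tabulated probabilities
# should be substituted here.
x = np.array([4.0, 9.0, 16.0, 25.0, 36.0, 49.0])    # (D/r0)^2 sample points
pL = np.clip(5.6 * np.exp(-0.1557 * x), 0.0, 1.0)   # placeholder P_L values

cdf_fit = PchipInterpolator(x, pL)   # shape-preserving piece-wise cubic fit
pU = lambda s: 1.0 - cdf_fit(s)      # Unlucky probability curve, Eq. (2)
```

Because pchip is shape-preserving, the interpolant does not overshoot between knots, which keeps the resulting probability curve within [0, 1].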

Fig. 1. Lucky Image and derived Unlucky Image probabilities for turbulent imagery as a function of ${({{D / {{r_0}}}} )^2}$.

The shape of the curve shown in Fig. 1 is not a surprise, given the data and graph in Fried’s paper. Consider the case of the pupil aperture, D, being a fixed quantity. As the value of ${r_0}$ increases and approaches the value of D, the “Unlucky Image” probability will approach zero. Likewise, as the value of ${r_0}$ decreases, relative to D, the Unlucky Image probability must approach unity.

Of greater interest than the visible behavior in Fig. 1 is what the function ${P_U}$, the Unlucky Image probability, demonstrates about the Lucky Image probability. From the logic laid out by Fried, ${P_L}$ is the probability that diffraction limited resolution is achieved, as required by Fried’s probability values approaching 1 (unity) as the ratio ${D / {{r_0}}}$ approaches 1 from larger values. In our case, the value of ${P_U}$ must approach 0 (zero), since there will be no chance that the resolution will be less than diffraction limited when turbulence is absent. Conversely, the probability ${P_U}$ must approach 1 as the ratio ${D / {{r_0}}}$ grows large.

Now consider a case between these two extremes, e.g., the case of ${({{D / {{r_0}}}} )^2} = 10$. For the specific value of ${({{D / {{r_0}}}} )^2}$ equal to 10, there are many ways that the failure to achieve diffraction limited resolution can occur. Each of the many different events corresponding to resolution less than diffraction limited contributes to the cumulative probability that is recorded on the vertical axis. The case of diffraction limited resolution is also included in the calculation, since that is also possible as an outcome of the random image formation events. Stated formally, following Eq. (45) in Ref. [1], any of the independent resolution patches that Fried identifies may have resolution that is less than or equal to diffraction limited, thus:

$$\begin{aligned} {P_U} &= Prob\left\{ {\sum\limits_n {S_n^2 > 1\,\textrm{ra}{\textrm{d}^2}} } \right\}\\ &= \textrm{Probability of resolution in an image patch} \le \textrm{diffraction limit} \end{aligned}$$
where the $S_n^2$ terms are zero-mean Gaussian random variables with variance representing the strength of the aperture-averaged, squared, wave-front distortion w.r.t. an orthonormal basis set expansion. The above equation relates phase distortion in the pupil to local resolution in the image plane.

If, for the sake of argument, one interprets the red curve of Fig. 1 as a Cumulative Distribution Function (CDF), then for a given value of ${({{D / {{r_0}}}} )^2}$ on the horizontal axis, the probability of any image local resolution less than or equal to the diffraction limit is the cumulative sum of the associated Probability Density Function (PDF) probabilities that are less than or equal to that same value of ${({{D / {{r_0}}}} )^2}$. For a random variable, v, this can be more formally stated as:

$$CDF(x )= \int_{ - \infty }^x {PDF(v )dv}$$

The “…less than or equal to…” conditional of Eq. (4) comes from the limiting values that are infinitesimally less than the upper limit of integration, x. Because a CDF is the integral of a PDF, the PDF that integrates into a given CDF can be computed as the derivative of the corresponding CDF. From this, we have an additional understanding of the graph in Fig. 1. As stated by Fried, since the Lucky Image probability implies the presence of many different independent patches having resolution less than or equal to the diffraction limited resolution, it can be used to compute the different probabilities of resolutions in the independent patches.

Thus, using Eq. (4) to convert the CDF, seen in Fig. 1 as the red line, into a PDF, we computed the numerically differentiated values and smoothed them with a standard MATLAB function, “smoothdata”, using a seven-sample moving-mean smoothing window. The numerical smoothing of the derivative is beneficial because the numerical integration by Fried, which produced the results in his table, is subject to error intervals (as also published by Fried). The errors lead to small “kinks” in the differentiated curve before the smoothing operation. The result is seen in Fig. 2.
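The differentiation-and-smoothing step can be sketched as follows, assuming a Python environment; `uniform_filter1d` plays the role of MATLAB's “smoothdata” with a seven-sample moving mean, and the illustrative CDF curve is generated from Eq. (1) rather than the pchip fit to Fried's table:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

# Dense grid of (D/r0)^2 values and an illustrative P_U curve, Eq. (2);
# in the paper this curve is the pchip fit to Fried's tabulated values.
s = np.linspace(12.25, 60.0, 500)
p_unlucky = 1.0 - np.clip(5.6 * np.exp(-0.1557 * s), 0.0, 1.0)

pdf_raw = np.gradient(p_unlucky, s)        # d(CDF)/ds, per Eq. (4)
pdf_smooth = uniform_filter1d(pdf_raw, 7)  # 7-sample moving-mean smoothing
```

Since the CDF is non-decreasing, the raw derivative is non-negative everywhere; the smoothing only suppresses the small "kinks" introduced by the error intervals in the tabulated values.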

Fig. 2. Probability density function for diffraction limited resolution.

One may point out that Fried explicitly considered D and ${r_0}$ to be fixed quantities for his analysis and derivation of the probability of a Lucky Image. In arriving at the PDF of Fig. 2, we are considering the quantity ${({{D / {{r_0}}}} )^2}$ to be a random variable through the statistical nature of ${r_0}$. Fried acknowledges this in his closing paragraph:

“What we have calculated is the probability that a sample drawn from the ensemble of wave-front distortions will be a sample with almost no distortion. Our ensemble is restricted to samples of the same value of ${r_0}$, i.e., there is no change in the strength of turbulence of the propagation path during the experiment. There is, however, a larger ensemble associated with different values of ${r_0}$. ${r_0}$ changes with changing turbulence conditions. …” [1]

The PDF of Fig. 2 appeals to this larger ensemble from which wave-front distortions may be drawn as part of a more general experiment. As Fried noted in our first quote from his Lucky Image paper, there are isoplanatic patches in the image plane where the resolution changes from patch-to-patch. This is the direct manifestation of the coherence diameter being a random quantity. The analysis that leads to the PDF depicted in Fig. 2 above is the quantification of that random behavior.

The basic shape of the PDF displayed in Fig. 2 comports with the basic results in Fried’s paper. The probability density in Fig. 2 has a long right tail, which is implied within the graph of Lucky Image probability seen in Fig. 1. Further, the long right tail is consistent with a declining exponential function. What is new in this PDF is the behavior for values in the left portion of the graph. The fitted numerical expression for Lucky Image probability in Fried’s paper was stated as valid only for values of ${D / {{r_0}}} \ge 3.5.$ The peak of the curve in Fig. 2 occurs near this value, i.e., in the interval $13.0 \le {({{D / {{r_0}}}} )^2} \le 13.7,$ the uncertainty interval being a consequence of the smoothing applied to minimize the effects of small random errors in the probabilities computed by Fried in the Monte Carlo (statistical) numerical integration of phase variations. For values of ${D / {{r_0}}} < 3.5$ the only probability information within Fried’s paper was the two tabulated values for ${D / {{r_0}}} = 2, 3.$ In Fig. 2 above, however, there is information for probabilities as the value of ${r_0}$ approaches the size of the physical aperture of the optical pupil.

It is important to emphasize that Fig. 2 can only be used in the manner consistent with any PDF. A PDF describes how a specific quantity, a point on the horizontal axis, compares, in the relative sense of being more probable or less probable, with another point on the horizontal axis. An actual probability is determined from the integration of the curve between two points on the horizontal axis. This is why the vertical axis of Fig. 2 is labeled “Probability Mass”. The probability of an event is determined by the mass associated with that event, the event being the physical circumstances associated with the two different positions on the horizontal axis.

Finally, we note that the PDF of Fig. 2 displays properties that are associated with the pupil plane of an optical system. The resolution is described in terms of a ratio, and the numerator of the ratio is the pupil diameter, D. The denominator of the ratio is the turbulence coherence diameter, ${r_0}$, which is considered as the effective pupil diameter, a resolution limit that is imposed by the effects of turbulence. Thus, when Fried speaks of various resolution patches in the image plane, as quoted above, he is speaking of different independent resolution patches formed by effective pupil diameters of many different sizes. The construction of the Fig. 2 PDF displays these local patches as having resolutions as predicted in the PDF, for specific effects in the optical system (the aperture $D$) and the prevailing turbulence conditions (the coherence diameter ${r_0}$).

The validity of the general shape of the PDF displayed in Fig. 2, in relation to the pupil plane, has been seen in other work. For example, see the dissertation of Law [4], where he produces stellar turbulent images from a very large number of high-quality image simulations with Kolmogorov phase statistics, then computes the histogram of Strehl ratios of the images to demonstrate the probabilities of different resolutions. A histogram is an empirical estimate of a PDF, and the histogram in Ref. [4] has the same shape as seen in Fig. 2. Further, the basic definition of the Strehl ratio is the ratio of the maximum intensity of an aberrated PSF to the maximum intensity of the diffraction limited PSF. However, the Strehl ratio can also be treated as a resolution metric computed in the spatial frequency domain and derived from the optical pupil plane. Considered this way, if the Strehl ratio drops below 1, the corresponding aberrated PSF is broader (i.e., less resolved) than the diffraction limited PSF, assuming that all optical pupil plane energy is conserved.

Thus, we assert that the statistics of resolutions in the image plane, as predicted by Fried and seen in Fig. 2, can be more useful if expressed in properties of the image plane, as opposed to properties more commonly associated with the pupil plane. For that reason, we turn now to describing resolution statistics in the image plane domain, and to relating those to the pupil plane values that have been dominant in the development of the Lucky Image equation and the PDF of Fig. 2.

3. Development of image plane local resolution statistics

To thoroughly ground the discussion of resolution statistics in the image plane, we consider the case of the aperture D being fixed in the ratio ${D / {{r_0}}}$. This is motivated by the consideration that a realistic scenario for imaging is a given optical system with the aperture held constant for the duration of an imaging event. This is also the scenario implicit in Fried’s comments on varying resolution that are quoted above. For a fixed aperture, the parameter that has implicit freedom to vary is the coherence diameter, ${r_0}$. In this case, we consider the coherence diameter as the governing limit to resolution within Fried’s patches of varying resolutions, “distributed over the image field-of-view [in] a set of rather small regions.” [1] Stated another way, the resolutions in the different patches vary as though there are many different optical apertures applying to each, with each patch resolved as though by a diffraction limited pupil with an effective diameter of ${r_0}$. The effective pupil diameter varies according to the probabilities seen in the PDF of Fig. 2. We use “effective” here to emphasize that there is no meaningful difference in the patch resolution of an aperture with a localized-in-the-patch value of ${r_0}$, regardless of the actual value of the aperture, $D.$

This leads to the most useful way of thinking about the PDF of Fig. 2. For a fixed aperture of size $D,$ typical of an image formation event, the resolution probabilities that are seen in Fig. 2 are probabilistic variations in the localized value of the coherence diameter, ${r_0}$, that limits resolution. The probabilities characterize the variation in effective pupil diameter (and the corresponding variation in a local value of ${r_0}$) for each patch.

Thus, we refer to Local Resolution Statistics as the distribution of values of ${r_0}$ that change from patch-to-patch and determine the resolution of each patch locally. For a fixed D, the Local Resolution Statistics follow the PDF derived above. Unfortunately, the effective pupil diameter and local resolution statistics are functions of the coherence diameter, which is a metric quantity with greatest meaning in the pupil plane. We seek, instead, to quantify Local Resolution Statistics in the image plane.

As stated above, the effect of turbulence is to produce resolution within a patch that is limited to that which would be obtained from an aperture of size ${r_0}$. For example, Roggemann and Welsh state:

“The Fried parameter can be interpreted as the aperture size beyond which further increases in diameter result in no further increases in resolution.” [5]

Given this statement, there is a temptation to associate an aperture of size ${r_0}$ to an angular resolution similar to the Rayleigh criterion for any circular aperture:

$${\theta _{res}} = \frac{{1.22\lambda }}{{{r_0}}}$$

This relation may lead to the further temptation to calculate a physical resolved dimension, similar to how angular resolution could be physically interpreted in an image of Rayleigh characteristics. However, these temptations should be avoided, since the presence of turbulence means the system is not diffraction limited, which is the realm where Eq. (5) is defined and applies. In other words, the comments of Roggemann and Welsh must be carefully limited as stated, i.e., an aperture diameter that limits resolution, but the exact nature of the resolution is not defined by diffraction limited Rayleigh behavior and the Airy disc with peak-to-first null dimensions.

How can the change in resolution be quantified for effects of ${r_0}$ in a useful way? The best approach is to examine the behavior of the curve in Fig. 2 for a fixed value of $D$ as a function of changes in the Fried parameter ${r_0}$. In Fig. 2, increasing the value of ${r_0}$ towards the value of D results in moving to the left side of the plot where the peak occurs. This is consistent with the fact that an increase in ${r_0}$, relative to the value $D,$ diminishes the effect of turbulence and moves optical performance toward diffraction limited resolution. Conversely, a decrease in ${r_0}$ means decreased resolution. Correspondingly, the peak of the PDF for an angular resolution probability will move towards the right side, and there will be a long left tail and a short right tail associated with the new position of the peak.

Another means to quantify the resolution in the image plane is by concentrating on the focal plane effects of a PSF produced by turbulence. The effects of both long and short exposures during image formation have been quantified in terms of Optical Transfer Function (OTF) for circular apertures in long and short exposures [6]. For the fixed value of a given optical pupil diameter $D,$ the corresponding OTFs, as a function of ${r_0}$, can be computed and numerically transformed into the image plane (e.g., by FFT) to yield long and short exposure PSFs. A metric of width for the associated PSF (such as full PSF width at half PSF maximum, FWHM) can then be used to quantify the metric probability associated with transforming the pupil plane values of D and ${r_0}$ through short and long exposure OTFs. In this manner, the independent variable in Fig. 2 could be replaced by a suitable PSF metric, such as FWHM. However, it is again necessary to remember that this is not a diffraction limited FWHM on the Airy disc, but a metric that is useful to interpret the probabilities seen in Fig. 2.
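A minimal sketch of this OTF-to-PSF route is given below, in Python. It assumes the standard long-exposure turbulence MTF, $\exp [ -3.44 {(\lambda f/{r_0})^{5/3}}]$, together with the diffraction limited OTF of a circular pupil; the grid size and the row-based FWHM measure are our illustrative choices, not a prescription from Ref. [6]:

```python
import numpy as np

def dl_otf(nu):
    """Diffraction-limited OTF of a circular pupil; nu = f / f_cutoff."""
    nu = np.clip(nu, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(nu) - nu * np.sqrt(1.0 - nu**2))

def long_exposure_psf(D, r0, lam, n=512):
    """Long-exposure PSF: diffraction-limited OTF times the long-exposure
    turbulence MTF exp(-3.44 (lam*f/r0)^(5/3)), inverse-FFT'd to the image
    plane and normalized to unit mass."""
    fc = D / lam                               # OTF cutoff frequency (cycles/rad)
    f = np.fft.fftfreq(n, d=1.0 / (2.0 * fc))  # frequency samples out to +/- fc
    fx, fy = np.meshgrid(f, f)
    fr = np.hypot(fx, fy)
    otf = dl_otf(fr / fc) * np.exp(-3.44 * (lam * fr / r0) ** (5.0 / 3.0))
    psf = np.abs(np.fft.fftshift(np.fft.ifft2(otf)))
    return psf / psf.sum()

def fwhm_pixels(psf):
    """Width of the central row at half of its peak value, in pixels."""
    row = psf[psf.shape[0] // 2]
    return int(np.count_nonzero(row >= row.max() / 2.0))
```

As expected, holding $D$ and $\lambda$ fixed while shrinking ${r_0}$ broadens the computed PSF, so the FWHM metric grows as turbulence strengthens.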

Finally, the transformation of probabilities as a function of image plane parameters, rather than the pupil plane values of D and ${r_0}$, can be achieved by practical empirical means. If turbulent PSFs are available, then PSF metrics can be derived from analysis with respect to local resolution statistics, for a sufficiently large set of PSFs. This is the topic of the succeeding section.

4. Empirical image plane local resolution statistics

The size of a PSF has a direct effect on the resolution of the image formed in the focal plane of an optical system, since a broader PSF “smears” optical energy across more positions in the image plane. A useful metric for the size of a PSF is “ensquarement”. To calculate ensquarement, a square region of pixels is positioned with the center of the square placed onto the center-of-mass of the image of a point source in the focal plane, which is the PSF of the system. We then compute the normalized portion of the PSF mass within the square. For a diffraction limited PSF, a square with width the size of the Airy disk would contain the majority of the PSF mass. This prompts the use of ensquarement as a surrogate for resolution, which we do in the following discussion. We choose ensquarement because it is simple to implement in digital focal plane image detectors. Further, ensquarement relates to “encirclement”, where a circle inscribed in a square contains ${\pi / 4}$ of the area of the square.

Simple geometric considerations show how ensquarement is a surrogate for resolution in the focal plane. For example, consider a square that is centered on an image of a point source, as shown in Fig. 3. If the PSF is large in extent, relative to the square, as in the case of the $17 \times 17$ pixel$^2$ red square, then much of the mass of the PSF will fall outside the square. In this case the smallness of the square, in comparison to the PSF, indicates that the action of the PSF, in forming an image of objects the size of or smaller than the square, will cause a loss of resolution. In the contrary case, the smallness of a PSF, in comparison to the larger $35 \times 35$ pixel$^2$ square, indicates the action of a PSF in forming an image will preserve resolution, because most of the PSF mass lies within the square.

Fig. 3. Boundaries of two different-sized ensquared regions illustrated on a collected image of a point source.

This simple geometrical argument indicates how to use ensquarement to examine the image plane resolution of PSFs obtained in the presence of atmospheric turbulence. Consider an ensemble of PSFs collected in some fashion. We first select the size of a square, which we will center on each PSF in the ensemble. For a consistent size square, we determine the fraction of the total mass of each PSF that falls within the square, for all the PSFs in the ensemble. Since it is conventional that a PSF has a unit mass of 1, this process will produce a set of “ensquarement fractions”. The collection of these ensquarement fractions makes a set of Local Resolution Statistics, exactly in the spirit of the description of resolution variations in Ref. [1] and discussed in Sec. 2 above.
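The ensquarement-fraction computation described above can be sketched in a few lines of Python (the paper's processing was done in MATLAB; the function below is our illustrative equivalent):

```python
import numpy as np

def ensquared_fraction(psf, half_width):
    """Fraction of total PSF mass inside a square of side (2*half_width + 1)
    pixels, centered on the PSF's center of mass."""
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = int(round((ys * psf).sum() / total))   # center-of-mass row
    cx = int(round((xs * psf).sum() / total))   # center-of-mass column
    y0, y1 = max(cy - half_width, 0), cy + half_width + 1
    x0, x1 = max(cx - half_width, 0), cx + half_width + 1
    return psf[y0:y1, x0:x1].sum() / total
```

A single-pixel (delta-like) PSF yields a fraction of 1 for any square size, while a perfectly flat frame yields the ratio of the square's area to the frame's area, matching the geometric intuition above.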

Figure 4 displays a diffraction limited PSF for a circular aperture, i.e., the Airy disc in 3-D perspective. The Rayleigh criterion marks on the PSF the side-to-side nulls that govern the angular resolution of images formed by this PSF. The great majority of the PSF mass is within the distance from one null to another, i.e., the PSF mass contained within the first dark ring, the fraction of the PSF mass within the nulls being approximately 84% of the total. Consider now a new PSF obtained in some fashion. Further, let the new PSF be from an optical system that does not produce a diffraction limited PSF. For example, assume the new PSF does not have the shape or degree of symmetry visible in the Airy disk perspective of Fig. 4. How can we compare this new PSF to the Airy disk?

Fig. 4. Diffraction limited PSF of a circular aperture, relative to the Rayleigh criterion size, as in Eq. (5) and indicated in green.

One way to make the comparison of a new, arbitrary PSF to the Airy disc is via ensquarement fraction. For example, we can center a small square of some fixed number of pixels in dimension, as in Fig. 3, on the center-of-mass of the new PSF and calculate the fractional amount of PSF mass within the square. If we repeat this exercise of ensquarement for larger squares, at some point a square will be placed on the new PSF such that 95% or more of the PSF mass lies within the square. In this case, we decide that the new PSF is equivalent to the Airy disk in the sense of 95% concentration.

We refer to this equivalence as the 95% rule. Why do we choose the 95% rule, rather than use an 84% rule from the discussion of the Airy disc above? Any turbulent PSF, where ${r_0}$ is substantially less than the aperture diameter $D,$ will lack the structure, size, and shape exhibited by the Airy disc. Furthermore, as discussed above with respect to Strehl ratio, the resulting PSF will be much broader, i.e., more dispersed in energy across the focal plane. If the enclosing square is too small, the random dispersion of PSF energy will not be readily captured within the square, resulting in substantial fluctuation in the statistics that are to be compiled in the form of an empirical PDF estimate of Fig. 2. Our experiments with different sizes of enclosing squares demonstrated that a larger square, exceeding the 84% threshold, was important to make the resulting statistics stable in filling the histogram bins in the empirical PDF estimates of the focal plane data.
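The 95% rule can be expressed as a search for the smallest centered square meeting the threshold. The following Python sketch (our illustration, not the collection pipeline itself) makes the rule concrete:

```python
import numpy as np

def square_for_fraction(psf, target=0.95):
    """Smallest odd square side, centered on the PSF center of mass, that
    captures at least `target` of the PSF mass (None if no square does)."""
    total = psf.sum()
    ys, xs = np.indices(psf.shape)
    cy = int(round((ys * psf).sum() / total))
    cx = int(round((xs * psf).sum() / total))
    for hw in range(0, max(psf.shape)):
        frac = psf[max(cy - hw, 0):cy + hw + 1,
                   max(cx - hw, 0):cx + hw + 1].sum() / total
        if frac >= target:
            return 2 * hw + 1
    return None
```

For a compact, well-concentrated PSF the returned side is small; for a turbulence-broadened PSF it grows, which is exactly why the square size needed to satisfy the rule acts as a surrogate for local resolution.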

Optically measured PSFs were available from a previous experimental collection of atmospheric turbulence data at the Air Force Research Laboratory located at Wright-Patterson Air Force Base (WPAFB) in Dayton, OH. The data was collected over a propagation path distance of 5090 m, from a ground laser source to an optical telescope positioned in a tower, approximately 30-50 m higher than the flat terrain over which the propagation took place. There were 13 sets of collections, from 13 different conditions of turbulence, with variations in the characteristics of the optical telescope making PSF images onto a digital focal plane detector. The experiments took place on a warm day in fall, September 22, 2015. The collections were made over an interval of several hours, leading to significant changes in the turbulent atmosphere conditions created by solar heating of the flat terrain under the propagation path of the collections. Further information on the test collections has been previously published [7].

After inspection of the collected PSFs, a set was identified as suitable in the quantity and quality of the PSF images. This set was collected in the middle of the day, near local noon, when the solar heating was at a maximum. A scintillometer was in operation during the collection, making measurements of the turbulence. The value of the structure constant for the chosen set of PSFs was measured by the scintillometer as $C_n^2 = 1.1 \times {10^{ - 14}}\; \textrm{m}^{-2/3}$, indicating a substantial level of turbulence on the propagation path. The physical sampling in the image plane was at the Nyquist rate, so that no PSF distortions due to aliasing occurred. From that collection, we selected a set of 800 distinct PSFs. The PSFs were extracted in small image chips of $65 \times 65$ pixels, as shown in Fig. 3, with each PSF computer-registered to pixel (33,33) in order to remove all effects of wavefront tilt and allow a focus on resolution statistics due only to variations in local resolution. The PSFs were normalized so that all intensities in the frame summed to unity. From these measured PSFs, we then sought to construct empirical estimates of Local Resolution Statistics, in the spirit of the discussion above.

After selecting the PSF data, we proceeded as stated above, by placing small squares onto the center of the $65 \times 65$ image frames containing PSFs, for each of the 800 PSF image frames. Each square placed on a PSF image was given an odd side length in pixels, so that the square had a single well-defined center pixel with an equal number of pixels on each side of it. Likewise, since the image frames were $65 \times 65$ pixels, also an odd size, the center of a PSF frame is always at pixel (33, 33), with an equal number of pixels surrounding that position.

The fractional mass of the PSF ensquared within each square was then calculated after placement of the square. This resulted in a set of 800 ensquarement fractions for each size of square positioned on the PSFs. Of interest next are the statistics of these ensquarement values. Since we related changes in resolution in the image plane to a PDF, we generated histograms of the ensquarement values collected in this process; the motivation is that a histogram is, in essence, a discrete approximation to a probability density function. Figure 5 contains two examples of the histograms.
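The ensquarement-fraction calculation can be sketched as follows; this is a minimal version using a synthetic Gaussian stand-in for a measured PSF, not the collection data.

```python
import numpy as np

def ensquared_fraction(psf, side):
    """Fraction of total PSF mass inside an odd-sided square centered
    on the chip's center pixel (frame assumed odd-sized, e.g., 65x65)."""
    assert side % 2 == 1, "square side must be odd so it has a center pixel"
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2   # (32, 32) for a 65x65 frame
    h = side // 2
    return psf[cy - h:cy + h + 1, cx - h:cx + h + 1].sum() / psf.sum()

# Hypothetical narrow Gaussian PSF: nearly all mass falls inside a 35x35 square
y, x = np.mgrid[0:65, 0:65]
psf = np.exp(-((y - 32)**2 + (x - 32)**2) / 18.0)
f35 = ensquared_fraction(psf, 35)
f17 = ensquared_fraction(psf, 17)
```

Shrinking the square can only decrease the captured fraction, which is the behavior the two panels of Fig. 5 illustrate.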


Fig. 5. Ensquarement fraction histograms (blue) and empirical probability distributions (red) for $35 \times 35$-pixel2 squares (left) and $17 \times 17$-pixel2 squares (right).


The graph on the left of Fig. 5 results from using a $35 \times 35$ pixel2 square, which meets the conditions of the 95% rule described above, to collect the ensquarement fractions. The right graph of Fig. 5 results from a square of half this size (i.e., a $17 \times 17$ pixel2 square), selected to illustrate the case where ensquarement does not capture the bulk of the PSF energy. The histograms were computed with 20 bins spanning the range of ensquarement fractions. The bin counts are displayed as the blue vertical rectangles in Fig. 5. The red lines passing through the plots are the result of using the MATLAB "ksdensity" function to compute an empirical estimate of the PDF from which the bin counts accumulate. The red lines are, thus, the corresponding empirical PDFs for the two different ensquarement choices used to create Fig. 5.
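The histogram-plus-smooth-PDF step can be sketched in Python using `numpy.histogram` and `scipy.stats.gaussian_kde` (an analogue of MATLAB's `ksdensity`). The right-skewed sample below is a synthetic stand-in for the measured ensquarement fractions, not the experimental data.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# Synthetic stand-in: a peaked, right-skewed sample of 800 "fractions"
fractions = np.clip(rng.gamma(shape=2.0, scale=0.08, size=800), 0.0, 1.0)

# 20-bin histogram, normalized so bar areas approximate a PDF
counts, edges = np.histogram(fractions, bins=20, density=True)

# Smooth empirical PDF estimate, analogous to MATLAB's ksdensity
kde = gaussian_kde(fractions)
grid = np.linspace(0.0, 1.0, 200)
pdf = kde(grid)
```

Overlaying `pdf` on the normalized histogram reproduces the blue-bars-plus-red-curve presentation of Fig. 5.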

Figure 5 shows structure that is notable in relation to the discussion that accompanies Fig. 2 above. We note, first, that the overall shapes of each histogram and empirical PDF are similar, with a long right tail and a strong, narrow peak at the left. These features are consistent with the PDF we discussed above and displayed in Fig. 2. Second, the smaller $17 \times 17$ ensquarement in the right graph of Fig. 5 shows a shift of the ensquarement-fraction statistics to the left: the peak in the right graph sits at smaller ensquarement fractions than the peak in the left graph. The peak in the $17 \times 17$ case is also narrower and more pronounced. This behavior of the peak, and of its location along the ensquarement axis, is expected, because smaller squares reduce the probability that a PSF will have a large fraction of its structure within the square collecting ensquarement values.

The basic shape in Fig. 2, and the occurrence of this shape in the empirical PDF plots, suggests consideration of analytical PDFs that possess a similar long right tail past a well-defined peak. We tested this possibility by fitting the raw ensquarement values displayed in Fig. 5 to analytical PDFs known for left-hand peaks and long right tails. Numerical fits were made to the Rayleigh PDF, a one-parameter density, and to the Gamma and Lognormal PDFs, which are two-parameter densities. The Rayleigh fit represented the ensquarement values poorly, i.e., the empirical distribution of ensquarement fractions in Fig. 5 departed substantially from the fitted Rayleigh curve. The Gamma and Lognormal fits were better, but they did not reproduce as much "peaking" on the left as in the empirical distributions, nor did they sufficiently represent the long right tail of the actual ensquarement values. All three fits produced fatter central lobes, with more modest right tails, than the empirical distributions. A more suitable analytical fit remains an open question.
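The fitting procedure can be illustrated with maximum-likelihood fits from `scipy.stats`. The sample below is synthetic (drawn from a Gamma density as a stand-in), not the measured ensquarement values, and the comparison by log-likelihood is one reasonable way to rank the candidate densities.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic stand-in: a sharp left peak with a long right tail
fractions = rng.gamma(shape=1.5, scale=0.1, size=800)

# Maximum-likelihood fits, with the location parameter pinned at zero
ray_param = stats.rayleigh.fit(fractions, floc=0)   # one free scale parameter
gam_param = stats.gamma.fit(fractions, floc=0)      # shape + scale
logn_param = stats.lognorm.fit(fractions, floc=0)   # shape + scale

# Compare the fits by total log-likelihood (higher is better)
ll = {
    "rayleigh": stats.rayleigh.logpdf(fractions, *ray_param).sum(),
    "gamma": stats.gamma.logpdf(fractions, *gam_param).sum(),
    "lognorm": stats.lognorm.logpdf(fractions, *logn_param).sum(),
}
```

For the measured data, one would plot each fitted density over the empirical PDF of Fig. 5 to visualize the deficits in peak height and tail weight described above.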

Finally, we note that the experiments described above have been repeated with other data sets, including high-quality simulations of turbulence of the kind indicated by Law et al. [4] (though not Law's data), and over ranges of turbulence with both lesser and greater values of ${r_0}$. The results confirmed the basic behavior of the empirical PDF for ensquarement values.

5. Concluding comments

Figure 6 shows an example of a turbulent PSF. For turbulent PSFs, a metric for resolution that measures the dispersion across the image plane is more logical and more natural. The metric of ensquarement, discussed herein, has the virtue of being simple to calculate and comprehensive with respect to the PSF dispersion expected as turbulence becomes stronger.


Fig. 6. Example of a Turbulent Point-Spread-Function from Experimental Observations at WPAFB


Besides ensquarement, other metrics logically suggest themselves if they focus on the presence of dispersion. For example, the second central moment of PSF mass about the PSF centroid, whose square root measures the spread of PSF mass about the centroid, would be suitable as a metric. A collection of second-moment values could then be accumulated and used as input to histogram calculations of the sort that led to the empirical PDF estimates of local resolution seen in Fig. 5.
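Such a second-moment metric can be sketched as follows; the two Gaussian PSFs below are synthetic checks, not measured data.

```python
import numpy as np

def psf_second_moment(psf):
    """RMS radius of PSF mass about its intensity centroid (in pixels):
    the square root of the second central moment of the normalized PSF."""
    y, x = np.mgrid[0:psf.shape[0], 0:psf.shape[1]]
    w = psf / psf.sum()
    cy, cx = (w * y).sum(), (w * x).sum()
    return np.sqrt((w * ((y - cy)**2 + (x - cx)**2)).sum())

# Hypothetical check: a more dispersed PSF should yield a larger second moment
y, x = np.mgrid[0:65, 0:65]
narrow = np.exp(-((y - 32)**2 + (x - 32)**2) / 8.0)
broad = np.exp(-((y - 32)**2 + (x - 32)**2) / 72.0)
m_narrow = psf_second_moment(narrow)
m_broad = psf_second_moment(broad)
```

Because the moment grows monotonically with dispersion, a histogram of such values over many frames would serve the same role as the ensquarement histograms of Fig. 5.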

The point we are emphasizing here is that a purely phenomenological approach, as simple as ensquarement, demonstrates that the basic shape of an empirical PDF is consistent with the theoretical analysis that produces the general shape for the PDF of Local Resolution Statistics derived and presented in Fig. 2. The results of empirical PDF estimates, as in Fig. 5, are a basic consequence of the Lucky Image probability equation originally set forth by Fried. Fried’s fundamental analysis inherently implies the nature of the statistics of variations in the localized regions identified by Fried, and the shape of the PDF of those variations follows directly, as demonstrated in our simple case of PSF ensquarement statistics.

Funding

Air Force Research Laboratory (FA8650-18-C-1017).

Acknowledgments

The authors gratefully thank the Air Force Research Laboratory at Wright-Patterson Air Force Base for the funding to conduct the research reported herein. The authors also are grateful to Michael Rucci of AFRL for his insight and related research into local resolution effects.

Disclosures

KBR Wyle Services, LLC (FE)

Data availability

Data underlying the results presented in this paper are controlled by the Department of the Air Force, AFRL/RYMT, under CUI Category: CTI and Distribution/Dissemination Control: FEDCON.

References

1. D. L. Fried, "Probability of getting a lucky short exposure image through turbulence," J. Opt. Soc. Am. 68(12), 1651–1658 (1978). [CrossRef]  

2. D. Bensimon, R. Englander, R. Karoubi, and M. Weiss, "Measurement of the probability of getting a lucky short exposure image through turbulence," J. Opt. Soc. Am. 71(9), 1638–1639 (1981). [CrossRef]  

3. M. A. Rucci, R. C. Hardie, and R. K. Martin, “Simulation of anisoplanatic lucky look imaging and statistics through optical turbulence using numerical wave propagation,” Appl. Opt. 60(25), G19–G29 (2021). [CrossRef]  

4. N. M. Law, C. D. MacKay, R. G. Dekany, M. Ireland, J. P. Lloyd, A. M. Moore, J. G. Robertson, P. Tuthill, and H. C. Woodruff, “Getting lucky with adaptive optics: Fast adaptive optics image selection in the visible with a large telescope,” Astrophys. J. 692(1), 924–930 (2009). [CrossRef]  

5. M. C. Roggemann and B. M. Welsh, Imaging Through Turbulence (CRC Press, 1996), p. 71.

6. D. L. Fried, "Optical Resolution Through a Randomly Inhomogeneous Medium for Very Long and Very Short Exposures," J. Opt. Soc. Am. 56(10), 1372–1379 (1966). [CrossRef]  

7. B. R. Hunt, A. L. Iler, and M. A. Rucci, “Scalability conjecture for the Fried parameter in synthesis of turbulent atmosphere point spread functions,” J. Appl. Rem. Sens. 12(4), 042402 (2018). [CrossRef]  




Figures (6)

Fig. 1. Lucky Image and derived Unlucky Image probabilities for turbulent imagery as a function of ${({D/{r_0}})^2}$.
Fig. 2. Probability density function for diffraction limited resolution.
Fig. 3. Boundaries of two different-sized ensquared regions illustrated on a collected image of a point source.
Fig. 4. Diffraction limited PSF of a circular aperture, relative to the Rayleigh criterion size, as in Eq. (5) and indicated in green.
Fig. 5. Ensquarement fraction histograms (blue) and empirical probability distributions (red) for $35 \times 35$-pixel2 squares (left) and $17 \times 17$-pixel2 squares (right).
Fig. 6. Example of a Turbulent Point-Spread-Function from Experimental Observations at WPAFB.

Equations (5)

$$\textrm{Lucky Prob.} = P_L \approx 5.6 \exp\left[ -0.1557 \left( \frac{D}{r_0} \right)^2 \right] \quad \textrm{for } D/r_0 \ge 3.5$$
$$P_U = 1 - P_L,$$
$$P_U = \mathrm{Prob}\left\{ \sum\nolimits_n S_n^2 > 1\,\mathrm{rad}^2 \right\} = \textrm{Probability of resolution in an image patch} < \textrm{diffraction limit}$$
$$\mathrm{CDF}(x) = \int_{-\infty}^{x} \mathrm{PDF}(v)\, dv$$
$$\theta_{res} = \frac{1.22\,\lambda}{r_0}$$