## Abstract

We explore opportunities afforded by an extremely large telescope design comprised of ill-figured randomly varying subapertures. The veracity of this approach is demonstrated with a laboratory scaled system whereby we reconstruct a white light binary point source separated by 2.5 times the diffraction limit. With an inherently unknown varying random point spread function, the measured speckle images require a restoration framework that combines support vector machine based lucky imaging and non-negative matrix factorization based multiframe blind deconvolution. To further validate the approach, we model the experimental system to explore sub-diffraction-limited performance, and an object comprised of multiple point sources.

© 2017 Optical Society of America

## Corrections

Xiaopeng Peng, Garreth J. Ruane, Marco B. Quadrelli, and Grover A. Swartzlander, "Randomized apertures: high resolution imaging in far field: erratum," Opt. Express **25**, 20952-20952 (2017)

https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-17-20952

24 July 2017: Typographical corrections were made to the abstract; paragraph 1 and 2 of Section 1, paragraph 1 of Section 2.1, paragraphs 1–3 of Section 2.2, paragraph 1 of Section 3, paragraphs 1 and 2 of Section 3.1, paragraphs 1–3 of Section 3.2.1, paragraph 1 of Section 3.2.2, paragraph 1 of Section 3.2.3, paragraph 1 of Section 4, paragraph 1 of Section 4.1, paragraph 1 of Section 4.2, paragraphs 1–3 of Section 4.3, paragraph 1 of Section 4.4, paragraph 1 of Section 5, paragraph 1–4 of Section 5.2, paragraph 1 of Section 6, paragraph 1 of Section 7; Eqs. (1)–(18); the figure captions of Figs. 3, 4, and 7–10; Tables 1 and 2; the acknowledgments section; and Refs. 3–8, 10, 11, and 20–64.

## 1. Introduction

The quest to achieve extraordinary resolution with a large space telescope is impeded by the cost and complexity of a large primary mirror [1–4]. One may imagine a well-figured system of free-flying segments, but locking these to a common focal point becomes challenging as the number of segments increases to the hundreds or thousands. The alignment complexity of the system relaxes if wavefront errors can be tolerated. However, undesirable speckle images will form in the focal plane. What is more, the randomly varying position and orientation of the segments produces a randomly varying point spread function. Facing these challenges, we explored whether a large collection of speckle images of a static object could be combined into a reconstructed image. We call the proposed system a Granular Space Telescope (GST) [5,6]. The schematic shown in Fig. 1 depicts a broadband point source at infinity illuminating circular subapertures of diameter *d* that are distributed across a baseline aperture of diameter *D* ≫ *d*. These subapertures may be flat mirrors roughly aligned to a parabolic surface. The experiment described below represents a scaled-down facsimile of the proposed system. We constructed random aperture arrays (RAAs) having a baseline *D* = 9 mm and subapertures *d* = 0.2 mm. We note that the subapertures are small enough to satisfy the far-field diffraction condition at the imaging sensor: *d*^{2} ≪ *λ*_{c} *f*, where *λ*_{c} = 560 nm is the central wavelength and *f* = 400 mm is the effective focal length of the array.

It is well known that a point object may form an image at the focal plane if there is significant interference between superimposed diffracted beams from a sufficient number of subapertures. This condition is partially satisfied if the footprint of the diffracted beam from a single subaperture spans the full extent of the imaging array. Assuming square pixels of pitch Δ*x* and a detector array comprised of *N*_{D} × *N*_{D} pixels, we require *λ*_{c} *f*/*d ~ N*_{D}Δ*x*. To ensure that a sufficient number of diffracted beams overlap in the focal plane, while also providing for a broad degree of tip-tilt randomness, we require that the root mean square (rms) of the tip-tilt angle be less than *λ*_{c}/*d*. Following classical optics, this proof-of-concept study first examines an object comprised of binary point sources. A numerical model further explores sub-diffraction-limited performance and multiple point sources. The proposed restoration framework first reduces the noise and aligns the image sequence. The lucky frames are then identified in the image sequence using a support vector machine [7,8]. Using the best lucky frame as an initial guess, a non-negative matrix factorization [9] based multiframe blind deconvolution is applied to the image sequence to remove the speckle and the remaining diffraction blur, thereby producing a high resolution image of the two point sources. One may expect that greater amounts of wavefront error will require more robust reconstruction algorithms than the one presented here. On the other hand, narrow spectral bandwidths may ease the reconstruction process.
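The two conditions above are easy to verify with the paper's nominal values (*λ*_{c} = 560 nm, *f* = 400 mm, *d* = 0.2 mm, Δ*x* = 6.25 μm); a minimal sketch in Python (variable names are ours, not the authors'):

```python
# Sanity check of the far-field and footprint conditions quoted above,
# using the paper's nominal values (an illustrative sketch only).
lam_c = 560e-9   # central wavelength [m]
f = 0.4          # effective focal length [m]
d = 0.2e-3       # subaperture diameter [m]
dx = 6.25e-6     # detector pixel pitch [m]

far_field_ok = d**2 < lam_c * f    # Fraunhofer condition: d^2 << lam_c * f
footprint = lam_c * f / d          # single-subaperture diffraction footprint [m]
pixels_spanned = footprint / dx    # to be compared with N_D detector pixels
```

With these numbers the footprint is about 1.1 mm, i.e., roughly 180 detector pixels across.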

## 2. Related work

#### 2.1. Optical synthetic aperture

The proposed GST is related to a Fizeau interferometer imaging system [10] in which light beams from individual aperture elements are merged at a virtual common focal plane of the entire array. In astronomy the Fizeau array has been utilized for segmented telescopes [11] such as the JWST, TMT, and E-ELT, and for diluted configurations [12,13] like a sparse Golay aperture. The GST also shares similarities with a photon sieve telescope [14], although our system is comprised of randomly placed subapertures in the pupil, whereas the sieve is a binary amplitude pupil mask that functions as a Fresnel lens. Aside from attempts to increase the physical size of the objective, improvements of the resolving power of telescopes have been achieved by sharpening the intensity distribution of the point spread function (PSF) [15–17].

#### 2.2. Image recovery from random distortions

The restoration of images from random distortions has been a long-standing interest in optics. In addition to ill-figured optical components, time varying perturbations such as air turbulence or system vibrations may distort an image. Autocorrelation techniques were proposed for imaging through turbulence [18], allowing the number of point-like objects to be estimated from a series of short exposure observations. Later, bispectrum and triple correlation [19, 20], iterative phase retrieval [21, 22], and machine learning [23, 24] methods were developed to retrieve the object function from randomly distorted measurements. These approaches required well aligned subapertures or knowledge of the PSF, for example through a direct measurement, computational calibration, or a reasonable statistical model [25–28].

A significant advance came with the development of blind deconvolution methods [29–31], which restore images directly from single or multiple observations with little knowledge of the system PSF. By removing random distortions and diffraction blur, a seemingly super-resolved reconstructed image may be achieved [32–34]. Although deconvolution methods allow for the expansion of the optical system cut-off frequency by inferring the missing frequencies, it should be noted that this only holds true if the spacing between the imaged objects is greater than the diffraction limit. In addition, blind deconvolution is a highly ill-posed problem, making it susceptible to entrapment in local minima, thereby producing sub-optimal or erroneous estimates. Multiframe approaches [35, 36] reduce the ratio of unknowns (the PSF and the target image) to knowns (the observations) from 2:1 in single frame schemes to (*M* + 1):*M*, thereby reducing the uncertainty in the search for the global minimum. Regularization terms [37–40], which model the statistics of a variety of objects (e.g., natural scenes) and their properties (e.g., edges), may also serve as constraints for an optimal convergence.

Another method of image reconstruction from random distortions is called *lucky imaging*. This theory is based on the observation that nearly diffraction-limited PSF’s occasionally occur owing to the random distortions. Statistically, sharp frames may occur given a sufficient number of observations [41, 42]. The lucky frames may be recorded directly inside an isoplanatic region [43, 44] or computationally identified from regularly captured images using sharpness metrics. If the PSF is measured (e.g., by use of a guide star), the Strehl ratio may be used as a lucky image classification parameter. Alternatively, metrics such as image brightness, variance, and quality of the detected edges may be used to identify lucky images [45, 46]. By fusing the lucky frames or patches, a diffraction-limited image may be produced. However, lucky imaging may not remove all distortions without the application of other techniques like image registration and deconvolution.

## 3. Image formation

Both experimental measurements and a numerical model were used to generate speckle images from random aperture arrays. These are described below, followed by image reconstruction discussions in Sec. 4.

#### 3.1. Experimental demonstration

The experiment makes use of a transmissive configuration (see Fig. 2(a)). A set of *N* = 100 unique thin foil masks were produced, each having *M* = 150 randomly placed subaperture elements (holes of radius *r* ∼ 0.1 mm) across a baseline *D* = 9 mm. The corresponding diameter ratio and fill factor were respectively given by Γ = 2*r*/*D* = 2% and FF = *M*Γ^{2} = 8%. These amplitude masks were made by piercing thin aluminum foil with the tip of a fine fabric sewing needle. A single mask was sequentially positioned at the front surface of a convex lens. The foil was covered by a layer of wrinkled cellophane to randomize the phase at each pinhole. A laser-driven white light source (Energetiq EQ-99XFC LDLS [47]) was spatially filtered using a 5 μm diameter pinhole to produce a spatially coherent beam of diameter slightly larger than the baseline *D*. In effect, the light source was spectrally sampled by the RGGB Bayer filter on the camera (Canon 5D Mark III) sensor, with a bandwidth of roughly Δ*λ* = 320 nm and a central wavelength *λ*_{c} = 560 nm. An amplifier gain of ISO = 100 was selected for each *τ* = 1/100 sec exposure. The sensor size is 36 × 24 mm, with a square pixel pitch of Δ*x* = 6.25 μm. A beam splitter and a back-reflecting mirror (not shown) were used to produce a second, mutually incoherent light source that subtended the first beam by an angle **θ**. The mask-lens combination therefore appeared to receive light from two mutually incoherent point sources at infinity, i.e., a binary point source. A sequence of *N* images was recorded on the camera sensor by inserting a new mask between each frame.

For ground truth comparison a diffraction-limited image of the binary source was recorded without the amplitude and phase masks. Examples of a typical RAA image and the diffraction-limited image of the broadband binary source are shown in Figs. 2(b) and (c) respectively. The polychromatic RAA image forms a radial speckle pattern. For a random array with rms tip-tilt angle *σ*_{α} and rms piston displacement *σ*_{z}, a characteristic variation of the optical path difference may be written as *σ*_{L} = *r* tan(*σ*_{α}) + *σ*_{z}. Theoretically, if *σ*_{L} is less than the temporal coherence length of the light source ${L}_{c}={\lambda}_{c}^{2}/\mathrm{\Delta}\lambda $, the speckle patterns are well correlated across neighboring wavelengths in the vicinity of the central isoplanatic region [26]. Speckles that fall within the isoplanatic region appear as normal speckles, each having a diameter ~ 2*λ*_{c} *f*/*D* (see zoomed-in image of Fig. 2(b)). Based on the numerical model discussed below, we found that for a random array with rms tip-tilt angle on the order of 5*λ*_{c}/*D* and rms piston of one wave, the isoplanatic region has a size of 10*λ*_{c} *f*/*D* (or 500 μm on our sensor).

#### 3.2. Numerical model

### 3.2.1. Monochromatic source

The following numerical model was used to analyze the image formation of the RAA system with controlled amounts of tip-tilt and piston phase error. What is more, this model is instrumental in forming the training set described in Sec. 4.3. The source plane provides two tilted plane waves at the pupil plane (labeled **x**′= (*x*′, *y*′), see Fig. 3(a)). Using a *N*_{A} × *N*_{A} grid, the pupil plane samples the aperture array, which consists of *M* subapertures of diameter *d* = 2*r* within a baseline *D* = 2*R*. A lens of focal length *f* tends to bend light transmitted through the subapertures toward the central focal point. The detector array (labeled **x** = (*x*, *y*)) has a size of *N*_{D} × *N*_{D} and is placed at the back focal plane of the lens.

The randomized subaperture array may be represented by the sum of a random distribution of complex subaperture functions within the baseline:

$$A(\mathbf{x}')=\sum_{m=1}^{M}\mathrm{circ}\left(\frac{\left|\mathbf{x}'-\mathbf{c}_m\right|}{r}\right)\exp\left[i\,\phi_m(\mathbf{x}')\right], \qquad (1)$$

where the circ function is unity valued if the argument is less than unity and zero otherwise, **c**_{m} = (c_{m,x}, c_{m,y}) is the center location of the m^{th} subaperture, and the phase term is given by:

$$\phi_m(\mathbf{x}')=k\left[\boldsymbol{\Delta\alpha}_m\cdot\left(\mathbf{x}'-\mathbf{c}_m\right)+\mathrm{\Delta}z_m\right], \qquad (2)$$

where *k* = 2*π*/*λ* is the wave number, and *λ* is the wavelength. Typically, the amount of misalignment is governed by a statistical distribution. Here we assume Δ*z*_{m} is a uniformly random piston displacement with variance ${\sigma}_{z}^{2}$, and **Δα**_{m} = (Δα_{m,x′}, Δα_{m,y′}) are mutually independent random tip-tilt angles, each having a normal distribution with zero mean and variance ${\sigma}_{\alpha}^{2}$. Typical values of *σ*_{α} and *σ*_{z} are 5*λ*_{c}/*D* (or *λ*_{c}/9*d*) and one wave respectively.
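A randomized pupil of this kind is straightforward to sketch numerically. The fragment below is an illustrative stand-in for the authors' model: it uses simple dart throwing instead of the Poisson disk sampler of [48], works in pixel units, and assumes a tip-tilt-plus-piston phase at each subaperture:

```python
import numpy as np

def raa_pupil(NA, M, r_px, sigma_alpha, sigma_z, k, rng):
    """Sketch of a randomized-aperture pupil: M non-overlapping circular
    subapertures with random tip-tilt and piston phase (illustrative only)."""
    y, x = np.mgrid[:NA, :NA] - NA / 2
    pupil = np.zeros((NA, NA), dtype=complex)
    centers = []
    # Dart throwing: reject candidate centers that overlap existing holes.
    while len(centers) < M:
        c = rng.uniform(-NA / 2 + r_px, NA / 2 - r_px, size=2)
        if all(np.hypot(*(c - c0)) > 2 * r_px for c0 in centers):
            centers.append(c)
    for cx, cy in centers:
        mask = np.hypot(x - cx, y - cy) < r_px            # circ function
        tilt = rng.normal(0.0, sigma_alpha, size=2)       # random tip-tilt
        piston = rng.uniform(-np.sqrt(3) * sigma_z, np.sqrt(3) * sigma_z)
        phase = k * (tilt[0] * (x - cx) + tilt[1] * (y - cy) + piston)
        pupil[mask] = np.exp(1j * phase[mask])
    return pupil, np.array(centers)
```

Each call produces one realization of the array; repeating the call with fresh random draws mimics inserting a new mask between frames.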

In practice, a smoothed apodized circular aperture may be numerically generated by the use of a high order super-Gaussian function. We employed a Poisson disk sampling algorithm [48] to generate non-overlapping circular subapertures of diameter *d* within a baseline *D* = 45*d*. To avoid aliasing in sampling the pupil and detector planes, we also require: (1) Δx′·*N*_{A} > 2*σ*_{α} and (2) 1.22*λ*_{min} *f*/*D* ≥ 2Δx, where Δx′ and Δx are respectively the square pitch of the aperture plane and the detector plane. The first condition states that the number of samples across the diameter of each subaperture in the pupil plane has to be at least twice as large as the rms of the tip-tilt error. The second condition requires that the diffraction pattern formed by the baseline is sampled by at least two pixels in the detector plane.

For a quasi-monochromatic point source that subtends the optical axis by an angle **θ** = (*θ*_{x}, *θ*_{y}), the electric field may be expressed as the Fraunhofer diffraction of the tilted field:

$$E(\mathbf{x})=\frac{E_0 A_0}{\lambda f}\sum_{m=1}^{M}\mathrm{Somb}\left(\frac{d}{\lambda f}\left|\mathbf{x}-f\left(\boldsymbol{\theta}+\boldsymbol{\Delta\alpha}_m\right)\right|\right)\exp\left(i\,\mathbf{k}_m\cdot\mathbf{x}\right)\exp\left(-i\,\psi_m\right), \qquad (3)$$

where **k**_{m} = *k* **c**_{m}/*f*, *ψ*_{m} = *k*(**c**_{m} · **Δα**_{m} − Δ*z*_{m}), *E*_{0} is the field strength of the light source at the entrance pupil, and A_{0} = *πd*^{2}/4 is the area of each subaperture. The Sombrero (or Bessel-Sinc) function may be defined as Somb(u) = 2J_{1}(2*π*u)/(2*π*u), where J_{1}(2*π*u) is the Bessel function of the first kind. The electric field of a monochromatic source (see Eq. (3)) is a weighted superposition of *M* Sombrero waves. Each wave has a wave number **k**_{m} and is modulated by a complex coefficient exp(−i*ψ*_{m}). Since **c**_{m} has a maximum value of *D*/2, the maximum wave number is given by *k*_{max} = *πD*/(*λ*_{c} *f*). Although the cut-off frequency of the aperture array is limited by the baseline *D*, the superposition of these waves may lead to super-oscillatory fields, where the local wave vectors exceed *k*_{max}. These super-oscillatory fields are typically associated with low image intensities. A subset of these nulls may occasionally form around the core of a bright speckle, reducing its size and appearing as a super-resolved focal spot, as shown in Fig. 4. Lucky image frames typically contain such spots.
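Rather than summing Sombrero waves analytically, the far-field speckle pattern of any complex pupil may be obtained numerically with an FFT; a minimal sketch (ours, not the authors' code):

```python
import numpy as np

def speckle_psf(pupil):
    """Far-field (Fraunhofer) intensity of a complex pupil array:
    the speckle PSF is |FFT(pupil)|^2, up to scale (illustrative sketch)."""
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field) ** 2
    return psf / psf.sum()      # normalize to unit energy
```

Applying this to a randomized pupil realization yields one speckle frame; a filled (unit) pupil yields the diffraction-limited reference spot.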

The image irradiance is proportional to the squared modulus of the electric field:

$$I(\mathbf{x})\propto\left|E(\mathbf{x})\right|^{2}, \qquad (4)$$

where the PSF corresponds to **θ** = 0. The PSF is a superposition of *M* × *M* envelope functions, the centers of which are determined by the tip-tilt angles of the corresponding subapertures. Each envelope function is a product of two Somb functions and is modulated by a cosine interference term. Although the envelope functions are shift-invariant, the interference terms can spoil the shift-invariance. Here we assume the system is quasi shift-invariant for small rms wavefront errors, e.g., rms tip-tilt angle and piston errors smaller than 10*λ*_{c}/*D* and 1.5 waves respectively. As demonstrated below, this assumption is validated by the use of shift-invariant deconvolution techniques that successfully restore the image.

### 3.2.2. Polychromatic source

The polychromatic PSF of the RAA may be numerically represented as an incoherent integral of the monochromatic PSF over the bandwidth:

$$h(\mathbf{x})=\int_{\lambda_c-\mathrm{\Delta}\lambda/2}^{\lambda_c+\mathrm{\Delta}\lambda/2}\rho_s(\lambda)\,\rho_c(\lambda)\,I_{\lambda}(\mathbf{x})\,d\lambda, \qquad (5)$$

where *ρ*_{s}(*λ*) and *ρ*_{c}(*λ*) are the spectral distribution of the source and the spectral response of the detector respectively. Although the spectral integral produces radial speckle structures, a central isoplanatic region may be formed if the optical path difference is less than the temporal coherence length of the light source (see Sec. 3.1). In addition, an optimal pupil function may appear according to Eq. (4), producing a super-resolved PSF. Comparisons of the effective focal spot of a filled aperture, a well aligned RAA, and a typical and a lucky misaligned RAA are shown in Figs. 4(a)–(d) respectively. The brightest speckle is considered the effective focal spot (or the core of the RAA PSF), where the interference is most constructive. Compared to the diffraction-limited PSF, the effective focal spots of the RAA are noticeably smaller. Quantitatively, the cores of the RAA PSF are smaller than the diffraction-limited focal spot by factors of 1.2 and 1.6 for a well aligned and a lucky misaligned array respectively.

In practice, an object may be decomposed into a distribution of mutually incoherent point sources *a*(**x**) = Σ_{j} *E*_{j} *δ*(**x** − **θ**_{j} *f*), where *E*_{j} is the field strength of the j^{th} point source. Denoting the irradiance of a tilted polychromatic point source as *h*_{θ}(**x**) = *h*(**x** − **θ** *f*), the total irradiance of the object may be represented by the superposition of the irradiance of each source as:

$$g(\mathbf{x})=\sum_{j}\left|E_j\right|^{2}\,h\!\left(\mathbf{x}-\boldsymbol{\theta}_j f\right). \qquad (6)$$

### 3.2.3. Noise model

Two types of sensor noise are considered in this numerical model: photon shot noise and readout noise. Photon noise is due to the randomness of the arrival of photons, which accumulates as exposure time increases and is amplified by the system gain, i.e., ISO sensitivity. Photon noise is scene dependent and has a Poisson distribution with equal mean and variance given by ${\mu}_{\mathrm{p}}\left(\mathbf{x}\right)={\sigma}_{\mathrm{p}}^{2}\left(\mathbf{x}\right)=\nu \tau \mathrm{\Lambda}\left(\mathbf{x}\right)$, where *ν* is the amplifier gain and *τ* is the exposure time. The expected number of photons per unit time interval at each pixel is denoted as Λ(**x**), which is proportional to the scene irradiance *I*(**x**). Readout noise is associated with voltage fluctuations and has a Gaussian distribution with zero mean and variance ${\sigma}_{\mathrm{r}}^{2}=\nu {\sigma}_{\text{read}}^{2}$. Here we consider only white noise, where the wavelength dependence is ignored. A gray scale image corrupted by the photon and readout white noise can be written as:

$$I_{\text{noise}}(\mathbf{x})=I(\mathbf{x})+n(\mathbf{x}), \qquad {\sigma}_{n}^{2}(\mathbf{x})={\sigma}_{\mathrm{p}}^{2}(\mathbf{x})+{\sigma}_{\mathrm{r}}^{2}, \qquad (7)$$

where *σ*_{n} is the standard deviation of the noise mixture *n*(**x**) = *I*_{noise} − *I*. If the reference noiseless image *I*(**x**) is unknown, the first six harmonics of the estimated power spectral density [51] are selected and removed using a window function (e.g., a Kaiser window). The power of the first harmonic is considered an estimate of the noiseless image, while the average power of the remaining non-harmonic region is regarded as the noise floor.
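The noise model above can be sketched as follows; the Poisson and Gaussian terms follow the means and variances given in the text, while the scaling of `irradiance` to Λ(**x**) is illustrative:

```python
import numpy as np

def add_sensor_noise(irradiance, gain, exposure, read_std, rng):
    """Corrupt a noiseless irradiance map with Poisson photon noise and
    Gaussian readout noise (illustrative sketch of the sensor model)."""
    expected = gain * exposure * irradiance          # nu * tau * Lambda(x)
    shot = rng.poisson(expected).astype(float)       # photon (shot) noise
    read = rng.normal(0.0, np.sqrt(gain) * read_std, irradiance.shape)
    return shot + read                               # sigma_r^2 = nu*sigma_read^2
```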

## 4. Image recovery

An image reconstruction framework to recover a high resolution image of point sources from a sequence of *N* intensity measurements from a random aperture array is described below. We assume the sources fall within an isoplanatic region so that shift-invariance holds. The restoration framework consists of four steps (see Fig. 5(a)): denoising, image alignment, lucky imaging, and multiframe blind deconvolution. Given an input sequence $\left\{{I}_{n}^{N}\right\}$, the first step interpolates the recorded raw-format images into RGB format using the adaptive homogeneity directed demosaicing method [52] provided by dcraw [53]. The three color channel images are converted into gray scale images. (Future work may explore the advantages of a higher dynamic range monochrome camera.) The measured dark frame associated with additive noise is subtracted from the gray scale images. The next step aligns the sequence to the intensity peak of each image. The third step selects lucky frames using a pre-trained SVM classifier. Finally, the remaining distortion and diffraction blur are removed by the multiframe blind deconvolution with the best lucky frame serving as the initial guess.

#### 4.1. Denoise

As a useful preliminary step for the subsequent image restoration, we first remove the additive noise from the recorded image sequences. The fixed pattern noise is removed from the experimental images by subtracting the mean of a set of dark frames that are captured at the lowest amplifier gain (ISO = 100) and exposure time (*τ* = 1/8000 sec). We also subtract the readout noise from both experimental and numerical data. Readout noise is associated with the amplifier gain and has zero mean; it is typically measured by taking the difference of two dark frames that are taken at a specific ISO and exposure time, i.e., the settings used to capture the target images. Although we do not explicitly address the photon noise, the above denoising process allows us to increase the average SNR from 5.7 to 7.5 dB for the experimental sequence, and 6.5 to 7.8 dB for the numerical images.
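The dark-frame subtraction step can be sketched as follows (assuming stacked grayscale arrays; a helper of our own, not the authors' code):

```python
import numpy as np

def subtract_dark(frames, dark_frames):
    """Remove fixed-pattern/additive noise by subtracting the mean dark frame
    from each frame in a stack (illustrative preprocessing sketch)."""
    dark = np.mean(dark_frames, axis=0)
    return np.clip(frames - dark, 0.0, None)   # keep irradiance non-negative
```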

#### 4.2. Image alignment

The recorded images may not be well aligned owing to the frame-to-frame placement of the random phase and amplitude masks. Recentering the position of the peak intensity point is suitable owing to the following properties of our system: the targets of interest are point sources, which have relatively simple shapes and little texture; the scene is static and thus free from motion blur; and the SNR is reasonably high during the image acquisition.
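Peak recentering can be sketched with a circular shift; a minimal illustration assuming a static point-like scene:

```python
import numpy as np

def align_to_peak(frames):
    """Recenter each frame so its brightest pixel sits at the array center
    (sketch of the peak-alignment step for point-like scenes)."""
    aligned = []
    for frame in frames:
        py, px = np.unravel_index(np.argmax(frame), frame.shape)
        cy, cx = frame.shape[0] // 2, frame.shape[1] // 2
        aligned.append(np.roll(frame, (cy - py, cx - px), axis=(0, 1)))
    return np.stack(aligned)
```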

#### 4.3. Lucky imaging

Given a sufficient number of RAA frames, a small proportion of relatively sharp images (i.e., lucky frames) appear owing to random wavefront variations. Based on the prior knowledge that the target objects are an *unknown* number of point sources (e.g., stars), six features are considered for the identification: the three parameters of a generalized extreme value (GEV) distribution [54] described below, the peak value of the normalized GEV histogram, the full width at half maximum of the autocorrelation, and the SNR of each image. The histogram of each 80 × 80 region of pixels centered on the isoplanatic patch is approximated by the GEV distribution using the maximum a posteriori estimation provided by the “statistics and machine learning toolbox” of Matlab 2015b. The probability density function of the GEV is given by:

$$p(I)=\frac{1}{\sigma_e}\left(1+\chi\right)^{-1-1/\zeta}\exp\left[-\left(1+\chi\right)^{-1/\zeta}\right], \qquad (9)$$

where *χ* = *ζ* · (*I*(**x**) − *μ*)/*σ*_{e}, *ζ* is the shape parameter, and *μ* and *σ*_{e} are the respective position and scale variables. An asymmetric long-tailed distribution occurs when *ζ* ≥ 0, whereas *ζ* < 0 indicates a short-tailed and relatively symmetric shape.

The normalized histograms and fitted GEV curves are shown in Fig. 5(b) with corresponding insets for a diffraction-limited image, a lucky RAA image, and two typical RAA images. Each RAA histogram appears as a scaled mixing of the histogram of the object function *a*(**x**), which is comprised of a collection of delta functions (see Sec. 3.2.2). The histograms differ in shape owing to different speckle distributions. Lacking speckles, the diffraction-limited image has a very sparse distribution. The lucky RAA image has a long-tailed distribution with a peak value less than unity. The typical RAA images may be divided into two types: Type I has a long-tailed distribution and an extremely high peak; Type II has a low peak value and a roughly symmetric shape. The three distribution parameters (*ζ*, *μ* and *σ*_{e}) as well as the peak value of the normalized histogram are generally adequate to classify the RAA images into lucky or typical groupings, assuming the same rms phase error and SNR of the data set. However, the classification of RAA images with different phase errors and SNR may lead to ambiguities and loss of accuracy. Lacking sufficient prior knowledge of the phase error, the FWHM of the autocorrelation of each RAA image, which is an indicator of the rms tip-tilt and piston [55], is employed as an extra feature of the RAA images to improve the robustness of the classification procedure to variations of the phase errors. In addition, the SNR of each RAA image (see Sec. 3.2.3) is also estimated as a feature to compensate for the influence of the noise, using the “signal processing toolbox” of Matlab 2015b. With these six features, a support vector machine (SVM) model is trained for the classification of RAA images.
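The GEV-based features can be reproduced with SciPy in place of the Matlab toolbox. Note that SciPy's `genextreme` uses the opposite sign convention for the shape parameter (its `c` corresponds to −*ζ*), which the sketch compensates for; the 64-bin histogram is our choice, not the paper's:

```python
import numpy as np
from scipy.stats import genextreme

def gev_features(patch):
    """Fit a GEV distribution to the pixel values of an isoplanatic patch
    and return (shape, location, scale) plus the normalized-histogram peak.
    Illustrative sketch; scipy's shape convention is c = -zeta."""
    values = patch.ravel()
    c, loc, scale = genextreme.fit(values)
    hist, _ = np.histogram(values, bins=64)
    peak = hist.max() / hist.sum()         # peak of the normalized histogram
    return -c, loc, scale, peak            # report zeta = -c to match the text
```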

SVM is a supervised learning method known for binary (two-class) classification [7], which has been employed in astronomy for the classification of astronomical objects [56, 57]. An SVM model is a decision machine that transforms the data or feature points into a higher dimensional space and classifies the points by creating a hyper-plane as the decision boundary. The data mapping is generally carried out through a kernel function. To avoid the problem of over-fitting, the hyper-plane is calculated by maximizing the margin between the neighboring classes. Since the feature points of the RAA images are not linearly separable, the soft-margin SVM is employed [8]. The soft-margin SVM builds the hyper-plane by formulating the margin maximization as the following optimization problem:

$$\underset{\mathit{w},\,b,\,s}{\min}\ \frac{1}{2}{\Vert \mathit{w}\Vert}^{2}+C\sum_{j=1}^{N_t}s_j \quad \text{s.t.}\quad l_j\left({\mathit{w}}^{T}\mathrm{\Phi}(\mathit{q}_j)+b\right)\ge 1-s_j,\ \ s_j\ge 0, \qquad (10)$$

where {(*q*_{j}, *l*_{j}) | *q*_{j} ∈ ℝ^{6}, *l*_{j} = ±1, *j* = 1, …, *N*_{t}} represents a set of *N*_{t} feature points and the corresponding class labels. The decision boundary is given by the hyper-plane *f*_{j} = *f*(*q*_{j}) = *w*^{T}Φ(*q*_{j}) + *b*, where *w* is the normal vector of the hyper-plane, *b* is the bias, and Φ(*q*_{j}) is the feature transfer function. The coefficient *C* balances a wide margin against a small amount of misclassified features (or margin failures), which are represented by a set of slack variables *s*_{j}. Using the method of Lagrange multipliers, the optimization problem in Eq. (10) can be transformed into a dual form, which is only a function of the Lagrange multipliers *η*_{j}:

$$\underset{\eta}{\max}\ \sum_{j=1}^{N_t}\eta_j-\frac{1}{2}\sum_{i=1}^{N_t}\sum_{j=1}^{N_t}\eta_i\eta_j l_i l_j K(\mathit{q}_i,\mathit{q}_j)\quad \text{s.t.}\quad 0\le\eta_j\le C,\ \ \sum_{j=1}^{N_t}\eta_j l_j=0, \qquad (11)$$

where *K*(*q*_{i}, *q*_{j}) = Φ^{T}(*q*_{i})Φ(*q*_{j}) is the kernel function for feature mapping. The coefficient *C* now serves as a box constraint on the Lagrange multipliers. The dual problem shown in Eq. (11) is solved using the sequential minimal optimization (SMO) method [58]. By decomposing the quadratic programming problem into smaller subproblems, SMO offers both a memory and time efficient solution to the dual problem. The optimal normal vector of the hyper-plane $\widehat{\mathit{w}}$ and the bias $\widehat{b}$ can be derived from the estimated Lagrange multipliers as:

$$\widehat{\mathit{w}}=\sum_{j=1}^{N_t}{\widehat{\eta}}_j\, l_j\,\mathrm{\Phi}(\mathit{q}_j), \qquad \widehat{b}=\frac{1}{N_s}\sum_{j:\ 0<{\widehat{\eta}}_j<C}\left[l_j-{\widehat{\mathit{w}}}^{T}\mathrm{\Phi}(\mathit{q}_j)\right], \qquad (12)$$

where *N*_{s} is the number of support vectors on the margin.

Given a new feature point **u** ∈ ℝ^{6}, the class label is determined by the sign of the decision function, $l\left(\mathbf{u}\right)=\text{sign}\left(\widehat{f}\left(\mathbf{u}\right)\right)$ with $\widehat{f}\left(\mathbf{u}\right)={\widehat{\mathit{w}}}^{T}\mathrm{\Phi}\left(\mathbf{u}\right)+\widehat{b}$, and a classification score is given by the absolute value of the decision function, $S\left(\mathbf{u}\right)=\left|\widehat{f}\left(\mathbf{u}\right)\right|$. To determine the best lucky frame, the classification score is converted to the posterior probability *P*(*l*(**u**)|*S*(**u**)) = 1/(1 + exp(*S*(**u**)*Q* + *B*)), where the transformation coefficients *Q* and *B* are estimated by fitting a sigmoid function to the training data [59]:

$$\underset{Q,B}{\min}\ -\sum_{j=1}^{N_t}\left[t_j\ln p_j+\left(1-t_j\right)\ln\left(1-p_j\right)\right], \qquad (13)$$

where *t*_{j} = (*l*_{j} + 1)/2, *p*_{j} = *P*(*l*_{j}|*q*_{j}) = 1/(1 + exp(*S*_{j}*Q* + *B*)) is the posterior probability of the j^{th} training feature being classified into group *l*_{j}, and ${S}_{j}=|\widehat{f}\left({\mathit{q}}_{j}\right)|$ is the corresponding score.

In the training stage, we numerically constructed sets of RAA images comprised of one to ten point sources, each having one of five different rms tip-tilt and piston values (zero to 10*λ*_{c}/*D* and zero to 1 wave, respectively), and noise with SNR varying from 5 dB to 20 dB. Here we assume the fill factor FF and diameter ratio Γ of the RAA are fixed. (We suggest training different SVM models for RAAs having different values of FF and Γ.) The training dataset includes a total of 250 numerically generated ensembles, each having 1000 frames that are classified via visual inspection as lucky or typical.

The class labels along with the six features of each frame are fed into the SVM for training. Here we use the “statistics and machine learning toolbox” of Matlab 2015b to solve Eqs. (11)–(13) for SVM training and classification, where a Gaussian radial basis function $\mathrm{K}\left({\mathit{q}}_{i},{\mathit{q}}_{j}\right)=\mathrm{exp}\left(-{\Vert {\mathit{q}}_{i}-{\mathit{q}}_{j}\Vert}^{2}/{\sigma}_{r}^{2}\right)$ with a variance *σ*_{r} = 1 is used as the kernel function and *C* = 10 as the box constraint. The training and classification is carried out on an Intel Core i7 4790 4 GHz with 8 GB RAM and is completed in less than 90 seconds. The trained model is also evaluated by a 10-fold cross validation to improve the accuracy. In the classification stage, the input RAA image sequence is classified into lucky and typical frames using the pre-trained SVM model. Each image is also assigned a posterior probability of being classified into the lucky group. Due to the highly imbalanced ratio of lucky to typical frames in the RAA sequences, the false negative rate (∼ 20%) is typically larger than the false positive rate (∼ 10%). The lucky frame *g*_{0}, which has the highest posterior probability, is selected as the initial guess of the subsequent blind deconvolution, described below. Several factors may influence the number of lucky frames, including the diameter ratio Γ, the fill factor FF, the rms phase errors, the number of objects, the pairwise distance among the target objects, and the patch size. Typically, for an RAA of FF ~ 8%, Γ = 2%, *σ*_{α} = 5*λ*_{c}/*D*, *σ*_{z} = 1 wave, and SNR ∼ 8 dB, the percentage of lucky frames in a recorded sequence is about 20% for a binary source with a spacing ∼ 1.22*λ*_{c}/*D*.
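The training and classification stage can be mimicked with scikit-learn in place of the Matlab toolbox. Here the RBF kernel (with `gamma` = 1/*σ*_{r}^{2} for *σ*_{r} = 1), the box constraint *C* = 10, and Platt-style posteriors mirror the settings above, while the six-dimensional feature values are synthetic stand-ins, not the paper's training data:

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative lucky-frame classifier: a soft-margin RBF-kernel SVM over six
# features, with Platt scaling for posterior probabilities (sketch only).
rng = np.random.default_rng(0)
X_lucky = rng.normal(1.0, 0.3, size=(50, 6))     # stand-in lucky features
X_typical = rng.normal(-1.0, 0.3, size=(50, 6))  # stand-in typical features
X = np.vstack([X_lucky, X_typical])
y = np.array([1] * 50 + [-1] * 50)

clf = SVC(kernel="rbf", gamma=1.0, C=10.0, probability=True)  # C: box constraint
clf.fit(X, y)
posterior = clf.predict_proba(X)                 # Platt-style sigmoid fit
lucky_col = list(clf.classes_).index(1)
best = int(np.argmax(posterior[:, lucky_col]))   # best lucky frame candidate
```

The frame with the highest lucky-class posterior would then seed the blind deconvolution.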

#### 4.4. Multiple frame blind deconvolution

Using the lucky frame, *g*_{0}, as the initial guess, multiframe blind deconvolution is applied to the image sequence to remove the undesired speckles, distortion, and diffraction blur. Assuming a quasi-shift invariant RAA system (see Sec. 3.2), the image formation process can be simplified as a two dimensional convolution of the polychromatic PSF *h*_{n}(**x**) in the n^{th} observation and the geometric object function *a*(**x**):

*G*and

_{n}*A*denote the spatial frequency of the observed image and the object function respectively,

*H*is the optical transfer function in the n

_{n}^{th}observation,

**= (**

*ξ**ξ*,

_{x}*ξ*) is the two dimensional spatial frequency coordinate, and (

_{y}*U*◦

*V*)

*= (*

_{i,j}*U*)(

_{i,j}*V*) refers to the component-wise matrix product. The multiframe blind deconvolution estimates both the n

_{i,j}^{th}PSF and the reconstructed image by minimizing a non-negative least square error function:

^{−1}represents the inverse Fourier transform, ${\left|C\right|}_{F}=\sqrt{{\displaystyle {\sum}_{i}{\displaystyle {\sum}_{j}{\left|\left({C}_{i,j}\right)\right|}^{2}}}}$ denotes the Frobenius norm. Under the condition that

*a*and

*h*are not upper bounded, i.e., saturation can be ignored, Eq. 16 can then be solved as a non-negative matrix factorization using the following multiplicative update rule [9] in an alternating way by holding one variable constant while estimating the other:

_{n}*A*is the spatial frequency of the the recovered object in the n

_{n}^{th}loop. The non-negative intensity constraints of the recovered PSF and object function are enforced after each iteration by setting the negative pixel values to zero. The iteration stops as the reconstruction converges or the entire sequence has been processed. The final estimate of the object function, i.e., the reconstructed image, is given by $\widehat{a}={\mathcal{\mathcal{F}}}^{-1}\left({A}_{{N}_{P}}\right)$, where

*N*≤

_{P}*N*is the number of images that have been processed. Without having additional constraints, the minimization of the error function favors the bright region in each frame. To avoid the convergence to erroneous or suboptimal solutions, a two dimensional super-Gaussian window exp (−(

**x**/

*N*)

_{w}*) is multiplied with each observed image*

^{β}*g*before the deconvolution, where

_{n}*N*

_{w}corresponds to the span of the sampled region, i.e. the size of the isoplanatic region, and

*β*is the apodization rate of the super-Gaussian function. The use of window tapers the unexpected high pixel values that are outside the central isoplanatic region and accelerates the convergence rate of the deconvolution.
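The alternating non-negative update scheme described above can be sketched as follows, assuming Lee–Seung-style multiplicative rules [9] implemented with FFT-based convolutions; this is a minimal sketch, not the authors' exact update form, and the function names are illustrative:

```python
# Multiframe blind deconvolution posed as non-negative matrix factorization:
# the object a and the per-frame PSFs h_n are refined alternately with
# multiplicative updates, which preserve non-negativity by construction.
import numpy as np
from scipy.signal import fftconvolve

def conv(u, v):
    """2-D convolution via FFT, cropped to the input size."""
    return fftconvolve(u, v, mode="same")

def corr(u, v):
    """2-D correlation = convolution with the flipped kernel."""
    return fftconvolve(u[::-1, ::-1], v, mode="same")

def mfbd(frames, a0, n_iter=50, eps=1e-12):
    """Alternating multiplicative updates; a0 is the lucky frame g_0."""
    a = np.maximum(np.asarray(a0, dtype=float), 0.0)
    hs = [np.full(f.shape, 1.0 / f.size) for f in frames]  # flat initial PSFs
    for _ in range(n_iter):
        # Update each PSF while holding the object estimate fixed.
        for i, g in enumerate(frames):
            h = hs[i]
            hs[i] = np.maximum(h * corr(a, g) / (corr(a, conv(h, a)) + eps), 0.0)
        # Update the object while holding the PSF estimates fixed.
        num = sum(corr(h, g) for h, g in zip(hs, frames))
        den = sum(corr(h, conv(h, a)) for h in hs) + eps
        a = np.maximum(a * num / den, 0.0)  # clip negatives, as in the paper
    return a, hs
```

Because every factor in the updates is non-negative, the estimates stay non-negative without an explicit projection; the `np.maximum` calls only guard against small negative values from FFT round-off, mirroring the paper's zeroing of negative pixels.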

## 5. Results and discussion

In this section, we evaluate the performance of the RAA system using both experimental and numerically generated images of broadband (400–720 nm) point objects. A reconstructed image was formed from both a sequence of *N* = 100 experimental images and several types of numerically generated images with *N* = 500. The parameter values of the experiment are given in Sec. 3.1. We match most of the experimental parameter values in the numerical modeling of the binary source, except for the effective focal length *f* of the array, which is 400 mm in the experiment and 600 mm in the numerical model. For multiple sources, we use FF ∼ 20% in the numerical simulation. In addition, the numerical pupil and detector planes are sampled on a grid of size *N*_{A} = *N*_{D} = 1024. Each numerical image sequence also has a rms tip-tilt of 5*λ*_{c}/*D* and a rms piston of one wave. Quantitatively, we compare the restored images of the binary source with the corresponding diffraction-limited image in terms of: (1) FWEM: the average full width of the binary source at 80% of the image peak; (2) Peak Distance (PD): the distance between the two peaks of the binary source; and (3) Peak Ratio (PR): the ratio of the smaller peak of the binary source to the larger peak.
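The three comparison metrics can be illustrated on a one dimensional intensity cut through a binary source. The sketch below uses a synthetic two-Gaussian profile and a simplified neighbor-test peak finder, both placeholders for the actual measured images:

```python
# Illustrative computation of FWEM (full width at 80% of the peak),
# Peak Distance (PD), and Peak Ratio (PR) on a synthetic 1-D profile.
import numpy as np

def width_at_fraction(profile, frac=0.8):
    """Full width (in pixels) of the region above frac * max of the profile."""
    above = np.flatnonzero(profile >= frac * profile.max())
    return above[-1] - above[0] + 1

def binary_metrics(profile):
    """PD, PR, and the two peak positions of a binary-source profile."""
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    lo, hi = sorted(sorted(peaks, key=lambda i: profile[i])[-2:])
    pd = hi - lo                                   # peak distance (pixels)
    pr = min(profile[lo], profile[hi]) / max(profile[lo], profile[hi])
    return pd, pr, (lo, hi)

# Synthetic binary source: two Gaussians with peak ratio 0.9.
x = np.arange(64.0)
profile = np.exp(-((x - 28) / 3) ** 2) + 0.9 * np.exp(-((x - 40) / 3) ** 2)

pd, pr, (lo, hi) = binary_metrics(profile)
mid = (lo + hi) // 2
# FWEM: average of the two per-peak widths, split at the midpoint.
fwem = 0.5 * (width_at_fraction(profile[:mid]) + width_at_fraction(profile[mid:]))
```

On real data the profile would be a cut through the reconstructed image, and PD would be converted to physical units using the pixel pitch.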

#### 5.1. Experimental results

Compared with the diffraction-limited image of the binary source shown in Fig. 6(a), a typical RAA image is degraded by speckle, making the sources unresolvable (see Fig. 6(b)). In contrast, sharp and highly resolvable sources appear in the lucky RAA image (see Fig. 6(c)), where the FWEM resolution is improved by a factor of 2 compared with the diffraction limit (see Table 1). The reconstructed image, Fig. 6(d), has a FWEM resolution gain of roughly 3.1. The speckle and diffraction blur are clearly reduced, leaving a fully resolved binary source in the reconstructed image. Other quantitative comparisons listed in Table 1 suggest an accurate reconstruction: the error of the intensity peak ratio (PR) is 8%, and that of the distance between peaks (PD) is less than a single 6.25 μm pixel.

#### 5.2. Numerical results

**Result 1: Recovery of a Binary Source with a Separation of 1.22 *λ*_{c}/*D*.** Compared to the corresponding diffraction-limited image shown in Fig. 7(a), the typical RAA frame shows unresolvable sources in the presence of phase errors (see Fig. 7(b)). The lucky RAA image, on the other hand, has a modest increase in resolution by a factor of 1.6 (the ratio of FWEMs is 24/15), as observed in Fig. 7(c) and summarized in Table 2. In contrast, a striking 24/7 ≈ 3.4 times improvement in resolution is afforded in the reconstructed image, as is evident in Fig. 7(d). It is instructive to compare this result with a well-aligned RAA having zero tip/tilt and piston error. The numerical images are depicted in the bottom row of Fig. 7. In this case a small FWEM ratio of 24/19 ≈ 1.3 is found. This unimpressive enhancement (compared to the misaligned case) may be attributed to the lack of optical sharpening that is afforded by local superoscillatory fields in a randomly misaligned system. Finally, the values of PR and PD agree well with the ground truth for both the random and the well-aligned system.

**Result 2: Recovery of a Binary Source with a Separation of 0.85 *λ*_{c}/*D*.** We demonstrate imaging beyond the diffraction limit in the reconstruction shown in Fig. 8, which is identical to the pair described above, except that the separation is 0.85*λ*_{c}/*D*. Whereas the filled unaberrated aperture produces an unresolved image (see Fig. 8(a)), typical and lucky images (see Figs. 8(b) and (c)) may be used to reconstruct a well-resolved image, shown in Fig. 8(d). In contrast, the sources remain barely resolvable in the lucky and reconstructed images obtained using a well-aligned RAA (see Figs. 8(e)–(g)). This suggests that a RAA system having randomly varying phase errors is suitable for reconstructing white light point sources separated by an angle greater than roughly 0.85*λ*_{c}/*D*.

**Result 3: Recovery of Multiple Sources.** A similar reconstruction is conducted for six broadband point sources, with a spacing of 1.22*λ*_{c} *f*/*D* between neighboring objects. The diffraction-limited image is shown in Fig. 9(a). In the presence of phase errors, a typical RAA image is generally unresolvable (see Fig. 9(b)). The lucky image, although blurred by speckle, shows a noticeable enhancement in local resolution (see Fig. 9(c)). Highly resolved sources are observed in the reconstructed image, Fig. 9(d). In the absence of phase errors, both typical and lucky frames (see Figs. 9(e) and (f)) show quasi-diffraction-limited resolution. The reconstructed image in the latter case shows a noticeable but limited improvement of the resolution in Fig. 9(g).

**Analysis of the System Strehl Ratio.** Although the misalignment of the RAA may randomly form a near-optimal pupil function and lead to high resolving power, the concentration of energy at the effective focal spot may be reduced as the phase errors increase. The energy may also decrease as the diameter ratio Γ and the fill factor FF of the array decrease. We examine the Strehl ratio as a function of FF for five arrays of different values of Γ in Fig. 10(a). For the same arrays, the Strehl ratio as a function of phase error is given in Fig. 10(b). Examples of the five RAA arrays are shown in Fig. 10(c), each having FF ∼ 12%. We find that increasing the value of Γ or FF of the RAA increases the Strehl ratio. Decreasing the rms phase error may also lead to an improved Strehl ratio without a significant negative impact on the optical sharpening effect, as long as the rms phase error remains above one wave across the baseline.
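As a rough illustration of how the Strehl ratio of a piston-aberrated array can be estimated, the sketch below evaluates S = |⟨exp(iφ)⟩_pupil|² over randomly placed circular subapertures. The defaults loosely echo the experimental geometry (150 subapertures, Γ ≈ 2%), but the model is a simplification that ignores tip/tilt and subaperture overlap:

```python
# Strehl-ratio estimate for a random aperture array with random piston
# phases: on-axis intensity of the aberrated pupil divided by that of the
# same pupil with zero phase error, S = |<exp(i*phi)>_pupil|^2.
import numpy as np

def raa_strehl(n_sub=150, gamma=0.02, rms_piston_waves=1.0, grid=512, seed=0):
    rng = np.random.default_rng(seed)
    y, x = np.mgrid[:grid, :grid] / grid - 0.5   # normalized baseline D = 1
    r_sub = 0.5 * gamma                          # subaperture radius, d/2
    centers = rng.uniform(-0.5 + r_sub, 0.5 - r_sub, size=(n_sub, 2))
    pupil = np.zeros((grid, grid), dtype=complex)
    for cy, cx in centers:
        mask = (x - cx) ** 2 + (y - cy) ** 2 < r_sub ** 2
        # One random piston phase per subaperture; rms specified in waves.
        phi = 2 * np.pi * rng.normal(0.0, rms_piston_waves)
        pupil[mask] = np.exp(1j * phi)
    amp = np.abs(pupil).sum()
    return np.abs(pupil.sum()) ** 2 / amp ** 2 if amp else 0.0
```

Sweeping `gamma`, the fill factor (via `n_sub`), or `rms_piston_waves` in such a model reproduces the qualitative trends discussed above: the Strehl ratio grows with Γ and FF and falls as the piston error increases.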

## 6. Potential application and future work

With near-term plans to build 30 meter ground-based telescopes for astronomy, the demand for higher resolution optics in space continues to grow, not only for exoplanet detection but also for earth science, including hyper-spectral imaging and the monitoring of oceans and land masses (e.g., seismic monitoring) [60]. The Granular Imager [61, 62] is an on-going investigation that is attempting to create a large-aperture space-based observatory from granular media, and is based on the *Orbiting Rainbows* paradigm [5, 6], which proposes to change the way space telescopes are built by virtue of: 1) the avoidance of any physical structure and sensing/actuation hardware on the primary, so that sensing and actuation are done “at a distance” on an amorphous cloud and all operational complexity resides outside the primary; 2) the reliance on optical trapping and manipulation [63, 64] to enable action at a distance (although other mechanisms, such as electrodynamic trapping and confinement, are also being investigated); and 3) the relaxation of the requirements on fine cloud control by doing the best possible job in software via robust computational imaging algorithms. The goal of the research on the Granular Imager is to identify ways to optically manipulate and maintain the shape of a cloud of small scatterers so that it can function as a variable surface with useful electromagnetic characteristics for astrophysical applications. This concept, in which the aperture need not be continuous and monolithic, may provide opportunities to significantly increase the aperture size compared to the large NASA space-borne observatories currently envisioned, allowing for a true Terrestrial Planet Imager that would be able to resolve exoplanet details and perform meaningful spectroscopy on distant worlds.

## 7. Conclusion

We experimentally and numerically investigated a disordered optical synthetic aperture system that uses an array of randomized small apertures in the far field for imaging broadband point targets. In general, a misaligned array produces speckles and degrades the quality of individually recorded images. However, optically sharpened lucky images formed by statistical happenstance offer the opportunity to reconstruct well-resolved images. Using the proposed restoration framework, we identify these lucky frames and restore high resolution images of point objects. Experimentally recorded images using randomized amplitude and phase masks, combined with our four-step image reconstruction framework, provided excellent reconstructions with a 3.1 times improvement of the spatial resolution. The experimental system had roughly 5 waves of wavefront error across a pupil comprised of 150 pinholes. Roughly 20% of the 100 unique images were classified as lucky images. A numerical model of the system provided comparable results and further illustrated the advantages of random wavefront errors over an error-free random aperture array system. Extended objects were not explored, but are the subject of continued work. For example, the proposed computational framework may be adapted to the recovery of extended objects by changing the intensity distribution model of the image and making use of texture-based features in lucky frame selection. Future work may explore the use of a non-shift-invariant blind deconvolution image reconstruction scheme [65] for an extended source or for point sources with large random wavefront errors. An important aspect of this investigation is that we assumed no prior knowledge of the PSF, as may be the case for a low-complexity granular space telescope. This sets our work apart from many previously described systems in which the PSF is static or at least partially known.

## Funding

This work was funded by the US National Science Foundation (ECCS-1309517) and by the NASA Innovative Advanced Concepts (NIAC) program under contract 14-14NIAC-II-0008 (Orbiting Rainbows, Phase II). The research on the Orbiting Rainbows project was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

## Acknowledgments

The authors are grateful to Dr. Scott A. Basinger and Dr. David Palacios (Jet Propulsion Laboratory, California Institute of Technology) for discussions about the application of future granular space telescopes, and Wendy Garcia (New Mexico State University) for REU-related laboratory assistance at RIT.

## References and links

**1. **A. B. Meinel, “Cost-scaling laws applicable to very large optical telescopes,” Opt. Eng **18**(6), 645–647 (1979). [CrossRef]

**2. **D. Baiocchi and H. P. Stahl, “Enabling future space telescopes: mirror technology review and development roadmap,” Astro2010: The Astronomy and Astrophysics Decadal Survey, 23 (2009).

**3. **S. J. Chung and F. Y. Hadaegh, “Swarms of Femtosats for Synthetic Aperture Applications,” in Proceedings of the Fourth International Conference on Spacecraft Formation Flying Missions and Technologies, (2011).

**4. **D. M. Alloin and J. M. Mariotti, *Diffraction-limited imaging with very large telescopes*, (Springer, 2012).

**5. **M. B. Quadrelli, S. A. Basinger, and G. A. Swartzlander, “NIAC Phase I Orbiting rainbows: Optical manipulation of aerosols and the beginnings of future space construction,” (Jet Propulsion Laboratory, California Institute of Technology, 2013), https://www.nasa.gov/sites/default/files/files/Quadrelli_2012_PhI_OrbitingRainbows_.pdf

**6. **M. B. Quadrelli, “NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems,” (Jet Propulsion Laboratory, California Institute of Technology, 2017), https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170004834.pdf

**7. **C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn **20**(3), 273–297 (1995). [CrossRef]

**8. **V. Vapnik, *The nature of statistical learning theory* (Springer, 2013).

**9. **D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of Advances in Neural Information Processing Systems (NIPS, 2001), pp. 556–562.

**10. **A. Labeyrie, “High-Resolution Techniques in Optical Astronomy,” in *Progress in Optics* **14** (Elsevier, 1977), Chap. 2. [CrossRef]

**11. **J. Nelson, “Segmented mirror telescopes,” in *Optics in Astrophysics*, (Springer, 2005).

**12. **J. E. Harvey, A. Kotha, and R. L. Phillips, “Image characteristics in applications utilizing dilute subaperture arrays,” Appl. Opt. **34**(16), 2983–2992 (1995). [CrossRef] [PubMed]

**13. **N. J. Miller, M. P. Dierking, and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. **46**(23), 5933–5943 (2007). [CrossRef] [PubMed]

**14. **G. Andersen and D. Tullson, “Broadband antihole photon sieve telescope,” Appl. Opt. **46**, 3706–3708 (2007).

**15. **K. S. Ford, B. McKernan, A. Sivaramakrishnan, A. Martel, K. Koekemoer, D. Lafrenière, and S. Parmentier, “Active galactic nucleus and quasar science with aperture masking interferometry on the James Webb Space Telescope,” Astrophys. J. **783**, 73–75 (2014). [CrossRef]

**16. **J. Lindberg, “Mathematical concepts of optical superresolution,” J. Opt. **14**, 083001 (2012). [CrossRef]

**17. **M. Mazilu, J. Baumgartl, S. Kosmeier, and K. Dholakia, “Optical eigenmodes; exploiting the quadratic nature of the energy flux and of scattering interactions,” Opt. Express **19**(2), 933–945 (2011). [CrossRef] [PubMed]

**18. **A. Labeyrie, “Attainment of diffraction-limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. **6**(1), 85–87 (1970).

**19. **A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. **22**(24), 4028–4037 (1983). [CrossRef]

**20. **W. Beavers, D. E. Dudgeon, J. W. Beletic, and M. T. Lane, “Speckle imaging through the atmosphere,” Lincoln Lab. J. **2**(2), 207–228, (1989).

**21. **K. T. Knox and B. J. Thompson, “Recovery of images from atmospherically degraded short-exposure photographs,” Astrophys. J. **193**, 45–48 (1974). [CrossRef]

**22. **J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. **21** (15), 2758–2769 (1982). [CrossRef] [PubMed]

**23. **R. Fergus, A. Torralba, and W. T. Freeman, “*Random lens imaging*,” Tech. Rep. TR-2006-058, MIT (2006).

**24. **A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: Compressive imaging using a multiply scattering medium,” Sci. Rep. **4** (2014). [PubMed]

**25. **J. W. Goodman, *Speckle phenomena in optics: theory and applications*, (Roberts and Company, 2007).

**26. **J. C. Dainty, “The statistics of speckle patterns,” in *Progress in Optics* **14** (Elsevier, 1977), Chap. 1. [CrossRef]

**27. **O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. **8**(10), 784–790 (2014). [CrossRef]

**28. **B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. **11**(8), 684–691 (2015). [CrossRef]

**29. **D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Process. Mag. **13**(3), 43–64 (1996). [CrossRef]

**30. **J. R. Fienup and G. B. Feldkamp, “Astronomical imaging by processing stellar speckle interferometry data,” Proc. SPIE **0243**, 2723–2727 (1980).

**31. **E. Pantin, J. Starck, F. Murtagh, and K. Egiazarian, “Deconvolution and blind deconvolution in astronomy,” in *Blind image deconvolution: theory and applications*, (CRC Press, 2007). [CrossRef]

**32. **E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Comm. Pure Appl. Math. **67**(6), 906–956 (2014). [CrossRef]

**33. **E. J. Candès and C. Fernandez-Granda, “Super-resolution from noisy data,” J. Fourier Anal Appl. **19** (6), 1229–1254 (2013). [CrossRef]

**34. **V. Duval and G. Peyré, “Exact support recovery for sparse spikes deconvolution,” Found. Comput. Math. **15** (5), 1315–1355 (2015). [CrossRef]

**35. **M. Hirsch, S. Harmeling, S. Sra, and B. Schölkopf, “Online multi-frame blind deconvolution with super-resolution and saturation correction,” Astron. Astrophys. **531**, A9 (2011). [CrossRef]

**36. **S. M. Jefferies and J. C. Christou, “Restoration of astronomical images by iterative blind deconvolution,” Astrophys. J. **415**, 862 (1993). [CrossRef]

**37. **D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process. **4**(7), 932–946 (1995). [CrossRef] [PubMed]

**38. **A. M. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM review **51** (1), 34–81 (2009). [CrossRef]

**39. **D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in Proceedings of Advances in Neural Information Processing Systems, (NIPS, 2009), pp. 1033–1041.

**40. **J. Miskin and D. J. MacKay, “Ensemble learning for blind image separation and deconvolution,” in *Advances in independent component analysis*, (Springer, 2000). [CrossRef]

**41. **D. L. Fried, “Probability of getting a lucky short-exposure image through turbulence,” J. Opt. Soc. Am. **68**(12), 1651–1657 (1978). [CrossRef]

**42. **R. N. Tubbs, “Lucky exposures: diffraction-limited astronomical imaging through the atmosphere,” Ph.D. thesis, Cambridge Univ. (2003).

**43. **G. C. Loos and C. B. Hogge, “Turbulence of the upper atmosphere and isoplanatism,” Appl. Opt. **18** (15), 2654–2661 (1979). [CrossRef] [PubMed]

**44. **F. Roddier, J. M. Gilli, and J. Vernin, “On the isoplanatic patch size in stellar speckle interferometry,” J. Optics **13**(2), 63–65 (1979). [CrossRef]

**45. **M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M.T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE **7463**, 74630C (2009). [CrossRef]

**46. **X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal Mach Intell. **35**(1), 157–170 (2013). [CrossRef]

**47. **Energetic broadband laser-driven light source (EQ99XFC) data sheet, http://www.energetiq.com/DataSheets/EQ99XFC-Data-Sheet.pdf.

**48. **D. Dunbar and G. Humphreys, “A spatial data structure for fast poisson-disk sample generation,” ACM Trans. Graph. **25** (3), 503–508 (2006). [CrossRef]

**49. **J. Jiang and J. Gu, “What is the space of spectral sensitivity functions for digital color cameras?” in Proceedings of IEEE Workshop on Applications of Computer Vision, (IEEE, 2013), pp. 168–179.

**50. **J. Malcolm, P. Yalamanchili, C. McClanahanm, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE **8403**, 84030A (2012). [CrossRef]

**51. **P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. **15**(2), 70–73 (1967). [CrossRef]

**52. **K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image. Proc. **14**(3), 360–369 (2005). [CrossRef]

**53. **D. Coffin, “DCRAW: Decoding raw digital photos in linux,” (2008) https://www.cybercom.net/~dcoffin/dcraw/.

**54. **B. Narayanaswamy, *Continuous multivariate distributions* (Wiley, 2006).

**55. **J. C. Dainty, “Some statistical properties of random speckle patterns in coherent and partially coherent illumination,” J. Mod. Opt. **17** (10), 761–772 (1970).

**56. **Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. **422** (3), 1113–1121 (2004). [CrossRef]

**57. **D. Kim, P. Protopapas, Y. Byun, C. Alcock, R. Khardon, and M. Trichas, “Quasi-stellar object selection algorithm using time variability and machine learning: Selection of 1620 quasi-stellar object candidates from MACHO Large Magellanic Cloud database,” Astrophys. J. **735**(2), 68–70 (2011). [CrossRef]

**58. **J. C. Platt, “*Sequential minimal optimization: A fast algorithm for training support vector machines*,” Tech. Rep. TR-98-14, Microsoft Research (1998).

**59. **J. C. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in *Advances in Large Margin Classifiers*, (MIT Press, 1999).

**60. **T. Andersen and A. Enmark, *Integrated modeling of Telescopes*, (Springer, 2011). [CrossRef]

**61. **M. B. Quadrelli, S. A. Basinger, G. A. Swartzlander, and D. Arumugam, “Dynamics and control of granular imaging systems,” in Proceedings of AIAA Space Conference and Exposition (AIAA, 2015), pp. 4484.

**62. **S. A. Basinger, D. Palacios, M. B. Quadrelli, and G. A. Swartzlander, “Optics of a granular imaging system (i.e. “orbiting rainbows”),” Proc. SPIE **9602**, 96020E (2015). [CrossRef]

**63. **G. A. Swartzlander, “Radiation Pressure on a Diffractive Sailcraft,” J. Opt. Soc. Am. B **34** (6), C25–C30, (2017). [CrossRef]

**64. **A. B. Artusio-Glimpse, J. H. Wirth, and G. A. Swartzlander, “Optical gradient force assist maneuver,” Opt. Lett. **41**(17), 4142–4145 (2016). [CrossRef] [PubMed]

**65. **M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow for space-variant multiframe blind deconvolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 607–614.