Abstract

We explore opportunities afforded by an extremely large telescope design comprised of ill-figured, randomly varying subapertures. The feasibility of this approach is demonstrated with a laboratory scaled system whereby we reconstruct a white light binary point source separated by 2.5 times the diffraction limit. With an inherently unknown, varying random point spread function, the measured speckle images require a restoration framework that combines support vector machine based lucky imaging and non-negative matrix factorization based multiframe blind deconvolution. To further validate the approach, we model the experimental system to explore sub-diffraction-limited performance and an object comprised of multiple point sources.

© 2017 Optical Society of America

Corrections

Xiaopeng Peng, Garreth J. Ruane, Marco B. Quadrelli, and Grover A. Swartzlander, "Randomized apertures: high resolution imaging in far field: erratum," Opt. Express 25, 20952-20952 (2017)
https://www.osapublishing.org/oe/abstract.cfm?uri=oe-25-17-20952

24 July 2017: Typographical corrections were made to the abstract; paragraph 1 and 2 of Section 1, paragraph 1 of Section 2.1, paragraphs 1–3 of Section 2.2, paragraph 1 of Section 3, paragraphs 1 and 2 of Section 3.1, paragraphs 1–3 of Section 3.2.1, paragraph 1 of Section 3.2.2, paragraph 1 of Section 3.2.3, paragraph 1 of Section 4, paragraph 1 of Section 4.1, paragraph 1 of Section 4.2, paragraphs 1–3 of Section 4.3, paragraph 1 of Section 4.4, paragraph 1 of Section 5, paragraph 1–4 of Section 5.2, paragraph 1 of Section 6, paragraph 1 of Section 7; Eqs. (1)–(18); the figure captions of Figs. 3, 4, and 7–10; Tables 1 and 2; the acknowledgments section; and Refs. 3–8, 10, 11, and 20–64.

1. Introduction

The quest to achieve extraordinary resolution with a large space telescope is impeded by the cost and complexity of a large primary mirror [1–4]. One may imagine a well-figured system of free-flying segments, but locking these to a common focal point becomes challenging as the number of segments increases to the hundreds or thousands. The alignment complexity of the system relaxes if wavefront errors can be tolerated. However, undesirable speckle images will then form in the focal plane. What is more, the randomly varying position and orientation of the segments produces a randomly varying point spread function. Facing these challenges, we explored whether a large number of speckle images of a static object could be combined to form a reconstructed image. We call the proposed system a Granular Space Telescope (GST) [5,6]. The schematic shown in Fig. 1 depicts a broadband point source at infinity illuminating circular subapertures of diameter d that are distributed across a baseline aperture of diameter D ≫ d. These subapertures may be flat mirrors roughly aligned to a parabolic surface. The experiment described below represents a scaled-down facsimile of the proposed system. We constructed random aperture arrays (RAA’s) having a baseline D = 9 mm and subapertures d = 0.2 mm. We note that the subapertures are small enough to satisfy the far-field diffraction condition at the imaging sensor: d² ≪ λc f, where λc = 560 nm is the central wavelength and f = 400 mm is the effective focal length of the array.


Fig. 1 Schematic of a Granular Space Telescope: free-flying aperture elements distributed across a baseline D and roughly aligned to a focal point f. Inset: comparison of a diffraction-limited point source and speckles from a modeled random aperture array.


It is well known that a point object may form an image at the focal plane if there is significant interference between superimposed diffracted beams from a sufficient number of subapertures. This condition is partially satisfied if the footprint of the diffracted beam from a single subaperture spans the full extent of the imaging array. Assuming square pixels of pitch Δx and a detector array comprised of ND × ND pixels, we require λc f/d ~ NDΔx. To ensure that a sufficient number of diffracted beams overlap in the focal plane, while also providing for a broad degree of tip-tilt randomness, we require that the root mean square (rms) tip-tilt angle be less than λc/d. Following classical optics, this proof-of-concept study first examines an object comprised of binary point sources. A numerical model further explores sub-diffraction-limited performance and multiple point sources. The proposed restoration framework first reduces the noise and aligns the image sequence. The lucky frames are then classified from the image sequence using a support vector machine [7,8]. Using the best lucky frame as an initial guess, a non-negative matrix factorization [9] based multiframe blind deconvolution is applied to the image sequence to remove the speckles and the remaining diffraction blur, thereby producing a high resolution image of the two point sources. One may expect that greater amounts of wavefront error will require more robust reconstruction algorithms than the one presented here. On the other hand, narrow spectral bandwidths may ease the reconstruction process.
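These two conditions can be checked numerically with the scaled-system values quoted in this paper; the region-of-interest width N_D below is our own illustrative assumption, not a value from the text:

```python
# Scaled-system values from the text; N_D (region-of-interest width in
# pixels) is an illustrative assumption.
d = 0.2e-3      # subaperture diameter [m]
f = 0.4         # effective focal length [m]
lam_c = 560e-9  # central wavelength [m]
dx = 6.25e-6    # detector pixel pitch [m]
N_D = 180       # pixels across the region of interest (assumption)

# Far-field (Fraunhofer) condition per subaperture: d^2 << lam_c * f.
fresnel_number = d**2 / (lam_c * f)

# Single-subaperture diffraction footprint vs. detector span:
# lam_c * f / d ~ N_D * dx.
footprint = lam_c * f / d
detector_span = N_D * dx
```

With these numbers the Fresnel number is well below unity and the ~1.1 mm diffraction footprint matches the assumed region of interest.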

2. Related work

2.1. Optical synthetic aperture

The proposed GST is related to a Fizeau interferometer imaging system [10] in which light beams from individual aperture elements are merged at a virtual common focal plane of the entire array. In astronomy the Fizeau array has been utilized for segmented telescopes [11] such as the JWST, TMT, and E-ELT, and for diluted configurations [12, 13] like a sparse Golay aperture. The GST also shares similarities with a photon sieve telescope [14], although our system is comprised of randomly placed subapertures in the pupil, whereas the sieve is a binary amplitude pupil mask that functions as a Fresnel lens. Aside from attempts to increase the physical size of the objective, improvements in the resolving power of telescopes have been achieved by sharpening the intensity distribution of the point spread function (PSF) [15–17].

2.2. Image recovery from random distortions

The restoration of images from random distortions has been a long-standing interest in optics. In addition to ill-figured optical components, time varying perturbations such as air turbulence or system vibrations may distort an image. Autocorrelation techniques were proposed for imaging through turbulence [18], allowing the number of point-like objects to be estimated from a series of short exposure observations. Later, bispectrum and triple correlation [19, 20], iterative phase retrieval [21, 22], and machine learning [23, 24] methods were developed to retrieve the object function from randomly distorted measurements. These approaches required well aligned subapertures or knowledge of the PSF, for example through a direct measurement, computational calibration, or a reasonable statistical model [25–28].

A significant advance came with the development of blind deconvolution methods [29–31], which restore images directly from single or multiple observations with little knowledge of the system PSF. By removing random distortions and diffraction blur, a seemingly super-resolved reconstructed image may be achieved [32–34]. Although deconvolution methods allow for the expansion of the optical system cut-off frequency by inferring the missing frequencies, this only holds true if the spacing between the imaged objects is greater than the diffraction limit. In addition, blind deconvolution is a highly ill-posed problem, making it susceptible to entrapment in local minima, thereby producing sub-optimal or erroneous estimates. Multiframe approaches [35, 36] reduce the ratio of unknowns (the PSFs and the target image) to knowns (the observations) from 2:1 in single frame schemes to (M + 1):M, thereby reducing the uncertainty in the search for the global minimum. Regularization terms [37–40], which model the statistics of a variety of objects (e.g., natural scenes) and their properties (e.g., edges), may also serve as constraints for an optimal convergence.

Another method of image reconstruction from random distortions is called lucky imaging. This theory is based on the observation that nearly diffraction-limited PSF’s occasionally occur owing to the random distortions. Statistically, sharp frames may occur given a sufficient number of observations [41, 42]. The lucky frames may be recorded directly inside an isoplanatic region [43, 44] or computationally identified from regularly captured images using sharpness metrics. If the PSF is measured (e.g., by use of a guide star), the Strehl ratio may be used as a lucky image classification parameter. Alternatively, metrics such as image brightness, variance, and quality of the detected edges may be used to identify lucky images [45, 46]. By fusing the lucky frames or patches, a diffraction-limited image may be produced. However, lucky imaging may not remove all distortions without the application of other techniques like image registration and deconvolution.

3. Image formation

Both experimental measurements and a numerical model were used to generate speckle images from random aperture arrays. These are described below, followed by image reconstruction discussions in Sec. 4.

3.1. Experimental demonstration

The experiment makes use of a transmissive configuration (see Fig. 2(a)). A set of N = 100 unique thin foil masks were produced, each having M = 150 randomly placed subaperture elements (holes of radius r ∼ 0.1 mm) across a baseline D = 9 mm. The corresponding diameter ratio and fill factor were respectively given by Γ = 2r/D = 2% and FF = MΓ² = 8%. These amplitude masks were made by piercing thin aluminum foil with the tip of a fine fabric sewing needle. A single mask was sequentially positioned at the front surface of a convex lens. The foil was covered by a layer of wrinkled cellophane to randomize the phase at each pinhole. A laser-driven white light source (Energetiq EQ-99XFC LDLS [47]) was spatially filtered using a 5 μm diameter pinhole to produce a spatially coherent beam of diameter slightly larger than the baseline D. In effect, the light source was spectrally sampled by the RGGB Bayer filter on the camera (Canon 5D Mark III) sensor, with a bandwidth of roughly Δλ = 320 nm and a central wavelength λc = 560 nm. An amplifier gain of ISO = 100 was selected for each τ = 1/100 sec exposure. The sensor size is 36 × 24 mm, with a square pixel pitch of Δx = 6.25 μm. A beam splitter and a back-reflecting mirror (not shown) were used to produce a second, mutually incoherent light source that subtended the first beam by an angle θ. The mask-lens combination therefore appeared to receive light from two mutually incoherent point sources at infinity, i.e., a binary point source. A sequence of N images was recorded on the camera sensor by inserting a new mask between each frame.


Fig. 2 (a) Experimental setup of a random aperture array system. Measured broadband images of a binary source using (b) RAA with random wavefront error and (c) filled aperture without amplitude and phase mask.


For ground truth comparison a diffraction-limited image of the binary source was recorded without the amplitude and phase masks. Examples of a typical RAA image and the diffraction-limited image of the broadband binary source are shown in Figs. 2(b) and 2(c) respectively. The polychromatic RAA image forms a radial speckle pattern. For a random array with rms tip-tilt angle σα and rms piston displacement σz, a characteristic variation of the optical path difference may be written as σL = r tan(σα) + σz. Theoretically, if σL is less than the temporal coherence length of the light source, Lc = λc²/Δλ, the speckle patterns are well correlated across neighboring wavelengths in the vicinity of the central isoplanatic region [26]. Speckles that fall within the isoplanatic region appear as normal speckles, each having a diameter ~ 2λc f/D (see the zoomed-in image of Fig. 2(b)). Based on the numerical model discussed below, we found that for a random array with rms tip-tilt angle on the order of 5λc/D and rms piston of one wave, the isoplanatic region has a size of 10λc f/D (or about 500 μm on our sensor).
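These coherence arguments can be checked with the values quoted above (5λc/D rms tip-tilt and one-wave rms piston are the modeled values from the text):

```python
import math

lam_c, dlam = 560e-9, 320e-9   # central wavelength and bandwidth [m]
f, D, r = 0.4, 9e-3, 0.1e-3    # focal length, baseline, subaperture radius [m]

L_c = lam_c**2 / dlam            # temporal coherence length (~1 um)
speckle_d = 2 * lam_c * f / D    # characteristic speckle diameter
sigma_a = 5 * lam_c / D          # modeled rms tip-tilt [rad]
sigma_z = lam_c                  # rms piston of one wave [m]
sigma_L = r * math.tan(sigma_a) + sigma_z   # characteristic OPD variation
```

The resulting σL (~0.6 μm) indeed falls below Lc (~1 μm), consistent with the correlated speckles observed near the center of the field.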

3.2. Numerical model

3.2.1. Monochromatic source

The following numerical model was used to analyze the image formation of the RAA system with controlled amounts of tip-tilt and piston phase error. What is more, this model is instrumental in forming the training set described in Sec. 4.3. The source plane provides two tilted plane waves at the pupil plane (labeled x′ = (x′, y′), see Fig. 3(a)). Using an NA × NA grid, the pupil plane samples the aperture array, which consists of M subapertures of diameter d = 2r within a baseline D = 2R. A lens of focal length f bends light transmitted through the subapertures toward the central focal point. The detector array (labeled x = (x, y)) has a size of ND × ND and is placed at the back focal plane of the lens.


Fig. 3 (a) Numerical model of the RAA system with a broadband binary point source. (b) Typical RAA image with respective rms tip-tilt and piston errors of 5λc/D and one wave. (c) Diffraction-limited image.


The randomized subaperture array may be represented by the sum of a random distribution of complex subaperture functions within the baseline:

$$U(\mathbf{x}') = \mathrm{circ}\left(\frac{|\mathbf{x}'|}{R}\right)\sum_{m=1}^{M}\mathrm{circ}\left(\frac{|\mathbf{x}'-\mathbf{c}_m|}{r}\right)\exp\left(i\phi_m(\mathbf{x}')\right)\qquad(1)$$
where the circ function is unity valued if the argument is less than unity and zero otherwise, cm = (cm,x, cm,y) is the center location of the mth subaperture, and the phase term is given by:
$$\phi_m(\mathbf{x}') = k\left(\Delta z_m + \mathbf{x}'\cdot\Delta\boldsymbol{\alpha}_m\right)\qquad(2)$$
where k = 2π/λ is the wave number, and λ is the wavelength. Typically, the amount of misalignment is governed by a statistical distribution. Here we assume Δzm is a uniformly random piston displacement with variance σz2, and Δαm = (Δαm,x′, Δαm,y′) are mutually independent random tip-tilt angles, each having a normal distribution with zero mean and variance σα2. Typical values of σα and σz are 5λc/D (or λc/9d) and one wave respectively.

In practice, a smoothed apodized circular aperture may be numerically generated by the use of a high order super-Gaussian function. We employed a Poisson disk sampling algorithm [48] to generate non-overlapping circular subapertures of diameter d within a baseline D = 45d. To avoid aliasing in sampling the pupil and detector planes, we also require: (1) Δx′·NA > 2σα and (2) 1.22λmin f/D ≥ 2Δx, where Δx′ and Δx are respectively the square pitch of the aperture plane and the detector plane. The first condition states that the number of samples across the diameter of each subaperture in the pupil plane has to be at least twice as large as the rms of the tip-tilt error. The second condition requires that the diffraction pattern formed by the baseline be sampled by at least two pixels in the detector plane.
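A minimal sketch of such a pupil generator is given below. Plain rejection sampling is used here in place of the Poisson disk algorithm of [48], hard-edged (not super-Gaussian) subapertures are used for brevity, and the grid size, subaperture count, and error magnitudes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pupil(N_A=256, M=20, sigma_z=1.0, sigma_a=5.0):
    """Complex pupil: M non-overlapping subapertures of radius r = R/45
    inside a baseline of radius R, each with a uniform random piston
    (sigma_z waves) and normal random tip-tilt (sigma_a waves of tilt
    across the baseline). Rejection sampling stands in for Poisson disk
    sampling."""
    R = 0.45 * N_A            # baseline radius in samples
    r = R / 45.0              # subaperture radius in samples (D = 45 d)
    y, x = np.mgrid[-N_A//2:N_A//2, -N_A//2:N_A//2]
    U = np.zeros((N_A, N_A), dtype=complex)
    centers = []
    while len(centers) < M:
        c = rng.uniform(-(R - r), R - r, 2)
        if np.hypot(c[0], c[1]) > R - r:          # stay inside the baseline
            continue
        if any(np.hypot(c[0]-p[0], c[1]-p[1]) < 2*r for p in centers):
            continue                               # no overlap
        centers.append(c)
        m = np.hypot(x - c[0], y - c[1]) < r
        piston = rng.uniform(-sigma_z, sigma_z)    # piston, in waves
        tilt = rng.normal(0.0, sigma_a, 2) / (2*R) # tilt, waves per sample
        phase = 2*np.pi*(piston + tilt[0]*x[m] + tilt[1]*y[m])
        U[m] = np.exp(1j*phase)
    return U

U = random_pupil()
fill = np.mean(np.abs(U) > 0)   # realized fill factor
```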

For a quasi-monochromatic point source that subtends the optical axis by an angle θ = (θx, θy), the electric field may be expressed as Fraunhofer diffraction of the tilted field:

$$E(\mathbf{x},\lambda,\boldsymbol{\theta}) = \frac{E_0}{i\lambda f}\int \exp\left(-i\frac{2\pi}{\lambda f}\,\mathbf{x}\cdot\mathbf{x}'\right)U(\mathbf{x}')\exp\left(ik\,\mathbf{x}'\cdot\boldsymbol{\theta}\right)d\mathbf{x}' = iE_0A_0\sum_{m=1}^{M}\left[\mathrm{Somb}\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_m+\boldsymbol{\theta})f}{\lambda f/d}\right)\exp\left(i\mathbf{k}_m\cdot\mathbf{x}\right)\exp\left(i\psi_m\right)\exp\left(ik\,\mathbf{c}_m\cdot\boldsymbol{\theta}\right)\right]\qquad(3)$$
where km = kcm/f, ψm = k(cm · Δαm − Δzm), E0 is the field strength of the light source at the entrance pupil, and A0 = πd²/4 is the area of each subaperture. The Sombrero (or Bessel-Sinc) function may be defined as Somb(u) = 2J1(2πu)/(2πu), where J1 is the Bessel function of the first kind. The electric field of a monochromatic source (see Eq. (3)) is a weighted superposition of M Sombrero waves. Each wave has a wave number km and is modulated by a complex coefficient exp(iψm). Since cm has a maximum value of D/2, the maximum wave number is given by kmax = πD/(λc f). Although the cut-off frequency of the aperture array is limited by the baseline D, the superposition of these waves may lead to super-oscillatory fields, where the local wave vectors exceed kmax. These super-oscillatory fields are typically associated with low image intensities. A subset of these nulls may occasionally form around the core of a bright speckle, reducing its size and appearing as a super-resolved focal spot, as shown in Fig. 4. Lucky image frames typically contain such spots.


Fig. 4 Comparisons of the polychromatic PSF with respective rms tip-tilt and piston error 5λc/D and one wave. (a) Filled aperture. (b) Well aligned RAA. (c) Typical RAA. (d) Lucky RAA. Line plots indicate the FWHM.


The image irradiance is proportional to the squared modulus of the electric field:

$$P(\mathbf{x},\lambda,\boldsymbol{\theta}) = (E_0A_0)^2\sum_{m=1}^{M}\sum_{n=1}^{M}\left[\mathrm{Somb}\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_m+\boldsymbol{\theta})f}{\lambda f/d}\right)\mathrm{Somb}\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_n+\boldsymbol{\theta})f}{\lambda f/d}\right)\cos\left((\mathbf{k}_m-\mathbf{k}_n)\cdot\mathbf{x}+(\psi_m-\psi_n)+k(\mathbf{c}_m-\mathbf{c}_n)\cdot\boldsymbol{\theta}\right)\right]\qquad(4)$$
The above expression reduces to the monochromatic and incoherent PSF of the RAA system if θ = 0. The PSF is a superposition of M × M envelope functions, the centers of which are determined by the tip-tilt angles of the corresponding subapertures. Each envelope function is a product of two Somb functions and is modulated by a cosine interference term. Although the envelope functions are shift-invariant, the interference terms can spoil the shift-invariance. Here we assume the system is quasi shift-invariant for small rms wavefront errors, e.g., rms tip-tilt angle and piston errors smaller than 10λc/D and 1.5 waves respectively. As demonstrated below, this assumption is validated by the use of shift-invariant deconvolution techniques that successfully restore the image.
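The speckle formation described in this section can be illustrated with a toy simulation: a pupil of randomly placed, randomly phased subapertures is Fourier transformed to the focal plane, and the squared modulus gives the speckled PSF. The geometry below is illustrative, not the paper's exact parameters, and overlapping subapertures are allowed for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Toy pupil: 60 subapertures of radius r with random piston phase inside
# a baseline of radius R (overlaps simply overwrite earlier phases).
U = np.zeros((N, N), dtype=complex)
R, r = 100, 4
placed = 0
while placed < 60:
    c = rng.uniform(-R, R, 2)
    if np.hypot(c[0], c[1]) > R - r:
        continue
    m = np.hypot(x - c[0], y - c[1]) < r
    U[m] = np.exp(2j * np.pi * rng.uniform())
    placed += 1

# Fraunhofer propagation: the focal-plane field is the Fourier transform
# of the pupil; the PSF is its squared modulus (a speckle pattern).
E = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U)))
P = np.abs(E)**2
P /= P.max()
```

The resulting speckles have a characteristic width set by the baseline (~N/2R samples) inside an envelope set by the subaperture size (~N/2r samples), mirroring the inset of Fig. 1.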

3.2.2. Polychromatic source

The polychromatic PSF of the RAA may be numerically represented as an incoherent integral of the monochromatic PSF over the bandwidth:

$$h(\mathbf{x}) = \int \rho_s(\lambda)\,\rho_c(\lambda)\,P(\mathbf{x},\lambda,0)\,d\lambda\qquad(5)$$
where ρs(λ) and ρc(λ) are the spectral distribution of the source and the spectral response of the detector respectively. Although the spectral integral produces radial speckle structures, a central isoplanatic region may be formed if the optical path difference is less than the temporal coherence length of the light source (see Sec. 3.1). In addition, an optimal pupil function may appear according to Eq. (4), producing a super-resolved PSF. Comparisons of the effective focal spot of a filled aperture, a well aligned RAA, as well as a typical and a lucky misaligned RAA are shown in Figs. 4(a)–(d) respectively. The brightest speckle is considered as the effective focal spot (or the core of the RAA PSF), where the interference is most constructive. Compared to the diffraction-limited PSF, the effective focal spots of the RAA are noticeably smaller. Quantitatively, the cores of the RAA PSF are smaller than the diffraction-limited focal spot by factors of 1.2 and 1.6 for a well aligned and a lucky misaligned array respectively.

In practice, an object may be decomposed into a distribution of mutually incoherent point sources a(x) = Σj Ej δ(x − θj f), where Ej is the field strength of the jth point source. Denoting the irradiance of a tilted polychromatic point source as hθ(x) = h(x − θ f), the total irradiance of the object may be represented by the superposition of the irradiance of each source as:

$$I(\mathbf{x}) = \sum_j h(\mathbf{x}-\boldsymbol{\theta}_j f)\qquad(6)$$
A numerically generated speckled RAA image of a broadband binary source (400–720 nm) is compared with the corresponding diffraction-limited image formed by a filled unobstructed aperture in Figs. 3(b) and 3(c). The spectral response function of the Canon Mark II [49] is used in the numerical simulation. Both images show agreement with the experimentally obtained data (see Fig. 2). The numerical modeling is implemented in C++ and takes advantage of Arrayfire [50], which is an open source library for parallel computing on graphics processing units (GPUs). The frame rate is about 3 fps on a GeForce GT 720 for RAA’s of a fill factor FF ∼ 8%.
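A toy version of this incoherent spectral integral is sketched below. A flat spectrum and a fixed pupil geometry are assumed, and the wavelength-dependent geometric scaling of the diffraction pattern is omitted for brevity (only the 1/λ phase scaling is kept):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
y, x = np.mgrid[-N//2:N//2, -N//2:N//2]

# Toy pupil: amp marks the subapertures, opd holds a random optical path
# delay (in units of lam_c) on each one.
amp = np.zeros((N, N))
opd = np.zeros((N, N))
for _ in range(40):
    c = rng.uniform(-90, 90, 2)
    m = np.hypot(x - c[0], y - c[1]) < 4
    amp[m] = 1.0
    opd[m] = rng.uniform(-1.0, 1.0)

lam_c = 560e-9
h = np.zeros((N, N))
for lam in np.linspace(400e-9, 720e-9, 9):          # flat spectrum assumed
    U = amp * np.exp(2j * np.pi * opd * lam_c / lam)  # phase scales as 1/lam
    E = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U)))
    h += np.abs(E)**2                # incoherent sum over the band
h /= h.max()
```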

3.2.3. Noise model

Two types of sensor noise are considered in this numerical model: photon shot noise and readout noise. Photon noise is due to the randomness of the arrival of photons; it accumulates as the exposure time increases and is amplified by the system gain, i.e., the ISO sensitivity. Photon noise is scene dependent and has a Poisson distribution with equal mean and variance given by μp(x) = σp²(x) = ντΛ(x), where ν is the amplifier gain and τ is the exposure time. The expected number of photons per unit time interval at each pixel is denoted Λ(x), which is proportional to the scene irradiance I(x). Readout noise is associated with voltage fluctuations and has a Gaussian distribution with zero mean and variance σr² = νσread². Here we consider only white noise, where the wavelength dependency is ignored. A gray scale image corrupted by the photon and readout white noise can be written as:

$$I_{\mathrm{noise}}(\mathbf{x}) = \frac{1}{\nu\tau}\,\mathrm{Poisson}\left(\sigma_p^2(\mathbf{x})\right) + \mathrm{Gaussian}\left(0,\sigma_r^2\right)(\mathbf{x})\qquad(7)$$
The signal-to-noise-ratio (SNR) is given in logarithmic decibel (dB) scale by:
$$\mathrm{SNR} = 10\log_{10}\frac{\mu}{\sigma_n}\qquad(8)$$
where μ is the mean value of the scene irradiance, and σn is the standard deviation of the noise mixture, i.e., Inoise − I. If the reference noiseless image I(x) is unknown, the first six harmonics of the estimated power spectral density [51] are selected and removed using a window function (e.g., a Kaiser window). The power of the first harmonic is taken as an estimate of the noiseless image, while the average power of the remaining non-harmonic region is regarded as the noise floor.
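The noise model can be sketched as follows; the gain, exposure, readout level, and flat test scene are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)

def add_noise(I, nu=1.0, tau=1.0, sigma_read=2.0):
    """Poisson shot noise plus Gaussian readout noise, per Eq. (7):
    photon counts ~ Poisson(nu*tau*I), readout ~ N(0, nu*sigma_read^2)."""
    photons = rng.poisson(nu * tau * I)
    read = rng.normal(0.0, np.sqrt(nu) * sigma_read, I.shape)
    return (photons + read) / (nu * tau)

# Flat test scene: SNR here is dominated by shot noise (std = sqrt(100) = 10).
I = 100.0 * np.ones((64, 64))
I_noisy = add_noise(I)
noise = I_noisy - I
snr_db = 10 * np.log10(I.mean() / noise.std())
```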

4. Image recovery

An image reconstruction framework to recover a high resolution image of point sources from a sequence of N intensity measurements from a random aperture array is described below. We assume the sources fall within an isoplanatic region so that shift-invariance may be assumed. The restoration framework consists of four steps (see Fig. 5(a)): denoising, image alignment, lucky imaging, and multiframe blind deconvolution. Given an input sequence {In}, n = 1, …, N, the first step interpolates the recorded raw-format images into RGB format using the adaptive homogeneity directed demosaicing method [52] provided by dcraw [53]. The three color channel images are converted into gray scale images. (Future work may explore the advantages of a higher dynamic range monochrome camera.) The measured dark frame associated with additive noise is subtracted from the gray scale images. The next step aligns the sequence to the intensity peak of each image. The third step selects lucky frames using a pre-trained SVM classifier. Finally, the remaining distortion and diffraction blur are removed by multiframe blind deconvolution with the best lucky frame serving as the initial guess.


Fig. 5 (a) Illustration of the image reconstruction framework; (b) histograms and fitted generalized extreme value (GEV) distributions of the broadband binary source in (left to right): the diffraction-limited image, a lucky RAA frame, a typical RAA frame of type I, and a typical RAA frame of type II.


4.1. Denoise

As a useful preliminary step for the subsequent image restoration, we first remove the additive noise from the recorded image sequences. The fixed pattern noise is removed from the experimental images by subtracting the mean of a set of dark frames that are captured at the lowest amplifier gain (ISO = 100) and exposure time (τ = 1/8000 sec). We also subtract the readout noise from both experimental and numerical data. Readout noise is associated with the amplifier gain and has zero mean; it is typically measured by taking the difference of two dark frames captured at a specific ISO and exposure time, i.e., the settings used to capture the target images. Although we do not explicitly address the photon noise, the above denoising process allows us to increase the average SNR from 5.7 to 7.5 dB for the experimental sequence, and from 6.5 to 7.8 dB for the numerical images.
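The dark-frame subtraction step can be sketched as follows, using a hypothetical synthetic fixed pattern in place of measured frames:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical sensor fixed-pattern noise (FPN); in the experiment this
# is measured from dark frames, not simulated.
fixed_pattern = rng.uniform(0.0, 5.0, (64, 64))

# The mean of many dark frames estimates the FPN while averaging out
# the zero-mean read noise.
darks = [fixed_pattern + rng.normal(0.0, 1.0, (64, 64)) for _ in range(50)]
dark_mean = np.mean(darks, axis=0)

scene = 50.0 * np.ones((64, 64))
raw = scene + fixed_pattern + rng.normal(0.0, 1.0, (64, 64))
clean = raw - dark_mean   # fixed pattern removed; read noise remains
```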

4.2. Image alignment

The recorded images may not be well aligned owing to the frame-to-frame placement of the random phase and amplitude masks. Recentering on the position of the peak intensity point is suitable owing to the following properties of our system: the targets of interest are point sources, which have relatively simple shapes and little texture; the scene is static and thus free from motion blur; and the SNR is reasonably high during the image acquisition.
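The recentering step can be sketched as a circular shift of the brightest pixel to the array center:

```python
import numpy as np

def recenter(img):
    """Circularly shift the brightest pixel to the array center."""
    peak = np.unravel_index(np.argmax(img), img.shape)
    shift = (img.shape[0] // 2 - peak[0], img.shape[1] // 2 - peak[1])
    return np.roll(img, shift, axis=(0, 1))

# Toy frame: a single off-center "speckle peak".
img = np.zeros((9, 9))
img[2, 6] = 1.0
aligned = recenter(img)
```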

4.3. Lucky imaging

Given a sufficient number of RAA frames, a small proportion of relatively sharp images (i.e., lucky frames) appear owing to random wavefront variations. Based on the prior knowledge that the target objects are an unknown number of point sources (e.g., stars), six features are considered for the identification: the three parameters of a generalized extreme value (GEV) distribution [54] described below, the peak value of the normalized GEV histogram, the full width at half maximum (FWHM) of the autocorrelation, and the SNR of each image. The histogram of each 80 × 80 region of pixels centered on the isoplanatic patch is approximated by the GEV distribution using the maximum a posteriori estimation provided by the “statistics and machine learning toolbox” of Matlab 2015b. The probability density function of the GEV is given by:

$$\mathrm{gev}(I(\mathbf{x})) = \frac{1}{\sigma_e}\,(1+\chi)^{-\frac{1}{\zeta}-1}\exp\left(-(1+\chi)^{-\frac{1}{\zeta}}\right)\qquad(9)$$
where χ = ζ · (I(x) − μ)/σe, ζ is the shape parameter, and μ and σe are the position and scale parameters, respectively. An asymmetric long-tailed distribution occurs when ζ ≥ 0, whereas ζ < 0 indicates a short-tailed and relatively symmetric shape.

The normalized histograms and fitted GEV curves are shown in Fig. 5(b) with corresponding insets for a diffraction-limited image, a lucky RAA image, and two typical RAA images. Each RAA histogram appears as a scaled mixture of the histogram of the object function a(x), which is comprised of a collection of delta functions (see Sec. 3.2.2). The histograms differ in shape owing to different speckle distributions. Lacking speckles, the diffraction-limited image has a very sparse distribution. The lucky RAA image has a long-tailed distribution with a peak value less than unity. The typical RAA images may be divided into two types: Type I has a long-tailed distribution and an extremely high peak; Type II has a low peak value and a roughly symmetric shape. The three distribution parameters (ζ, μ, and σe) as well as the peak value of the normalized histogram are generally adequate to classify the RAA images into lucky or typical groupings, assuming the same rms phase error and SNR across the data set. However, the classification of RAA images with different phase errors and SNR may lead to ambiguities and loss of accuracy. Lacking sufficient prior knowledge of the phase error, the FWHM of the autocorrelation of each RAA image, which is an indicator of the rms tip-tilt and piston [55], is employed as an extra feature to improve the robustness of the classification procedure to variations in the phase errors. In addition, the SNR of each RAA image (see Sec. 3.2.3) is estimated as a feature to compensate for the influence of the noise, using the “signal processing toolbox” of Matlab 2015b. With these six features, a support vector machine (SVM) model is trained for the classification of RAA images.
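A sketch of the feature extraction on a synthetic patch is given below. SciPy's genextreme is used for the GEV fit (note its shape parameter c corresponds to −ζ in the text's convention), the autocorrelation follows from the Wiener-Khinchin theorem, and the final SNR entry is a crude stand-in for the harmonic-based estimate of Sec. 3.2.3:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(5)

# Synthetic stand-in for an 80x80 speckle patch.
img = rng.exponential(0.1, (80, 80))
img /= img.max()

# GEV fit; SciPy's shape parameter c corresponds to -zeta in the text.
c, mu, sigma_e = genextreme.fit(img.ravel())

# Peak of the normalized intensity histogram.
hist, _ = np.histogram(img.ravel(), bins=50, density=True)
peak = hist.max()

# Autocorrelation via the Wiener-Khinchin theorem, then a crude 1-D
# width estimate above the background level.
F = np.abs(np.fft.fft2(img))**2
ac = np.real(np.fft.fftshift(np.fft.ifft2(F)))
ac = (ac - np.median(ac)) / (ac.max() - np.median(ac))
fwhm = int(np.sum(ac.max(axis=0) > 0.5))

# Crude SNR stand-in for the harmonic-based estimate of Sec. 3.2.3.
snr = 10.0 * np.log10(img.mean() / img.std())

features = [c, mu, sigma_e, peak, fwhm, snr]
```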

SVM is a supervised learning method best known for binary (two-class) classification [7]; it has been employed in astronomy for the classification of astronomical objects [56, 57]. An SVM model is a decision machine that transforms the data or feature points into a higher dimensional space and classifies the points by creating a hyper-plane as the decision boundary. The data mapping is generally carried out through a kernel function. To avoid the problem of over-fitting, the hyper-plane is calculated by maximizing the margin between the neighboring classes. Since the feature points of the RAA images are not linearly separable, the soft-margin SVM is employed [8]. The soft-margin SVM builds the hyper-plane by formulating the margin maximization as the following optimization problem:

$$\begin{aligned}\underset{\mathbf{w},b,\mathbf{s}}{\text{minimize}}\quad & \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{j=1}^{N_t}s_j\\ \text{subject to}\quad & s_j \ge 0,\quad l_j f(\mathbf{q}_j) \ge 1 - s_j,\quad j = 1,\dots,N_t\end{aligned}\qquad(10)$$
where {(qj, lj) | qj ∈ ℝ6, lj = ±1, j = 1, …, Nt} represents a set of Nt feature points and the corresponding class labels. The decision boundary is given by the hyper-plane fj = f(qj) = wT Φ(qj) + b, where w is the normal vector of the hyper-plane, b is the bias, and Φ(qj) is the feature transfer function. The coefficient C balances a wide margin against a small amount of misclassified features (or margin failures), which are represented by a set of slack variables sj. Using the method of Lagrange multipliers, the optimization problem in Eq. (10) can be transformed into a dual form, which is only a function of the Lagrange multipliers ηj:
$$\begin{aligned}\underset{\boldsymbol{\eta}}{\text{minimize}}\quad & \frac{1}{2}\sum_{j=1}^{N_t}\sum_{k=1}^{N_t} l_j l_k K(\mathbf{q}_j,\mathbf{q}_k)\,\eta_j\eta_k - \sum_{j=1}^{N_t}\eta_j\\ \text{subject to}\quad & \sum_{j=1}^{N_t}\eta_j l_j = 0,\quad 0\le\eta_j\le C,\quad j=1,\dots,N_t\end{aligned}\qquad(11)$$
where K(qj, qk) = ΦT(qj)Φ(qk) is the kernel function for feature mapping. The coefficient C now serves as a box constraint on the Lagrange multipliers. The dual problem shown in Eq. (11) is solved using the sequential minimal optimization (SMO) method [58]. By decomposing the quadratic programming problem into smaller subproblems, SMO offers both a memory- and time-efficient solution to the dual problem. The optimal normal vector of the hyper-plane ŵ and the bias b̂ can be derived from the estimated Lagrange multipliers as:
$$\hat{\mathbf{w}} = \sum_{j=1}^{N_t}\eta_j l_j\,\mathbf{q}_j,\qquad \hat{b} = l_j - \hat{\mathbf{w}}^T\mathbf{q}_j\qquad(12)$$
where b̂ is evaluated at any support vector qj lying on the margin.
Once the training process is completed, the SVM model is applied to the testing images for classification. For a testing image represented by a feature point u ∈ ℝ6, the class label is determined by the sign of the decision function, l(u) = sign(f̂(u)), where f̂(u) = ŵTΦ(u) + b̂, and a classification score is given by the absolute value of the decision function, S(u) = |f̂(u)|. To determine the best lucky frame, the classification score is converted to the posterior probability P(l(u)|S(u)) = 1/(1 + exp(S(u)Q + B)), where the transformation coefficients Q and B are estimated by fitting a sigmoid function to the training data [59]:
$$\underset{Q,B}{\text{minimize}}\;-\sum_{j=1}^{N_t}\left[t_j\log(p_j) + (1-t_j)\log(1-p_j)\right]\qquad(13)$$
where tj = (lj + 1)/2, pj = P(lj|qj) = 1/(1 + exp(SjQ + B)) is the posterior probability of the jth training feature being classified into group lj, and Sj = |f̂(qj)| is the corresponding score.

In the training stage, we numerically constructed sets of RAA images comprised of one to ten point sources, each having one of five different rms tip-tilt and piston values (zero to 10 λc/D and zero to 1 waves, respectively), and noise with SNR varying from 5 dB to 20 dB. Here we assume the fill factor FF and diameter ratio Γ of RAA are fixed. (We suggest training different SVM models for RAA’s having different values of FF and Γ.) The training dataset includes a total number of 250 numerically generated ensembles, each having 1000 frames that are classified via visual inspection as lucky or typical.

The class labels along with the six features of each frame are fed into the SVM for training. Here we use the “statistics and machine learning toolbox” of Matlab 2015b to solve Eqs. (11)–(13) for SVM training and classification, where a Gaussian radial basis function K(qi, qj) = exp(−‖qi − qj‖²/σr²) with a variance σr = 1 is used as the kernel function and C = 10 as the box constraint. The training and classification are carried out on an Intel Core i7-4790 (4 GHz) with 8 GB RAM and are completed in less than 90 seconds. The trained model is also evaluated by a 10-fold cross validation to improve the accuracy. In the classification stage, the input RAA image sequence is classified into lucky and typical frames using the pre-trained SVM model. Each image is also assigned a posterior probability of being classified into the lucky group. Due to the highly imbalanced ratio of lucky to typical frames in the RAA sequences, the false negative rate (∼ 20%) is typically larger than the false positive rate (∼ 10%). The lucky frame g0 that has the highest posterior probability is selected as the initial guess for the subsequent blind deconvolution, described below. Several factors may influence the number of lucky frames, including the diameter ratio Γ, the fill factor FF, the rms phase errors, the number of objects, the pairwise distance among the target objects, and the patch size. Typically, for a RAA of FF ∼ 8%, Γ = 2%, σα = 5λc/D, σz = 1 wave, and SNR ∼ 8 dB, the percentage of lucky frames in a recorded sequence is about 20% for a binary source with a spacing ∼ 1.22λc/D.
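An equivalent training pipeline can be sketched with scikit-learn's SVC, which solves the soft-margin dual of Eq. (11) via SMO and performs Platt's sigmoid fit of Eq. (13) when probability=True. The synthetic 6-D features below and the gamma='scale' kernel width are illustrative assumptions rather than the paper's settings:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)

# Synthetic 6-D feature vectors (standing in for GEV params, histogram
# peak, autocorrelation FWHM, SNR): "lucky" frames drawn from a cluster
# shifted away from the "typical" one.
n = 200
X = np.vstack([rng.normal(0.0, 1.0, (n, 6)),     # typical, label -1
               rng.normal(3.0, 1.0, (n, 6))])    # lucky,   label +1
y = np.array([-1] * n + [+1] * n)

# Soft-margin SVM with a Gaussian RBF kernel (the paper uses sigma_r = 1;
# gamma='scale' is a practical substitute here). probability=True adds
# Platt's sigmoid fit for posterior probabilities.
clf = SVC(kernel='rbf', C=10.0, gamma='scale', probability=True,
          random_state=0)
clf.fit(X, y)

posteriors = clf.predict_proba(X)[:, 1]   # P(lucky | features)
best = int(np.argmax(posteriors))         # best lucky frame candidate
acc = clf.score(X, y)
```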

4.4. Multiple frame blind deconvolution

Using the lucky frame g0 as the initial guess, multiframe blind deconvolution is applied to the image sequence to remove the undesired speckles, distortion, and diffraction blur. Assuming a quasi-shift-invariant RAA system (see Sec. 3.2), the image formation process can be simplified to a two-dimensional convolution of the polychromatic PSF hn(x) of the nth observation with the geometric object function a(x):

$$ g_n(\mathbf{x}) = a(\mathbf{x}) \ast h_n(\mathbf{x}) \tag{14} $$
Rewriting the convolution as a matrix multiplication in the frequency domain produces:
$$ G_n(\boldsymbol{\xi}) = A \odot H_n \tag{15} $$
where Gn and A denote the spatial frequency spectra of the observed image and the object function, respectively; Hn is the optical transfer function of the nth observation; ξ = (ξx, ξy) is the two-dimensional spatial frequency coordinate; and $(U \odot V)_{i,j} = U_{i,j} V_{i,j}$ denotes the component-wise (Hadamard) matrix product. Multiframe blind deconvolution estimates both the nth PSF and the reconstructed image by minimizing a non-negative least-squares error function:
$$ \{\hat{a}, \hat{h}_n\} = \mathcal{F}^{-1} \left\{ \arg\min_{h_n \geq 0,\, a \geq 0} \frac{1}{N^2} \sum_{n=0}^{N} \left\| G_n - A \odot H_n \right\|_F^2 \right\} \tag{16} $$
where $\mathcal{F}^{-1}$ represents the inverse Fourier transform and $\|C\|_F = \sqrt{\sum_{i,j} |C_{i,j}|^2}$ denotes the Frobenius norm. Under the condition that a and hn are not bounded above, i.e., saturation can be ignored, Eq. (16) can be solved as a non-negative matrix factorization using the following multiplicative update rules [9], applied in an alternating fashion by holding one variable constant while estimating the other:
$$ H_n = H_{n-1} \odot \frac{A_n^T G_n}{A_n^T \left( A_n H_{n-1} \right)} \tag{17} $$
$$ A_n = A_{n-1} \odot \frac{H_n^T G_n}{H_n^T \left( H_n A_{n-1} \right)} \tag{18} $$
where An is the spatial frequency spectrum of the recovered object in the nth loop. The non-negativity constraints on the recovered PSF and object function are enforced after each iteration by setting negative pixel values to zero. The iteration stops when the reconstruction converges or the entire sequence has been processed. The final estimate of the object function, i.e., the reconstructed image, is given by $\hat{a} = \mathcal{F}^{-1}(A_{N_P})$, where $N_P \leq N$ is the number of images that have been processed. Without additional constraints, the minimization of the error function favors the bright regions in each frame. To avoid convergence to erroneous or suboptimal solutions, a two-dimensional super-Gaussian window $\exp(-(x/N_w)^\beta)$ is multiplied with each observed image gn before the deconvolution, where Nw corresponds to the span of the sampled region, i.e., the size of the isoplanatic region, and β is the apodization rate of the super-Gaussian function. The window tapers the unexpectedly high pixel values outside the central isoplanatic region and accelerates the convergence of the deconvolution.
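A minimal sketch of the multiplicative-update scheme, written in the spatial domain with FFT-based circular convolution and correlation: the two update lines are spatial-domain analogues of Eqs. (17) and (18), and the flat initial PSF, iteration count, and helper names (`mfbd`, `super_gaussian_window`) are illustrative assumptions rather than the paper's actual implementation.

```python
# Sketch of NMF-style multiframe blind deconvolution (Lee-Seung multiplicative
# updates) with the super-Gaussian apodization described in the text.
import numpy as np

def conv2(u, v):
    # Circular 2-D convolution via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(u) * np.fft.fft2(v)))

def corr2(u, v):
    # Circular 2-D correlation via the FFT (conjugated first spectrum).
    return np.real(np.fft.ifft2(np.conj(np.fft.fft2(u)) * np.fft.fft2(v)))

def super_gaussian_window(n, nw, beta=6):
    # exp(-(r/Nw)^beta) apodization that tapers pixels outside the central
    # isoplanatic region; multiplied with each frame before deconvolution.
    y, x = np.mgrid[:n, :n] - n / 2
    return np.exp(-(np.hypot(y, x) / nw) ** beta)

def mfbd(frames, a0, n_iter=100, eps=1e-12):
    """frames: observed images g_n; a0: initial object guess (lucky frame g0).
    Returns the object estimate and the PSF estimate of the last frame."""
    a = np.clip(a0, 0.0, None) + eps
    for g in frames:
        h = np.full_like(a, 1.0 / a.size)        # flat non-negative PSF guess
        for _ in range(n_iter):
            # PSF update, then object update; negatives are clipped to zero
            # to enforce the non-negativity constraint after each iteration.
            h = np.clip(h * corr2(a, g) / (corr2(a, conv2(a, h)) + eps), 0.0, None)
            a = np.clip(a * corr2(h, g) / (corr2(h, conv2(h, a)) + eps), 0.0, None)
    return a, h
```

Each multiplicative update keeps the estimates non-negative by construction, so the zero-clipping only guards against floating-point round-off.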

5. Results and discussion

In this section, we evaluate the performance of the RAA system using both experimental and numerically generated images of broadband (400–720 nm) point objects. A reconstructed image was formed from a sequence of N = 100 experimental images, and from several types of numerically generated images with N = 500. The parameter values of the experiment are given in Sec. 3.1. We match most of the experimental parameter values in the numerical modeling of the binary source, except for the effective focal length f of the array, which is 400 mm in the experiment and 600 mm in the numerical model. For multiple sources, we use FF ∼ 20% in the numerical simulation. In addition, the numerical pupil and detector planes are sampled with a grid size of NA = ND = 1024. Each numerical image sequence also has an rms tip-tilt of 5λc/D and an rms piston of one wave. Quantitatively, we compare the restored images of the binary source with the corresponding diffraction-limited image in terms of: (1) FWEM: the average full width of the binary source at 80% of the image peak; (2) Peak Distance (PD): the distance between the two peaks of the binary source; and (3) Peak Ratio (PR): the ratio of the smaller peak of the binary source to the larger peak.
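The three metrics can be computed from a 1-D cut through the image. The following sketch uses a hypothetical helper `binary_source_metrics`; as a simplifying assumption it measures each peak's width at 80% of its own height rather than of the global image peak, and omits sub-pixel interpolation.

```python
# Pixel-resolution estimates of PD, PR, and FWEM from a 1-D intensity cut
# through a binary-source image. Simplified sketch: per-peak 80% threshold,
# no sub-pixel interpolation.
import numpy as np

def binary_source_metrics(profile, level=0.8):
    profile = np.asarray(profile, dtype=float)
    # Local maxima are candidate source peaks.
    peaks = [i for i in range(1, len(profile) - 1)
             if profile[i] > profile[i - 1] and profile[i] >= profile[i + 1]]
    p1, p2 = sorted(peaks, key=lambda i: profile[i])[-2:]   # two brightest
    pd = abs(p2 - p1)                                       # Peak Distance
    pr = min(profile[p1], profile[p2]) / max(profile[p1], profile[p2])
    # FWEM: average full width of each peak at `level` of its height.
    widths = []
    for p in (p1, p2):
        thr = level * profile[p]
        lo = p
        while lo > 0 and profile[lo - 1] >= thr:
            lo -= 1
        hi = p
        while hi < len(profile) - 1 and profile[hi + 1] >= thr:
            hi += 1
        widths.append(hi - lo)
    return pd, pr, float(np.mean(widths))
```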

5.1. Experimental results

Compared with the diffraction-limited image of the binary source shown in Fig. 6(a), a typical RAA image is degraded by speckle, making the sources unresolvable (see Fig. 6(b)). In contrast, sharp and highly resolvable sources are seen in the lucky RAA image (see Fig. 6(c)), where the FWEM resolution is improved by a factor of 2 compared with the diffraction limit (see Table 1). The reconstructed image, Fig. 6(d), has a FWEM resolution gain of roughly 3.1. The speckle and diffraction blur are clearly reduced, leaving a fully resolved binary source in the reconstructed image. Other quantitative comparisons listed in Table 1 suggest an accurate reconstruction: the error of the intensity peak ratio (PR) is 8%, and that of the distance between peaks (PD) is less than a single 6.25 μm pixel.


Fig. 6 (a–c) Experimental recordings of a broadband binary source, and (d) a well-resolved reconstructed image.


Table 1. Evaluation of the Experimental Results

5.2. Numerical results

Result 1: Recovery of a Binary Source with a Separation of 1.22λc/D. Compared to the corresponding diffraction-limited image shown in Fig. 7(a), the typical RAA frame shows unresolvable sources in the presence of phase errors (see Fig. 7(b)). The lucky RAA image, on the other hand, has a modest increase in resolution by a factor of 1.6 (the FWEM ratio 24/15), as observed in Fig. 7(c) and summarized in Table 2. In contrast, a striking 24/7 ≈ 3.4 times improvement in resolution is achieved in the reconstructed image, as is evident in Fig. 7(d). It is instructive to compare this result with a well-aligned RAA having zero tip-tilt and piston error. The numerical images are depicted in the bottom row of Fig. 7. In this case a small FWEM ratio of 24/19 ≈ 1.3 is found. This unimpressive enhancement (compared to the misaligned case) may be attributed to the lack of optical sharpening that is afforded by local superoscillatory fields in a randomly misaligned system. Finally, the values of PR and PD agree well with the ground truth for both the random and the well-aligned system.


Fig. 7 Reconstruction of a broadband binary point source with a separation of 1.22λc/D from numerical images. (a) Diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston of 5λc/D and 1 wave, respectively. (e–g) Comparison with a well-aligned RAA, which yields an inferior reconstruction.


Table 2. Evaluation of the Numerical Results

Result 2: Recovery of a Binary Source with a Separation of 0.85λc/D. We demonstrate imaging beyond the diffraction limit in the reconstruction shown in Fig. 8, which involves a pair identical to that described above, except the separation is 0.85λc/D. Whereas the filled unaberrated aperture produces an unresolved image (see Fig. 8(a)), typical and lucky images (see Figs. 8(b) and 8(c)) may be used to reconstruct a well-resolved image, shown in Fig. 8(d). In contrast, the sources remain barely resolvable in the lucky and reconstructed images obtained using a well-aligned RAA (see Figs. 8(e)–8(g)). This suggests that an RAA system having randomly varying phase errors is suitable for reconstructing white light point sources separated by an angle greater than roughly 0.85λc/D.


Fig. 8 Reconstruction of a broadband binary source with a separation of 0.85λc/D from numerical images. (a) Unresolved diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston of 5λc/D and 1 wave, respectively. (e–g) Comparison with a well-aligned RAA, which yields an inferior reconstruction.


Result 3: Recovery of Multiple Sources. A similar reconstruction is conducted for six broadband point sources with a spacing of 1.22λc f/D between neighboring objects. The diffraction-limited image is shown in Fig. 9(a). In the presence of phase errors, a typical RAA image is generally unresolvable (see Fig. 9(b)). The lucky image, although blurred by speckles, shows a noticeable enhancement in local resolution (see Fig. 9(c)). Highly resolved sources are observed in the reconstructed image, Fig. 9(d). In the absence of phase errors, both typical and lucky frames (see Figs. 9(e) and 9(f)) show quasi-diffraction-limited resolution. The reconstructed image in the latter case shows a noticeable but limited improvement of the resolution in Fig. 9(g).


Fig. 9 Reconstruction of multiple broadband point sources with a separation of 1.22λc/D between neighboring sources from numerical images. (a) Diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston of 5λc/D and 1 wave, respectively. (e–g) Comparison with a well-aligned RAA, which yields an inferior reconstruction.


Analysis of the System Strehl Ratio. Although the misalignment of the RAA may randomly form a near-optimal pupil function and lead to high resolving power, the concentration of energy at the effective focal spots may be reduced as the phase errors increase. The energy may also decrease as the diameter ratio Γ and the fill factor FF of the array decrease. We examine the Strehl ratio as a function of FF for five arrays with different values of Γ in Fig. 10(a). For the same arrays, the Strehl ratio as a function of phase error is given in Fig. 10(b). Examples of the five RAA arrays are shown in Fig. 10(c), each having FF ∼ 12%. Increasing the value of Γ or FF of the RAA increases the Strehl ratio. Decreasing the rms phase error may also lead to an improved Strehl ratio without significant negative impact on the optical sharpening effect, as long as the rms phase error remains above one wave across the baseline.
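A toy version of this Strehl-ratio study can be sketched as follows, with the Strehl ratio taken as the PSF peak of the aberrated array relative to the same array with zero phase error. Subaperture positions here are drawn uniformly rather than Poisson-disk sampled as in the experiment, all sizes are in pixels, and the function names are illustrative.

```python
# Toy Strehl-ratio model for a random aperture array (RAA) with random
# piston phases on each subaperture. Illustrative assumptions throughout.
import numpy as np

rng = np.random.default_rng(1)

def raa_pupil(n=256, n_sub=150, sub_radius=3, base_radius=110, rms_piston=1.0):
    """Complex pupil: circular subapertures at uniformly random positions
    inside the baseline, each with a random piston phase (rms in waves)."""
    y, x = np.mgrid[:n, :n] - n / 2
    pupil = np.zeros((n, n), dtype=complex)
    centers = rng.uniform(-base_radius, base_radius, size=(n_sub, 2))
    centers = centers[np.hypot(centers[:, 0], centers[:, 1]) < base_radius]
    pistons = rng.normal(0.0, rms_piston, size=len(centers))   # waves
    for (cy, cx), p in zip(centers, pistons):
        mask = np.hypot(y - cy, x - cx) <= sub_radius
        pupil[mask] = np.exp(2j * np.pi * p)
    return pupil

def strehl(pupil_aberrated, pupil_ideal):
    # Peak PSF intensity ratio; the PSF is |FT of the pupil|^2.
    peak = lambda P: np.abs(np.fft.fft2(P)).max() ** 2
    return peak(pupil_aberrated) / peak(pupil_ideal)
```

Sweeping `sub_radius` (i.e., Γ), `n_sub` (i.e., FF), and `rms_piston` reproduces the qualitative trends of Fig. 10: more or larger subapertures raise the Strehl ratio, while larger phase errors lower it.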


Fig. 10 Strehl ratio as a function of (a) fill factor FF and (b) wavefront error for various values of the subaperture diameter ratio Γ. (c) An illustration of RAA's with the same FF (∼ 12%) but different Γ.


6. Potential application and future work

With near-term plans to build 30 meter ground-based telescopes for astronomy, the demand for higher resolution optics in space continues to grow, not only for exoplanet detection but also for earth science, including hyperspectral imaging and monitoring of the oceans and land masses (e.g., seismic monitoring) [60]. The Granular Imager [61, 62] is an ongoing investigation that attempts to create a large-aperture space-based observatory from granular media. It is based on the Orbiting Rainbows paradigm [5, 6], which proposes to change the way space telescopes are built by virtue of: 1) the avoidance of any physical structure and sensing/actuation hardware on the primary, so that sensing and actuation are done "at a distance" on an amorphous cloud and all operational complexity resides outside the primary; 2) the reliance on optical trapping and manipulation [63, 64] to enable action at a distance (although other mechanisms, such as electrodynamic trapping and confinement, are also being investigated); and 3) the relaxation of the requirements on fine cloud control by doing the best possible job in software via robust computational imaging algorithms. The goal of the research on the Granular Imager is to identify ways to optically manipulate and maintain the shape of a cloud of small scatterers so that it can function as a variable surface with useful electromagnetic characteristics for astrophysical applications. This concept, in which the aperture need not be continuous and monolithic, may provide opportunities to significantly increase the aperture size compared to the large NASA space-borne observatories currently envisioned, allowing for a true Terrestrial Planet Imager able to resolve exoplanet details and perform meaningful spectroscopy on distant worlds.

7. Conclusion

We experimentally and numerically investigated a disordered optical synthetic aperture system that uses an array of randomized small apertures in the far field to image broadband point targets. In general, a misaligned array produces speckles and degrades the quality of individually recorded images. However, optically sharpened lucky images formed by statistical happenstance offer the opportunity to reconstruct well-resolved images. Using the proposed restoration framework, we identify these lucky frames and restore high resolution images of point objects. Experimentally recorded images using randomized amplitude and phase masks, combined with our four-step image reconstruction framework, provided excellent reconstruction with a 3.1 times improvement of the spatial resolution. The experimental system had roughly 5 waves of wavefront error across a pupil comprised of 150 pinholes. Roughly 20% of the 100 unique images were classified as lucky images. A numerical model of the system provided comparable results, and further illustrated the advantages of random wavefront errors over an error-free random aperture array system. Extended objects were not explored, but are the subject of continued work. For example, the proposed computational framework may be adapted to the recovery of extended objects by changing the intensity distribution model of the image and making use of texture-based features in lucky frame selection. Future work may explore the use of a non-shift-invariant blind deconvolution image reconstruction scheme [65] for an extended source or for point sources with large random wavefront errors. An important aspect of this investigation is that we assumed no prior knowledge of the PSF, as may be the case for a low-complexity granular space telescope. This sets our work apart from many previously described systems in which the PSF is static or at least partially known.

Funding

This work was funded by the US National Science Foundation (ECCS-1309517) and by the NASA Innovative Advanced Concepts (NIAC) program under contract 14-14NIAC-II-0008 (Orbiting Rainbows, Phase II). The research on the Orbiting Rainbows project was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.

Acknowledgments

The authors are grateful to Dr. Scott A. Basinger and Dr. David Palacios (Jet Propulsion Laboratory, California Institute of Technology) for discussions about the application of future granular space telescopes, and Wendy Garcia (New Mexico State University) for REU-related laboratory assistance at RIT.

References and links

1. A. B. Meinel, “Cost-scaling laws applicable to very large optical telescopes,” Opt. Eng. 18(6), 645–647 (1979). [CrossRef]  

2. D. Baiocchi and H. P. Stahl, “Enabling future space telescopes: mirror technology review and development roadmap,” Astro2010: The Astronomy and Astrophysics Decadal Survey, 23 (2009).

3. S. J. Chung and F. Y. Hadaegh, “Swarms of Femtosats for Synthetic Aperture Applications,” in Proceedings of the Fourth International Conference on Spacecraft Formation Flying Missions and Technologies, (2011).

4. D. M. Alloin and J. M. Mariotti, Diffraction-limited imaging with very large telescopes (Springer, 2012).

5. M. B. Quadrelli, S. A. Basinger, and G. A. Swartzlander, “NIAC Phase I Orbiting rainbows: Optical manipulation of aerosols and the beginnings of future space construction,” (Jet Propulsion Laboratory, California Institute of Technology, 2013), https://www.nasa.gov/sites/default/files/files/Quadrelli_2012_PhI_OrbitingRainbows_.pdf

6. M. B. Quadrelli, “NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems,” (Jet Propulsion Laboratory, California Institute of Technology, 2017), https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170004834.pdf

7. C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn 20(3), 273–297 (1995). [CrossRef]  

8. V. Vapnik, The nature of statistical learning theory (Springer, 2013).

9. D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of Advances in Neural Information Processing Systems (NIPS, 2001), pp. 556–562.

10. A. Labeyrie, “High-Resolution Techniques in Optical Astronomy,” Progress in Optics14 (Elsevier, 1977), Chap. 2. [CrossRef]  

11. J. Nelson, “Segmented mirror telescopes,” in Optics in Astrophysics, (Springer, 2005).

12. J. E. Harvey, A. Kotha, and R. L. Phillips, “Image characteristics in applications utilizing dilute subaperture arrays,” Appl. Opt. 34(16), 2983–2992 (1995). [CrossRef]   [PubMed]  

13. N. J. Miller, M. P. Dierking, and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. 46(23), 5933–5943 (2007). [CrossRef]   [PubMed]  

14. G. Andersen and D. Tullson, “Broadband antihole photon sieve telescope,” Appl. Phys. 46, 3706–3708 (2007).

15. K. S. Ford, B. McKernan, A. Sivaramakrishnan, A. Martel, K. Koekemoer, D. Lafrenière, and S. Parmentier, “Active galactic nucleus and quasar science with aperture masking interferometry on the James Webb Space Telescope,” Astrophys. J. 783, 73–75 (2014). [CrossRef]  

16. J. Lindberg, “Mathematical concepts of optical superresolution,” J. Opt. 14, 083001 (2012). [CrossRef]  

17. M. Mazilu, J. Baumgartl, S. Kosmeier, and K. Dholakia, “Optical eigenmodes; exploiting the quadratic nature of the energy flux and of scattering interactions,” Opt. Express 19(2), 933–945 (2011). [CrossRef]   [PubMed]  

18. A. Labeyrie, “Attainment of diffraction-limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6(1), 85–87 (1970).

19. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22(24), 4028–4037 (1970). [CrossRef]  

20. W. Beavers, D. E. Dudgeon, J. W. Beletic, and M. T. Lane, “Speckle imaging through the atmosphere,” Lincoln Lab. J. 2(2), 207–228, (1989).

21. K. T. Knox and B. J. Thompson, “Recovery of images from atmospherically degraded short-exposure photographs,” Astrophys. J. 193, 45–48 (1974). [CrossRef]  

22. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21 (15), 2758–2769 (1982). [CrossRef]   [PubMed]  

23. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” Tech. Rep. TR-2006-058, MIT (2006).

24. A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: Compressive imaging using a multiply scattering medium,” Sci. Rep. 4 (2014). [PubMed]  

25. J. W. Goodman, Speckle phenomena in optics: theory and applications, (Roberts and Company, 2007).

26. J. C. Dainty, “The statistics of speckle patterns,” Progress in Optics14, (Elsevier, 1977), Chap. 1. [CrossRef]  

27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. 8(10), 784–790 (2014). [CrossRef]  

28. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–691 (2015). [CrossRef]  

29. D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Process. Mag. 13(3), 43–64 (1996). [CrossRef]  

30. J. R. Fienup and G. B. Feldkamp, “Astronomical imaging by processing stellar speckle interferometry data,” Proc. SPIE 0243, 2723–2727 (1980).

31. E. Pantin, J. Starck, F. Murtagh, and K. Egiazarian, “Deconvolution and blind deconvolution in astronomy,” in Blind image deconvolution: theory and applications, (CRC Press, 2007). [CrossRef]  

32. E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Comm. Pure Appl. Math. 67(6), 906–956 (2014). [CrossRef]  

33. E. J. Candès and C. Fernandez-Granda, “Super-resolution from noisy data,” J. Fourier Anal Appl. 19 (6), 1229–1254 (2013). [CrossRef]  

34. V. Duval and G. Peyré, “Exact support recovery for sparse spikes deconvolution,” Found. Comput. Math. 15 (5), 1315–1355 (2015). [CrossRef]  

35. M. Hirsch, S. Harmeling, S. Sra, and B. Schölkopf, “Online multi-frame blind deconvolution with super-resolution and saturation correction,” Astron. Astrophys. 531, A9 (2011). [CrossRef]  

36. S. M. Jefferies and J. C. Christou, “Restoration of astronomical images by iterative blind deconvolution,” Astrophys. J. 415, 862 (1993). [CrossRef]  

37. D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process. 4(7), 932–946 (1995). [CrossRef]   [PubMed]  

38. A. M. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM review 51 (1), 34–81 (2009). [CrossRef]  

39. D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in Proceedings of Advances in Neural Information Processing Systems, (NIPS, 2009), pp. 1033–1041.

40. J. Miskin and D. J. MacKay, “Ensemble learning for blind image separation and deconvolution,” in Advances in independent component analysis, (Springer, 2000). [CrossRef]  

41. D. L. Fried, “Probability of getting a lucky short-exposure image through turbulence,” J. Opt. Soc. Am. 68(12), 1651–1657 (1978). [CrossRef]  

42. R. N. Tubbs, “Lucky exposures: diffraction-limited astronomical imaging through the atmosphere,” Ph.D thesis, Cambridge Univ. (2003).

43. G. C. Loos and C. B. Hogge, “Turbulence of the upper atmosphere and isoplanatism,” Appl. Opt. 18 (15), 2654–2661 (1979). [CrossRef]   [PubMed]  

44. F. Roddier, J. M. Gilli, and J. Vernin, “On the isoplanatic patch size in stellar speckle interferometry,” J. Optics 13(2), 63–65 (1979). [CrossRef]  

45. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M.T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009). [CrossRef]  

46. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal Mach Intell. 35(1), 157–170 (2013). [CrossRef]  

47. Energetic broadband laser-driven light source (EQ99XFC) data sheet, http://www.energetiq.com/DataSheets/EQ99XFC-Data-Sheet.pdf.

48. D. Dunbar and G. Humphreys, “A spatial data structure for fast poisson-disk sample generation,” ACM Trans. Graph. 25 (3), 503–508 (2006). [CrossRef]  

49. J. Jiang and J. Gu, “What is the space of spectral sensitivity functions for digital color cameras?” in Proceedings of IEEE Workshop on Applications of Computer Vision, (IEEE, 2013), pp. 168–179.

50. J. Malcolm, P. Yalamanchili, C. McClanahanm, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE 8403, 84030A (2012). [CrossRef]  

51. P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15 (2), 70–73, (1967) [CrossRef]  

52. K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image. Proc. 14(3), 360–369 (2005). [CrossRef]  

53. D. Coffin, “DCRAW: Decoding raw digital photos in linux,” (2008) https://www.cybercom.net/~dcoffin/dcraw/.

54. B. Narayanaswamy, Continuous multivariate distributions(Wiley, 2006).

55. J. C. Dainty, “Some statistical properties of random speckle patterns in coherent and partially coherent illumination,” J. Mod. Opt. 17 (10), 761–772 (1970).

56. Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. 422 (3), 1113–1121 (2004). [CrossRef]  

57. D. Kim, P. Protopapas, Y. Byun, C. Alcock, R. Khardon, and M. Trichas, “Quasi-stellar object selection algorithm using time variability and machine learning: Selection of 1620 quasi-stellar object candidates from MACHO Large Magellanic Cloud database,” Astrophys. J. 735(2), 68–70 (2011). [CrossRef]  

58. J. C. Platt, “Sequential minimal optimization: A fast algorithm for training support vector machines,” Tech. Rep. TR-98-14, Microsoft Research (1998).

59. J. C. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in Large Margin Classifiers, (MIT Press, 1999).

60. T. Andersen and A. Enmark, Integrated modeling of Telescopes, (Springer, 2011). [CrossRef]  

61. M. B. Quadrelli, S. A. Basinger, G. A. Swartzlander, and D. Arumugam, “Dynamics and control of granular imaging systems,” in Proceedings of AIAA Space Conference and Exposition (AIAA, 2015), pp. 4484.

62. S. A. Basinger, D. Palacios, M. B. Quadrelli, and G. A. Swartzlander, “Optics of a granular imaging system (i.e. “orbiting rainbows”),” Proc. SPIE 9602, 96020E (2015). [CrossRef]  

63. G. A. Swartzlander, “Radiation Pressure on a Diffractive Sailcraft,” J. Opt. Soc. Am. B 34 (6), C25–C30, (2017). [CrossRef]  

64. A. B. Artusio-Glimpse, J. H. Wirth, and G. A. Swartzlander, “Optical gradient force assist maneuver,” Opt. Lett. 41(17), 4142–4145 (2016). [CrossRef]   [PubMed]  

65. M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow for space-variant multiframe blind deconvolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 607–614.

References

  • View by:
  • |
  • |
  • |

  1. A. B. Meinel, “Cost-scaling laws applicable to very large optical telescopes,” Opt. Eng 18(6), 645–647 (1979).
    [Crossref]
  2. D. Baiocchi and H. P. Stahl, “Enabling future space telescopes: mirror technology review and development roadmap,” Astro2010: The Astronomy and Astrophysics Decadal Survey, 23 (2009).
  3. S. J. Chung and F. Y. Hadaegh, “Swarms of Femtosats for Synthetic Aperture Applications,” in Proceedings of the Fourth International Conference on Spacecraft Formation Flying Missions and Technologies, (2011).
  4. D. M. Alloin and J. M. Mariotti, Diffraction-limited imaging with very large telescopes, (Springer, 2012).
  5. M. B. Quadrelli, S. A. Basinger, and G. A. Swartzlander, “NIAC Phase I Orbiting rainbows: Optical manipulation of aerosols and the beginnings of future space construction,” (Jet Propulsion Laboratory, California Institute of Technology, 2013), https://www.nasa.gov/sites/default/files/files/Quadrelli_2012_PhI_OrbitingRainbows_.pdf
  6. M. B. Quadrelli, “NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems,” (Jet Propulsion Laboratory, California Institute of Technology, 2017), https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170004834.pdf
  7. C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn 20(3), 273–297 (1995).
    [Crossref]
  8. V. Vapnik, The nature of statistical learning theory,(Springer, 2013).
  9. D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of Advances in Neural Information Processing Systems (NIPS, 2001), pp. 556–562.
  10. A. Labeyrie, “High-Resolution Techniques in Optical Astronomy,” Progress in Optics14 (Elsevier, 1977), Chap. 2.
    [Crossref]
  11. J. Nelson, “Segmented mirror telescopes,” in Optics in Astrophysics, (Springer, 2005).
  12. J. E. Harvey, A. Kotha, and R. L. Phillips, “Image characteristics in applications utilizing dilute subaperture arrays,” Appl. Opt. 34(16), 2983–2992 (1995).
    [Crossref] [PubMed]
  13. N. J. Miller, M. P. Dierking, and B. D. Duncan, “Optical sparse aperture imaging,” Appl. Opt. 46(23), 5933–5943 (2007).
    [Crossref] [PubMed]
  14. G. Andersen and D. Tullson, “Broadband antihole photon sieve telescope,” Appl. Phys. 46, 3706–3708 (2007).
  15. K. S. Ford, B. McKernan, A. Sivaramakrishnan, A. Martel, K. Koekemoer, D. Lafrenière, and S. Parmentier, “Active galactic nucleus and quasar science with aperture masking interferometry on the James Webb Space Telescope,” Astrophys. J. 783, 73–75 (2014).
    [Crossref]
  16. J. Lindberg, “Mathematical concepts of optical superresolution,” J. Opt. 14, 083001 (2012).
    [Crossref]
  17. M. Mazilu, J. Baumgartl, S. Kosmeier, and K. Dholakia, “Optical eigenmodes; exploiting the quadratic nature of the energy flux and of scattering interactions,” Opt. Express 19(2), 933–945 (2011).
    [Crossref] [PubMed]
  18. A. Labeyrie, “Attainment of diffraction-limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6(1), 85–87 (1970).
  19. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, “Speckle masking in astronomy: triple correlation theory and applications,” Appl. Opt. 22(24), 4028–4037 (1970).
    [Crossref]
  20. W. Beavers, D. E. Dudgeon, J. W. Beletic, and M. T. Lane, “Speckle imaging through the atmosphere,” Lincoln Lab. J. 2(2), 207–228, (1989).
  21. K. T. Knox and B. J. Thompson, “Recovery of images from atmospherically degraded short-exposure photographs,” Astrophys. J. 193, 45–48 (1974).
    [Crossref]
  22. J. R. Fienup, “Phase retrieval algorithms: a comparison,” Appl. Opt. 21 (15), 2758–2769 (1982).
    [Crossref] [PubMed]
  23. R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” Tech. Rep. TR-2006-058, MIT (2006).
  24. A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: Compressive imaging using a multiply scattering medium,” Sci. Rep.4 (2014).
    [PubMed]
  25. J. W. Goodman, Speckle phenomena in optics: theory and applications, (Roberts and Company, 2007).
  26. J. C. Dainty, “The statistics of speckle patterns,” Progress in Optics14, (Elsevier, 1977), Chap. 1.
    [Crossref]
  27. O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. 8(10), 784–790 (2014).
    [Crossref]
  28. B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–691 (2015).
    [Crossref]
  29. D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Process. Mag. 13(3), 43–64 (1996).
    [Crossref]
  30. J. R. Fienup and G. B. Feldkamp, “Astronomical imaging by processing stellar speckle interferometry data,” Proc. SPIE 0243, 2723–2727 (1980).
  31. E. Pantin, J. Starck, F. Murtagh, and K. Egiazarian, “Deconvolution and blind deconvolution in astronomy,” in Blind image deconvolution: theory and applications, (CRC Press, 2007).
    [Crossref]
  32. E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Comm. Pure Appl. Math. 67(6), 906–956 (2014).
    [Crossref]
  33. E. J. Candès and C. Fernandez-Granda, “Super-resolution from noisy data,” J. Fourier Anal Appl. 19 (6), 1229–1254 (2013).
    [Crossref]
  34. V. Duval and G. Peyré, “Exact support recovery for sparse spikes deconvolution,” Found. Comput. Math. 15 (5), 1315–1355 (2015).
    [Crossref]
  35. M. Hirsch, S. Harmeling, S. Sra, and B. Schölkopf, “Online multi-frame blind deconvolution with super-resolution and saturation correction,” Astron. Astrophys. 531, A9(2011).
    [Crossref]
  36. S. M. Jefferies and J. C. Christou, “Restoration of astronomical images by iterative blind deconvolution,” Astrophys. J. 415, 862(1993).
    [Crossref]
  37. D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process. 4(7), 932–946 (1995).
    [Crossref] [PubMed]
  38. A. M. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM review 51 (1), 34–81 (2009).
    [Crossref]
  39. D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in Proceedings of Advances in Neural Information Processing Systems, (NIPS, 2009), pp. 1033–1041.
  40. J. Miskin and D. J. MacKay, “Ensemble learning for blind image separation and deconvolution,” in Advances in independent component analysis, (Springer, 2000).
    [Crossref]
  41. D. L. Fried, “Probability of getting a lucky short-exposure image through turbulence,” J. Opt. Soc. Am. 68(12), 1651–1657 (1978).
    [Crossref]
  42. R. N. Tubbs, “Lucky exposures: diffraction-limited astronomical imaging through the atmosphere,” Ph.D. thesis, Cambridge University (2003).
  43. G. C. Loos and C. B. Hogge, “Turbulence of the upper atmosphere and isoplanatism,” Appl. Opt. 18(15), 2654–2661 (1979).
    [Crossref] [PubMed]
  44. F. Roddier, J. M. Gilli, and J. Vernin, “On the isoplanatic patch size in stellar speckle interferometry,” J. Optics 13(2), 63–65 (1979).
    [Crossref]
  45. M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M.T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009).
    [Crossref]
  46. X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal. Mach. Intell. 35(1), 157–170 (2013).
    [Crossref]
  47. Energetic broadband laser-driven light source (EQ99XFC) data sheet, http://www.energetiq.com/DataSheets/EQ99XFC-Data-Sheet.pdf.
  48. D. Dunbar and G. Humphreys, “A spatial data structure for fast Poisson-disk sample generation,” ACM Trans. Graph. 25(3), 503–508 (2006).
    [Crossref]
  49. J. Jiang and J. Gu, “What is the space of spectral sensitivity functions for digital color cameras?” in Proceedings of IEEE Workshop on Applications of Computer Vision, (IEEE, 2013), pp. 168–179.
  50. J. Malcolm, P. Yalamanchili, C. McClanahan, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE 8403, 84030A (2012).
    [Crossref]
  51. P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15(2), 70–73 (1967).
    [Crossref]
  52. K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image Process. 14(3), 360–369 (2005).
    [Crossref]
  53. D. Coffin, “DCRAW: Decoding raw digital photos in linux,” (2008) https://www.cybercom.net/~dcoffin/dcraw/ .
  54. B. Narayanaswamy, Continuous Multivariate Distributions (Wiley, 2006).
  55. J. C. Dainty, “Some statistical properties of random speckle patterns in coherent and partially coherent illumination,” J. Mod. Opt. 17(10), 761–772 (1970).
  56. Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. 422(3), 1113–1121 (2004).
    [Crossref]
  57. D. Kim, P. Protopapas, Y. Byun, C. Alcock, R. Khardon, and M. Trichas, “Quasi-stellar object selection algorithm using time variability and machine learning: Selection of 1620 quasi-stellar object candidates from MACHO Large Magellanic Cloud database,” Astrophys. J. 735(2), 68–70 (2011).
    [Crossref]
  58. J. C. Platt, “Sequential minimal optimization: A fast algorithm for training support vector machines,” Tech. Rep. TR-98-14, Microsoft Research (1998).
  59. J. C. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in Large Margin Classifiers (MIT Press, 1999).
  60. T. Andersen and A. Enmark, Integrated Modeling of Telescopes (Springer, 2011).
    [Crossref]
  61. M. B. Quadrelli, S. A. Basinger, G. A. Swartzlander, and D. Arumugam, “Dynamics and control of granular imaging systems,” in Proceedings of AIAA Space Conference and Exposition (AIAA, 2015), p. 4484.
  62. S. A. Basinger, D. Palacios, M. B. Quadrelli, and G. A. Swartzlander, “Optics of a granular imaging system (i.e. “orbiting rainbows”),” Proc. SPIE 9602, 96020E (2015).
    [Crossref]
  63. G. A. Swartzlander, “Radiation pressure on a diffractive sailcraft,” J. Opt. Soc. Am. B 34(6), C25–C30 (2017).
    [Crossref]
  64. A. B. Artusio-Glimpse, J. H. Wirth, and G. A. Swartzlander, “Optical gradient force assist maneuver,” Opt. Lett. 41(17), 4142–4145 (2016).
    [Crossref] [PubMed]
  65. M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow for space-variant multiframe blind deconvolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 607–614.

[Crossref]

Venugopalakrishnan, V.

J. Malcolm, P. Yalamanchili, C. McClanahanm, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE 8403, 84030A (2012).
[Crossref]

Vernin, J.

F. Roddier, J. M. Gilli, and J. Vernin, “On the isoplanatic patch size in stellar speckle interferometry,” J. Optics 13(2), 63–65 (1979).
[Crossref]

Vorontsov, M. A.

M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M.T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009).
[Crossref]

Weigelt, G.

Welch, P.

P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15 (2), 70–73, (1967)
[Crossref]

Wirnitzer, B.

Wirth, J. H.

Yalamanchili, P.

J. Malcolm, P. Yalamanchili, C. McClanahanm, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE 8403, 84030A (2012).
[Crossref]

Yang, C.

B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–691 (2015).
[Crossref]

D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process. 4(7), 932–946 (1995).
[Crossref] [PubMed]

Zhang, Y.

Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. 422 (3), 1113–1121 (2004).
[Crossref]

Zhao, Y.

Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. 422 (3), 1113–1121 (2004).
[Crossref]

Zhu, X.

X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal Mach Intell. 35(1), 157–170 (2013).
[Crossref]

ACM Trans. Graph. (1)

D. Dunbar and G. Humphreys, “A spatial data structure for fast poisson-disk sample generation,” ACM Trans. Graph. 25 (3), 503–508 (2006).
[Crossref]

Appl. Opt. (5)

Appl. Phys. (1)

G. Andersen and D. Tullson, “Broadband antihole photon sieve telescope,” Appl. Phys. 46, 3706–3708 (2007).

Astron. Astrophys. (3)

A. Labeyrie, “Attainment of diffraction-limited resolution in large telescopes by Fourier analysing speckle patterns in star images,” Astron. Astrophys. 6(1), 85–87 (1970).

M. Hirsch, S. Harmeling, S. Sra, and B. Schölkopf, “Online multi-frame blind deconvolution with super-resolution and saturation correction,” Astron. Astrophys. 531, A9(2011).
[Crossref]

Y. Zhang and Y. Zhao, “Automated clustering algorithms for classification of astronomical objects,” Astron. Astrophys. 422 (3), 1113–1121 (2004).
[Crossref]

Astrophys. J. (4)

D. Kim, P. Protopapas, Y. Byun, C. Alcock, R. Khardon, and M. Trichas, “Quasi-stellar object selection algorithm using time variability and machine learning: Selection of 1620 quasi-stellar object candidates from MACHO Large Magellanic Cloud database,” Astrophys. J. 735(2), 68–70 (2011).
[Crossref]

S. M. Jefferies and J. C. Christou, “Restoration of astronomical images by iterative blind deconvolution,” Astrophys. J. 415, 862(1993).
[Crossref]

K. T. Knox and B. J. Thompson, “Recovery of images from atmospherically degraded short-exposure photographs,” Astrophys. J. 193, 45–48 (1974).
[Crossref]

K. S. Ford, B. McKernan, A. Sivaramakrishnan, A. Martel, K. Koekemoer, D. Lafrenière, and S. Parmentier, “Active galactic nucleus and quasar science with aperture masking interferometry on the James Webb Space Telescope,” Astrophys. J. 783, 73–75 (2014).
[Crossref]

Comm. Pure Appl. Math. (1)

E. J. Candès and C. Fernandez-Granda, “Towards a mathematical theory of super-resolution,” Comm. Pure Appl. Math. 67(6), 906–956 (2014).
[Crossref]

Found. Comput. Math. (1)

V. Duval and G. Peyré, “Exact support recovery for sparse spikes deconvolution,” Found. Comput. Math. 15 (5), 1315–1355 (2015).
[Crossref]

IEEE Signal Process. Mag. (1)

D. Kundur and D. Hatzinakos, “Blind image deconvolution,” IEEE Signal Process. Mag. 13(3), 43–64 (1996).
[Crossref]

IEEE Trans. Audio Electroacoust. (1)

P. Welch, “The use of fast Fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms,” IEEE Trans. Audio Electroacoust. 15 (2), 70–73, (1967)
[Crossref]

IEEE Trans. Image Process. (1)

D. Geman and C. Yang, “Nonlinear image recovery with half-quadratic regularization,” IEEE Trans. Image Process. 4(7), 932–946 (1995).
[Crossref] [PubMed]

IEEE Trans. Image. Proc. (1)

K. Hirakawa and T. W. Parks, “Adaptive homogeneity-directed demosaicing algorithm,” IEEE Trans. Image. Proc. 14(3), 360–369 (2005).
[Crossref]

IEEE Trans. Pattern Anal Mach Intell. (1)

X. Zhu and P. Milanfar, “Removing atmospheric turbulence via space-invariant deconvolution,” IEEE Trans. Pattern Anal Mach Intell. 35(1), 157–170 (2013).
[Crossref]

J. Fourier Anal Appl. (1)

E. J. Candès and C. Fernandez-Granda, “Super-resolution from noisy data,” J. Fourier Anal Appl. 19 (6), 1229–1254 (2013).
[Crossref]

J. Mod. Opt. (1)

J. C. Dainty, “Some statistical properties of random speckle patterns in coherent and partially coherent illumination,” J. Mod. Opt. 17 (10), 761–772 (1970).

J. Opt. (1)

J. Lindberg, “Mathematical concepts of optical superresolution,” J. Opt. 14, 083001 (2012).
[Crossref]

J. Opt. Soc. Am. (1)

J. Opt. Soc. Am. B (1)

J. Optics (1)

F. Roddier, J. M. Gilli, and J. Vernin, “On the isoplanatic patch size in stellar speckle interferometry,” J. Optics 13(2), 63–65 (1979).
[Crossref]

Lincoln Lab. J. (1)

W. Beavers, D. E. Dudgeon, J. W. Beletic, and M. T. Lane, “Speckle imaging through the atmosphere,” Lincoln Lab. J. 2(2), 207–228, (1989).

Mach. Learn (1)

C. Cortes and V. Vapnik, “Support-vector networks,” Mach. Learn 20(3), 273–297 (1995).
[Crossref]

Nat. Photon. (1)

O. Katz, P. Heidmann, M. Fink, and S. Gigan, “Non-invasive single-shot imaging through scattering layers and around corners via speckle correlations,” Nat. Photon. 8(10), 784–790 (2014).
[Crossref]

Nat. Phys. (1)

B. Judkewitz, R. Horstmeyer, I. M. Vellekoop, I. N. Papadopoulos, and C. Yang, “Translation correlations in anisotropically scattering media,” Nat. Phys. 11(8), 684–691 (2015).
[Crossref]

Opt. Eng (1)

A. B. Meinel, “Cost-scaling laws applicable to very large optical telescopes,” Opt. Eng 18(6), 645–647 (1979).
[Crossref]

Opt. Express (1)

Opt. Lett. (1)

Proc. SPIE (4)

S. A. Basinger, D. Palacios, M. B. Quadrelli, and G. A. Swartzlander, “Optics of a granular imaging system (i.e. “orbiting rainbows”),” Proc. SPIE 9602, 96020E(2015).
[Crossref]

M. Aubailly, M. A. Vorontsov, G. W. Carhart, and M.T. Valley, “Automated video enhancement from a stream of atmospherically-distorted images: the lucky-region fusion approach,” Proc. SPIE 7463, 74630C (2009).
[Crossref]

J. Malcolm, P. Yalamanchili, C. McClanahanm, V. Venugopalakrishnan, K. Patel, and J. Melonakos, “ArrayFire: a GPU acceleration platform,” Proc. SPIE 8403, 84030A (2012).
[Crossref]

J. R. Fienup and G. B. Feldkamp, “Astronomical imaging by processing stellar speckle interferometry data,” Proc. SPIE 0243, 2723–2727 (1980).

SIAM review (1)

A. M. Bruckstein, D. Donoho, and M. Elad, “From sparse solutions of systems of equations to sparse modeling of signals and images,” SIAM review 51 (1), 34–81 (2009).
[Crossref]

Other (26)

D. Krishnan and R. Fergus, “Fast image deconvolution using hyper-laplacian priors,” in Proceedings of Advances in Neural Information Processing Systems, (NIPS, 2009), pp. 1033–1041.

J. Miskin and D. J. MacKay, “Ensemble learning for blind image separation and deconvolution,” in Advances in independent component analysis, (Springer, 2000).
[Crossref]

R. N. Tubbs, “Lucky exposures: diffraction-limited astronomical imaging through the atmosphere,” Ph.D thesis, Cambridge Univ. (2003).

E. Pantin, J. Starck, F. Murtagh, and K. Egiazarian, “Deconvolution and blind deconvolution in astronomy,” in Blind image deconvolution: theory and applications, (CRC Press, 2007).
[Crossref]

R. Fergus, A. Torralba, and W. T. Freeman, “Random lens imaging,” Tech. Rep. TR-2006-058, MIT (2006).

A. Liutkus, D. Martina, S. Popoff, G. Chardon, O. Katz, G. Lerosey, S. Gigan, L. Daudet, and I. Carron, “Imaging with nature: Compressive imaging using a multiply scattering medium,” Sci. Rep.4 (2014).
[PubMed]

J. W. Goodman, Speckle phenomena in optics: theory and applications, (Roberts and Company, 2007).

J. C. Dainty, “The statistics of speckle patterns,” Progress in Optics14, (Elsevier, 1977), Chap. 1.
[Crossref]

D. Baiocchi and H. P. Stahl, “Enabling future space telescopes: mirror technology review and development roadmap,” Astro2010: The Astronomy and Astrophysics Decadal Survey, 23 (2009).

S. J. Chung and F. Y. Hadaegh, “Swarms of Femtosats for Synthetic Aperture Applications,” in Proceedings of the Fourth International Conference on Spacecraft Formation Flying Missions and Technologies, (2011).

D. M. Alloin and J. M. Mariotti, Diffraction-limited imaging with very large telescopes, (Springer, 2012).

M. B. Quadrelli, S. A. Basinger, and G. A. Swartzlander, “NIAC Phase I Orbiting rainbows: Optical manipulation of aerosols and the beginnings of future space construction,” (Jet Propulsion Laboratory, California Institute of Technology, 2013), https://www.nasa.gov/sites/default/files/files/Quadrelli_2012_PhI_OrbitingRainbows_.pdf

M. B. Quadrelli, “NIAC Phase II Orbiting Rainbows: Future Space Imaging with Granular Systems,” (Jet Propulsion Laboratory, California Institute of Technology, 2017), https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20170004834.pdf

V. Vapnik, The nature of statistical learning theory,(Springer, 2013).

D. D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” in Proceedings of Advances in Neural Information Processing Systems (NIPS, 2001), pp. 556–562.

A. Labeyrie, “High-Resolution Techniques in Optical Astronomy,” Progress in Optics14 (Elsevier, 1977), Chap. 2.
[Crossref]

J. Nelson, “Segmented mirror telescopes,” in Optics in Astrophysics, (Springer, 2005).

J. Jiang and J. Gu, “What is the space of spectral sensitivity functions for digital color cameras?” in Proceedings of IEEE Workshop on Applications of Computer Vision, (IEEE, 2013), pp. 168–179.

Energetic broadband laser-driven light source (EQ99XFC) data sheet, http://www.energetiq.com/DataSheets/EQ99XFC-Data-Sheet.pdf .

D. Coffin, “DCRAW: Decoding raw digital photos in linux,” (2008) https://www.cybercom.net/~dcoffin/dcraw/ .

B. Narayanaswamy, Continuous multivariate distributions(Wiley, 2006).

J. C. Platt, “Sequential minimal optimization: A fast algorithm for training support vector machines,” Tech. Rep. TR-98-14, Microsoft Research (1998).

J. C. Platt, “Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods,” in Advances in Large Margin Classifiers, (MIT Press, 1999).

T. Andersen and A. Enmark, Integrated modeling of Telescopes, (Springer, 2011).
[Crossref]

M. B. Quadrelli, S. A. Basinger, G. A. Swartzlander, and D. Arumugam, “Dynamics and control of granular imaging systems,” in Proceedings of AIAA Space Conference and Exposition(AIAA, 2015), pp. 4484.

M. Hirsch, S. Sra, B. Schölkopf, and S. Harmeling, “Efficient filter flow for space-variant multiframe blind deconvolution,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2010), pp. 607–614.



Figures (10)

Fig. 1 Schematic of a Granular Space Telescope: free-flying aperture elements distributed across a baseline D and roughly aligned to a focal point f. Inset: comparison of a diffraction-limited point source and speckles from a modeled random aperture array.

Fig. 2 (a) Experimental setup of the random aperture array system. Measured broadband images of a binary source using (b) an RAA with random wavefront error and (c) a filled aperture without amplitude and phase masks.

Fig. 3 (a) Numerical model of the RAA system with a broadband binary point source. (b) Typical RAA image with respective rms tip-tilt and piston errors of 5λc/D and one wave. (c) Diffraction-limited image.

Fig. 4 Comparison of the polychromatic PSF with respective rms tip-tilt and piston errors of 5λc/D and one wave. (a) Filled aperture. (b) Well-aligned RAA. (c) Typical RAA. (d) Lucky RAA. Line plots indicate the FWHM.

Fig. 5 (a) Illustration of the image reconstruction framework. (b) Histograms and fitted generalized extreme value (GEV) distributions of the broadband binary source in (left to right): the diffraction-limited image, a lucky RAA frame, a typical RAA frame of type I, and a typical RAA frame of type II.

Fig. 6 (a–c) Experimental recordings of a broadband binary source, and (d) a well-resolved reconstructed image.

Fig. 7 Reconstruction of a broadband binary point source with a separation of 1.22λc/D from numerical images. (a) Diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston errors of 5λc/D and one wave, respectively. (e–g) For comparison, a well-aligned RAA yields an inferior reconstruction.

Fig. 8 Reconstruction of a broadband binary source with a separation of 0.85λc/D from numerical images. (a) Unresolved diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston errors of 5λc/D and one wave, respectively. (e–g) For comparison, a well-aligned RAA yields an inferior reconstruction.

Fig. 9 Reconstruction of multiple broadband point sources with a separation of 1.22λc/D between neighboring sources from numerical images. (a) Diffraction-limited image. (b–d) Example frames and the reconstructed image with rms tip-tilt and piston errors of 5λc/D and one wave, respectively. (e–g) For comparison, a well-aligned RAA yields an inferior reconstruction.

Fig. 10 Strehl ratio as a function of (a) fill factor FF and (b) wavefront error for various values of the subaperture diameter ratio Γ. (c) Illustration of RAAs with the same FF (∼12%) but different Γ.

Tables (2)

Table 1 Evaluation of the Experimental Results

Table 2 Evaluation of the Numerical Results

Equations (18)


$$U(\mathbf{x}) = \mathrm{circ}\!\left(\frac{\mathbf{x}}{R}\right)\sum_{m=1}^{M}\mathrm{circ}\!\left(\frac{\mathbf{x}-\mathbf{c}_m}{r}\right)\exp\!\left(i\,\phi_m(\mathbf{x})\right) \tag{1}$$
$$\phi_m(\mathbf{x}) = k\left(\Delta z_m + \mathbf{x}\cdot\Delta\boldsymbol{\alpha}_m\right) \tag{2}$$
$$E(\mathbf{x},\lambda,\boldsymbol{\theta}) = \frac{E_0}{i\lambda f}\int \exp\!\left(-i\frac{2\pi}{\lambda f}\,\mathbf{x}\cdot\mathbf{x}'\right)U(\mathbf{x}')\exp\!\left(ik\,\mathbf{x}'\cdot\boldsymbol{\theta}\right)\mathrm{d}\mathbf{x}' = iE_0A_0\sum_{m=1}^{M}\left[\mathrm{Somb}\!\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_m+\boldsymbol{\theta})f}{\lambda f/d}\right)\exp\!\left(i\,\mathbf{k}_m\cdot\mathbf{x}\right)\exp\!\left(i\,\psi_m\right)\exp\!\left(ik\,\mathbf{c}_m\cdot\boldsymbol{\theta}\right)\right] \tag{3}$$
$$P(\mathbf{x},\lambda,\boldsymbol{\theta}) = \left(E_0A_0\right)^2\sum_{m=1}^{M}\sum_{n=1}^{M}\left[\mathrm{Somb}\!\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_m+\boldsymbol{\theta})f}{\lambda f/d}\right)\mathrm{Somb}\!\left(\frac{\mathbf{x}-(\Delta\boldsymbol{\alpha}_n+\boldsymbol{\theta})f}{\lambda f/d}\right)\cos\!\left((\mathbf{k}_m-\mathbf{k}_n)\cdot\mathbf{x}+(\psi_m-\psi_n)+k(\mathbf{c}_m-\mathbf{c}_n)\cdot\boldsymbol{\theta}\right)\right] \tag{4}$$
$$h(\mathbf{x}) = \int \rho_s(\lambda)\,\rho_c(\lambda)\,P(\mathbf{x},\lambda,0)\,\mathrm{d}\lambda \tag{5}$$
$$I(\mathbf{x}) = \sum_{j} h(\mathbf{x}-\boldsymbol{\theta}_j f) \tag{6}$$
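As a numerical illustration of the aperture function and broadband speckle PSF defined above, the following Python sketch builds a random aperture array with random piston errors and incoherently sums monochromatic Fraunhofer intensities over a few wavelengths. The grid size, subaperture count, wavelength samples, and piston statistics here are illustrative assumptions, not the paper's exact parameters, and tip-tilt terms are omitted for brevity.

```python
import numpy as np

def raa_psf(n=512, D=9e-3, d=0.2e-3, M=100,
            wavelengths=(480e-9, 560e-9, 640e-9),
            piston_rms=560e-9, seed=0):
    """Broadband speckle PSF of a random aperture array (illustrative sketch).

    Subaperture centers c_m are drawn uniformly inside the baseline D; each
    subaperture carries a random piston phase Delta z_m. The focal-plane
    intensity at each wavelength is |FFT of the pupil field|^2, summed
    incoherently over wavelength (the on-axis broadband PSF h(x)).
    """
    rng = np.random.default_rng(seed)
    x = np.linspace(-D / 2, D / 2, n)
    X, Y = np.meshgrid(x, x)
    centers = rng.uniform(-D / 2 + d, D / 2 - d, (M, 2))
    pistons = rng.normal(0.0, piston_rms, M)           # Delta z_m
    psf = np.zeros((n, n))
    for lam in wavelengths:                            # incoherent sum over lambda
        k = 2 * np.pi / lam
        U = np.zeros((n, n), dtype=complex)
        for (cx, cy), dz in zip(centers, pistons):
            sub = (X - cx) ** 2 + (Y - cy) ** 2 <= (d / 2) ** 2
            U[sub] += np.exp(1j * k * dz)              # circ(.) exp(i phi_m)
        E = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(U)))
        psf += np.abs(E) ** 2                          # monochromatic intensity
    return psf / psf.max()

psf = raa_psf()
print(psf.shape)
```
The diffraction-limited comparison PSF follows from the same routine with `piston_rms=0` and a single filled aperture.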
$$I_{\mathrm{noise}}(\mathbf{x}) = \frac{1}{\nu\tau}\left[\mathrm{Poisson}\!\left(\sigma_p^2(\mathbf{x})\right)+\mathrm{Gaussian}\!\left(0,\sigma_r^2\right)(\mathbf{x})\right] \tag{7}$$
$$\mathrm{SNR} = 10\log_{10}\!\left(\frac{\mu}{\sigma_n}\right) \tag{8}$$
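The sensor-noise model and SNR definition above can be sketched as follows; the exposure, gain, and read-noise values are illustrative assumptions, not the paper's measured camera parameters.

```python
import numpy as np

def add_sensor_noise(frame, exposure=1.0, gain=1.0, read_rms=2.0, seed=0):
    """Shot-plus-read noise in the spirit of I_noise(x): Poisson photon shot
    noise on the mean photoelectron map `frame`, plus zero-mean Gaussian
    read noise, rescaled by gain and exposure."""
    rng = np.random.default_rng(seed)
    shot = rng.poisson(frame * exposure)               # photon shot noise
    read = rng.normal(0.0, read_rms, frame.shape)      # read noise
    return (shot + read) / (gain * exposure)

def snr_db(signal_mean, noise_std):
    """SNR = 10 log10(mu / sigma_n), as defined above."""
    return 10.0 * np.log10(signal_mean / noise_std)

frame = np.full((64, 64), 100.0)                       # flat 100 e- scene
noisy = add_sensor_noise(frame)
print(snr_db(noisy.mean(), noisy.std()))
```
For a flat 100-electron scene the noise is dominated by shot noise (variance ≈ 100), so the SNR lands near 10 dB.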
$$\mathrm{gev}\!\left(I(\mathbf{x})\right) = \frac{1}{\sigma_e}\left(1+\chi\right)^{-\frac{1}{\zeta}-1}\exp\!\left(-\left(1+\chi\right)^{-\frac{1}{\zeta}}\right) \tag{9}$$
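A sketch of GEV-based frame selection implied by the distribution above, using SciPy's `genextreme` (note SciPy's shape parameter is the negative of some texts' convention). The synthetic quality scores and the 95th-percentile "lucky" cutoff are illustrative assumptions, not the paper's selection rule.

```python
import numpy as np
from scipy.stats import genextreme

# Per-frame quality scores (e.g., peak sharpness of each speckle frame);
# here we draw synthetic scores from a GEV just to exercise the fit.
rng = np.random.default_rng(1)
scores = genextreme.rvs(c=-0.1, loc=0.5, scale=0.1, size=2000,
                        random_state=rng)

# Maximum-likelihood GEV fit, mirroring the gev(I(x)) model above.
shape, loc, scale = genextreme.fit(scores)

# Keep the best ~5% of frames as "lucky" frames.
lucky_threshold = genextreme.ppf(0.95, shape, loc, scale)
lucky = scores > lucky_threshold
print(int(lucky.sum()))
```
In the reconstruction framework, the retained lucky frames would then seed the multiframe blind deconvolution stage.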
$$\underset{\mathbf{w},\,b,\,\mathbf{s}}{\text{minimize}}\;\; \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_{j=1}^{N_t}s_j \qquad \text{subject to}\;\; s_j\ge 0,\;\; l_j f(\mathbf{q}_j)\ge 1-s_j,\;\; j=1,\ldots,N_t \tag{10}$$
$$\underset{\boldsymbol{\eta}}{\text{minimize}}\;\; \frac{1}{2}\sum_{j=1}^{N_t}\sum_{k=1}^{N_t} l_j l_k K(\mathbf{q}_j,\mathbf{q}_k)\,\eta_j\eta_k-\sum_{j=1}^{N_t}\eta_j \qquad \text{subject to}\;\; \sum_{j=1}^{N_t}\eta_j l_j = 0,\;\; 0\le\eta_j\le C,\;\; j=1,\ldots,N_t \tag{11}$$
$$\hat{\mathbf{w}} = \sum_{j=1}^{N_t}\eta_j l_j \mathbf{q}_j, \qquad \hat{b} = \hat{\mathbf{w}}^{T}\mathbf{q}_j - l_j \tag{12}$$
$$\underset{Q,\,B}{\text{minimize}}\;\; -\sum_{j=1}^{N_t}\left[t_j\log(p_j)+(1-t_j)\log(1-p_j)\right] \tag{13}$$
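The soft-margin SVM and Platt-scaling steps above can be sketched with scikit-learn, whose `SVC` solves the dual problem and, with `probability=True`, fits the Platt sigmoid by the cross-entropy objective above. The two-feature synthetic frame metrics below are illustrative assumptions, not the feature set used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# q_j: per-frame feature vectors (two synthetic quality metrics here);
# l_j = +1 for lucky frames, -1 for typical speckle frames.
rng = np.random.default_rng(0)
lucky = rng.normal([1.0, 1.0], 0.2, (200, 2))     # sharp, high-quality frames
typical = rng.normal([0.0, 0.0], 0.2, (200, 2))   # blurred speckle frames
q = np.vstack([lucky, typical])
l = np.hstack([np.ones(200), -np.ones(200)])

# Kernel SVM with slack penalty C; probability=True adds Platt scaling.
clf = SVC(kernel="rbf", C=1.0, probability=True).fit(q, l)
p = clf.predict_proba([[0.9, 1.1]])[0]            # [P(typical), P(lucky)]
print(clf.score(q, l), int(p.argmax()))
```
A new frame would be kept for the reconstruction stack when its Platt probability of being lucky exceeds a chosen threshold.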
$$g_n(\mathbf{x}) = a * h_n \tag{14}$$
$$G_n(\boldsymbol{\xi}) = A\,H_n \tag{15}$$
$$\{\hat{a},\hat{h}_n\} = \mathcal{F}^{-1}\!\left\{\min_{h_n\ge 0,\;a\ge 0}\frac{1}{N^2}\sum_{n=0}^{N}\left\|G_n - A\,H_n\right\|_F^2\right\} \tag{16}$$
$$H_n = H_{n-1}\odot\frac{A_n^{T}\,G_n}{A_n^{T}\left(A_n H_{n-1}\right)} \tag{17}$$
$$A_n = A_{n-1}\odot\frac{H_n^{T}\,G_n}{H_n^{T}\left(H_n A_{n-1}\right)} \tag{18}$$
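The last two updates above are Lee–Seung multiplicative iterations for non-negative matrix factorization. A minimal NumPy sketch applied to a stack of flattened frames follows; the random data, rank, and iteration count are illustrative assumptions, and the generic factorization G ≈ AH stands in for the paper's object/PSF factors.

```python
import numpy as np

def nmf_blind_update(G, rank=5, iters=200, eps=1e-12, seed=0):
    """Factor a nonnegative data matrix G (one flattened frame per column)
    as G ~ A @ H with A >= 0, H >= 0 via Lee-Seung multiplicative updates,
    mirroring the alternating H- and A-updates above."""
    rng = np.random.default_rng(seed)
    n, m = G.shape
    A = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (A.T @ G) / (A.T @ (A @ H) + eps)    # elementwise H-update
        A *= (G @ H.T) / ((A @ H) @ H.T + eps)    # elementwise A-update
    return A, H

G = np.random.default_rng(1).random((64, 20))     # synthetic frame stack
A, H = nmf_blind_update(G)
err = np.linalg.norm(G - A @ H) / np.linalg.norm(G)
print(err < 0.6)
```
Each multiplicative update preserves nonnegativity automatically and is non-increasing in the Frobenius objective, which is why no explicit projection step is needed.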
